Defacto Celebrates 20th Anniversary @ DAC 2023!
by Daniel Nenni on 07-05-2023 at 10:00 am

Defacto Technologies is a company that specializes in Electronic Design Automation (EDA) software and solutions. Defacto offers a range of EDA tools that streamline and optimize the front-end design process. Its tools focus on chip design assembly and integration before logic synthesis, jointly managing different design formats such as RTL, IP-XACT, and UPF.

In preparation for DAC I had a conversation with Defacto CEO Dr. Chouki Aktouf. Before founding Defacto in 2003, Dr. Aktouf was an associate professor of Computer Science at the University of Grenoble, France, and leader of its dependability research group. He holds a Ph.D. in Electrical Engineering from Grenoble University.

I noticed that this year's DAC marks the 20th anniversary of the company. Congratulations on your success!

It is a really important year for Defacto, and July is an important month: the company was founded in July 2003.

Over these 20 years we have proven our added value in the pre-synthesis SoC building process, particularly in reducing design cycles and optimizing PPA. Today, we are proud to count most of the top 20 semiconductor companies as regular users of Defacto's SoC Compiler.

So, definitely, after 20 years we are confident in saying that for many front-end SoC designers our EDA tools have become the "de facto" standard and their "SoC Compiler" for the early SoC design building process, managing RTL, IP-XACT, and design collaterals!

We will celebrate the 20th anniversary at DAC, where we will make several announcements covering success stories and major tool capabilities. Several customers will be coming to our booth (#1541) to share their experience with our solutions and how they are benefiting from them. Many events and surprises are also planned to properly celebrate this 20th anniversary!

In parallel, Defacto announced a major new release of its SoC design solution. Could you please elaborate?

Exactly, this is the 10.0 major release of Defacto's SoC Compiler. The release brings many new features, capabilities, and performance improvements, along with the first customer statements and testimonials from using it on large SoCs. In summary, the main message we will deliver at DAC about this major release, beyond the maturity of the Defacto design solution, is how easy it has become for RTL designers and SoC design architects to use. We are simplifying the pre-synthesis SoC building process, from the user's SoC design specification to the generation of the whole package, RTL and design collaterals, ready for synthesis and design verification. This will be the main topic addressed by the Defacto team at DAC.

As I remember, SoC Compiler is an integration tool at the front-end. What about help for back-end designers?

Absolutely! Our EDA market positioning has been clear for decades. Our design solutions help at the front-end, when the SoC building process starts, but the way we manage RTL and design collaterals is not independent of, or uncorrelated with, the back-end. Back-end designers can provide the tool with physical design information, and the tool will then generate, for example, a physically aware top level of the RTL. This physically aware RTL and the related design collaterals can be synthesized directly, which usually leads to better PPA results. In summary, this connection between the front-end and the back-end is where back-end designers, and also SoC design architects, find unique value compared to other EDA tools.

Are the benefits mainly speeding up the design process? PPA? Or both?

Good question. Definitely, speeding up and shortening the design cycle is key, since we provide a high level of automation. But getting better PPA is also an important expectation when using Defacto. The physically aware SoC integration I mentioned earlier definitely impacts PPA: synthesis and P&R EDA tools will do a better job.

In addition, our solution also helps optimize PPA directly, for example by managing RTL connectivity with feedthroughs. Also, during DFT coverage enhancement and test point insertion, our design solution automates the exploration and insertion of test points at RTL to ensure high coverage with the lowest area overhead. So, in summary, both PPA and design cycles are addressed when using our design solution.

Do you manage design collaterals like UPF and SDC alongside the RTL?

This is a major difference between Defacto's solution and the competition. In summary, we don't manage only the RTL when building the SoC and generating the top level; we consider RTL and the design collaterals at the same time. By that we mean managing incoherency problems between the RTL database, the SDC database, the UPF database, the IP-XACT database, etc., and also generating missing views to speed up the SoC building process. In other words, the joint management of RTL and design collaterals in a unified way is what makes Defacto's SoC Compiler unique.

I have always known the tool as a way to integrate IPs and build the top level. Is it possible to generate the design for synthesis and simulation tools?

This is exactly what we do. Building, integrating, inserting IPs, inserting connections: these are the daily capabilities the tool provides to the user. But what we also enable is what I said at the beginning, the generation of RTL and design collaterals.

If you need to rely on the tool to translate an SoC specification into the top level, this is now possible. How? Through demos at DAC we can show users how the tool interoperates with IP configuration tools to shorten the path from specification to top-level generation. So generation is today a key part of the automation provided by our design solution.

We hear a lot about interest in Python in EDA. Do you provide a Python API?

This is quite funny, because over the past few years people have started coming to us saying: "I am a designer, but in my engineering school I was more familiar with Python than Tcl. Can you help us?" So the answer is YES. Today we see more and more designers picking up Python and expecting to drive the tool from Python. Why? Because for them it is easier to script in Python.

We fully support Python, and the way we handle it is 100% object-oriented. People with a Python culture should visit our booth; they will like the examples our team will share with them!
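To make the idea concrete, here is a minimal sketch of what object-oriented SoC-assembly scripting in Python can look like. All class and method names below are invented for illustration; they are not Defacto's actual API, which is not documented in this interview.

```python
# Hypothetical sketch of object-oriented SoC assembly scripting in Python.
# Class and method names are invented for illustration and do not reflect
# Defacto's actual API.

class Design:
    """In-memory design database holding the RTL and its collaterals."""

    def __init__(self, name: str):
        self.name = name
        self.instances: dict[str, str] = {}
        self.connections: list[tuple[str, str]] = []

    def add_instance(self, inst: str, module: str) -> None:
        self.instances[inst] = module

    def connect(self, src: str, dst: str) -> None:
        self.connections.append((src, dst))


soc = Design("my_soc")
soc.add_instance("u_cpu", "riscv_core")
soc.add_instance("u_sram", "sram_32k")
soc.connect("u_cpu.mem_if", "u_sram.port0")
print(f"{soc.name}: {len(soc.instances)} instances, "
      f"{len(soc.connections)} connections")
```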

Do you provide any checking capabilities like linting?

Checking engines underlie our design solution. When you start building the chip, it's not only about editing or integrating features; the tool must provide checks to make sure the building process is reliable and correct by construction. So we have many checking capabilities: basic linting for the RTL and for each of the design collaterals, along with coherency checks between them. Static signoff is also provided for DFT, clocking, and more. More importantly, all these checking capabilities can be customized and extended by the user.

After 20 years and the focus on SoC integration, are you still providing DFT within SoC Compiler?

You know, we started with a DFT solution a long time ago, and DFT is still part of the offer. Our DFT solution is among the most mature in the market. We don't really overlap with DFT implementation tools; we provide added value at RTL in terms of DFT signoff, planning, and exploration. So yes, in summary, we are still a key provider of DFT solutions, for both RTL designers and DFT experts.

To find out more about Defacto Technologies, meet them at DAC booth #1541 and check out their website!

Also Read:

WEBINAR: Design Cost Reduction – How to track and predict server resources for complex chip design project?

Defacto’s SoC Compiler 10.0 is Making the SoC Building Process So Easy

Using IP-XACT, RTL and UPF for Efficient SoC Design

Working with the Unified Power Format


Vision Transformers Challenge Accelerator Architectures
by Bernard Murphy on 07-05-2023 at 6:00 am

For what seems like a long time in the fast-moving world of AI, CNNs and their relatives have driven AI engine architectures at the edge. While the nature of neural net algorithms has evolved significantly, they are all assumed to be efficiently handled on a heterogeneous platform processing through the layers of a DNN: an NPU for tensor operations, a DSP or GPU for vector operations, and a CPU (or cluster) managing whatever is left over.

That architecture has worked well for vision processing, where vector and scalar classes of operation don't interleave significantly with the tensor layers. A process starts with normalization operations (grayscale, geometric sizing, etc.), handled efficiently by vector processing. Then follows a deep series of layers filtering the image through progressive tensor operations. Finally, a function like softmax, again vector-based, normalizes the output. The algorithms and the heterogeneous architecture were mutually designed around this presumed lack of interleaving, all the heavy-duty intelligence being handled seamlessly in the tensor engine.

Enter transformers

The transformer architecture was announced in 2017 by Google Research/Google Brain to address a problem in natural language processing (NLP). CNNs and their ilk function by serially processing local attention filters. Each filter in a layer selects for a local feature: an edge, texture, or similar. Stacked filters accumulate bottom-up recognition, ultimately identifying a larger object.

In natural language, the meaning of a word in a sentence is not determined solely by adjacent words; a word some distance away may critically affect interpretation. Serially applying local attention can eventually pick up weighting from a distance, but such influence is weakened. Global attention is better: it looks at every word in a sentence simultaneously, so distance is not a factor in weighting, as evidenced by the remarkable success of large language models.
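To see why global attention is insensitive to distance, consider a minimal numpy sketch of scaled dot-product self-attention (illustrative only, not any production implementation): the softmax weights are computed between every pair of tokens in one step, so position in the sequence plays no role in how strongly one token can influence another.

```python
import numpy as np

def self_attention(x):
    """Minimal scaled dot-product self-attention over a (tokens, dim) array.
    Every token attends to every other token simultaneously; nothing in the
    weighting depends on how far apart two tokens are."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                      # all-pairs similarity
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                 # softmax over all tokens
    return w @ x

tokens = np.random.randn(8, 64)        # 8 tokens, 64-dim embeddings
print(self_attention(tokens).shape)    # (8, 64)
```

(A real transformer adds learned query/key/value projections and multiple heads; the distance-free weighting is the point here.)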

While transformers are best known from GPT and similar applications, they are also gaining ground rapidly in vision, as vision transformers (ViT). An image is linearized into patches (say 16×16 pixels) and then processed as a sequence through the transformer, with ample opportunity for parallelization. Each patch sequence is fed through a series of tensor and vector operations in succession, repeated for however many encoder blocks the transformer supports.
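The patch linearization step itself is simple. Here is a minimal numpy sketch, assuming a square image whose sides are divisible by the patch size:

```python
import numpy as np

def patchify(image, patch=16):
    """Split an (H, W, C) image into flattened patch tokens, ViT-style."""
    H, W, C = image.shape
    assert H % patch == 0 and W % patch == 0
    rows, cols = H // patch, W // patch
    return (image.reshape(rows, patch, cols, patch, C)
                 .transpose(0, 2, 1, 3, 4)          # group pixels by patch
                 .reshape(rows * cols, patch * patch * C))

img = np.random.rand(224, 224, 3)
print(patchify(img).shape)   # (196, 768): 14x14 patches of 16x16x3 pixels
```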

The big difference from a conventional neural net model is that here tensor and vector operations are heavily interleaved. Running such an algorithm is possible on a heterogeneous accelerator, but the frequent context switching between engines would probably not be very efficient.

What’s the upside?

Direct comparisons seem to show ViTs achieving accuracy comparable to CNNs/DNNs, in some cases with better performance. More interesting, however, are other insights. ViTs may be biased more toward topological features in an image than bottom-up pixel-level recognition, which might account for their greater robustness to image distortions or hacks. There is also active work on self-supervised training for ViTs, which could greatly reduce training effort.

More generally, new architectures in AI stimulate a flood of new techniques, already apparent in many ViT papers over just the last couple of years. Which means that accelerators will need to be friendly to both traditional and transformer models. That bodes well for Quadric, whose Chimera General-Purpose NPU (GPNPU) processors are designed to be a single processor solution for all AI/ML compute, handling image pre-processing, inference, and post-processing all in the same core. Since all compute is handled in a single core with a shared memory hierarchy, no data movement is needed between compute nodes for different types of ML operators. You can learn more HERE.

Also Read:

An SDK for an Advanced AI Engine

Quadric’s Chimera GPNPU IP Blends NPU and DSP to Create a New Category of Hybrid SoC Processor

CEO Interview: Veerbhan Kheterpal of Quadric.io


Semiconductors, Apple Pie, and the 4th of July!
by Daniel Nenni on 07-04-2023 at 6:00 am

Semiconductors, apple pie, and the 4th of July are American traditions. You can read about the history of semiconductors in our book "Fabless: The Transformation of the Semiconductor Industry" and the history of Arm in our book "Mobile Unleashed: The Origin and Evolution of Arm Processors in our Devices". Rather than reading about apple pie, just go and get some!

July 4th, also known as Independence Day, is a national holiday in the United States that commemorates the country’s independence from Great Britain. It is celebrated annually on July 4th and holds great historical significance. Here are some key points about July 4th from ChatGPT:

  1. Declaration of Independence: On July 4, 1776, the Second Continental Congress adopted the Declaration of Independence, which declared the American colonies’ separation from British rule. This document, drafted primarily by Thomas Jefferson, outlined the principles of liberty, equality, and self-governance that the United States was founded upon.
  2. Independence Day: July 4th marks the anniversary of the adoption of the Declaration of Independence and is considered the birthday of the United States. It is a federal holiday, and most businesses, government offices, and schools are closed to observe the occasion.
  3. Patriotic Symbols: During July 4th celebrations, you’ll commonly see patriotic symbols such as the American flag, which represents the unity and independence of the nation. Many people decorate their homes, public spaces, and even themselves with flags, bunting, and other patriotic decorations.
  4. National Spirit: Independence Day is a time when Americans come together to celebrate their shared values and the freedoms they enjoy. It evokes a sense of national pride and fosters a spirit of unity among people of different backgrounds.
  5. Festivities: July 4th celebrations typically include various festivities across the country. Fireworks displays are a highlight of the evening, with vibrant and elaborate shows taking place in many cities and towns. Parades, concerts, carnivals, and community events are also common, offering entertainment for people of all ages.
  6. Barbecues and Picnics: Many people celebrate July 4th with outdoor gatherings, such as barbecues or picnics. Grilling hamburgers, hot dogs, and other classic American dishes is a popular tradition. Families and friends often gather in parks or backyards to enjoy good food, games, and quality time together.
  7. Reflection and Appreciation: Independence Day is also an occasion for reflection on the country’s history and the ideals it represents. It is an opportunity to appreciate the sacrifices made by the founding fathers and to honor the men and women who have fought and continue to fight for the country’s freedom.

While the core elements of July 4th celebrations remain consistent, specific activities and events can vary depending on the location and individual preferences. It is a day to remember and celebrate the birth of the United States as an independent nation and to appreciate the values and freedoms that the country holds dear.

This year my family (wife, children, grandchildren) will be celebrating the 4th in our traditional way: Local parade, backyard BBQ, and watching fireworks on the water.

Happy 4th of July!

Also Read:

TSMC Redefines Foundry to Enable Next-Generation Products

Semiconductor CapEx down in 2023

Samsung Foundry on Track for 2nm Production in 2025

Intel Internal Foundry Model Webinar


Computational Imaging Craves System-Level Design and Simulation Tools to Leverage AI in Embedded Vision
by Kalar Rajendiran on 07-03-2023 at 10:00 am

Aberration-free optics are bulky and expensive. Thanks to high-performance AI-enabled processors and GPUs with abundant processing capability, image quality nowadays relies more on high computing power tied to miniaturized optics and sensors. Computational imaging, the new trend in the field, fuses computational techniques with traditional imaging to improve image acquisition, processing, and visualization. The trend has become increasingly important with the rise of smartphone cameras and involves the use of algorithms, software, and hardware components to capture, manipulate, and analyze images. It results in improved image quality and enhanced visual information, and additionally enables meaningful data extraction, which is critical for embedded vision applications.

While computational imaging offers several advantages, many challenges must be addressed to realize its full potential. The design and simulation tools used by optical designers, electronics engineers, and AI software engineers are often specialized for their respective domains. This creates silos, hindering collaboration and integration across the entire imaging pipeline and resulting in suboptimal system performance.

A system-level design and simulation approach that considers the entire imaging system would optimize image quality, system functionality and performance (cost, size, power consumption, latency…). It would require integrating optical design, image sensor and processor design, image processing algorithms and AI models. Synopsys recently published a whitepaper that discusses how the gaps in computational imaging design and simulation pipelines can only be overcome with system-level solutions.

Leveraging AI Algorithms to Improve Computational Imaging Pipeline

Image Signal Processors (ISPs) process raw data from image sensors and perform various tasks to enhance image quality. Traditional ISPs are designed for specific functions and are hardwired for cost efficiency, limiting their flexibility and adaptability to different sensor classes. AI-based image processing utilizing neural networks (NN) shows promise in supplementing or replacing traditional ISPs for improving image quality.

Supplement or Replace Traditional ISPs

For example, a noise filter used in ISPs can enhance image quality but may discard crucial information present in the raw data. By analyzing chromatic aberration effects before digital signal processing (DSP), depth data contained in the raw sensor data can be indirectly extracted. This depth data can then be utilized by AI-based algorithms to reconstruct a 3D representation of a scene from a 2D image, which is not possible with current ISPs. In cases where the primary objective is for computer vision functions to interpret image content using machine learning rather than enhancing perceived quality for human viewing, working with raw data becomes advantageous. Utilizing raw data allows for more accurate object classification, object detection, scene segmentation, and other complex image analyses. In such cases, the presence of an ISP designed for image quality becomes unnecessary.

New Possibilities for Digital Imaging Systems

NNs excel in tasks such as denoising and demosaicing, surpassing the capabilities of traditional ISPs. They can also support more complex features like low-light enhancement, blur reduction, Bokeh blur effect, high dynamic range (HDR), and wide dynamic range. By embedding knowledge of what a good image should look like, NNs can generate higher resolution images. Combining denoising and demosaicing into an integrated process further enhances image quality. Additionally, NN-based demosaicing enables the use of different pixel layouts beyond the conventional Bayer layout, opening up new possibilities for digital imaging systems.
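As a rough illustration of NN-based demosaicing, the sketch below (PyTorch, untrained, invented for this article) shows the shape of the problem: a small convolutional network maps a one-channel raw mosaic to a three-channel RGB image, and nothing in it assumes a Bayer layout. A real joint denoise/demosaic model would be deeper and trained on raw/RGB pairs.

```python
import torch
import torch.nn as nn

class TinyDemosaic(nn.Module):
    """Toy demosaicing network: one-channel raw mosaic in, RGB out.
    Illustrative only; real models are deeper and trained on raw/RGB pairs."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),  # RGB output
        )

    def forward(self, raw):          # raw: (N, 1, H, W) sensor data
        return self.net(raw)         # rgb: (N, 3, H, W)

rgb = TinyDemosaic()(torch.rand(1, 1, 64, 64))
print(rgb.shape)   # torch.Size([1, 3, 64, 64])
```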

Cheaper Lenses Provide More Accurate Object Detection

NNs can produce better results for certain tasks, such as object detection and depth map estimation, when processing images captured by “imperfect” lenses. As an example, the presence of chromatic aberrations caused by imperfect lenses adds additional information to the image, which can assist the NN in identifying objects and estimating depth.

Co-designing Lens Optics with AI-based Reconstruction Algorithms

While smartphone-based ultra-miniaturized cameras have eclipsed Digital Single Lens Reflex (DSLR) cameras in the market, they face the limits of optics. Researchers at Princeton have explored the use of metalenses, which are thin, flat surfaces that can replace bulky curved lenses in compact imaging applications. They co-designed a metalens with an AI algorithm that corrects aberrations, achieving high-quality imaging with a wide field of view.

The key aspect of this co-design is the combination of a differentiable meta-optical image formation model and a novel deconvolution algorithm leveraging AI. These models are integrated into an end-to-end model, allowing joint optimization across the entire imaging pipeline to improve image quality.

Synopsys Solutions for Designing Imaging Systems

Synopsys offers tools to address the requirements of the entire computational imaging system pipeline. Its optical design and analysis tools include CODE V, LightTools, and RSoft Photonic Device Tools for modeling and optimizing optical systems. The company’s Technology Computer-Aided Design (TCAD) offers a comprehensive suite of products for process and device simulation as well as for managing simulation tasks and results.

Synopsys also offers a wide range of IP components and development tools to design and evaluate the ISP and computer vision (CV) blocks. These IP components include the MIPI interface, the ARC® VPX family of vector DSPs, and the ARC NPX family of Neural Processing Units (NPUs).

Synopsys ARC MetaWare MX Toolkit provides a common software development tool chain and includes MetaWare Neural Network SDK and MetaWare Virtual Platforms SDK. The Neural Network SDK automatically compiles and optimizes NN models while the Virtual Platforms SDK can be used for virtual prototyping.

Synopsys Platform Architect™ provides architects and system designers with SystemC™ TLM-based tools and efficient methods for early analysis and optimization of multicore SoC architectures.

Summary

Computational imaging relies more than ever on high computing power tied to miniaturized optics and sensors rather than standalone and bulky but aberration-free optics. Promising system co-design and co-optimization approaches can help unleash the full potential of computational imaging systems by decreasing hardware complexity while keeping computing requirements at a reasonable level.

Synopsys offers design tools for the entire computational imaging pipeline, spanning domains from automotive assisted-driving systems to computer-vision-based robots for smart manufacturing and high-quality imaging for mixed reality.

To access the whitepaper, click here. For more information, contact Synopsys.

Also Read:

Is Your RTL and Netlist Ready for DFT?

Synopsys Expands Agreement with Samsung Foundry to Increase IP Footprint

Requirements for Multi-Die System Success


A preview of Weebit Nano at DAC – with commentary from ChatGPT
by Daniel Nenni on 07-03-2023 at 6:00 am

Weebit VP of Technology Development Amir Regev

Weebit Nano, a provider of advanced non-volatile memory (NVM) IP, will be exhibiting at the Design Automation Conference (DAC) this month. As part of this briefing I shared some of the basic details with ChatGPT to see how it would phrase things. Here is some of what it suggested: "You won't want to miss out on the epic experience awaiting you at our booth. It's going to be a wild ride filled with mind-blowing tech and captivating demonstrations that will leave you in awe!"

ChatGPT is still learning, but one thing it got right is that Weebit is showing a couple of innovative NVM demonstrations. The first demonstrates some of the benefits of Weebit ReRAM, a silicon-proven NVM technology with ultra-low power consumption, high retention even at high temperatures, fast access time, high tolerance to radiation and electromagnetic interference (EMI), and numerous other advantages.

The demonstration uses Weebit’s first IP product, Weebit ReRAM IP in SkyWater Technology’s S130 process. For the demo, the ReRAM module is integrated into a subsystem with a RISC-V microcontroller (MCU), system interfaces, SRAM, and peripherals. The demo highlights the lower power consumption of Weebit ReRAM compared to typical flash memory. It also highlights the technology’s faster Write speed, which is largely due to its Direct Program/Erase capability and byte addressability. Unlike flash, which must access entire sectors of data every time it erases or writes, ReRAM only programs the bits that need to be programmed.
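The byte-addressability argument is easy to quantify with a back-of-the-envelope sketch. The sector size and read-modify-write model below are assumptions for illustration, not measured Weebit or flash figures:

```python
# Bytes physically written when an application updates 4 bytes.
SECTOR = 4096          # assumed flash erase-sector size in bytes
update = 4             # bytes the application actually changes

flash_bytes = SECTOR   # flash: erase the sector, rewrite it in full
reram_bytes = update   # byte-addressable ReRAM: program only changed bytes

print(f"flash: {flash_bytes} B written, ReRAM: {reram_bytes} B written "
      f"({flash_bytes // reram_bytes}x difference)")
```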

Weebit’s second demo is a bio-inspired neuromorphic computing demo in which Weebit ReRAM runs inference tasks using CEA-Leti’s Spiking Neural Network (SNN) algorithms. ChatGPT seemed particularly enthusiastic about this demo, saying, “Step into a realm where science fiction becomes reality as [this] mind-bending technology showcases the power of mimicking the human brain. It’s like a sci-fi movie come to life!”

This may sound over the top, but it is pretty exciting stuff. Commonly today, neural networks are simulated using traditional digital processors and accelerators, but this is inefficient and power hungry. A more efficient approach is neuromorphic computing, which makes it possible to emulate the brain’s natural operation, consuming orders of magnitude less power than today’s simulations. Because a ReRAM cell has physical and functional similarities to a biological brain synapse, it’s a natural candidate for implementing neuromorphic computing.
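For readers new to spiking networks, the basic building block is easy to sketch. Below is a minimal leaky integrate-and-fire neuron in Python, purely illustrative and unrelated to CEA-Leti's actual SNN algorithms: the membrane potential integrates input current, leaks over time, and emits a spike (then resets) when it crosses a threshold.

```python
import numpy as np

def lif_neuron(currents, leak=0.9, threshold=1.0):
    """Minimal leaky integrate-and-fire neuron (illustrative only)."""
    v, spikes = 0.0, []
    for i in currents:
        v = leak * v + i            # integrate input with leakage
        if v >= threshold:
            spikes.append(1)
            v = 0.0                 # reset after spiking
        else:
            spikes.append(0)
    return spikes

print(lif_neuron(np.random.rand(20) * 0.4))
```

In a ReRAM-based implementation, the synaptic weights scaling those input currents can live in the resistive cells themselves, which is what makes the technology a natural fit for such networks.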

Visit the Weebit Nano booth #2224 at DAC to check out our demos and meet some of our execs and technologists. To arrange an in-person meeting email info@weebit-nano.com.

About Weebit Nano (ChatGPT)

Weebit Nano is an Israeli semiconductor company that specializes in the development and commercialization of next-generation memory technology. The company was founded in 2014 and is headquartered in Hod Hasharon, Israel.

Weebit Nano focuses on the development of ReRAM (Resistive Random Access Memory) technology, which is a type of non-volatile memory that has the potential to replace existing memory technologies like Flash and DRAM. ReRAM offers advantages such as faster read and write speeds, lower power consumption, and higher density compared to traditional memory technologies.

Weebit Nano’s ReRAM technology is based on silicon oxide materials and utilizes a cross-point array architecture. This allows for the stacking of multiple layers of memory cells, enabling high-density memory solutions. The company’s technology has potential applications in various fields, including consumer electronics, artificial intelligence, Internet of Things (IoT), and data storage.

Also Read:

Weebit ReRAM: NVM that’s better for the planet

How an Embedded Non-Volatile Memory Can Be a Differentiator

CEO Interview: Coby Hanoch of Weebit Nano


Podcast EP169: How Are the Standards for the Terabit Era Defined?
by Daniel Nenni on 06-30-2023 at 10:00 am

Dan is joined by Priyank Shukla of Synopsys and Kent Lusted of Intel.

Priyank Shukla is a Sr. Staff Product Manager for the Synopsys High-Speed SerDes IP portfolio. He has broad experience in analog and mixed-signal design, with a strong focus on high-performance compute, mobile, and automotive SoCs.

Kent Lusted is a Principal Engineer focused on Ethernet PHY Standards within Intel’s Network and Edge Group. Since 2012, he has been an active contributor and member of the IEEE 802.3 standards development leadership team. He continues to work closely with Intel Ethernet PHY debug teams to improve the interoperability of the many generations of SERDES products (10 Gbps, 25 Gbps, 50 Gbps and beyond). He is currently the electrical track leader of the IEEE P802.3df 400 Gb/s and 800 Gb/s Ethernet Task Force as well as the electrical track leader of the IEEE P802.3dj 200 Gb/s, 400 Gb/s, 800 Gb/s, and 1.6 Tb/s Ethernet Task Force.

Dan explores the process of developing high-performance Internet standards and supporting those standards with compliant IP. The relationships between the IEEE and other related communication standards are discussed. The complex, interdependent process of developing and validating new products against emerging standards is explored.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


TSMC Redefines Foundry to Enable Next-Generation Products
by Mike Gianfagna on 06-30-2023 at 6:00 am

For many years, monolithic chips defined semiconductor innovation. New microprocessors defined new markets, as did new graphics processors, and cell-phone chips. Getting to the next node was the goal, and when the foundry shipped a working part victory was declared. As we know, this is changing. Semiconductor innovation is now driven by a collection of chips tightly integrated with new packaging methods, all running highly complex software. The implications of these changes are substantial. Deep technical skills, investment in infrastructure and ecosystem collaboration are all required. But how does all of this come together to facilitate the invention of the Next Big Thing? Let’s look at how TSMC redefines foundry to enable next-generation products.

What Is a Foundry?

The traditional scope of a foundry is wafer fabrication, testing, packaging, and delivery of a working monolithic chip in volume. Enabling technologies include a factory to implement a process node, a PDK, validated IP and an EDA design flow. Armed with these capabilities, new products are enabled with new monolithic chips. All this worked quite well for many decades. But now, the complexity of new product architectures, amplified by a software stack that is typically enabling AI capabilities, demands far more than a single, monolithic chip. There are many reasons for this shift from monolithic chip solutions and the result is a significant rise in multi-die solutions.

Much has been written about this shift and the innovation paradigm it enables. In the interest of time, I won't expand on that here. There are many sources of information that explain the reasons for this shift. Here is a good summary of what's happening.

The bottom line of all this is that the definition of product innovation has changed substantially. For many decades, the foundry delivered on the technology needed to drive innovation – a new chip in a new process. The requirements today are far more complex and include multiple chips (or chiplets) delivering various parts of the new system’s functionality. These devices are often accelerating AI algorithms. Some are sensing the environment, or performing mixed signal processing, or communicating with the cloud. And others are delivering massive, local storage arrays.

All this capability must be delivered in a dense package to accommodate the required form factor, power dissipation, performance, and latency of new, world-changing products. The question to pose here is what has become of the foundry? Delivering the enabling technology for all this innovation requires a lot more than in the past. Does the foundry now become part of a more complex value chain, or is there a more predictable way?  Some organizations are stepping up. Let’s examine how TSMC redefines foundry to enable next-generation products.

The Enabling Technologies for Next Generation Products

There are new materials and new manufacturing methods required to deliver the dense integration required to enable next-generation products. TSMC has developed a full array of these technologies, delivered in an integrated package called TSMC 3DFabric™.

Chip stacking is accomplished with a front-end process called TSMC-SoIC™ (System on Integrated Chips). Both Chip on Wafer (CoW) and Wafer on Wafer (WoW) capabilities are available. Moving to back-end advanced packaging, there are two technologies available. InFO (Integrated Fan-Out) is a chip-first approach that provides redistribution layer (RDL) connectivity, optionally with local silicon interconnect. CoWoS® (Chip on Wafer on Substrate) is a chip-last approach that provides a silicon interposer or an RDL interposer with optional local silicon interconnect.

All of this capability is delivered in one unified package. TSMC is clearly expanding the meaning of foundry. In collaboration with IP, substrate and memory suppliers, TSMC also provides an integrated turnkey service for end-to-end technical and logistical support for advanced packaging. The ecosystem tie-in is a critical ingredient for success. All suppliers must work together effectively to bring the Next Big Thing to life. TSMC has a history of building strong ecosystems to accomplish this.

Earlier, I mentioned investment in infrastructure. TSMC is out in front again with an intelligent packaging fab. This capability makes extensive use of AI, robotics and big data analytics. Packaging used to be an afterthought in the foundry process. It is now a centerpiece of innovation, further expanding the meaning of foundry.

Toward the Complete Solution

All the capabilities discussed so far bring us quite close to a fully integrated innovation model, one that truly extends what a foundry can deliver. But there is one more piece required to complete the picture. Reliable, well-integrated technology is a critical element of successful innovation, but the last mile for this process is the design flow. You need to be able to define which technologies you will use and how they will be assembled, then build a model of your semiconductor system and verify it will work before committing to fabrication.

Accomplishing this requires the use of tools from several suppliers, along with IP and materials models from several more. It all needs to work in a unified, predictable way. For the case of advanced multi-chip designs, there are many more items to address. The choice of active and passive dies, how they are connected, both horizontally (2.5D) and vertically (3D) and how they will all interface to each other are just a few of the new items to consider.

I was quite impressed to see TSMC’s announcement at its recent OIP Ecosystem Forum to address this last mile problem. If you have a few minutes, check out Jim Chang’s presentation. It is eye-opening.

The stated mission for this work is:

  • Find a way to modularize design and EDA tools to make the 3DIC design flow simpler and more efficient
  • Ensure standardized EDA tools and design flows are compliant with TSMC’s 3DFabric technology

3Dblox Standard

With this backdrop, TSMC introduced the 3Dblox™ Standard. The standard implements a language that provides a consistent way to specify all requirements for a 2.5D/3D design. It is an ambitious project that unifies all aspects of 2.5D/3D design specification, as shown in the figure.

Thanks to TSMC’s extensive OIP ecosystem, all the key EDA providers support the 3Dblox language, making it possible to perform product design in a unified way, independent of a specific tool flow.

This capability ties it all together for the product designer. The Next Big Thing is now within reach, since TSMC redefines foundry to enable next-generation products.

Also Read:

TSMC Doubles Down on Semiconductor Packaging!

TSMC Clarified CAPEX and Revenue for 2023!

TSMC 2023 North America Technology Symposium Overview Part 3



Is Your RTL and Netlist Ready for DFT?
by Daniel Payne on 06-29-2023 at 10:00 am

I recall an early custom IC designed at Wang Labs in the 1980s without any DFT logic like scan chains. Then Prabhu Goel confronted me about the merits of DFT, and my journey into DFT began in earnest. I learned about ATPG at Silicon Compilers and Viewlogic, then observability at CrossCheck, where I met Jennifer Scher, who is now at Synopsys. We talked last week by video along with Synopsys Product Manager Ramsay Allen, who previously worked at UK IP vendor Moortec, another SemiWiki client acquired by Synopsys. Test expert and R&D Director Chandan Kumar also joined the call. Over the years Synopsys has both acquired and developed quite a broad range of EDA and IP focused on testability, so I'd say yes, they are ready for DFT.

Our discussion centered on the TestMAX Advisor tool and how it helps with testability issues that can be addressed early, at the RTL stage, such as:

  • DFT violation checks – ensures RTL is scan ready
  • ATPG coverage estimation – does RTL design achieve fault coverage goals
  • Test robustness – reliability in presence of glitches, Xs, edge inconsistencies
  • Test Point selection – finds hard-to-test areas
  • Connectivity validation – DFT connections at SoC assembly

The focus of this interview, however, was the latest test robustness and reliability capabilities that Advisor provides in the form of glitch monitoring and X capture.

Glitches

A digital circuit that produces glitches on certain nets can suffer intermittent errors, something to be avoided if an IC is to operate robustly and reliably. Three classes of glitches can be identified automatically by TestMAX Advisor:

  • Clock Merge
  • Reset Glitch
  • DFT Logic Glitch

Here’s an example logic cone for each type of glitch:

In functional mode the designer needs to ensure that a single clock passes through the clock gating cells by controlling the enable pins, and in test mode only one clock signal can propagate. The example above shows how two clock signals combine to create a clock merge glitch, which needs to be found and fixed before tapeout.
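A toy sketch makes the hazard concrete. The waveforms below are invented for illustration: when two free-running clocks both propagate through combinational logic to one net, the merged net carries runt pulses far narrower than either clock's half-period, which a downstream flop can treat as spurious clock edges.

```python
# Toy illustration of a clock-merge glitch (invented waveforms): OR-ing two
# free-running clocks produces runt pulses far narrower than either clock's
# half-period.
def square(period, n):
    return [1 if (t % period) < period // 2 else 0 for t in range(n)]

def min_pulse(wave):
    """Width of the narrowest constant run (high or low) in a waveform."""
    widths, run = [], 1
    for prev, cur in zip(wave, wave[1:]):
        if prev == cur:
            run += 1
        else:
            widths.append(run)
            run = 1
    widths.append(run)
    return min(widths)

clk_a, clk_b = square(10, 70), square(14, 70)
merged = [a | b for a, b in zip(clk_a, clk_b)]   # both clocks reach one net

print("".join(map(str, merged)))
print(min_pulse(clk_a), min_pulse(clk_b), min_pulse(merged))  # 5 7 1
```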

Every violation detected by Synopsys TestMAX Advisor includes the RTL source code line number to consider changing, so designers know what is causing the issue. Tool users can even define any logic path between two points in their design to search for glitches. Glitches are painful to find, especially if they aren’t found until late in the logic design cycles or even during silicon debug. Glitches can be triggered on rising or falling edges of internal signals, so it’s paramount to discover these early in the design process when changes are much easier to make. The automated checking understands the unateness of each logic path.

Another example of glitch detection involves transitions of a signal called test_mode.

Glitches due to Mode Transition

The actual error report for this glitch was:

Clock(s) and 1 Clock Enable(s) logic reconverges at or near 'test.clk_en'. (Count of reconvergence start points = '1', reconvergence end = 'test.clk_en') [Affects '1' Flip-flop(s)]

The final type of glitch detection was for buses driven by tri-state buffers, where clock edge inconsistencies and bus contention were caught.

Summary

RTL design and debug is a labor-intensive process, so a proper automation tool like Synopsys TestMAX Advisor is an insurance policy against re-spins caused by testability issues like glitches and Xs in an IC design. Early warning on DFT robustness is a smart investment that pays off in the long run by improving the chances of first-silicon success. Design engineers run Synopsys TestMAX Advisor on every level of their hierarchical design, including the final, full-chip level.

Designers save time by using an automated checking tool, instead of relying upon manual detection methods.

For more information on Synopsys TestMAX products, please visit the website.


Unique IO & ESD Solutions @ DAC 2023!
by Daniel Nenni on 06-29-2023 at 6:00 am

The semiconductor industry continues to drive innovation and constantly seeks methods to lower costs and improve performance. The advantages of custom I/O libraries versus free libraries can be seen as cost savings or, more importantly, new markets, new customers, and new business opportunities.

At DAC 2023, Certus Semiconductor will share the advantages of high-performance I/O libraries and the opportunity to collaborate on new ideas, incorporating unique features that will open new markets and new opportunities for your company.

Certus Semiconductor is a Unique IO & ESD Solution Company

Certus has assembled several of the world’s foremost experts in IO and ESD design to offer clients the ability to affordably tailor their IO libraries into the optimal fit for their products.

Certus' expertise crosses all industries. They have tailored IO and ESD libraries for low power, wide voltage ranges, and RF low-capacitance ESD targeting the IoT, wireless, and consumer electronics markets. There are IO libraries customized for flexible interfaces, multi-function use, and high performance that target the FPGA and high-performance computing markets. Certus' expertise also includes radiation-hardened, high-reliability, and high-temperature IO libraries for the aerospace, automotive, and industrial markets. Certus leverages this expertise to work directly with you: that means meeting with your architects, marketing team, circuit and layout designers, and reliability engineers to ensure that the Certus IO and ESD solutions provide the most efficient and competitive solutions for your products and target markets.

Stephen Fairbanks, CEO of Certus Semiconductor, has stated, "Our core advantage is our ability to truly work with our customers, understand their product goals and customer applications, and then help them create IO and ESD solutions that give their products true market differentiation and a competitive advantage. All our repeat business has been born out of these types of collaborations."

Certus has silicon-proven libraries in a variety of foundry processes. These can be licensed off-the-shelf or can be customized for your application, and are available as full libraries or on a cell-by-cell basis.

In addition to these processes, Certus has consulted on designs in many others and can be contracted for development in any of them. Its foundry experience includes all major foundries such as Samsung, Intel, TowerJazz, DongBu HiTek, UMC, pSemi, Silanna, Lfoundry, Silterra, TSI, XFab, Vanguard, and many others.

The Design Automation Conference (DAC) is the premier event devoted to the design and design automation of electronic chips and systems. DAC focuses on the latest methodologies and technology advancements in electronic design. The 60th DAC will bring together researchers, designers, practitioners, tool developers, students, and vendors.

Certus is one of the more than 130 companies supporting this industry-leading event, and they invite you to meet with the Certus I/O and ESD experts on the exhibit floor. You can contact Certus HERE to schedule a meeting at booth #1332. I hope to see you there!

Also Read:

The Opportunity Costs of using foundry I/O vs. high-performance custom I/O Libraries

CEO Interview: Stephen Fairbanks of Certus Semiconductor

Certus Semiconductor releases ESD library in GlobalFoundries 12nm Finfet process



Semiconductor CapEx down in 2023
by Bill Jewell on 06-28-2023 at 2:00 pm

Semiconductor capital expenditures (CapEx) increased 35% in 2021 and 15% in 2022, according to IC Insights. Our projection at Semiconductor Intelligence is a 14% decline in CapEx in 2023, based primarily on company statements. The biggest cuts will be made by the memory companies, with a 19% drop. CapEx will drop 50% at SK Hynix and 42% at Micron Technology. Samsung, which only increased CapEx by 5% in 2022, will hold at about the same level in 2023. Foundries will decrease CapEx by 11% in 2023, led by TSMC with a 12% cut. Among the major integrated device manufacturers (IDMs), Intel plans a 19% cut. Texas Instruments, STMicroelectronics and Infineon Technologies will buck the trend by increasing CapEx in 2023.

Companies which are significantly cutting CapEx are generally tied to the PC and smartphone markets, which are in a slump in 2023. IDC’s June forecast had PC shipments dropping 14% in 2023 and smartphones dropping 3.2%. The PC decline largely affects Intel and the memory companies. The weakness in smartphones primarily impacts TSMC (with Apple and Qualcomm as two of its largest customers) as well as the memory companies. The IDMs which are increasing CapEx in 2023 (TI, ST, and Infineon) are more tied to the automotive and industrial markets, which are still healthy. The three largest spenders (Samsung, TSMC and Intel) will account for about 60% of total semiconductor CapEx in 2023.

The high growth years for semiconductor CapEx tend to be the peak growth years for the semiconductor market for each cycle. The chart below shows the annual change in semiconductor CapEx (green bars on the left scale) and annual change in the semiconductor market (blue line on the right scale). Since 1984, each significant peak in semiconductor market growth (20% or greater) has matched a significant peak in CapEx growth. In almost every case, the significant slowing or decline in the semiconductor market in the year following the peak has resulted in a decline in CapEx in one or two years after the peak. The exception is the 1988 peak, where CapEx did not decline the following year but was flat two years after the peak.

This pattern has contributed to the volatility of the semiconductor market. In a boom year, companies strongly increase CapEx to increase production. When the boom collapses, companies cut CapEx. This pattern often leads to overcapacity following boom years. This overcapacity can lead to price declines and further exacerbate the downturn in the market. A more logical approach would be a steady increase in CapEx each year based on long-term capacity needs. However, this approach can be difficult to sell to stockholders. Strong CapEx growth in a boom year will generally be supported by stockholders. But continued CapEx growth in weak years will not.

Since 1980, semiconductor CapEx as a percentage of the semiconductor market has averaged 23%. However, the percentage has varied from 12% to 34% on an annual basis and from 18% to 29% on a five-year-average basis. The 5-year average shows a cyclical trend. The first 5-year average peak was in 1985 at 28%. The semiconductor market dropped 17% in 1985, at that time the largest decline ever. The 5-year average ratio then declined for nine years. The average eventually returned to a peak of 29% in 2000. In 2001 the market experienced its largest decline ever at 32%. The 5-year average then declined for twelve years to a low of 18% in 2012. The average has been increasing since, reaching 27% in 2022. Based on our 2023 forecasts at Semiconductor Intelligence, the average will increase to 29% in 2023. 2023 will be another major downturn year for the semiconductor market. Our Semiconductor Intelligence forecast is a 15% decline. Other forecasts are as low as a 20% decline. Will this be the beginning of another drop in CapEx relative to the market? History shows this will be the likely outcome. Major semiconductor downturns tend to scare companies into slower CapEx.
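For readers who want to reproduce the metric, the CapEx-to-market ratio and its five-year average are simple to compute. The figures below are made up for illustration; they are not the Semiconductor Intelligence data behind this article:

```python
# CapEx-to-market ratio and its 5-year average (illustrative numbers only;
# not the article's Semiconductor Intelligence data).
capex  = [110, 150, 190, 180, 160]   # hypothetical CapEx, $B, last five years
market = [440, 555, 600, 575, 520]   # hypothetical semiconductor market, $B

annual = [c / m for c, m in zip(capex, market)]
five_year = sum(capex) / sum(market)     # aggregate ratio over the window

print([f"{r:.0%}" for r in annual])      # annual ratios
print(f"5-year average: {five_year:.0%}")
```

(The five-year average here is the ratio of summed CapEx to summed market; averaging the annual ratios gives a similar picture.)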

The factors behind CapEx decisions are complex. Since a wafer fab currently takes two to three years to build, companies must project their capacity needs several years into the future. Foundries account for about 30% of total CapEx. The foundries must plan their fabs based on estimates of the capacity needs of their customers in several years. The cost of a major new fab is $10 billion and up, making it a risky proposition. However, based on past trends, the industry will likely see lower CapEx relative to the semiconductor market for the next several years.

Also Read:

Steep Decline in 1Q 2023

Electronics Production in Decline

Automotive Lone Bright Spot