

Thanks for the Memories
by Bill Jewell on 12-12-2024 at 8:00 am

Semiconductor Market Average Change 2024

The December 2024 WSTS forecast called for strong 2024 semiconductor market growth of 19%. However, the strength is limited to a few product lines. Memory is projected to grow 81% in 2024. Logic is expected to grow 16.9%. The micro product line should show only 3.9% growth, while discretes, optoelectronics, sensors and analog are all projected to decline. If the memory product line is excluded, the WSTS forecast for the rest of the semiconductor market in 2024 is only 5.8%.

The strength of memory in 2024 is also reflected in semiconductor company revenues. Revenues for the first three quarters of 2024 compared with a year earlier show gains of 109% for both Samsung Memory and SK Hynix. Micron Technology is up 78% and Kioxia is up 54%. The strongest growth among major semiconductor companies is from Nvidia, up 135%, driven by its AI processors. Nvidia’s revenues also include the memory it purchases, adding further to its totals.

The robust memory growth is largely driven by memories for AI applications. Memory prices have increased in 2024, especially for DRAM; TrendForce estimates average DRAM prices will be up 53% for the year. Thus one application, AI, accounts for most of the growth of the semiconductor market in 2024. Revenues for the first three quarters of 2024 compared with the same quarters of 2023 show a 97% gain for the memory companies and a 135% gain for Nvidia. The total semiconductor market was up 19.9% over this period. Excluding the memory companies, the remainder of the market was up only 6.8%. If both the memory companies and Nvidia are excluded, the rest of the semiconductor market declined 10.5%.
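
To make the exclusion arithmetic concrete, here is a minimal Python sketch. The revenue figures are purely illustrative placeholders (not WSTS or company data), chosen only to show how stripping out a fast-growing segment changes the computed market growth rate.

```python
# Hypothetical example: year-over-year market growth with and without a
# given segment. Revenue figures below are illustrative only (in $B).

def growth(current, prior):
    """Year-over-year percentage change."""
    return 100.0 * (current - prior) / prior

prior   = {"memory": 90.0,  "rest_of_market": 440.0}
current = {"memory": 163.0, "rest_of_market": 465.0}

total_growth     = growth(sum(current.values()), sum(prior.values()))
ex_memory_growth = growth(current["rest_of_market"], prior["rest_of_market"])

print(f"Total market growth:     {total_growth:5.1f}%")   # ~18.5%
print(f"Growth excluding memory: {ex_memory_growth:5.1f}%")  # ~ 5.7%
```

With these placeholder numbers the memory segment alone grows about 81%, yet pulling it out drops the headline market growth from roughly 19% to under 6%, which is the same effect described above.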

Several major semiconductor companies experienced revenue declines in 1Q 2024 through 3Q 2024 versus a year earlier. STMicroelectronics and Analog Devices were each down 24%. Texas Instruments, Infineon Technologies, NXP Semiconductors, and Renesas Electronics also declined. These companies depend largely on the automotive and industrial sectors, which have been weak in 2024. Companies heavily dependent on the smartphone market showed revenue increases, with Qualcomm’s IC business up 10% and MediaTek up 25%. Among computer-dependent companies, Intel was flat and AMD was up 10%. Broadcom counts on AI for a significant portion of its revenues; its calendar 3Q results have not yet been released, but it should be up about 47%.

Thus, except for AI and memory, the semiconductor market has been weak in 2024. Our Semiconductor Intelligence forecast of 6% growth in the semiconductor market in 2025 assumes some strengthening of core markets of PCs, smartphones, automotive and industrial. The rapid growth rates of memory and AI in 2024 should be significantly lower in 2025.

Memory has long exacerbated the cycles of the semiconductor industry. The chart below shows the annual change in the semiconductor market based on WSTS data through 2023 and the WSTS forecast for 2024, comparing the total market with memory and with the market excluding memory. While the memory market has swung between 102% growth and a 49% decline, the market excluding memory has been somewhat more stable, ranging from plus 42% to minus 26%. In the last ten years, the memory market change has ranged from plus 81% in 2024 to minus 33% in 2023, while the market excluding memory has ranged from plus 25% to minus 2%.

Over the last forty years, whenever the memory market has grown more than 50%, it has seen a significant deceleration or a decline in the following year. This occurred six times prior to 2024. In four of those cases the memory market declined the following year; in the other two it saw positive but significantly slower growth the following year and declined two years after the peak. These trends are driven by basic supply and demand for a commodity product: memory prices and production rise when supply is below demand, and when supply exceeds demand, production and prices fall. Thus we should expect a significant downturn in the memory market in either 2025 or 2026.

Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry – manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Bill Jewell
Semiconductor Intelligence, LLC
billjewell@sc-iq.com

Also Read:

Semiconductors Slowing in 2025

AI Semiconductor Market

Asia Driving Electronics Growth



CEO Interview: Mikko Utriainen of Chipmetrics
by Daniel Nenni on 12-12-2024 at 6:00 am

Chipmetrics Founders
Mikko Utriainen – Founder – Chief Executive Officer, Feng Gao – Founder – Chief Technology Officer, Pasi Hyttinen – Founder – Chief Data Officer

Tell us about your company?
Chipmetrics is a Finnish company specializing in metrology solutions for high aspect ratio 3D chips, especially 3D NAND and 3D DRAM with an eye on GAAFET and CFET structures. We are a young but mature company founded in 2019 with over 40 customers worldwide, and we’re looking to further scale up our international business efforts in the coming years.

Thus far, in addition to our home market in northern Europe, we’ve found success in Japan, and we’re also represented in other key markets such as South Korea, Taiwan and the USA.

What problems are you solving?
We provide test chips that enable companies to develop next-generation 3D semiconductors, speeding up associated R&D and process control workflows with quick access to high-quality data.

With our various test chips, engineers can check the quality of their film deposits with an ellipsometer or other conventional surface analysis tools near-instantaneously, rather than sending them away for analysis, which can take weeks. This gives quick access to high-quality data, which in turn cuts down on the development time of 3D chips.

What application areas are your strongest?
We have a unique position in metrology solutions for film deposition, be it ALD or CVD. Considering that ALD was originally invented in Finland, it feels extra special to be able to offer Finnish solutions as ALD technology is becoming more and more relevant with 3D semiconductors.

What keeps your customers up at night?
Challenges with conformal deposition, gap-fill deposition and selective deposition, the latter especially in DRAM. High-quality films and quality control are also at the forefront of our clients’ minds. As ever, they also wish to speed up their time to market and lower the process temperatures.

What does the competitive landscape look like and how do you differentiate?
We believe we have a unique niche with our test chips and metrology solutions. That said, we do tap into the ALD landscape with batch systems, where our test structures are needed.

What we do with our PillarHall family of test chips is speed up process control screening without diminishing accuracy. It is, to our knowledge, the only practical way to measure film quality, properties, microstructure, and the elemental composition of the sidewall in a high aspect ratio cavity at the wafer level.

What new features/technology are you working on?
This summer, we launched the latest iteration of the PillarHall, LHAR5. It allows for metrology of high aspect ratio chips with gap heights as low as 100 nanometers. At the same time, we launched the ASD-1 chip, for Area Selective Deposition, in our quest to offer a complete lineup of 3D metrology chips.

We also value collaboration with our clients, meaning we’re busy ideating and iterating with them and coming up with custom concepts and test structure wafers for them.

How do customers normally engage with your company?
We have well-defined products for sale. Interested parties can easily approach us with a short RFQ email through our website, and we reply fast. Our clients typically place recurring orders, and we believe they come back in part due to our high service level. Our product development is also based on feedback and needs from repeat customers. We are also active in digital marketing, trade shows and scientific conferences; those are important forums for startups in the semiconductor field, and the AVS and SEMI events have been especially useful for us. Lastly, the core Chipmetrics team consists of industry veterans with strong personal networks that we tap into, and on top of this we have local representatives in key markets.

Also Read:

CEO Interview: GP Singh from Ambient Scientific

CEO Interview: Ollie Jones of Sondrel

CEO Interview: Dr. Yunji Corcoran of SMC Diode Solutions



How I learned Formal Verification
by Daniel Nenni on 12-11-2024 at 10:00 am

Bing Xue

Bing Xue is a dedicated Formal Verification Engineer at Axiomise, with a strong academic and professional foundation in hardware verification. He completed his PhD at the University of Southampton, where he conducted cutting-edge research on Formal Verification, RISC-V, and the impact of Single Event Upsets. Bing is proficient in RISC-V, SystemVerilog, and Formal Verification tools such as Cadence JasperGold, and is skilled in Python and Linux, bringing a versatile and analytical approach to his work.

How I learned FV

I had no idea what Formal Verification (FV) was when I started my PhD. I spent six months exploring related papers, books, websites and open-source projects, as well as watching videos, to learn about FV and SystemVerilog Assertions (SVA). However, I faced several challenges during that time.

Some resources, despite being labelled as FV-focused, primarily discussed simulations. Others were too abstract, providing no practical details while some were too theoretical, presenting modelling and proving algorithms without real-world applications. After six months of study, I had a basic overview of FV but still didn’t know how to apply it to my project.

It took me another three months of hands-on practice with simple RISC-V designs to make progress. During that time, I made many mistakes and had to invest significant effort to understand and fix them. Searching for quality FV learning resources was time-consuming, and extracting accurate information was even more challenging. I always thought that if I had access to well-structured FV courses, including theory, demonstrations, and practical labs with real-world designs, I could have completed my project faster and with better results.

Why Axiomise FV courses

I finished the Axiomise FV courses last month. I believe they are the best courses for freshers and verification engineers alike. I wish I had discovered them earlier, as they would have made a significant difference in my research journey.

FV is more than model checking

Most of the resources I found provided only a general overview, covering the definition and history of FV. These resources mainly focused on model checking, but FV is not just model checking!

The Axiomise FV courses cover not only model checking but also theorem proving and equivalence checking. During my project, I mainly used model checking to evaluate fault tolerance and hardware reliability. After completing the course, I was inspired to use equivalence checking to further improve my work.

Theory

I learned FV theory from books and papers. This theory covers transforming designs and specifications into mathematical models and formulas and proving formal properties with various algorithms (such as BDD- and SAT-based ones). However, are these theories truly essential for all verification engineers?

Given that formal tools can handle much of the modelling and proving, it is clear that verification engineers should focus more on why, when and how to use FV. This is exactly what the Axiomise FV courses emphasize. These courses help verification engineers save valuable time by focusing on the most critical and applicable concepts, rather than overwhelming them with unnecessary details.

Formal Techniques

A ‘smart’ formal testbench, composed of high-quality formal properties, contributes significantly to better performance by reducing run time and overcoming state explosion. But how can we develop such high-quality formal properties?

The Axiomise FV courses answer this question clearly: by applying formal (problem reduction) techniques to develop ‘smart’ formal testbenches. These techniques, such as abstraction, invariants, assume-guarantee, decomposition, case splitting, scenario splitting, black-boxing, cut-pointing and mutation, are explained in detail within the course, accompanied by code and examples for a deeper understanding.

What sets the course apart is the inclusion of step-by-step demos and labs that help learners master these problem reduction techniques. None of the other resources I found explain formal techniques in such an easy-to-understand manner. In my previous project, I didn’t apply all these techniques, which led to some inconclusive results when verifying multipliers and dividers. Now I know how to apply these methods effectively to improve my project.

Demos and Labs

When learning to develop formal testbenches, I often wished for more high-quality demos and labs. Unfortunately, the resources I found typically offered either overly simplistic examples, like a basic request-and-acknowledge handshake protocol, or non-generalized designs, such as a specific meaningless hardware module.

I really enjoy the demos and labs in the FV courses. I could see the careful selection of designs used for demonstrations. For instance, the courses present a FIFO, a fundamental structure in electronics and computing, as a demonstration. Two brilliant abstraction-based methods are presented to exhaustively verify a FIFO: Two-Transaction and Smart Tracker. Another valuable example is using invariants for scalable proof and bug hunting.
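
For readers new to the idea, here is a rough Python sketch of what “exhaustive” checking means, on a deliberately toy scale: it brute-forces every short push/pop sequence of a tiny circular-buffer FIFO against a reference queue. This is only an illustration of the concept; real formal tools prove such properties symbolically for all depths and data values, and this is not the Axiomise Two-Transaction or Smart Tracker method itself.

```python
# Toy illustration of exhaustive FIFO checking: brute-force every short
# push/pop sequence of a tiny circular-buffer FIFO against a reference
# queue, asserting that data always comes out in the order it went in.

from collections import deque
from itertools import product

DEPTH = 2  # deliberately tiny FIFO with 1-bit data

class CircFifo:
    def __init__(self):
        self.mem = [0] * DEPTH
        self.wr = self.rd = self.count = 0
    def push(self, d):
        self.mem[self.wr] = d
        self.wr = (self.wr + 1) % DEPTH
        self.count += 1
    def pop(self):
        d = self.mem[self.rd]
        self.rd = (self.rd + 1) % DEPTH
        self.count -= 1
        return d

def check(max_ops=6):
    # Enumerate every push/pop sequence up to max_ops operations.
    for ops in product(("push0", "push1", "pop"), repeat=max_ops):
        dut, ref = CircFifo(), deque()
        for op in ops:
            if op == "pop":
                if not ref:
                    break                  # skip pop on an empty FIFO
                assert dut.pop() == ref.popleft(), "FIFO ordering violated"
            else:
                if dut.count == DEPTH:
                    break                  # skip push on a full FIFO
                d = int(op[-1])            # "push0" -> 0, "push1" -> 1
                dut.push(d)
                ref.append(d)
    print("all enumerated sequences preserve FIFO ordering")

check()
```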

All serial designs, such as processors and memory subsystems, which are challenging to verify, can be represented and verified as FIFOs. The FV courses also provide multiple demos and labs, such as a variable packet design and a micro-cache, to demonstrate this concept.

From the FV courses, I strongly believe verification engineers can acquire all knowledge and skills required to formally verify complex designs.

The Complete FV flow: ADEPT

Most resources agree that FV can be used for exhaustive verification, but the question is: how? What is the overall process of FV? How can one verify the correctness of a formal testbench? When is it appropriate to sign off? These were questions I struggled with early on, as I couldn’t find any detailed standards or guidance. It took me considerable time to investigate, and I eventually realized that coverage was the key to answering these questions.

Axiomise addressed these challenges by developing ADEPT, the first industrial FV flow that clearly defines the path to FV sign-off. The FV courses also introduce formal coverage; coverage in FV is more comprehensive than in simulation. These insights are invaluable for conducting efficient and confident FV workflows.

Benefits

Axiomise’s vision is to make formal normal, and the FV courses effectively address three major misunderstandings about FV:

  1. FV is not a mystery. With the training from Axiomise, all engineers (whether they are design engineers or verification engineers) can (and should) use FV in all stages.
  2. FV is not a magic wand. A high-quality formal testbench is essential for effective bug hunting and exhaustive proof.  The FV courses provide all the necessary knowledge and skills to develop and evaluate such formal testbenches.
  3. Learning FV is not hard. Following the FV courses, even beginners can smoothly transition into formal verification engineers.
Summary

In summary, the Axiomise FV courses are an invaluable resource for anyone looking to master formal verification. I sincerely recommend the FV courses to all design and verification engineers.

Also Read:

The Convergence of Functional with Safety, Security and PPA Verification

An Enduring Growth Challenge for Formal Verification

RISC-V Summit Buzz – Axiomise Accelerates RISC-V Designs with Next Generation formalISA®



Accellera 2024 End of Year Update
by Bernard Murphy on 12-11-2024 at 6:00 am


From my viewpoint, standards organizations in semiconductor design have always looked like they were “sharpening the saw”: further polishing and refining what we already have, but not often pushing on frontiers. That is very necessary, of course, to stabilize standards and reach common agreement, but it also always seems to lag the innovation curve. Given the recent wave of prominent new technologies, particularly through system vendors getting into chip design, it is encouraging to see that organizations like Accellera have already jumped (cautiously 😀) on opportunities to push on those frontiers. Standards are again acknowledging innovation in the industries they serve.

Progress in 2024

Here I’m just going to call out a few of the topics that particularly interest me, no slight intended to other standards under the Accellera umbrella.

Portable Test and Stimulus (PSS), defining a framework for system level verification, is one of these frontiers; the state space for defining system-level tests is simply too vast to be manageable with a bottom-up approach to functional verification. PSS provides a standard framework to define high-level system-centric tests, monitors, and randomization: the kind of features we already know and love in UVM, but here abstracted to system-level relevance.

Coverage is such a feature, already provided in the standard but now with an important extension in the 3.0 update. RTL coverage metrics obviously don’t make sense at a system level. Randomization and coverage measurement should be determined against reasonable use-cases – sequences of actions and data conditions – otherwise coverage metrics may be misleading. PSS 3.0 introduces behavioral coverage to meet these needs.
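
PSS itself is a domain-specific language, but the intent of behavioral coverage can be sketched in ordinary Python: measure which sequences of actions have been exercised, rather than which RTL lines toggled. The snippet below is purely illustrative, with made-up action names, and is not PSS syntax or any tool’s implementation.

```python
# Illustrative only: behavioral coverage tracks which *sequences* of
# actions (use-cases) have been hit, not which RTL lines were exercised.

from itertools import product

ACTIONS = ["dma_write", "cpu_read", "irq"]
GOAL = set(product(ACTIONS, repeat=2))      # every 2-action sequence

def behavioral_coverage(observed_runs):
    hit = set()
    for run in observed_runs:
        hit.update(zip(run, run[1:]))       # consecutive action pairs seen
    return 100.0 * len(hit & GOAL) / len(GOAL)

runs = [["dma_write", "cpu_read", "irq"],
        ["cpu_read", "cpu_read", "dma_write"]]
print(f"behavioral coverage: {behavioral_coverage(runs):.0f}%")  # ~44%
```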

You may remember one of my earlier blogs on work towards a Federated Simulation Standard (FSS). Quick summary: the objective is to be able to link together simulators in the EDA domain with simulators outside that domain, say for talking to edge sensors, drivetrain MCUs and other devices around the car, all communicating through CAN or automotive Ethernet. Similar needs arise in aircraft simulations.

This requires standards for linking to proprietary instruction set simulators and other abstracted models to enable an OEM/Tier 1 to develop and test software against a wide range of scenarios. An obvious question is how this standard will fit with the Arm-sponsored SOAFEE standard. As far as I can see, SOAFEE seems to be mostly about interoperability and cloud-native support for the software layer of the stack, still leaving interoperability at the hardware and EDA level less defined. That’s where I suspect FSS will concentrate first. FSS is still at the working group and user group stage, with no defined release date yet, but Lu Dai (the Accellera chair) says that pressure from the auto companies will force quick progress.

Expected in 2025

I have always been interested in progress on mixed signal standards. Analog and RF are becoming more entangled with digital cores in modern designs. For example, sensing demands periodic calibration to adjust for drift, DDR PHYs must align between senders and receivers, and RF PHYs now support analog beamforming guided by digital feedback. All of which must be managed through software/digital controlled interfaces into the analog functionality.

Software-digital-analog verification is a more demanding objective than allowed for by traditional co-simulation solutions, which increases the importance of real-number modelling (RNM) methods and UVM support. Lu tells me that the UVM-MS working group now has a standard ready for board approval, which he sees as likely to happen after the holidays.

There was a complication in achieving this goal insofar as it requires (in some areas) extensions to the SystemVerilog (SV) standard. SV is under the control of the IEEE rather than Accellera, and IEEE standards are updated only on a 5-year cycle. However, IEEE and Accellera work together closely, and Accellera is busy defining those extensions in a backward-compatible way. This effort is expected to complete fairly soon, at which point it will be donated back to the IEEE for consideration in the next update to the SV standard.

This all sounds complicated and still a long way off, but it seems that those Accellera recommendations are more or less guaranteed to be accepted into the next IEEE update. Tentatively (not an official statement), vendors and users might be able to proceed much sooner with more comprehensive UVM-MS development once tools, IPs, etc. are released to the interim standard.

Finally, Accellera is actively looking for new areas where it can contribute in support of the latest technologies. One area Lu mentioned is AI, though it seems discussion at this stage is still very tentative, not yet settled into any concrete ideas.

DVCon International Perspectives

DVCon, under the auspices of Accellera, is already well established in the US, Europe and India. More recently, conferences have launched in China, then Japan, and then Taiwan. Each of these offers a unique angle. Europe is big in system-level verification and automotive, given local interest in aerospace and the car industry. India is very strong in verification, as many multinationals with Indian sites have developed teams with strengths in this area. (I can confirm this; I see quite a lot of verification papers coming out of India.)

Japan has a lot of interest in board-level design simulation, whereas Chinese interests cut across all domains. (I can also confirm this. Many research papers I review for the Innovation in Verification blog series come out of China.) DVCon activity in Taiwan is quite new, and Accellera has chosen to co-locate it with related conferences like RISC-V. Good stuff. Wider participation and input can only strengthen standards.

Overall – good progress and I’m happy to see that Accellera is pushing on those frontiers!



Electrical Rule Checking in PCB Tools
by Daniel Payne on 12-10-2024 at 10:00 am


I’ve known about DRC (Design Rule Checking) for IC design, and the same approach can also be applied to PCB design. The continuous evolution of electronics has led to increasingly intricate PCB designs that require Electrical Rule Checking (ERC) to ensure that performance goals are met. This complexity poses several challenges in design verification, often resulting in errors, inefficiencies, and increased costs. This blog post examines these challenges and introduces HyperLynx DRC, an EDA tool from Siemens, to address them.

Modern electronic products demand enhanced functionality and performance, directly impacting the complexity of PCB design and verification. The use of complex components, high-speed interfaces, and advanced materials requires thorough PCB checks to guarantee optimal performance and reliability. This level of complexity often stretches the capabilities of traditional verification methods. 

Several factors contribute to the challenges in PCB design and verification:

  • Error-Prone Processes: The intricate nature of complex PCBs makes the design process susceptible to errors. Oversights and mistakes during layout, component placement, and routing can compromise product functionality and reliability. Undetected errors lead to revisions, rework, and possibly complete redesigns, impacting project timelines and budgets.
  • Infrequent Checks: The labor-intensive nature of PCB checking processes discourages frequent checks throughout the design cycle. Delays in verification lead to accumulated errors and inconsistencies, making fixes challenging and time-consuming.
  • Late-Stage Error Detection: Detecting design errors in later stages of development is inefficient, leading to more modifications, increased development time and costs, and delayed time-to-market. This is particularly critical in industries with rapid technological advancements.
  • Simulation Challenges: Traditional signal and power integrity simulations involve analyzing numerous objects, including nets, planes, and area-fills. Collecting simulation models and running simulations for each object is labor-intensive and time-consuming, often exceeding the benefits gained.
HyperLynx DRC

To face these challenges, Siemens developed HyperLynx DRC, a rule-based checker that identifies potential PCB design errors using geometrical calculations. The key features are:

  • Predefined Rules: The software comes with over 100 predefined rules addressing various aspects of PCB design, including signal integrity, power integrity, electromagnetic interference, electrostatic discharge, analog circuits, creepage, clearance, and IC package-specific checks.
  • Efficient Embedded Engines: HyperLynx DRC utilizes various embedded engines, such as the geometry engine, graph engine, field solver, and creepage engine, for efficiently checking diverse technical challenges.
  • Management of False Violations: The tool provides a feature for managing false violations, allowing users to create object lists, apply rules to specific objects, and eliminate unnecessary checks, significantly reducing checking time.
  • Enhanced Filtering Capability: HyperLynx DRC enables the creation of object lists manually or automatically, offering filtering capabilities to focus on relevant objects.

The extensive capabilities of HyperLynx DRC can lead to long rule-based geometrical run times for large and complex designs. To address this, HyperLynx DRC provides the area-crop function, allowing users to isolate and analyze specific areas of the design. 

The area-crop function streamlines the verification process through:

  • User-Friendly Interface: Users can quickly specify an area by selecting nets or components using a wizard.
  • Automated Cropping: The wizard automatically crops the design with predefined merging from the selected objects and creates a new project for checking.

This function enables users to concentrate on specific design areas, reducing complexity, enhancing accuracy and speeding up run times during verification.
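
Conceptually, an area crop simply restricts a geometric rule check to the objects inside a chosen region, so far fewer object pairs need to be examined. The Python sketch below illustrates that idea with a made-up clearance rule and pad list; it is not the HyperLynx DRC data model or algorithm.

```python
# Illustrative sketch of an area-crop before a clearance check.
# Objects and the rule are simplified stand-ins, not HyperLynx internals.

from dataclasses import dataclass
from itertools import combinations
import math

@dataclass
class Pad:
    name: str
    x: float   # mil
    y: float   # mil

def inside(p, xmin, ymin, xmax, ymax):
    return xmin <= p.x <= xmax and ymin <= p.y <= ymax

def clearance_violations(pads, min_clearance=8.0):
    """Report pad pairs closer than the minimum clearance (in mils)."""
    return [(a.name, b.name) for a, b in combinations(pads, 2)
            if math.hypot(a.x - b.x, a.y - b.y) < min_clearance]

board = [Pad("U1.1", 100, 100), Pad("U1.2", 105, 100),
         Pad("J3.4", 15000, 12000)]

# Crop to one quadrant only, then run the rule on the cropped object set.
crop = [p for p in board if inside(p, 0, 0, 11000, 8000)]
print(clearance_violations(crop))   # -> [('U1.1', 'U1.2')]
```

Because the pairwise check grows roughly with the square of the object count, checking four cropped sections (plus small overlap zones) is much cheaper than checking the whole board at once, which is the effect described in the case study below.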

Case Study

MediaTek, a leading semiconductor company, used HyperLynx DRC’s area-crop function on a highly complex board. The board specifications were:

  • Layout file size: Over 300 MB
  • Layers: Over 40
  • Layout size: Over 22,000 mil × 16,000 mil
  • Components: Over 16,000
  • Nets: Over 11,000

The area-crop function was used as follows:

  • Segmentation of the Board: The board was divided into four sections using vertical and horizontal virtual cuts, creating top-left, top-right, bottom-left, and bottom-right areas. Two additional overlap zones were added at the intersecting regions to ensure thoroughness.
  • Accelerated Verification: Checking each section individually significantly reduced the overall run time, particularly for the complex GND signal Long Stub rule.
  • Reduced Complexity: Dividing the board into smaller sections simplified the intricate GND nets, enhancing performance and allowing for efficient error identification and resolution.
PCB layout with four areas selected

The implementation of the area-crop function yielded impressive results:

  • Time Reduction: Total checking time was reduced from 33 hours, 51 minutes, 53 seconds to just 44 minutes, a reduction of roughly 98%.
  • Enhanced Efficiency and Precision: Focusing on segmented areas allowed for more precise verification, ensuring design reliability and integrity without compromising the project timeline.
  • Optimized Resource Allocation: Large time savings and enhanced focus enabled optimized resource allocation, ensuring critical areas received proper scrutiny and facilitated a smoother design refinement process.
Run Times per area under Long Stub rule
Conclusion

HyperLynx DRC’s area-crop function is a powerful tool for PCB design verification. By enabling focused verification, reducing complexity, and significantly accelerating the checking process, HyperLynx DRC ensures project success and meets the challenges of modern PCB designs. This innovative solution ensures advancements in electronic products are characterized by reliability, precision, and efficiency.

Read the complete, 12-page white paper online.

Related Blogs


Synopsys Brings Multi-Die Integration Closer with its 3DIO IP Solution and 3DIC Tools
by Mike Gianfagna on 12-10-2024 at 6:00 am


There is ample evidence that technologies such as high-performance computing, next-generation servers, and AI accelerators are fueling unprecedented demands for data processing speed with massive data storage, lower latency, and lower power. Heterogeneous system integration, more commonly called 2.5D and 3D IC design, promises to address these demands. As there is no “free lunch”, these new design approaches create similarly unprecedented demands on manufacturability and cost. It turns out the solution to this dilemma requires a combination of advanced design tools and purpose-built IP. One company stands apart with deep technology in both areas. Let’s explore how Synopsys brings multi-die integration closer with its 3DIO IP Solution and 3DIC tools.

Framing the Problem

There are two fundamental challenges to be met in order to bring heterogeneous system integration closer to reality – packaging and interconnect. Let’s examine the key requirements of each.

The need to process massive quantities of data is a driver for advanced packaging. There are many approaches here. 2.5D and 3D packaging have gained popularity as prominent solutions. In the 2.5D approach, two or more chips are side by side with an interposer connecting them. The interposer acts as a high-speed communication interface, creating greater flexibility to combine functions in one package.

For 3D IC, chips are connected with vertical stacking. This improves performance and functionality, allowing the integration of chiplets across multiple layers. A key trend is to shrink the bump pitch between the chiplets, which shortens interconnect distances and reduces the related parasitics.

All of these new design requirements and advanced packaging approaches have driven a significant change in interconnect strategies, from traditional copper uBUMPs to the most advanced uBUMPs at 40um pitch, scaling even further down to 10um.

For 2.5D design, the connection between chips is made through redistribution layers on the interposer.  The distance between chips is usually around 100um. For 3D, the use of vertical stacking allows for direct connection between two chips, reducing the distance to less than 40um. The result is a much smaller substrate.

With this approach, IO no longer needs to be placed at the edge of the chip. Also, by using hybrid bond technology the vertical connection between chips is even tighter. Hybrid bonding connects dies in packages using tiny copper-to-copper connections (<10um).  

Synopsys has released an informative technical bulletin on all these trends; a link is provided below. The figure below is taken from that document and shows these significant scaling trends.

Addressing the Problem

Taming these design challenges requires a combination of advanced EDA tools and specialty IP. Together, these two approaches form a winning design approach. Synopsys is well-known for its 2.5/3D design tools. Its 3D IC Compiler is a key enabler for multi-die integration. It turns out the design methodology required spans many disciplines. More on that in a moment. First, let’s examine how Synopsys brings multi-die integration closer with its 3DIO IP Solution.

This IP is specially tuned for multi-die heterogeneous integration, enabling the optimal balance of power, performance and area to address the packaging demands of 3D stacking. It turns out the 3DIO IP enables faster timing closure as well.

To better understand how it works, here are the key components of the solution:

  • Synopsys 3DIO includes a synthesis-friendly Tx/Rx cell compatible with Synopsys standard cell libraries and a configurable charged device model for optimal ESD protection. As the number of IO channels increases, the optimized Synopsys 3DIO solution leverages the automatic place and route environment to place and route the IOs directly on the BUMP. The solution supports both 2.5D and 3D packaging using uBUMP and hybrid BUMP. The Synopsys 3DIO cell supports a high data rate and offers the lowest power solution, with an optimal area that fits within the hybrid BUMP area.
  • Synopsys Source Synchronous 3DIO (SS3DIO) extends the synthesizable 3DIO cell solution with a clock forwarding functionality to aid in lower bit error rate and ease timing closure between dies. The SS3DIO offers scalability to create custom-sized macros with optimal PPA and ESD. The TX, RX, and clock circuits support matched data and clock path, with data launched at the transmitting clock edge and captured at the corresponding receiving clock edge.
  • Synopsys Source Synchronous 3DIO PHY is a 64-bit hardened PHY module with inbuilt redundancy, optimized for the highest performance. The 3DIO PHY with CLK forwarding reduces bit error rate and eases implementation along with optimal placement of POWER/CLK/GND BUMP.

The figure below, also taken from the Synopsys technical bulletin, provides an overview of how the Synopsys 3DIO IP Solution helps with a variety of design challenges.

With new packaging technologies and increased density of interconnects, there is a significant rise in the IO channels for a given die area. The corresponding decrease in IO channel length increases performance but gives rise to the need for a more streamlined interface. The Synopsys 3DIO IP Solution provides a way to implement tunable, integrated multi-die design structures.
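
A back-of-the-envelope calculation shows why the pitch scaling matters. Assuming a simple square bump grid (a simplification; real arrays include keep-outs, redundancy and power/ground bumps), the available vertical connections per square millimeter grow with the square of the pitch reduction:

```python
# Rough estimate of vertical connections per mm^2 on a square bump grid.
# Real designs are more complex, so treat these as order-of-magnitude figures.

def bumps_per_mm2(pitch_um):
    per_mm = 1000.0 / pitch_um        # bumps along 1 mm at this pitch
    return per_mm ** 2

for pitch in (40, 10):
    print(f"{pitch:>3} um pitch -> ~{bumps_per_mm2(pitch):,.0f} connections/mm^2")
# 40 um pitch -> ~625 connections/mm^2
# 10 um pitch -> ~10,000 connections/mm^2
```

Moving from a 40um to a 10um pitch is therefore roughly a 16x jump in connection density under this simple assumption, which is why the interface and IO architecture must become correspondingly more streamlined.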

To Learn More

Addressing the challenges of heterogeneous system integration requires a combination of advanced design tools and IP optimized for this new design style. Synopsys provides strong offerings in both areas. As mentioned, a cornerstone of the tool portfolio is Synopsys 3DIC Compiler. You can learn more about Synopsys 3DIC Compiler here. In the area of overall design flow, there is an excellent webinar that Synopsys recently presented with Ansys that delves into all the aspects of multi-die design. You can catch the replay of that webinar here.

You can access the technical bulletin that provides more detail on the Synopsys 3DIO Solution here. And you can explore more about this IP, including access to the Synopsys 3DIO IP Solution datasheet, on the Synopsys website here. And that’s how Synopsys brings multi-die integration closer with its 3DIO IP Solution and 3DIC tools.



What is Wrong with Intel?
by Daniel Nenni on 12-09-2024 at 10:00 am


One of the most popular topics on the SemiWiki forum is Intel, which I understand. Many of us grew up with Intel, some of us have worked there, and I can say that the vast majority of us want Intel to succeed. The latest Intel PR debacle is the abrupt departure of CEO Pat Gelsinger. To me this confirms the answer to the question, “What is wrong with Intel?”. But first let’s look at the big picture.

Hopefully we can all agree that AI will change the world over the next decade. It has already started: AI has made its way down from the cloud into our cars, homes, laptops and phones. AI is also being weaponized and is a critical technology powering conflicts around the world. Clearly, the leaders of the AI race will also be the leaders of the new AI-infused world.

We should also be able to agree on the importance of semiconductor technology and of controlling the semiconductor supply chain. The pandemic-fueled semiconductor shortages should still be fresh in our minds, and if you think it could not happen again you are wrong.

When speaking about semiconductors you can separate them into two categories: logic and memory. Currently, the majority of leading-edge logic chips come from Taiwan (TSMC), with Intel and Samsung a distant second and third. The majority of memory chips come from South Korea (Samsung and SK Hynix), with U.S.-based Micron a distant third and relatively new memory chip makers emerging in China. To be clear, without memory logic is useless, and without logic memory is useless.

My personal mantra has always been to plan for the worst and hope for the best so you will not be disappointed. Best case is that Taiwan and South Korea continue as they have for the last 50+ years. Worst case is that they won’t and the semiconductor supply chain is fractured and life as we know it is over. We may not go back to prehistoric days but to the younger generations it will seem like it.

There are two companies that are critical to semiconductor manufacturing in the United States: Intel (logic) and Micron (memory). Both are semiconductor legends, and both are critical to the survival of semiconductor manufacturing in the United States.

We can discuss Micron another time but a recent podcast with Intel’s Dr. Tahir Ghani (Mr. Transistor) reminded me of how important Intel is to the semiconductor industry. This week I am at IEDM, the premier semiconductor conference that showcases semiconductor innovation, and Intel is again front and center. This is a much longer technology discussion so I will simply say that Intel is critical to the semiconductor industry and the United States of America. If you think otherwise post it in the SemiWiki forum and thousands of working semiconductor professionals will explain it to you in painful detail.

This brings us back to the question: What is wrong with Intel? In my opinion Intel has had the worst board of directors in the history of the semiconductor industry. The hiring/firing of the three previous CEOs is a clear example. Seriously, we are talking about 20 years of bad decisions that have destroyed a semiconductor legend.

I posted a blog about The Legend of Intel CEOs in 2014, updated it after Pat Gelsinger was hired, and will have to update it yet again. To me, the bad board decisions started when they hired Paul Otellini (finance/sales/marketing) and then made an even worse pick with Brian Krzanich (manufacturing). The firing of Krzanich was handled even worse. How could the board not have properly vetted a man whose entire career was at Intel? The stated reason for the firing was absolute nonsense. Krzanich was the worst Intel CEO of all time, and that is why he was fired. I would liken it to Pat Gelsinger’s “refirement” announcement. Why are boards of directors allowed to reimagine CEO firings? They are heavily compensated employees of a publicly traded company. Intel pays these people millions of dollars in salary and stock options to safeguard the company. Where are the activist investors now?!?!

I also questioned the hiring of Robert Swan (finance) as Intel CEO. As it turns out, Swan signed the megadeal with TSMC that saved the company from the Intel 10nm debacle, and he was later fired for it. I do believe that if Swan had stayed as CEO, Intel would now be fabless, which is a very bad idea for the reasons stated above.

In regard to Pat Gelsinger, I was a big fan at the beginning, but I told my contacts at Intel that the strategy should be to “speak softly and carry a big stick”. Intel’s culture has been based on being a virtual monopoly for so many years that it really got the best of them. Making overly optimistic statements is a very risky proposition. At some point those statements will come back to haunt you unless you have the revenue to back them up. Intel did not, so Pat was out, just my opinion.

Let’s be clear, Intel is an IDM foundry like Samsung. TSMC is a pure-play foundry with hundreds of customers and partners collaborating on R&D and business strategy. No one company is going to compete with that. If you compare Intel Foundry to Samsung Foundry, Intel comes out quite favorably. Challenging TSMC head-to-head has been tried before (Samsung and GlobalFoundries) and billions of dollars were wasted. How did a seasoned board of directors allow this to happen?

As for the rumors of Intel being acquired, in my opinion Broadcom is the only company that qualifies. I’m confident Hock Tan could turn Intel around. I do not know how the finances would work but Hock’s management style would get Intel back into the green without a doubt.

Selling off the manufacturing part of Intel is ridiculous. Do you really think Intel Design can compete with Nvidia or even AMD without intimate ties to manufacturing? I was really excited when Intel signed the agreement with TSMC because it was a head-to-head design shootout with AMD, Nvidia, and Intel on the same process technology for the very first time. You tell me how that turned out. Are the new Intel products disruptive? The entire leading edge semiconductor industry is using TSMC N3. Will Intel really be relevant without manufacturing?

The quick fix for Intel is to be acquired by Broadcom. Bringing back Pat 2.0 and replacing the board of directors is another option. A third option is for the U.S. Government to step in and make semiconductor manufacturing a priority. Maybe Elon Musk can help Intel sort things out (kidding/not kidding).

Bottom line: Some very difficult decisions have to be made by some very qualified people. Take a look at the current Intel Board of Directors and convince me that they are the right ones to do it. You have an open invitation to be a guest on our podcast or post a written response to this blog.

I started SemiWiki 14 years ago to give semiconductor professionals a voice, a platform to participate in conversations for the greater good of the semiconductor industry. Let’s use it to help Intel become an industry leader again.



Enhancing System Reliability with Digital Twins and Silicon Lifecycle Management (SLM)
by Kalar Rajendiran on 12-09-2024 at 6:00 am

Synopsys SLM Solution Components

As industries become more reliant on advanced technologies, the importance of ensuring the reliability and longevity of critical systems grows. Failures in components, whether in autonomous vehicles, high performance computing (HPC), healthcare devices, or industrial automation, can have far-reaching consequences. Predicting and preventing failures is essential, and technologies like Digital Twins and Silicon Lifecycle Management (SLM) are key to achieving this. These tools provide the ability to monitor, analyze, and predict failures, thereby improving the dependability and performance of systems.

“The reliability, availability, and serviceability (RAS) of complex systems such as data center infrastructure has never been more complex or critical,” said Jyotika Athavale, director of Engineering Architecture at Synopsys. “By integrating silicon health with digital twin simulations, we unlock powerful new capabilities for predictive modeling. This enables technology leaders to optimize system design and performance in new, impactful ways.”

Athavale addressed this topic during a talk she recently delivered at the Supercomputing Conference 2024. She leads quality, reliability and safety research, pathfinding, standards and architectures for SLM solutions across RAS-sensitive application domains.

Why Digital Twins Are Good for Prognostics

A Digital Twin is a virtual replica of a physical asset, created by combining real-time sensor data with simulation models. Digital twins enable continuous monitoring of system health and provide valuable insights for prognostics, which is the process of predicting future failures. By simulating different scenarios, digital twins can predict Remaining Useful Life (RUL), helping operators plan maintenance or replacements before a failure occurs. RUL refers to the time a device or component is expected to function within its specifications before failure. This proactive approach reduces downtime and optimizes system resources.

Types of Failures in Modern Systems

Failures in modern systems are categorized into permanent, transient and intermittent faults. Permanent faults, such as Time-Dependent Dielectric Breakdown (TDDB), Negative Bias Temperature Instability (NBTI), and Hot Carrier Injection (HCI), occur over time and lead to errors resulting in failure. In contrast, transient faults are temporary disruptions caused by external factors like radiation, which do not result in lasting damage.

In sub-20nm process technologies, degrading defects continue to evolve into the useful life phase of the bathtub curve, leading to issues like Silent Data Corruption (SDC), which can go unnoticed until critical failure occurs.

Why Failures Are Increasing

Despite technological advancements, failures are rising due to several factors. As devices shrink in size and increase in complexity, they become more vulnerable to failure. Smaller transistors, particularly below 20nm, are more susceptible to intrinsic wearout. Moreover, the demand for higher performance leads to greater stress on semiconductors. With interconnected systems in critical applications, even a single failure can have serious consequences, making predictive maintenance even more essential.

“To keep pace with these challenges, it’s essential to shift from reactive to predictive maintenance strategies,” said Athavale. “By integrating real-time monitoring and predictive insights at the silicon level, we can better manage the complexities of modern systems, helping avoid potential failures and making maintenance more manageable.”

How to Monitor Silicon Health

Monitoring the health of semiconductor devices is crucial for identifying early signs of degradation. With embedded monitors integrated during the design phase, data on key performance metrics—such as voltage, temperature, and timing—can be continuously collected and analyzed. Silicon Lifecycle Management (SLM) systems include PVT monitors to track process, voltage, and temperature variations, path margin monitors to ensure signal paths remain within safe operating margins, and clock delay monitors to detect timing deviations. SLM also includes in-field analytics, which enables real-time monitoring and proactive decision-making throughout the device lifecycle.

Analyzing and Predicting Failures

Once the data is collected, it is analyzed to detect potential failures. Prognostic systems use advanced algorithms to analyze degradation patterns, such as those caused by TDDB, NBTI, and HCI, to predict when a component might fail. Predicting RUL is vital for managing system reliability, as early identification of failure allows for corrective actions like maintenance or replacement before the failure occurs.
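
As a highly simplified sketch of the idea (not the Synopsys algorithm), RUL can be estimated by fitting a trend to a monitored degradation metric, such as path timing margin, and extrapolating to the point where it crosses a failure threshold. The numbers below are illustrative only.

```python
# Toy RUL estimate: fit a linear trend to monitored path-margin samples
# and extrapolate to a failure threshold. All figures are illustrative.

import numpy as np

hours  = np.array([0, 1000, 2000, 3000, 4000], dtype=float)
margin = np.array([120, 112, 105, 97, 90], dtype=float)   # ps of timing slack

slope, intercept = np.polyfit(hours, margin, 1)   # margin ≈ slope*t + intercept
FAIL_THRESHOLD = 20.0                             # ps; below this, timing fails

t_fail = (FAIL_THRESHOLD - intercept) / slope     # time when margin hits threshold
rul = t_fail - hours[-1]                          # remaining hours from last sample
print(f"degradation rate: {slope:.4f} ps/hour")
print(f"estimated RUL:    {rul:,.0f} hours")
```

In practice the degradation models are mechanism-specific (TDDB, NBTI, HCI) and the analytics combine many monitor types, but the principle of trending monitored silicon data toward a failure limit is the same.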

RUL Prediction Using Synopsys SLM Data Solution

Synopsys’ SLM solution enables accurate RUL predictions through advanced monitoring and analytics, ensuring predictive maintenance and enhanced device reliability.

Key components of the Synopsys SLM solution include SLM PVT Monitors, which track process, voltage, and temperature variations to assess wear; SLM Path Margin Monitors, which detect timing degradation in critical paths; SLM Clock Delay Monitors, which identify clock-related performance anomalies; and SLM In-Field Analytics, which analyzes real-time data to predict failure trends.

The benefits of RUL prediction with Synopsys SLM include predictive maintenance, optimized reliability vs. performance, lifecycle and end-of-life planning, outlier detection, and catastrophic failure prevention. Corrective actions based on RUL analysis can include early decisions on recalls, implementing lifetime-extending mitigation strategies, and transitioning devices to a safe state to prevent further damage. Synopsys SLM provides actionable insights to minimize downtime, extend device lifespan, and ensure reliable performance throughout the lifecycle of semiconductor devices.

Summary

The combination of digital twins and Silicon Lifecycle Management (SLM) provides a powerful approach to managing the health and reliability of semiconductor devices. By enabling continuous monitoring, accurate failure prediction, and timely corrective actions, these technologies offer organizations tools to improve dependability, optimize performance, and reduce downtime. As electronic systems grow more complex and mission-critical, digital twins and SLM are becoming essential for predictive maintenance, ensuring long-term system reliability, and preventing costly failures.

Also Read:

A Master Class with Ansys and Synopsys, The Latest Advances in Multi-Die Design

The Immensity of Software Development and the Challenges of Debugging Series (Part 4 of 4)

Synopsys-Ansys 2.5D/3D Multi-Die Design Update: Learning from the Early Adopters

 



Podcast EP265: The History of Moore’s Law and What Lies Ahead with Intel’s Mr. Transistor
by Daniel Nenni on 12-08-2024 at 6:00 am

Dan is joined by Dr. Tahir Ghani, Intel senior fellow and director of process pathfinding in Intel’s Technology Research Group. Tahir has a 30-year career at Intel working on many innovations, including strained silicon, high-K metal gate devices, FinFETs, RibbonFETs, and backside power delivery (BSPD), among others. He has filed more than 1,000 patents over his career at Intel and was honored as Intel’s 2022 Inventor of the Year. He has the nickname of “Mr. Transistor” since he’s passionate about keeping Moore’s Law alive.

In this very broad discussion, Tahir outlines the innovations over the past 60 years of Moore’s Law and how these advances will pave the way to a trillion transistor device in this decade. Tahir explains how transistor scaling, interconnect advances, chiplet-based design and advanced packaging all work together to keep Moore’s Law scaling alive and continue to deliver exponential increases in innovation.

Tahir will present an invited paper at a special session of the upcoming 70th IEDM called The Incredible Shrinking Transistor – Shattering Perceived Barriers and Forging Ahead. IEDM will be held from December 7-11, 2024 in San Francisco.  You can learn more about IEDM and register to attend here. His presentation will be Tuesday, December 10 at 2:20 PM. Tahir also reviews several other significant Intel papers that will be presented at IEDM.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.



Podcast EP264: How Sigasi is Helping to Advance Semiconductor Design with Dieter Therssen
by Daniel Nenni on 12-06-2024 at 10:00 am

Dan is joined by Dieter Therssen, CEO of Sigasi. Dieter started his career as a hardware design engineer, using IMEC’s visionary tools and design methodologies in the early days of silicon integration. Today, being CEO of Sigasi, a fast-growing, creative technology company, is a perfect fit for Dieter. Having worked in that space for several companies, and being well-rooted in the field of semiconductors, he forever enjoys the magic of a motivated team.

Dan explores the changing landscape of semiconductor design with Dieter. The demands of higher complexity and multi-technology systems are discussed. The impact of AI, and specifically generative AI, is also explored with a view toward how the unique front-end design tools offered by Sigasi can move technology forward.

ASIC/FPGA design and safety/security requirements are also reviewed in this broad discussion. Dieter explains how Sigasi is helping these trends and also discusses the new and unique community version of the Sigasi tools.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.