
Micron Chip & Memory Down Cycle – It Ain’t Over Til it’s Over Maybe Longer and Deeper

by Robert Maire on 10-01-2023 at 6:00 pm

  • The memory down cycle is longer/deeper than many thought
  • The recovery will be slower than past cycles – a “U” rather than a “V”
  • AI & new apps don’t make up for macro weakness
  • Negative for overall semis & equip – could China extend the downcycle?
Micron report suggests a longer deeper down cycle than expected

The current memory downcycle started in the spring of 2022, over a year ago, with Micron first reporting weakness. We had suggested that the current memory downturn would be longer & deeper than previous downturns given the unique circumstances, and were roundly criticized as too pessimistic.

It now looks like the memory downturn will last at least two years (if not longer), and it’s clearly worse and longer than most prior cycles. It seems fairly clear that there will be no recovery in 2023, as we are already past the peak season for memory sales, and at best maybe sometime in 2024.

Typically memory peaks in the summer, ahead of the busy fall selling season of all things electronic. We then go through a slow Q1 due to post-partum depression after the holiday sales, coupled with Chinese holidays in Q1. Thus it looks like summer 2024 is our next opportunity for better pricing.

The problem is that “analysts” always kick the can down the road in 6-month increments, saying that things will get better in H1 or better in H2, etc. So don’t listen to someone who now says an H1 recovery in 2024, as it’s just another kick of the can without hard facts to back it up.

A “Thud” rather than a “Boing”- sounds of the cycle

The last memory downcycle several years ago seemed more like a “one quarter wonder” with things quickly bouncing back to normal after a short spending stop by Samsung.

This led investors to believe that we were in a “V”-shaped bottom when it’s obviously a “U”, or worse yet an “L”, shaped bottom.

The downturn this time is not just oversupply created by overspend; it is also coupled with reduced demand due to macro issues.

We have cut back on supply by holding product off the market in inventory, slowing down fabs, and cutting capex, none of which can fix the demand issue. Perhaps the bigger problem is that product held off the market eventually needs to be sold, and factories running at less than full capacity beg to be turned back up to increase utilization and profitability. Any near-term uptick in demand will thus quickly be offset by the existing excess capacity, slowing a recovery.

We haven’t even started talking about the potential capacity increases related to Moore’s Law density scaling, which increases the number of bits produced per wafer through ongoing technology improvements alone.

Bottom line: There is a ton of excess memory capacity with weak demand to sop it all up; it’s gonna take a while.

China can and will likely stifle a memory recovery

The other 800lb gorilla problem that most in the industry haven’t spoken about is China’s entry into the memory market and what that will do to the current down cycle and the resulting market share impacts.

Most in the industry look at the supply/demand balance in memory chips as a static market share model. But it’s not.

China has been spending tons of money, way more than everyone else, on semiconductor equipment. Not just for foundry and trailing edge but for memory as well. While China is not a big player in memory right now they are spending their way into a much bigger role.

All that equipment shipped into China over the last few years will eventually come on line and further increase the already existing oversupply in memory chips.

Many would argue that China is not competitive in memory due to higher-cost, less efficient technology, but we would argue that China is not a semi-rational player like Samsung or Micron and will price its product at whatever money-losing level it needs to gain market share and crush competition. Kind of like what Samsung has done in the memory market, only with state-sponsored infinite money behind it.

China is a “wild card” in the memory market that could easily slow or ruin any recovery and take share from more rational or weaker players, such as Micron, who lack the financial resources to lose as much money and survive.

In short, China can screw up any potential memory chip recovery and delay it further.

AI and other new apps are not enough to offset weakness & oversupply

High-bandwidth memory needed for AI applications is obviously both hot and undersupplied. Capacity will shift to high-bandwidth memory, but not enough to relieve the currently very oversupplied market. The somewhat limited supply of AI processors will also limit high-bandwidth memory demand: you are not going to buy memory if you can’t get processors.

$7B in capex keeps Micron treading water

Micron talked about $7B in capex for 2024, which is likely just enough to keep their existing fabs at “maintenance” levels.

With the current excess capacity in the memory market coupled with technology based capacity improvements and the threat of China, building new fabs in Boise or New York is a distant dream as it would be throwing gasoline on an already raging bonfire of excess capacity.

We don’t see a significant change in capex on the horizon and most will continue to be maintenance spend.

Both Huawei/SMIC and Micron go “EUV-less” into next gen chips

Further proof of the ability to continue on the Moore’s Law path without EUV has recently been provided by Micron.

It would appear that the latest and greatest memory chip, the LPDDR5 16GB D1b device, which made its debut in the iPhone 15, was made without $150M EUV tools, just like the 7nm Huawei/SMIC chip.

Where there’s a will there’s a way… Micron has always been a bunch of very cheap and very resourceful people who think outside the box, and they have done so with this latest-generation device without the EUV that others are using.

In this case, doing it without EUV at Micron likely means producing it at lower cost.

Link to article on EUV-less 16GB D1b chip from Micron

This just underscores our recent article about China’s ability to skirt around the semiconductor sanctions that ban EUV. They will be able to do it in memory as well.

The Stocks

Obviously this is not great news for the stock of Micron. We were even somewhat surprised that there wasn’t a worse reaction and the broader semiconductor market was positive today.

Memory oversupply/demand weakness is coincident with broader semiconductor malaise. The weak capex predictions are certainly a negative for the chip equipment providers.

For Micron specifically we remain concerned about continued losses and what that does to their balance sheet and ability to recover when the time comes. They are certainly burning through a lot of cash and, if we do the math, aren’t going to have a lot left at the end of the downcycle, assuming we get an end to the downcycle soon (which is not clear).

There is an old joke about Micron that if you totaled up all the profits and losses over the life of the company it would be a negative number. We haven’t revisited that exercise of late but wonder where we are… and it’s getting worse.

We don’t see any compelling reason to own the shares of Micron, especially at current levels. The stock is well off the bottom, yet the business is not, and we don’t have a definitive recovery in sight.

Risks remain quite high: China is a risk to Micron in several ways, and their financial strength, which is important in the chip business, is dwindling fast.

At this point there are less risky semiconductor investments, even at higher valuations, that seem more comfortable.

But then again, Micron stock has never been for the faint of heart… for a reason.

Also Read:

Has U.S. already lost Chip war to China? Is Taiwan’s silicon shield a liability?

ASML-Strong Results & Guide Prove China Concerns Overblown-Chips Slow to Recover

SEMICON West 2023 Summary – No recovery in sight – Next Year?


Podcast EP185: DRAM Scaling, From Atoms to Circuits with Synopsys’ Dr. Victor Moroz

by Daniel Nenni on 09-29-2023 at 10:00 am

Dan is joined by Dr. Victor Moroz, a Synopsys Fellow engaged in a variety of projects on leading-edge modeling and Design-Technology Co-Optimization. He has published more than 100 technical papers and holds over 300 US and international patents. Victor has been involved in many technical committees and is currently serving as an Editor of IEEE Electron Device Letters.

Dan discusses the challenges of advanced DRAM scaling with Victor, who explores the strategies being used today and what is on the horizon. Victor discusses the aspects of process, package design and stress analysis, security, and next-generation structures and materials and how to address those challenges with advanced design tools.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Stephen Rothrock of ATREG

by Daniel Nenni on 09-29-2023 at 6:00 am


Stephen Rothrock founded ATREG in 2000 to help global advanced technology companies divest and acquire infrastructure-rich manufacturing assets. Over the last 25 years, his firm has completed more than 100 transactions, representing over 40% of all global operational wafer fab sales in the semiconductor industry for operational, warm, and cold shells. Prior to founding ATREG, Stephen established Colliers International’s Global Corporate Services initiative and headed the company’s U.S. division based in Seattle, Wash. Before that, he worked as Director for Savills International commercial real estate brokerage in London, UK, also serving on the UK-listed property company’s international board. He also spent four years near Paris, France working for an international NGO.

Tell us about how ATREG came about.
In the late 90s, Japan was heavily divesting from U.S. semiconductor manufacturing assets due to falling memory prices and the high exchange rate of the Yen against the U.S. Dollar. Having had some cleanroom experience through work I had done with AT&T in Europe, several Japanese companies approached me when I was with Colliers International asking if I could help them divest some of their wafer fabs located in the Pacific Northwest. That’s how I ended up selling two 200mm fabs, including Matsushita Puyallup, WA and Fujitsu Gresham, OR, to Microchip. Then we sold a facility for Sony down in Eugene, OR and NEC invited us to sell its 200mm facility in Scotland. After closing these fab transactions for Japanese companies, I recognized a gap in the market and decided to create a special internal division named Advanced Technology Real Estate Group (ATREG), dedicated exclusively to transactions focused on infrastructure-rich semiconductor cleanroom and manufacturing assets. We realized that if we sold a facility with an operational tool line, workforce, and an ongoing supply agreement, there would be a market for wafer fab divestment and acquisition services to other chip makers at a time when the industry was consolidating, not just in Asia, but also in the U.S. and Europe. The business took off through assignments with IBM, Infineon, Micron, and a number of Silicon Valley firms such as Maxim. Eventually, I spun the division out of Colliers International and ATREG was born.

What factors do you attribute ATREG’s success to?
After operating for 25 years, ATREG is still the only premier global firm in the world dedicated to initiating, brokering, and executing the exchange of advanced technology cleanroom manufacturing assets. ATREG has served as an objective intermediary in the transfer of over $30 billion in assets so far, acting as an indispensable conduit for the growth of its partners and the industry as a whole. There was a real need to help advanced technology companies with their global manufacturing disposition strategies because they didn’t know where to start.

As we continued to conduct fab transactions, we collected significant data on global cleanroom assets and critical deal points. Most companies didn’t have the internal staff, knowledge, or ability to allocate the time and resources necessary to execute these types of transactions. Trust and integrity were key to discussing these very sensitive issues given the financial and balance sheet effect. Over time, ATREG has built trusted relationships with many of the high-level C-suite executives in the semiconductor industry to facilitate these transactions. Our key objective is to work hand in hand with sellers and buyers alike to find the right asset strategy while simultaneously retaining as much human capital as possible when fabs change hands. CEOs call us when they need to respond to ever-changing market conditions and adjust their manufacturing strategy to reposition themselves in the global marketplace, ensure capacity, and meet customer needs.

What does your competitive landscape look like and how do you differentiate?
What ATREG offers is unique and there isn’t a firm like us anywhere else in the world. We are the go-to partner in the semiconductor industry to identify opportunities, find creative solutions, and drive competitive demand for the exchange of holistic advanced technology facilities. We facilitate the comprehensive sale and purchase of everything our clients need to be fully operational from day one – including supply agreements, human capital, tool lines, and intellectual property. We have an entrenched ability to evolve amid ever-changing global market conditions, based on 25 years of global experience. We are also very committed to human capital retention – and the significant value it adds – across all the transactions we are involved in.

What things keep your customers up at night?
The semiconductor manufacturing industry is a multifaceted, highly complex and competitive environment, subject to constant geopolitical tensions and unexpected global events (pandemics, natural disasters, etc.). Chip makers bear a lot on their shoulders. On top of having to keep up with the latest technological advances to meet ever-pressing customer demand, they need to accelerate time-to-market while keeping manufacturing costs down. The situation has worsened since the Covid pandemic, with costs and lead times spinning out of control. Add other considerations such as the labor shortage to staff greenfield fabs, protecting intellectual property, supply chain issues, sustainability compliance, or shorter product cycles, all of which impact manufacturing assets. That’s where ATREG comes in to alleviate some of that load by providing expert advice on how to address some of these strategic issues.

What are the strategic benefits of selling and buying brownfield fabs for chip makers?
On the sell side, they include fabless or fab-lite strategic initiatives​, gross margin pressure, and underutilization whose root cause is often demand based. In addition, we have products coming to their end of life, CapEx requirements to continue advancing technology capabilities (there are infrastructure limitations for a site to move from 200mm to 300mm), and consolidation into other fabs, often for cleanroom shell sales. Examples include onsemi who needed to consolidate its U.S. fab portfolio to come out of low-margin businesses. Buying the East Fishkill 300mm fab gave the company 2.5 times more capacity. In Asia, Allegro MicroSystems sold a cleanroom in Thailand to consolidate all of its production to its site in the Philippines where it had extra space.

On the buy side, what makes fabs valuable is different from the driving force behind a disposition. Core reasons include geopolitical de-risking, allocation and manufacturing control, scaling geometry requirements, and product demand. Examples include Diodes and their desire for increased internal manufacturing control, Texas Instruments and scaling geometry requirements, or VIS and increased demand for products requiring additional capacity​.

What are some of the most notable fab transactions that have taken place recently?
On August 31st, Bosch announced the completion of the acquisition of TSI Semiconductors’ operational 200mm fab in Roseville, CA. Following a retooling phase beginning in 2026, the company will start producing its first SiC chips on 200mm wafers. Attracting one of Europe’s largest manufacturers to U.S. soil who has only ever produced front-end chips in Germany is a massive win for the U.S. semiconductor industry as Bosch plans to invest $1.5 billion in the Roseville site over the next few years. In Europe, the German government just granted their approval for the sale of Elmos’ Dortmund fab to U.S. company Littelfuse. In June, both companies had signed a purchase agreement for a net purchase price of approximately €93 million. In both these transactions facilitated by ATREG on behalf of TSI and Elmos respectively, both buyers committed to continue to employ both fabs’ staff, saving hundreds of jobs in an already extremely tight labor market.

What is the best advice you would give U.S. chip makers to ensure a successful manufacturing strategy in 2023 and beyond?
If there was one piece of advice I could give U.S. semiconductor manufacturers to ensure capacity and supply chain resilience, it would be to leave no stone unturned by looking at all semiconductor manufacturing options at their disposal. Greenfield fabs with support from CHIPS Act funding are one avenue, but it will take years before these new facilities yield wafers at volume. Until permit, certification, and entitlement procedures are reformed in the U.S., this will be a cumbersome process. Plus the competition for those public funds will be fierce. The other alternative to consider is brownfield. These facilities are obviously few and far between at any one time, but as chip makers who wish to go fab-lite or fabless transition their production out to foundries, some operational fab assets will become available on the market all over the world, and there might just be one out there that’s right for you. For example, companies in compound semi, GaN, GaAs, SiC, and MEMS want fabs, and greenfield is not necessarily the answer for them because it takes too long to yield.

Also Read:

CEO Interview: Dr. Tung-chieh Chen of Maxeda

CEO Interview: Koen Verhaege, CEO of Sofics

CEO Interview: Harry Peterson of Siloxit

Breker’s Maheen Hamid Believes Shared Vision Unifying Factor for Business Success


AI for the design of Custom, Analog Mixed-Signal ICs

by Daniel Payne on 09-28-2023 at 10:00 am


Custom and Analog Mixed-Signal (AMS) IC design is used when the highest performance is required and digital standard cells just won’t meet the requirements. Manually sizing schematics, doing IC layout, extracting parasitics, then measuring the performance only to go back and continue iterating is a long, tedious approach. Siemens EDA offers EDA tools that span a wide gamut, including high-level synthesis, IC design, IC verification, physical design, physical verification, manufacturing and test, packaging, electronic systems design, electronic systems verification, and electronic systems manufacturing. Zooming into the categories of IC design and IC verification is where tools for custom IC come into focus, like the Solido Design Environment. I had a video conference with Wei Tan, Principal Product Manager for Solido, to get an update on how AI is being used.

Designing an SoC at 7nm can cost up to $300 Million, and 5nm can reach $500 Million, so having a solid design and verification methodology is critical to the financial budget, and the goal of first pass silicon success. With each smaller process node the number of PVT corners required for verification only goes up.
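As a rough illustration of that corner-count growth, exhaustive PVT verification multiplies every process, voltage, and temperature option together (the corner values below are invented for illustration, not taken from any foundry PDK):

```python
# Hypothetical corner values for illustration only; real PDKs define
# their own process corners, supply ranges, and temperature grades.
from itertools import product

process = ["ss", "sf", "tt", "fs", "ff"]   # slow/fast NMOS-PMOS combinations
voltage = [0.65, 0.70, 0.75, 0.80]         # supply voltage in volts
temperature = [-40, 25, 85, 125]           # junction temperature in C

corners = list(product(process, voltage, temperature))
print(f"{len(corners)} PVT corners to simulate")  # 5 * 4 * 4 = 80
```

Adding just one more voltage point or temperature grade multiplies the total, which is why brute-force corner sweeps scale so badly at newer nodes.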

The general promise of applying AI to the IC design and verification process is to improve or reduce the number of brute-force calculations, assist engineers to be more productive, and to help pinpoint root causes for issues like yield loss. Critical elements of using AI in EDA tools include:
  • Verifiability – the answers are correct
  • Usability – non-experts can use the tools without a PhD in statistics
  • Generality – it works on custom IC, AMS, memory, and standard cells
  • Robustness – all corner cases work properly
  • Accuracy – same answers as brute-force methods
Wei talked about three levels of AI: the first, Adaptive AI, accelerates an existing process using AI techniques; the next, Additive AI, retains previous model answers for new runs; and the final level, Assistive AI, helps circuit designers be more productive with new insights, using generative AI.
Solido has some 15 years of applying AI techniques to EDA tools used by circuit designers at the transistor level. For Monte Carlo simulations using Adaptive AI there’s up to a 10,000X speedup, so you can get 3 to 6+ sigma results at all corners that match brute-force accuracy. Here’s an example of Adaptive AI where a 7.1 sigma verification that would have required 10 trillion brute-force simulations used only 4,000 simulations, roughly 2,500,000,000X fewer, with SPICE accuracy.
High-Sigma Verifier
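The scale of the brute-force problem can be sanity-checked with a generic Gaussian tail calculation (a back-of-the-envelope sketch, not Solido’s method): the one-sided failure probability at a given sigma level tells you roughly how many Monte Carlo samples you would need just to observe a single failure.

```python
import math

def sigma_to_fail_prob(sigma):
    # One-sided Gaussian tail probability at the given sigma level,
    # computed from the complementary error function.
    return 0.5 * math.erfc(sigma / math.sqrt(2))

for s in (3.0, 4.5, 6.0, 7.1):
    p = sigma_to_fail_prob(s)
    # 1/p is roughly the number of brute-force samples per observed failure.
    print(f"{s} sigma: fail prob {p:.2e}, ~{1 / p:.1e} samples per failure")
```

At 7.1 sigma the tail probability is on the order of 1e-13 to 1e-12, so observing even a handful of failures takes trillions of samples, consistent with the 10 trillion figure above.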

The Solido Design Environment also scales well in the cloud to speed up simulation runs using AWS or Azure vendors to meet peak demands.

An example of Additive Learning employs AI model reuse for when there are multiple PDK revisions and you want to characterize your entire standard cell library for each new PDK version. The traditional approach would require 600 hours to do the initial PVT runs using Monte Carlo, covering five revisions.
Traditional PVTMC jobs
With AI model reuse this scenario takes much less time to complete, while also saving many MB to GB of data on disk.
AI model reuse, saves time
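The value of reusing a previous model can be shown with a toy warm-start experiment (a generic sketch, not Solido’s actual Additive AI; the model, data, and learning rate are invented): fit a simple model to “revision 1” data from scratch, then re-fit slightly shifted “revision 2” data starting from the revision 1 coefficients and compare how many gradient steps, standing in for simulations, each run needs.

```python
# Toy warm-start demo; the model, data, and learning rate are invented.
def fit(xs, ys, a=0.0, b=0.0, lr=0.5, tol=1e-6, max_steps=100_000):
    """Fit y = a*x + b by gradient descent; return (a, b, steps used)."""
    n = len(xs)
    steps = 0
    while steps < max_steps:
        # Gradients of mean-squared error with respect to a and b.
        ga = sum(2 * (a * x + b - y) * x for x, y in zip(xs, ys)) / n
        gb = sum(2 * (a * x + b - y) for x, y in zip(xs, ys)) / n
        if abs(ga) < tol and abs(gb) < tol:
            break
        a -= lr * ga
        b -= lr * gb
        steps += 1
    return a, b, steps

xs = [0.1 * i for i in range(10)]
rev1 = [1.50 * x + 0.20 for x in xs]   # "PDK revision 1" characterization data
rev2 = [1.55 * x + 0.21 for x in xs]   # revision 2: a slightly shifted process

a1, b1, cold = fit(xs, rev1)                # cold start from scratch
_, _, warm = fit(xs, rev2, a=a1, b=b1)      # warm start from the rev-1 model
print(f"cold-start steps: {cold}, warm-start steps: {warm}")
```

Because the revision 2 data is close to revision 1, the warm start begins near the answer and converges in fewer steps; the same intuition, applied to trained AI models instead of two coefficients, is what cuts characterization time across PDK revisions.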
Assistive AI is applied to the sizing of transistors: it identifies optimization paths to improve PPA, determines the optimal sizing of transistors to achieve the target PPA goals, and has friendly reports to visualize the progress. You can expect your IC team to save days to weeks of engineering time by using AI-assisted optimization.
Assistive AI for circuit sizing

Summary

Custom and AMS IC designers can now apply AI-based techniques in their EDA tool flows during both design and verification stages. Adaptive AI speeds up brute-force Monte Carlo simulation, Additive learning uses retained AI models to speed up runs, and Assistive AI is applied to circuit optimization and analysis.
Yes, you still need circuit designers to envision transistor-level circuits, but they won’t have to wait so long for results when using EDA tools that have AI techniques under the hood.

Related Blogs


Keysight EDA 2024 Delivers Shift Left for Chiplet and PDK Workflows

by Don Dingee on 09-28-2023 at 8:00 am

Chiplet PHY Designer

Much of the recent Keysight EDA 2024 announcement focuses on high-speed digital (HSD) and RF EDA features for Advanced Design System (ADS) and SystemVue users, including RF System Explorer, DPD Explorer (for digital pre-distortion), and design elements for 5G NTN, DVB-S2X, and satcom phased array applications. Two important new features in the Keysight EDA 2024 suite may prove crucial in EDA workflows for chiplets and PDKs (process design kits).

A quick introduction to chiplet interconnects

Chiplets are the latest incarnation of modular chip design tracing back through multi-chip module (MCM), system-in-package (SiP), package-on-package (PoP), and others, targeting improved cost-effective design, performance, yield, power efficiency, and thermal management. Chiplets decompose what would otherwise be a complex SoC, with an expensive and maybe unrealizable single-die solution, into smaller pieces designed and tested independently and then packaged together. Chip designers can grab chiplets from different process nodes in a heterogeneous approach – say, putting a 3nm digital logic chiplet alongside a 28nm mixed-signal chiplet.

Until recently, there has been no specification for die-to-die (D2D) interconnects, leaving chiplet designers with two significant challenges. First is the speed of today’s interconnects, often with gigabit clocks, where the bit error rate (BER) starts creeping up enough to affect performance. Second is the difficulty of modeling and simulating interconnects in digital EDA tools, usually in a do-it-yourself approach, trying to match precise time-domain measurements of eye patterns from high-speed oscilloscopes.

UCIe (Universal Chiplet Interconnect Express) fills the gap for D2D interconnects. It defines three layers: a PHY layer with data paths on physical bumps grouped into lanes by signal exit ordering; a D2D adapter coordinating link states, retries, power management, and more; and a protocol layer building on CXL and PCIe specifications. The Standard Package (2D) drives low-cost, long-reach (up to 25mm) interconnects. Advanced Package (2.5D) variants optimize performance on short-reach (less than 2mm) interconnects with tighter bump pitch, enabling improved BER at higher transfer rates. Bump maps and signal exit routing, combined with scalable diagonal bump pitch requirements, ensure that a UCIe-compliant chiplet places on a substrate with controlled interface characteristics, making interoperable connections possible.

A shift left with Chiplet PHY Designer for UCIe

Keysight EDA teams have been working on modeling and simulating HSD interfaces aligned with industry specifications for some time. Their first major product release was ADS Memory Designer with an IBIS-AMI modeler for DDR5/LPDDR5/GDDR7 memory interfaces with statistical and single-ended bus bit-by-bit simulations. Its rigorous JEDEC compliance testing handles over 100 test IDs with the same test algorithms found in the Keysight Infiniium oscilloscope family.

According to Hee-Soo Lee, DDR/SerDes Product Owner and HSD Segment Lead at Keysight, the HSD R&D squad leveraged four years of effort developing Memory Designer in the creation of Chiplet PHY Designer, the industry’s first chiplet interconnect simulation tool ready for introduction as part of ADS 2024 Update 1.0 in the Keysight EDA 2024 suite. “We saw an opportunity to speed up designs using chiplets by simulating a chiplet subsystem, from one D2D PHY through interconnect channels to another D2D PHY, much earlier in the cycle,” says Lee. “Chiplet PHY Designer precisely computes a voltage transfer function (VTF) to ensure specification compliance and analyzes system BER down to 1e-27 or 1e-32 levels.” Chiplet PHY Designer can also measure eye height, eye width, skew, mask margin, and BER contour.
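To get intuition for what a BER of 1e-27 means, here is a generic Gaussian-noise sketch (an assumption for illustration only; Chiplet PHY Designer’s actual analysis is based on the computed VTF, not this formula) relating a link’s Q-factor to its bit error rate:

```python
import math

def ber_from_q(q):
    # Gaussian-noise approximation: BER = 0.5 * erfc(Q / sqrt(2)).
    return 0.5 * math.erfc(q / math.sqrt(2))

def q_for_ber(target, lo=0.0, hi=20.0):
    # Bisect for the Q-factor whose Gaussian BER hits the target
    # (valid because ber_from_q is monotonically decreasing).
    for _ in range(100):
        mid = (lo + hi) / 2
        if ber_from_q(mid) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for ber in (1e-12, 1e-27, 1e-32):
    print(f"BER {ber:.0e} needs Q of about {q_for_ber(ber):.1f}")
```

Under this model a BER around 1e-12 corresponds to a Q of roughly 7, while 1e-27 and 1e-32 push Q well past 10, which is why eye margins on short-reach Advanced Package links matter so much.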

Keysight teams adapted the single-ended bus simulation technology to deal with the single-ended signaling and forwarded clocking used in UCIe. They then incorporated the UCIe signal naming convention and connection rules for handling smart wiring in the schematic. “After placing two dies along with interconnect channels, we can now tell Chiplet PHY Designer to make the automated wiring connections between chiplet components, and the design is ready for simulation right away,” continues Lee. The upcoming November 2023 release of Chiplet PHY Designer puts Keysight ahead of competing EDA environments for chiplet design. Interestingly, Lee hints that support for Bunch of Wires (BoW) and Advanced Interface Bus (AIB) is coming in future releases.

Adapt existing PDK models to new process specifications

Creating accurate and high-quality transistor models can be time-consuming and affect the on-time delivery of PDKs. “In the traditional modeling approach, extracting a transistor model card from mass measurement data takes at least several days, often weeks,” says Ma Long, Manager of Device Modeling and Characterization at Keysight.

Keysight IC-CAP now incorporates a new product for model recentering, where models from prior processes are adjusted using figures of merit (FOMs) on a new process. “The biggest challenge is addressing the trend plots in real time, simulating data points for different geometries and temperatures,” says Long. “From threshold voltage, cutoff frequency, and other FOMs, modeling engineers can modify an existing model to new specifications and save 70% compared to traditional step-by-step model extraction.”
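The recentering idea can be sketched in a few lines (a toy example with a made-up compact model; the parameter names and FOM formula are invented, not IC-CAP’s actual API or extraction flow): instead of re-extracting everything, adjust one parameter of the old model card until a simulated FOM matches the new-process target.

```python
# Toy model recentering; "vth0", "u0", "tox" and the FOM formula are invented.
old_card = {"vth0": 0.42, "u0": 0.030, "tox": 1.2e-9}  # prior-process model card

def simulated_vth(card):
    # Stand-in for a SPICE simulation of the threshold-voltage FOM;
    # in this toy model Vth tracks vth0 with a small tox-dependent shift.
    return card["vth0"] + 5e7 * card["tox"] - 0.06

def recenter(card, target_vth, lo=0.0, hi=1.0):
    # Bisect vth0 until the simulated FOM matches the new-process target,
    # leaving every other parameter of the old card untouched.
    new = dict(card)
    for _ in range(60):
        new["vth0"] = (lo + hi) / 2
        if simulated_vth(new) < target_vth:
            lo = new["vth0"]
        else:
            hi = new["vth0"]
    return new

new_card = recenter(old_card, target_vth=0.38)
print(f"recentered vth0 = {new_card['vth0']:.4f}")
```

A real flow recenters against several FOMs at once (threshold voltage, cutoff frequency, and so on) across geometries and temperatures, but the principle of warm-starting from the prior model card is the same.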

Earlier model quality check reduces later iterations

Keysight has a full-featured model quality assurance (QA) tool, MQA, used for final SPICE model library sign-off and documentation. A newly developed light version of MQA, QA Express, is now integrated into Keysight Model Builder (MBP), allowing modeling engineers to apply a quick model QA check during parameter extraction.

Binning model QA is complicated and can also take days or weeks, and issues showing up late in the process can send teams back to the beginning. “QA Express gives easy-to-use, quick results providing a high-confidence check,” Long continues. A faster result is beneficial when simulators toss warnings over parameter effective ranges or bin discontinuity is detected. QA Express enables modeling engineers to find QA issues earlier with one-click ease.

Learn more at the Keysight EDA 2024 product launch event

Keysight has packed many new capabilities into the Keysight EDA 2024 release. For a brief introduction to trends in the EDA market driving these improvements, watch the video featuring Keysight EDA VP and GM Niels Faché below.

To help current and future users understand the latest enhancements in Keysight EDA 2024, including workflows for chiplets and PDKs, Keysight is hosting an online product introduction event on October 10th and 11th for various time zones.

Registration page:

Keysight EDA 2024 Product Launch Event

Press release for Keysight EDA 2024:

Keysight EDA 2024 Integrated Software Tools Shift Left Design Cycles to Increase Engineering Productivity

Also Read:

Version Control, Data and Tool Integration, Collaboration

Keysight EDA visit at #60DAC

Transforming RF design with curated EDA experiences


Assertion Synthesis Through LLM. Innovation in Verification

by Bernard Murphy on 09-28-2023 at 6:00 am


Assertion-based verification is a very productive way to catch bugs; however, assertions are hard enough to write that assertion-based coverage is not as extensive as it could be. Is there a way to simplify developing assertions to aid in increasing that coverage? Paul Cunningham (Senior VP/GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Towards Improving Verification Productivity with Circuit-Aware Translation of Natural Language to SystemVerilog Assertions. The paper was presented in the First International Workshop on Deep-Learning Aided Verification in 2023 (DAV 2023). The authors are from Stanford.

While a lot of attention is paid to LLMs for generating software or design code from scratch, this month’s focus is on generating assertions, in this case as an early view into what might be involved in such a task. The authors propose a framework to convert a natural language check into a well-formed assertion in the context of the target design which a designer can review and edit if needed. The framework also provides for formally checking the generated assertion, feeding back results to the designer for further refinement. The intent looks similar to prompt refinement in prompt-based chat models, augmented by verification.

As a very preliminary paper our goal this month is not to review and critique method and results but rather to stimulate discussion on the general merits of such an approach.

Paul’s view

A short paper this month – more of an appetizer than a main course, but on a Michelin star topic: using LLMs to translate specs written in plain English into SystemVerilog assertions (SVA). The paper builds on earlier work by the authors using LLMs to translate specs in plain English into linear temporal logic (LTL), a very similar problem, see here.

The authors leverage a technique called “few shot learning” where an existing commercial LLM such as GPT or Codex is asked to do the LTL/SVA translation, but with some additional coaching in its input prompt: rather than asking the LLM to “translate the following sentence into temporal logic” the authors ask the LLM to “translate the following sentence into temporal logic, and remember that…” followed by a bunch of text that explains temporal logic syntax and gives some example translations of sentences into temporal logic.
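The prompt-assembly step Paul describes can be sketched in a few lines of Python. The coaching text and the two example translations below are illustrative stand-ins, not the authors' actual "magic text":

```python
# Sketch of few-shot prompt assembly for NL -> SVA translation. The coaching
# text and example pairs are illustrative stand-ins, not the authors' actual
# prompt content.

FEW_SHOT_EXAMPLES = [
    ("Whenever req is high, ack must be high within 3 cycles.",
     "assert property (@(posedge clk) req |-> ##[1:3] ack);"),
    ("grant is never asserted while reset is active.",
     "assert property (@(posedge clk) rst |-> !grant);"),
]

def build_prompt(nl_check: str) -> str:
    """Instruction + syntax coaching + worked examples + the new query."""
    lines = [
        "Translate the following sentence into a SystemVerilog assertion,",
        "and remember that |-> is implication and ##[m:n] is a bounded delay.",
        "",
    ]
    for sentence, sva in FEW_SHOT_EXAMPLES:
        lines += [f"Sentence: {sentence}", f"SVA: {sva}", ""]
    lines += [f"Sentence: {nl_check}", "SVA:"]
    return "\n".join(lines)

prompt = build_prompt("valid must stay high until ready is seen.")
```

The resulting string is what would be sent to the commercial LLM; the model's completion after the final "SVA:" is the candidate translation.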

The authors’ key contribution is to come up with the magic text to go after “remember that…”. A secondary contribution is a nice user interface to allow a human to supervise the translation process. This interface presents the user with a dialog box showing suggested translations of sub-clauses in the sentence and asks the user to confirm these sub-translations before building up the final overall temporal logic expression for the entire sentence. Multiple candidate sub-translations can be presented in a drop-down menu, with a confidence score for each candidate.

There are no results presented in the SVA paper, but the LTL paper shows results on 36 “challenging” translations provided by industry experts. Prior art correctly translates only 2 of the 36, whereas the authors’ approach succeeds on 21 of 36 without any user supervision and on 31 of 36 with user supervision. Nice!

Raúl’s view

The proposed framework, nl2sva, “ultimately aims to utilize current advances in deep learning to improve verification productivity by automatically providing circuit-aware translations to SystemVerilog Assertions (SVA)”. It does so by extending a recently released tool, nl2spec, to interactively translate natural language to temporal logic (SVA are based on temporal logic). The framework requires an LLM (they use GPT-4) and a model checker (they use JasperGold). The LLM reads the circuits in SystemVerilog and the assertions in natural language, and generates SVAs plus circuit meta-information (e.g., module names, input and output wire names) and sub-translations in natural language. These are presented to the developer for evaluation, and the SVAs are run through a model checker. The authors describe how the framework is trained (few-shot prompting), include two complete toy examples (Verilog listings), and show a correctly generated SVA for each of them (“Unless reset, the output signal is assigned to the last input signal”).

As pointed out, this is preliminary work. Using AI to generate assertions seems a worthy enterprise. It is a hard problem in the sense that it involves translation; we briefly hit translation back in July when reviewing Automatic Code Review using AI. Translation is a hard problem to score, often done with the BLEU score (bilingual evaluation understudy, which evaluates the quality of machine translation on a scale of 0-100) and involving human evaluation. The authors use GPT-4, stating that they have “up to 176 billion parameters” and “supports up to 8192 tokens of context memory”, which is limiting. Using GPT-5 (1.76 trillion parameters; not clear why they quote only 8192 tokens) will remove these limits.

In any case, this is a really easy paper, with a paragraph-long introduction to both SVA and LLMs, and two complete toy examples – fun to read!

Also Read:

Cadence Tensilica Spins Next Upgrade to LX Architecture

Inference Efficiency in Performance, Power, Area, Scalability

Mixed Signal Verification is Growing in Importance


Podcast EP184: The History and Industry-Wide Impact of TSMC OIP with Dan Kochpatcharin

Podcast EP184: The History and Industry-Wide Impact of TSMC OIP with Dan Kochpatcharin
by Daniel Nenni on 09-27-2023 at 2:00 pm

Dan is joined by Dan Kochpatcharin, who joined TSMC in 2007. Prior to his current role heading up the Design Infrastructure Management Division, Dan led the Japan customer strategy team and the technical marketing and support team for the EMEA region in Amsterdam, and was part of the team leading the formation of the TSMC Open Innovation Platform. Prior to TSMC, Dan worked at Chartered Semiconductor, in both the US and Singapore, and at LSI Logic.

The history of TSMC ecosystem collaboration is reviewed, starting with the first reference flow work in 2001. TSMC’s OIP Ecosystem has been evolving for the past 15 years and Dan provides an overview of the activities and impact of this work. Ecosystem-wide enablement of 3DIC design is also discussed with a review of the TSMC 3DFabric Alliance and 3Dblox.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Version Control, Data and Tool Integration, Collaboration

Version Control, Data and Tool Integration, Collaboration
by Daniel Payne on 09-27-2023 at 10:00 am

As a follow-up from my #60DAC visit with Simon Rance of Keysight, I was invited to their recent webinar, Unveiling the Secrets to Proper Version Control, Seamless Data and Tool Integration, and Effective Collaboration. Karim Khalfan, Director of Solutions Engineering, Data & IP Management, was the webinar presenter.

Modern SoC devices can contain hundreds of semiconductor IP blocks that could contain subsystems for: CPU, GPU, Security, Memory, Interconnect, NoC, and IO. Keeping track of such a complex system of subsystems requires automation.

SoC Complexity

Version Control

The goals of a version control tool for SoC design are to capture objects used in a release, ensure data security, resolve conflicts from multi-user check-ins, maintain design handoffs using labels, and revert to a stable version of the system. Access controls define which engineers can read or modify the system, something that is required for military projects through ITAR compliance. Authorized engineers can check out IP like hardware or software, work on a branch, resolve conflicts with other team members, then merge changes when ready by checking in or committing.

Designers with version control can update specific objects, go back in time to revert to previous versions, and use labels to communicate to their team what each update is about. Modern version control tools should allow both command line and GUI modes to suit the style of each project.
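As a rough illustration of these concepts (versioned check-ins, labels as milestones, reverting), here is a toy in-memory model in Python. The class and method names are invented for the sketch; a real tool like Keysight SOS of course manages this on shared network storage:

```python
# Toy in-memory model of versioned check-ins, milestone labels, and reverting.
# Names are invented for illustration only.

class VersionedObject:
    def __init__(self, name):
        self.name = name
        self.versions = []   # version number = index into this list
        self.labels = {}     # milestone label -> version number

    def check_in(self, content, label=None):
        """Commit new content; optionally tag it as a milestone."""
        self.versions.append(content)
        version = len(self.versions) - 1
        if label is not None:
            self.labels[label] = version
        return version

    def get(self, label=None, version=None):
        """Fetch a labeled milestone, a specific version, or the latest."""
        if label is not None:
            version = self.labels[label]
        if version is None:
            version = len(self.versions) - 1
        return self.versions[version]

adc = VersionedObject("adc_12bit")
adc.check_in("rev A netlist", label="tapeout_v1")
adc.check_in("rev B netlist (experimental)")
stable = adc.get(label="tapeout_v1")   # revert the team to the milestone
```

The label acts as the communication mechanism: any engineer fetching `tapeout_v1` gets the same stable snapshot regardless of later experimental check-ins.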

Reuse and Traceability

The first diagram showed just how much IP it can take to design a system, so being able to re-use trusted IP from internal or external sources is required, along with being able to trace where each IP block came from along with its version history. Industries like aerospace and automotive have requirements to archive their designs over a long period of time, so thorough documentation is key to understanding the BOM.

IP developers need to know who is using each IP block, and IP users need to be informed when any changes or updates have been made to an IP block. The legal department needs to know how each IP block is licensed, and how many of each block are being actively used in designs. The design data management tool from Keysight is called SOS. A traceability report should show where each IP block is being used, by version and by geography, on a global scale. If two versions of the same IP are referenced in the same project, then a conflict should be detected and reported.

IP by Geography

Storage Optimization

SoC designs continue to increase in size, so how the data is stored and accessed becomes an issue.

Design                # of Files   File Size
12 Bit ADC            25K          150GB
Mixed Signal Sensor   100K         250GB
PDK                   300K         800GB
Processor             500K         1,500GB

With a traditional storage approach there is a physical copy of the data per user, so for a team of five engineers there would be five copies of the data. Each new engineer grows the disk space linearly, requiring more network storage.

The Keysight SOS approach instead uses a centralized work area: design files in a user’s work area are symbolic links into a cache, except for the files being edited. This optimizes the use of network storage, saving disk space for the team, and creating a new user work area is quite fast.
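A back-of-the-envelope comparison of the two storage models, using the 12 Bit ADC row from the table above (150GB design, team of five). The 2% "fraction of files checked out for editing" is an illustrative assumption, not a Keysight figure:

```python
# Network storage needed for a team: full copy per user vs. shared cache
# plus symbolic links, with only edited files copied into each work area.
# The 2% edit fraction is an assumption for illustration.

def traditional_gb(design_gb, users):
    return design_gb * users              # one full physical copy per user

def cached_gb(design_gb, users, edit_fraction=0.02):
    # one shared cache copy, plus only the edited files in each work area
    return design_gb * (1 + edit_fraction * users)

print(traditional_gb(150, 5))   # 750 GB of network storage
print(cached_gb(150, 5))        # about 165 GB
```

Under these assumptions the cached approach uses roughly a fifth of the storage, and each additional engineer adds only the small edited fraction rather than a full copy.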

SOS storage

Team & Site Collaboration

Without remote sharing of IP blocks, your engineering team may be working on the wrong version of data, wasting time trying to locate the golden data, using stale data that is out of sync, or even handing off out-of-date data to another geography. Keysight recommends using labels as tags to communicate between team members, and also using tags to represent milestones in the IC design process. In the following diagram there’s a mixed-signal design flow with tags and labels being used to ensure that the correct versions are being used by each engineer.

Mixed-signal design flow using tags and labels

Once the design methodology is established, each geography can work concurrently, sharing data through the repository and cache system. SOS supports automatic syncing of data across sites, so there is fast access to data at each remote site. Even remote updates are performed quickly, just like at the primary site, as the data traffic is reduced, and this approach also works in cloud-based EDA tool flows. Remote design centers and cloud users are both supported, as the data management is built in.

Integration

Over many years the Keysight SOS tool has been integrated with the most popular EDA software vendor flows.

  • MathWorks
  • Siemens
  • Synopsys
  • Keysight
  • Cadence
  • Silvaco
  • Empyrean

These are all native integrations, so the data management and version control are consistent across projects, groups and geographies. The SOS tool runs under Windows or Linux, has a web interface, and can also be run from the command line. Here’s what the SOS interface looks like to a Cadence Virtuoso user:

SOS commands in Cadence Virtuoso

Summary

Having an integrated data management tool within your favorite EDA flow will help your design team’s productivity, as it automatically syncs your data around the world to ensure that all members are accessing the correct IP blocks. Using a tagging methodology to promote data once it is completed will communicate to everyone on the team what state each block is in. All of your IP reuse will now have traceability, making it easier to audit data.

Version control has gone beyond just the simple check-in, check-out and update cycles, as advanced flows also need to support variations of experiments or branches. The archived webinar is online now here.

Related Blogs


WEBINAR: Emulation and Prototyping in the age of AI

WEBINAR: Emulation and Prototyping in the age of AI
by Daniel Nenni on 09-27-2023 at 8:00 am

Artificial Intelligence is now the primary driver of leading-edge semiconductor technology, which means performance is at a premium and time to market is measured in billions of dollars of revenue. Emulation and Prototyping have never been more important, and we are seeing some amazing technology breakthroughs, including a unified platform from Corigine.

How does an innovative, unified platform for Prototyping and Emulation deliver never-before-seen speeds, truly fast enough for system software development?

How is debug made possible with powerful capabilities that shorten validation times by orders of magnitude?

Is push-button automation for real? And how can scalability go from 1 to 128 FPGAs on the fly?

To answer these questions, please join Corigine’s upcoming webinar. We will showcase the new MimicPro-2 platform, architected and designed from the ground up by pioneers of Emulation and Prototyping.

LIVE WEBINAR: Emulation and Prototyping in the age of AI
October 18, 9:00am PDT

The complexity of hardware and software content increases the need for faster emulation and prototyping capacity to achieve hardware verification and software development goals. Identifying the best balance of emulation and prototyping hardware capacity for SoC design teams is always challenging. This is why Corigine set out to build a unified prototyping and emulation platform.

Corigine’s team, hailing from the days of Quickturn and having developed generations of Emulation and Prototyping at the big EDA companies, has architected a new unified platform. The new platform breaks barriers in a space that has not kept pace with the needs of AI, processor, and communications SoCs that need software running at system performance levels…pre-silicon. And as the shift-left push shortens R&D cycles, enormous innovations in debug are necessary, with the kind of backdoor access and system-view transparency that is near-impossible with legacy emulation and prototyping platforms. A new Corigine platform will be unveiled in this webinar to showcase and demo what is achievable.

Why attend?

In this webinar, you will gain insights on:

  • What new levels are achievable for software and hardware teams for
    • Pre-silicon emulation performance
    • Debug capabilities as never seen before
    • Multi-user accessibility and granularity
  • What is next on the prototyping and emulation roadmap
LIVE WEBINAR: Emulation and Prototyping in the age of AI
October 18, 9:00am PDT
Speakers:

Ali Khan
VP of Business and Product Development at Corigine. He has over 25 years of experience building startups and running businesses, with particular expertise in the semiconductor sector. Before joining Corigine, Ali was Director of Product Management at Marvell, Co-Founder of Nexsi System, and Server NIC Product Manager at 3Com. Ali holds a bachelor’s degree in Electrical Engineering from UT Austin and an MBA from Indiana University.

Mike Shei
VP of Engineering at Corigine. Mike has over 30 years of experience with emulation and prototyping tools.

About Corigine
Corigine is a leading supplier of FPGA prototyping cards and systems that shift left R&D schedules. Corigine delivers EDA tools, IP, and networking products. Corigine has worldwide R&D centers and offices in the US, London, South Africa, and China. https://www.corigine.com/

Also Read:

Speeding up Chiplet-Based Design Through Hardware Emulation

Bringing Prototyping to the Desktop

A Next-Generation Prototyping System for ASIC and Pre-Silicon Software Development


Power Supply Induced Jitter on Clocks: Risks, Mitigation, and the Importance of Accurate Verification

Power Supply Induced Jitter on Clocks: Risks, Mitigation, and the Importance of Accurate Verification
by Daniel Nenni on 09-27-2023 at 6:00 am

Jitter Analysis

In the realm of digital systems, clocks play a crucial role in synchronizing various components and ensuring the orderly propagation of logic. However, the accuracy of clocks can be significantly affected by power supply induced jitter. Jitter refers to the deviation in the timing of clock signals with PDN noise compared to ideal periodic timing. This essay explores the risks associated with power supply induced jitter on clocks, strategies to mitigate its impact, and the crucial role of accurate verification in maintaining reliable clock performance.

Infinisim JitterEdge is a specialized jitter analytics solution designed to compute power supply induced jitter of clock domains containing millions of gates at SPICE accuracy. It computes both period and cycle-to-cycle jitter at all clock nets, for all transitions, using millisecond-long power-supply noise profiles. Customers use Infinisim jitter analysis during physical design iterations and before final tape-out to ensure timing closure.

Understanding Power Supply Induced Jitter

Power supply induced jitter occurs when fluctuations or noise in the power supply voltage affect the timing of a clock signal. In digital systems, clock signals are typically generated by phase-locked loops (PLLs) or delay-locked loops (DLLs). PLL jitter is added to the PDN jitter to compute total clock jitter.
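The combination step can be sketched as follows. Whether contributions are added directly (the conservative worst case for peak values) or root-sum-square (typical for uncorrelated random components) depends on how the jitter sources correlate; both conventions are shown here as a hedged illustration, not as a statement of any particular tool's method:

```python
# Two common conventions for combining PLL and PDN jitter contributions.
# Which applies depends on whether the sources are correlated; this is an
# illustrative sketch only.

import math

def total_jitter_worst_case(pll_ps, pdn_ps):
    return pll_ps + pdn_ps               # peak values add directly

def total_jitter_rss(pll_ps, pdn_ps):
    return math.hypot(pll_ps, pdn_ps)    # sqrt(pll^2 + pdn^2)

print(total_jitter_worst_case(1.5, 2.0))   # 3.5 (ps)
print(total_jitter_rss(1.5, 2.0))          # 2.5 (ps)
```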

Risks of Power Supply Induced Jitter

  1. Timing Errors: The primary risk associated with power supply induced jitter is the introduction of timing errors. These errors can lead to setup and hold violations, resulting in synchronization errors between different components.
  2. Increased Bit Error Rates (BER): Jitter-induced timing errors can result in data transmission issues, leading to a higher BER in communication channels. This can degrade the overall system’s reliability and performance.
  3. Reduced Signal Integrity: Jitter can cause signal integrity problems, leading to crosstalk, data corruption, and other noise-related issues, especially in high-speed digital systems.
  4. Frequency Synthesizer Instability: In systems that rely on frequency synthesizers for clock generation, power supply induced jitter can cause the synthesizer to become unstable, leading to unpredictable system behavior.

Mitigating Power Supply Induced Jitter

To minimize the impact of power supply induced jitter on clocks, several mitigation strategies can be employed:

  1. Quality Power Supply Design: Implementing a robust and well-designed power supply system is crucial. This includes the use of decoupling capacitors, voltage regulators, and power planes to reduce noise and fluctuations in the supply voltage.
  2. Filtering and Isolation: Incorporate filtering mechanisms to remove high-frequency noise from the power supply. Additionally, isolate sensitive clock generation circuits from noisy power sources to limit the propagation of jitter.
  3. Clock Buffering and Distribution: Utilize clock buffers to distribute the clock signal efficiently and accurately. Proper buffering helps to isolate the clock signal from the original source, reducing the impact of jitter.
  4. Clock Synchronization Techniques: Implement clock synchronization techniques that enable multiple components to share a common reference clock, mitigating potential timing discrepancies.
  5. Minimize Load and Crosstalk: Reduce the capacitive load on clock distribution networks and minimize crosstalk between clock and data signals to maintain signal integrity.

Importance of Accurate Verification

Accurate verification of power supply induced jitter is essential for several reasons:

  1. System Reliability: Accurate verification ensures that the system meets timing requirements, reducing the risk of errors and malfunctions caused by jitter-induced timing variations.
  2. Performance Optimization: By understanding the extent of jitter in the system, designers can optimize clock generation and distribution, maximizing performance without compromising reliability.
  3. Compliance with Standards: Many industries and applications have specific timing requirements, such as in telecommunications or safety-critical systems. Accurate verification ensures compliance with these standards.
  4. Cost and Time Savings: Early identification and mitigation of power supply induced jitter during the verification process save time and resources, preventing potential issues during product deployment.

Conclusion

Power supply induced jitter on clocks poses significant risks to the accurate operation of digital systems. Mitigation strategies, including quality power supply design, filtering, and proper clock distribution, are essential for reducing jitter’s impact. Accurate verification of power supply induced jitter is crucial to maintaining system reliability, optimizing performance, and ensuring compliance with industry standards. By understanding and addressing this challenge, designers can create more robust and dependable digital systems capable of meeting the demands of modern technology.

Characterization is a common technique used in the analysis of clock jitter and involves measuring and quantifying the variations in a clock signal’s timing. Characterization is often used to describe the process of measuring and analyzing the behavior of a signal or component to understand its performance characteristics. In the context of clock jitter, characterization-based tools measure the statistical distribution of jitter values, determine key metrics such as RMS jitter and peak-to-peak jitter, and analyze how different factors in the design contribute to jitter.
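The period, cycle-to-cycle, RMS, and peak-to-peak metrics just named can be illustrated with a short Python function over a synthetic list of rising-edge timestamps (the edge times below are made up for the example):

```python
# Computing basic clock jitter metrics from a synthetic list of rising-edge
# timestamps, against a 1 ns nominal period.

import math

def jitter_metrics(edges_ns, nominal_ns):
    periods = [b - a for a, b in zip(edges_ns, edges_ns[1:])]
    period_jitter = [p - nominal_ns for p in periods]            # vs ideal period
    c2c_jitter = [b - a for a, b in zip(periods, periods[1:])]   # adjacent cycles
    rms = math.sqrt(sum(j * j for j in period_jitter) / len(period_jitter))
    p2p = max(period_jitter) - min(period_jitter)
    return rms, p2p, c2c_jitter

edges = [0.000, 1.002, 1.999, 3.001, 4.000]    # synthetic edge times (ns)
rms, p2p, c2c = jitter_metrics(edges, 1.0)     # rms ~2.1 ps, p2p ~5 ps
```

Characterization tools build the statistical distribution from millions of such edges; the arithmetic per edge is exactly this simple, which is why the hard part is obtaining SPICE-accurate edge times in the first place.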

For designs at 7nm and below, where sub-picosecond levels of jitter need to be identified, a more accurate approach is needed. Running a full circuit simulation at the transistor level, along with parasitic interconnect, can provide SPICE-accurate jitter analysis and help identify sources of jitter and their impact. The jitter capability of Infinisim’s ClockEdge provides the accuracy needed to model clock jitter and its effects.

If you have questions feel free to contact the clock experts at Infinisim here: https://infinisim.com/contact/

Also Read:

Clock Verification for Mobile SoCs

CTO Interview: Dr. Zakir Hussain Syed of Infinisim

Clock Aging Issues at Sub-10nm Nodes