

2023 Retrospective. Innovation in Verification
by Bernard Murphy on 01-25-2024 at 6:00 am


As usual in January we start with a look back at the papers we reviewed last year. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome. We’re planning on starting a live series this year to debate ideas and broader topics and to get your feedback. Details to follow!

The 2023 Picks

These are the blogs we posted through the year, sorted by popularity. We averaged 12.7k engagements per blog, a meaningful increase from last year, which we take as an indication that you continue to enjoy our reviews of current research in verification. The leader was no surprise: applying LLMs to automated code review, at almost 17k engagements. A close second uses ML to develop model abstractions. In fact the top 4 blogs in 2023 were all on applications of AI/ML. Petri nets made an appearance again this year, here for validating rapidly evolving DRAM protocols. Using dedicated hardware for speculation in simulation, and a method to find anomalies, rounded out the list. The retrospective for 2022 did about as well as usual but was overshadowed by interest in other papers through the year. It is a safe bet we will be looking at more applications of AI/ML in 2024!

Automated Code Review. Innovation in Verification
ML-Guided Model Abstraction. Innovation in Verification
Deep Learning for Fault Localization. Innovation in Verification
Assertion Synthesis Through LLM. Innovation in Verification
Better Randomizing Constrained Random. Innovation in Verification
Petri Nets Validating DRAM Protocols. Innovation in Verification
Developing Effective Mixed Signal Models. Innovation in Verification
ML-Based Coverage Acceleration. Innovation in Verification
Speculation for Simulation. Innovation in Verification
2022 Retrospective. Innovation in Verification
Anomaly Detection Through ML. Innovation in Verification
Information Flow Tracking at RTL. Innovation in Verification

Paul’s view

Another year flies by, and 49 papers read since we started the blog in November 2019! Back then we were thinking it would be a great way to bring together our verification community and show our appreciation for continued investment in verification research at academic institutions around the world.

What I didn’t predict was how reading all these papers would inspire new investments and innovations at Cadence. Writing this blog has taught me that even at an executive level in engineering, staying connected to ground-level research and reading papers regularly is good for business. So thank you readers, and thank you Bernard!

No surprise that our top 3 hits last year were all papers on using AI in verification: one on AI to automate code review (link), one on AI to help find bugs more quickly in high-level Simulink models of mixed-signal devices (link), and one on using AI to automatically identify which line of source code is the root cause of a test failure (link). We absolutely need to continue to invest in research here, both in academia and in the commercial world. Somehow, over the next decade we need to find our next 10x in verification productivity, and it’s most likely to come from AI.

That said, my personal shout-out from 2023 is not AI related. It’s for two papers in logic simulation: one on parallelizing simulation using speculative execution of the event queue (link), and the other on improving the distribution quality of randomized inputs in constrained random tests using clever hashing functions (link). I call these “engine-level” innovations – making the building blocks inside EDA tools fundamentally better. We also need to continue research and innovation here. These two papers were very innovative but had nothing to do with AI. Let’s not forget to keep investing in non-AI innovation as well.

Raúl’s view

Writing this retrospective during the holidays inevitably collides with one of humankind’s necessities which can be elevated to an art: eating. Reviewing restaurants perhaps shares enough with reviewing papers to justify ratings such as ★★★ exceptional, worth a special journey, ★★ excellent, worth a detour, ★ high quality, worth a stop, and 😋 exceptionally good at moderate prices. Paul already stated that our September review was a “Michelin star topic”. I will continue in this vein, using your preferences (number of views), dear readers, as the yardstick.

While last year’s blog was largely about cool algorithms, this year’s was about AI/ML and software (SW). The top three ★★★ papers were all about verification of SW using AI/ML. The top-rated blog (July) was about code review with generative AI, the second (November) dealt with testing and verifying SW for cyber-physical systems using surrogate AI models, and the third (May) was about detecting and fixing bugs in Java, augmented with AI classifiers. Two of these three papers use large datasets from GitHub for training. Such data is not publicly available for hardware (HW) design, which is arguably different enough from SW to at least raise the question of whether these results can or will be replicated for HW. Nevertheless, looking at what the SW community is doing about verification is certainly a source of inspiration.

The next three papers, ranked with ★★, are an eclectic collection of AI/ML, a very cool algorithm, and Petri nets. All deal with verification in EDA. September’s paper was a preview on using an LLM (GPT-4) and a model checker (JasperGold) to translate English into SystemVerilog Assertions (SVA). The next one (June) addressed how to sample the solution space for constrained random verification uniformly (while meeting the constraints) – a cool algorithm for a hard problem, dating back to 2014. The last contribution in this group (April) extended Petri nets for the verification of JEDEC DDR specifications; it is educational both on JEDEC specs and Petri nets, and uncovers one timing violation.

Papers 7-9, ranked with ★, deal with analog design verification, CPU verification and parallel SW execution. In October we reviewed an invited paper in the IEEE Open Journal of the Solid-State Circuits Society; besides being a good tutorial on analog design and validation, its main contribution is replacing analog circuit models with functional models to accelerate SPICE simulation by 4 orders of magnitude. February’s paper was about using DNNs to improve random instruction generators in CPU verification, showing a reduction of “the number of simulations by a factor of 2 or so” in a simple example (IBM Northstar, 5 instructions). March brought us the complete design of a HW accelerator implementing the Spatially Located Ordered Tasks (SLOT) execution model to exploit parallelism and speculation for applications that generate tasks dynamically at runtime.

Which leaves us with two 😋 recipients. In August we reviewed a 2013 paper that pioneered k-means clustering for post-silicon bug detection. And in December we looked at a very important topic, security verification using IFT (Information Flow Tracking) and its extension from gate level to RTL. Not surprisingly, December’s contribution got the fewest hits, as our readers were probably facing the dilemma described initially.

Ratings can be arbitrary at times; all these contributions are star-worthy and advance the state of the art. We can be grateful for an active, international research community in academia and industry tackling really hard problems. As for my personal preferences, you can guess…



AI and SPICE Circuit Simulation Applications
by Daniel Payne on 01-24-2024 at 10:00 am


Can you name the EDA vendor that first used AI for circuit designers running SPICE simulators, starting 15 years ago? I can remember that vendor: it was Solido, now part of Siemens EDA, and I just read their 8-page paper on how they look at the various levels of AI being used in EDA to help IC designers work smarter and faster than with manual methods.

Custom designs, including cell, memory and analog IP libraries, require SPICE simulations across many Process, Voltage and Temperature (PVT) combinations, plus local variation, to be fully verified to a target yield such as 3, 4, 5, 6 sigma or higher. In addition, timing models used by logic synthesis and static timing analysis tools require many SPICE simulations for .lib modeling and validation, especially with statistical variation included in the Liberty Variation Format (LVF) sections of .libs. These tasks need millions or billions of SPICE simulations and may take weeks to complete.
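To get a feel for why brute force becomes impractical at high sigma, here is a rough back-of-the-envelope sketch in Python (my own illustration, not from the Solido paper; the "~100 observed failures" target is an assumed rule of thumb):

```python
# Rough sizing of brute-force Monte Carlo for high-sigma verification.
# (Illustrative only; the "~100 observed failures" target is an assumed rule of thumb.)
from scipy.stats import norm

for k in [3, 4, 5, 6]:
    p_fail = norm.sf(k)          # one-sided tail probability at k sigma
    n_runs = 100 / p_fail        # samples needed to expect ~100 failing points
    print(f"{k} sigma: p_fail ~ {p_fail:.1e}, ~{n_runs:.1e} SPICE runs needed")

# At 6 sigma the tail probability is ~1e-9, so brute force needs on the order of
# 1e11 simulations -- which is why adaptive, tail-focused sampling is attractive.
```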

Solido’s technology takes an adaptive AI approach: it runs SPICE simulations to get initial results, selects sample points, simulates more tail-end points, then self-verifies and adapts as needed, with results matching brute-force Monte Carlo methods in a fraction of the time.
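A minimal sketch of what such an adaptive, tail-focused loop could look like is below. This is my own simplified interpretation, not Solido's actual algorithm; run_spice is a hypothetical stand-in for a real simulator call, and the linear surrogate is only for illustration.

```python
import numpy as np

def run_spice(sample):
    # Hypothetical stand-in for a real SPICE run; returns one measured metric.
    return float(np.sum(sample))

def adaptive_high_sigma(n_initial=1000, n_rounds=5, batch=200, n_vars=10, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Initial exploratory SPICE runs over the statistical (local variation) variables.
    samples = rng.standard_normal((n_initial, n_vars))
    results = np.array([run_spice(s) for s in samples])

    for _ in range(n_rounds):
        # 2. Fit a cheap surrogate model to the results seen so far.
        coeffs, *_ = np.linalg.lstsq(samples, results, rcond=None)
        # 3. Screen a large candidate set and pick the predicted tail-end points.
        candidates = rng.standard_normal((50_000, n_vars))
        worst = candidates[np.argsort(candidates @ coeffs)[-batch:]]
        # 4. Spend real SPICE runs only on those tail candidates, then refit
        #    (the self-verify/adapt step: the surrogate improves where it matters).
        samples = np.vstack([samples, worst])
        results = np.concatenate([results, [run_spice(s) for s in worst]])

    return samples, results   # tail-enriched data for yield / sigma estimation
```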

Any EDA tool that uses AI must meet several criteria to be trusted: can it be verified, is it accurate compared to a reference, will it work in general on all my designs, does it save enough time and effort to matter, and can an engineering team use it? You can also think about the maturity level of your EDA tool’s AI features.

  • Level 0 – no AI approach, SPICE with brute-force Monte Carlo.
  • Level 1 – partially reliable AI, where it works on some cells, but not all.
  • Level 2 – partially reliable AI, with self-verification and acceptable accuracy.
  • Level 3 – adaptive, accuracy-aware AI, where low accuracy results are replaced by higher accuracy results through more data collection, improving models automatically.
  • Level 4 – full production AI that works for all cells, all corner cases, all the time.

Here’s an EDA tool approach for Level 3 AI maturity:

AI Maturity

This automated methodology produces accurate results very quickly and doesn’t require manual intervention. Reaching Level 1 AI takes days of development effort, Level 2 takes months, Level 3 takes years, and Level 4 takes decades of developer-years to attain.

Solido Design Environment has a feature for high-sigma verification, where AI speeds up SPICE runs by orders of magnitude while keeping full-SPICE accuracy. Engineers can reach 6-sigma verification results in much less time than with brute-force methods; in one cell example, the High-Sigma Verifier approach was 4,000,000X faster than brute force. With old methods an engineering team wouldn’t even consider high-sigma verification, because the runtimes would be too long.

Furthermore, additive AI enables Solido Design Environment to reuse AI models from one run to further speed up subsequent runs, accelerating incremental verification tasks by up to an additional 100X.

Solido Design Environment

To create and verify Liberty (.lib) models with AI, an engineer would run Solido Generator, which produces new PVT corner .libs using existing PVT corners as anchor data, and Solido Analytics, which fully validates .libs, automatically detecting outliers and potential issues in .lib data. Both tools are part of the Solido Characterization Suite. The AI techniques here reduce .lib production and validation time from weeks to just hours of runtime.
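As a conceptual illustration of the anchor-corner idea (my own sketch of the general approach, not Solido Generator's actual algorithm, and the delay numbers are invented), one could interpolate a cell timing value at a new PVT point from a few characterized anchor corners:

```python
# Conceptual anchor-corner interpolation; values and method are illustrative only.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Characterized anchor corners: (voltage [V], temperature [degC]) -> cell delay [ns]
anchors = np.array([[0.72, -40.0], [0.72, 125.0], [0.80, -40.0], [0.80, 125.0], [0.90, 25.0]])
delays  = np.array([0.142, 0.131, 0.118, 0.110, 0.095])

surface = RBFInterpolator(anchors, delays)   # smooth surface through the anchors
new_corner = np.array([[0.76, 85.0]])        # an uncharacterized PVT point
print(f"predicted delay ~ {surface(new_corner)[0]:.3f} ns")

# A production flow would also validate predictions (outlier and consistency checks,
# as Analytics does for .lib data) and fall back to real characterization where needed.
```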

Solido Analytics

The roadmap for AI techniques with Solido tools includes Assistive AI, where generative AI will help engineers find and choose design optimization options.

Summary

Solido has a 15-year history of applying AI techniques for circuit designers in high-sigma verification and cell characterization, giving them verification results in much shorter run times. Ask your EDA vendors about their experience applying AI methods to their tools, and find out what level of AI maturity is being offered. Reaching Level 3 or Level 4 AI maturity requires decades of development effort.

Read the 8-page article at Siemens EDA.




2024 Semiconductor Cycle Outlook – The Shape of Things to Come – Where we Stand
by Robert Maire on 01-24-2024 at 6:00 am

Semiconductor Industry Outlook 2024
  • What kind of recovery do we expect, if any, after 2 down years?
  • What impact will China have on the recovery of mature market chips?
  • What will memory recovery look like? Will we return to stupid spend?
  • Stock selection ever more critical in tepid recovery
Chip stocks have rocketed but the industry itself, not so much, “Anticipation….is keeping me waiting”

You wouldn’t know from the look of semiconductor stocks that the industry has been in the doldrums for more than two years, but that’s the reality.

The stock market seems to always be a leading indicator of future performance but then again the stocks have been pricey all through the down cycle seemingly anticipating a recovery that was always delayed.

The question now at hand is whether 2024 will finally bring the recovery that everyone has been anticipating.

So far the signs look OK but certainly not what we would call great and in no way back to the very heady days of crazy spending and expectations.

The very high spend the industry saw to build capacity after the Covid-induced shortage clearly overshot the runway by quite a bit, resulting in the overcapacity-induced downcycle that has now lasted over two years.

We think chip makers will likely be a bit “gun shy” about spending capex given the length of the downturn.

We saw that TSMC is projecting “flattish” spend for 2024 and projects such as Arizona are pushed out or going slow on purpose.

TSMC not doing a High NA buy from ASML will also keep their capex under control.

Intel is spending at a reasonable clip but far from overspending, and appears to be more selective, favoring technology over capacity.

We certainly don’t expect Samsung to bounce back in memory spend, as memory capacity is still offline and not fully back to 100% utilization. The primary spend we see out of Samsung is again technology driven, not capacity driven.

Technology spend without capacity spend is a muted up cycle

The semiconductor industry is importantly more than just a singular supply/demand capacity driven cycle.

The secondary cycle, though not as big as the capacity cycle, is the technology cycle. We obviously go through technology nodes and new fabs which create a separate wave of spend in parallel to the overall capacity driven spend.

We expect much of the spend in 2024 to be technology driven rather than capacity related and thus to be lower in amplitude.

Intel is spending on technology as is TSMC. Samsung and other memory makers have to keep up with technology node transitions even while keeping capacity off market. They need to keep up with technology to remain competitive on a Moore’s Law basis which drives basic costs in the memory business.

Essentially, technology spending remains almost a constant, though variable, while capacity spend has big swings.

We would temper expectations for a rip roaring Capacity spend in 2024

We don’t see a huge potential jump in demand for either memory or logic in 2024 that would bring back full fledged capacity spending.

While AI remains the near term focus and driver of the industry at the margin, AI alone is not enough to get the entire industry back in gear at full speed.

High-bandwidth memory is great but far from enough to soak up all the excess memory capacity, especially since retooling is needed to convert capacity to HBM production. Memory makers will have to be careful not to overshoot HBM demand, which may be limited more by AI logic chip capacity and availability.

We still need a more broad based macro-economic recovery to push demand for PCs, servers & wireless which are far and away the majority of the market.

The China Syndrome

It is still unclear what impact the $40B worth of semiconductor equipment bought by China in 2023 will have on the chip-making market.

Obviously the tools are not all online and productive just yet. The question is what the impact will be when they do come online.

There are already signs of weakening foundry pricing at the trailing edge where China plays as China wants to put equipment and all its new fabs to work and take market share.

$40B is an awful lot of equipment, and likely doubly so since it’s not relatively expensive bleeding-edge equipment, which suggests the $40B represents an even larger bump in capacity, as it is mostly at the trailing edge.

It obviously does not include big ticket items like $150M EUV tools or even expensive DUV immersion tools.

So this is a very significant bump in capacity as it is all concentrated on lower cost, mature nodes.

Second tier foundries will get squeezed

We remain concerned that second-tier foundries such as GlobalFoundries, UMC and others will likely get squeezed between China catching up and cutting prices to gain market share at the low end, and TSMC lowering pricing to keep market share. Both China and TSMC have significant cost advantages over mid-range foundries.

The main way to avoid this will be to lock in business from customers who don’t want to do business with China for whatever reason. GloFo has done a good job of this, but the vast majority of chip customers just care about price, price and delivery.

China will likely be one of the biggest factors keeping a lid on the rate of recovery in the semiconductor industry in 2024. While it has no impact on the leading edge, we need to remember that the vast majority of semiconductor units are mature technologies that China already serves, and China can and will impact that large market.

We know what Chinese competition did to the LED and Solar panel markets.

Stock selectivity matters

We think there will be more differentiation in the performance of semiconductor companies going forward into 2024, so stock selection will matter more as not all stories will rise with the same tide.

We still like the ASML story; it is one of the few true tech monopolies in the market. The High NA rollout will provide positive news flow that should overshadow any China restrictions.

We like TSMC as the main beneficiary of the AI revolution as well as near term demand from both Apple and Intel. They are cautiously spending and more immune from Chinese competition in the trailing edge. They are still the best chip maker in the world hands down.

Samsung is more of a mixed story, as its foundry offerings still don’t come anywhere near TSMC and memory will likely have a slow recovery since demand is still not huge. Memory pricing has moved off a long-term bottom but has not bounced strongly just yet. It feels more like the restrictions in capacity finally had an impact rather than a return of strong demand. If this is correct and memory prices are better because capacity is being held offline, it’s not going to be a super recovery.

Still, HBM remains a bright spot, although a limited one.

We might be more inclined to look at SK Hynix as a pure memory play, as opposed to Samsung, which is underperforming in foundry.

In general we would be more selective in stock purchases, as many are already overbought, many of them for no good reason, and they could see weakness as the reality of differentiation sets in.

The Stocks

We think overall this earnings season will be positive for chip stocks as we expect many management teams will talk about a brighter outlook for 2024 even though it remains more of a hope than reality.

The dream of AI is still one of the biggest drivers of the outlook for a recovery and so far AI hasn’t hit any major bumps that would see it slow down.

The equipment spending recovery will be slower than that of chip producers, as there is still not huge demand for capacity in either memory or general foundry (other than China spending).

There is still a very wild card of geo-politics and China/Taiwan. The pot of tensions continues to boil, perhaps on the back burner rather than the front burner as the rhetoric has been dialed down a notch or two. We haven’t heard a lot out of Gina Raimondo and there haven’t been any recent major military exercises.

The stocks still feel overbought as the S&P broke back into record territory. Perhaps it’s just P/E expansion, as everyone likes to believe, rather than the overexuberance that investors fear.

I guess we will find out if earnings season can support the stocks’ resurgence.

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

Is Intel cornering the market in ASML High NA tools? Not repeating EUV mistake

AMAT- Facing Criminal Charges for China Exports – Overshadows OK Quarter

The Coming China Chipocalypse – Trade Sanctions Backfire – Chips versus Equipment



2024 Outlook with John O’Donnel of yieldHUB
by Daniel Nenni on 01-23-2024 at 10:00 am


yieldHUB is a SaaS company that provides yield management and comprehensive data analysis for semiconductor companies (IDMs and fabless) around the world. SemiWiki has been working with yieldHUB for the past three years doing blogs, webinars, and podcasts with great success. John O’Donnel spent 18 years at Analog Devices before founding yieldHUB in 2005. For the last 18 years John has been assembling a team of yield professionals all over the world. yieldHUB is based in Ireland with employees located in Europe, the UK, the US, and Asia.

Tell us a little bit about your company.
We are an Irish company that was founded in 2005. yieldHUB provides a unique collaborative platform for semiconductor yield improvement and defect detection. We further differentiate ourselves by our speed of implementation, value for money, innovation and excellent customer support. Our customers range from large IDMs to fast-growing startups and SMEs around the world.

What was the most exciting high point of 2023 for your company?
There were several high points. These included achieving record processing speeds to ensure tens of thousands of datalogs could be processed into the database for a customer within a day, and exceeding 99.9% server availability across all our customers during the year. We also attended several events, including ITC.

What was the biggest challenge your company faced in 2023?
A challenge in 2023 was supporting and continuing to develop our current eponymous platform, while at the same time designing, resourcing and developing our next platform.

How is your company’s work addressing this biggest challenge?
We recruited more support engineers and data scientists and re-trained developers in the new technologies required for our next platform.

What do you think the biggest growth area for 2024 will be, and why?
In 2024, we anticipate that the biggest growth area for yieldHUB will be in automotive chip companies and in AI chip companies. We don’t just help them with yield improvement and defect detection, but also in sustainable manufacturing. Sustainability is a key topic at board level in every company, including in every semiconductor company. So improving the speed of test manufacturing is expected to be a high-growth area for us in 2024.

How is your company’s work addressing this growth?
We have been developing specific modules for analyzing test time, idle time and Overall Equipment Effectiveness (OEE).

What conferences did you attend in 2023 and how was the traffic?
In 2023, we attended a couple of GSA events including WISH. We exhibited at ITC (International Test Conference), PSECE, and events hosted by NMI (UK industry body) and MIDAS (Irish industry body). While foot traffic was somewhat lower than expected at ITC, our debut at PSECE in the Philippines garnered significant attention with high foot traffic. Follow us on LinkedIn and sign up to our newsletter for exclusive updates.

Will you attend conferences in 2024? Same or more?
Looking ahead to 2024, we are enthusiastic about attending and exhibiting at several global events, including those we participated in during 2023. Our commitment to these conferences remains strong as they provide us with invaluable opportunities to connect with industry peers, showcase our innovations, and stay at the forefront of semiconductor yield management.

 Final comments?
As we step into 2024, yieldHUB is primed for another year of innovation, growth, and service excellence. We thank our clients, partners, and team members for their unwavering support, and we look forward to further advancing the field of semiconductor yield management.

Also Read:

Must-attend webinar event: How better collaboration can improve your yield

It’s Always About the Yield

The Six Signs That You Need a Yield Management System



Blending Finite Element Methods and ML
by Bernard Murphy on 01-23-2024 at 6:00 am


Finite element methods for analysis crop up in many domains in electronic system design: mechanical stress analysis in multi-die systems, thermal analysis as a counterpart to both cooling and stress analysis (eg warping) and electromagnetic compliance analysis. (Computational fluid dynamics – CFD – is a different beast which I might cover in a separate blog.) I have covered topics in this area with another client and continue to find the domain attractive because it resonates with my physics background and my inner math geek (solving differential equations). Here I explore a recent paper from Siemens AG together with the Technical Universities of Munich and Braunschweig.

The problem statement

Finite element methods are techniques to numerically solve systems of 2D/3D partial differential equations (PDEs) arising in many physical analyses. These can extend from how heat diffuses in a complex SoC, to EM analyses for automotive radar, to how a mechanical structure bends under stress, to how the front of a car crumples in a crash.

For FEM, a mesh is constructed across the physical space as a discrete framework for analysis, finer-grained around boundaries (especially where boundary conditions vary rapidly) and coarser-grained elsewhere. Skipping the gory details, the method optimizes linear superpositions of simple functions across the mesh by varying coefficients in the superposition. Optimization aims to find a best fit, within some acceptable tolerance, consistent with discrete proxies for the PDEs together with initial and boundary conditions, using linear algebra and other methods.
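To make that description concrete, here is a toy 1D finite element solve sketched in Python (my own minimal example, not from the paper): piecewise-linear basis functions on a uniform mesh, a global stiffness matrix assembled element by element, and a linear solve against a source term with a known exact solution.

```python
# Toy 1D FEM: solve -u''(x) = f(x) on [0,1] with u(0)=u(1)=0, linear elements.
import numpy as np

n_el = 50
nodes = np.linspace(0.0, 1.0, n_el + 1)
h = np.diff(nodes)
f = lambda x: np.pi**2 * np.sin(np.pi * x)        # source with exact solution sin(pi*x)

K = np.zeros((n_el + 1, n_el + 1))                # global stiffness matrix
F = np.zeros(n_el + 1)                            # global load vector
for e in range(n_el):                             # element-by-element assembly
    i, j = e, e + 1
    K[np.ix_([i, j], [i, j])] += (1.0 / h[e]) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    xm = 0.5 * (nodes[i] + nodes[j])              # midpoint quadrature for the load
    F[[i, j]] += f(xm) * h[e] / 2.0

u = np.zeros(n_el + 1)                            # apply Dirichlet BCs and solve
u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])
print("max error vs exact:", np.max(np.abs(u - np.sin(np.pi * nodes))))
```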

Very large meshes are commonly needed to reach acceptable accuracy, leading to very long run times for FEM solutions on realistic problems, and becoming even more onerous when running multiple analyses to explore optimization possibilities. Each run essentially starts from scratch with no learning carried over between runs, which suggests an opportunity to use ML methods to accelerate analysis.

Ways to use ML with FEM

A widely used approach to accelerating FEM analyses (FEAs) is to build surrogate models. These are like abstract models in other domains – simplified versions of the full complexity of the original model. FEA experts talk about Reduced Order Models (ROMs), which still provide a good approximation of the (discretized) physical behavior of the source model but bypass the need to run full FEA, at least in the design optimization phase, and run much faster.

One way to build a surrogate would be to start with a bunch of FEAs, using that information as a training database to build the surrogate. However, this still requires lengthy analyses to generate training sets of inputs and outputs. The authors also point out another weakness in such an approach. ML has no native understanding of the physics constraints important in all such applications and is therefore prone to hallucination if presented with a scenario outside its training set.

Conversely, replacing FEM with a physics-informed neural network (PINN) incorporates the physical PDEs into loss function calculations, in essence introducing physical constraints into gradient-based optimizations. This is a clever idea, though subsequent research has shown that while the method works on simple problems, it breaks down in the presence of high-frequency and multi-scale features. Also disappointing is that the training time for such methods can be longer than FEA runtimes.

This paper suggests an intriguing alternative: combine FEA and ML training more closely so that ML loss functions train on the FEA error calculations in fitting trial solutions across the mesh. There is some similarity with the PINN approach but with an important difference: this neural net runs together with FEA to accelerate convergence to a solution in training, which apparently results in faster training. In inference, the neural net model runs without needing the FEA. By construction, a model trained in this way should conform closely to the physical constraints of the real problem, since it has been trained very closely against a physically aware solver.
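A minimal sketch of how I read that idea, in Python with PyTorch (my simplified 1D interpretation, not the authors' implementation): the network predicts nodal values, and the discrete FEM residual ||Ku - F|| is used directly as the training loss, so the solver itself supervises the network.

```python
# Train a tiny network against the discrete FEM residual of -u'' = f on [0,1].
# (My own 1D interpretation of "FEA error as the ML loss"; not the paper's code.)
import numpy as np
import torch

n = 50
x = np.linspace(0.0, 1.0, n + 1)
h = 1.0 / n
# Interior stiffness matrix and load vector for linear elements on a uniform mesh.
K = (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h
F = np.pi**2 * np.sin(np.pi * x[1:-1]) * h

K_t = torch.tensor(K, dtype=torch.float32)
F_t = torch.tensor(F, dtype=torch.float32)
x_t = torch.tensor(x[1:-1, None], dtype=torch.float32)

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(2000):
    u = net(x_t).squeeze()              # predicted solution at interior mesh nodes
    residual = K_t @ u - F_t            # FEM residual supplies the loss signal
    loss = torch.mean(residual**2)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final mean-squared FEM residual:", float(loss))
```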

I think my interpretation here is fairly accurate. I welcome corrections from experts!



Chiplet Summit 2024 Preview
by Daniel Nenni on 01-22-2024 at 10:00 am


The second annual Chiplet Summit is coming up, and if it is anything like the first one it will not disappoint. Chiplets are a disruptive semiconductor technology already being used by top semiconductor companies like Intel, Nvidia, AMD and others. These companies design their own chiplets, so they are blazing the trail for us all.

The next phase of adoption will be commercial chiplets designed by outside sources (IP and ASIC companies) for all to use. Some say the chiplet market could quickly reach $6B and I definitely see that happening. The opportunity for IP and ASIC companies to sell die is just the start. I see a huge upside here, absolutely!

The Chiplet Summit is February 6-8 at my favorite location, the Santa Clara Convention Center. The introduction and the keynotes have been posted:

2024 will be a growth year – especially for generative AI and chiplets! Start it off right by meeting with the leading chiplet executives and technologists at the 2nd annual Chiplet Summit. You will hear the latest ideas and breakthroughs, see the new products, learn about generative AI acceleration, and exchange ideas with the industry’s innovators.

We have a new venue with lots of room for conversations, meetings, demonstrations, and posters. Please join us for pre-conference tutorials (including new ones on working with foundries and AI in chiplet design), a superpanel on accelerating generative AI applications, our popular “Chat with the Experts” event, presentations, and exhibits.

Chiplet Summit is the place where the entire ecosystem meets to share ideas across disciplines and keep the chiplet industry moving ahead. Please join us at this must-attend event!

You will hear about industry trends at keynotes from Applied Materials, Synopsys, Micron, Alphawave Semi, Hyperion Technologies, and the Open Compute Project:

Enabling an Open Chiplet Ecosystem at the Package Level

Brian Rea, UCIe™ Consortium

About UCIe™ Consortium: The UCIe Consortium is an industry consortium dedicated to advancing UCIe™ (Universal Chiplet Interconnect Express™) technology, an open industry standard that defines the interconnect between chiplets within a package, enabling an open chiplet ecosystem and ubiquitous interconnect at the package level. UCIe Consortium is led by key industry leaders Advanced Semiconductor Engineering, Inc. (ASE), Alibaba, AMD, Arm, Google Cloud, Intel Corporation, Meta, Microsoft Corporation, NVIDIA, Qualcomm Incorporated, Samsung Electronics, and Taiwan Semiconductor Manufacturing Company. For more information, visit www.UCIexpress.org.

Multi-Die Systems Set the Stage for Innovation

Abhijeet Chakraborty, VP Engineering, Synopsys

Abstract: So far, the only design teams able to handle multi-die systems are bleeding-edge ones accustomed to breaking new ground with every step. Now the ecosystem is providing the tools, IP, standards, connectivity, and manufacturing needed to allow many more teams to switch to this new approach. Multi-die systems are now the mainstream and open up innovation in AI, security, transaction systems, virtual reality, and other areas. They continue the trend established by Moore’s Law to provide more compute power, more memory and storage, and faster I/O in less space and at lower cost.

Creating the Connectivity Required for AI Everywhere

Tony Chan Carusone, CTO, Alphawave Semi

Abstract: All major semiconductor companies now use chiplets for developing devices at leading-edge nodes. This approach requires a die-to-die interface within packages to provide very fast communications. Such an interface is particularly important for AI applications which are springing up everywhere, including both large systems and on the edge. AI requires high throughput, low latency, low energy consumption, and the ability to manage large data sets. The interface must handle needs ranging from enormous clusters requiring optical interconnects to portable, wearable, mobile, and remote systems that are extremely power-limited. It must also work with platforms such as the widely recognized ChatGPT and others that are on the horizon. The right interface with the right ecosystem is critical for the new world of AI everywhere.

New Packaging Technology Accelerates Major Compute Tasks

Sam Salama, Hyperion Technologies

Abstract: Many rapidly emerging compute applications (especially generative AI) need vast computing power and memory capacity. A new 3D packaging technology (QCIA) offers a highly economical solution. It allows larger packages, much higher power dissipation (up to 1000 watts per package), and substrates that exceed 100 mm by 100 mm (beyond the limitations of silicon interposers and without the warpage issues). For example, a single package could hold compute and SRAM devices plus many high-bandwidth memory (HBM) stacks for AI acceleration. Even more should be possible soon as research into new technologies employing < 1-micrometer line/space redistribution layers and panel-processing technologies for bigger packages continues. The development of materials for systems with even higher power dissipation is also ongoing. The QCIA technology can both help meet thermal challenges and deliver fine-pitch connections. It can provide some of the smaller-better-cheaper progress that Moore’s Law can no longer offer.

Creating a Vibrant Open Chiplet Economy

Bapi Vinnakota, Open Compute Project

Abstract: Chiplets have arrived as the way to design very large chips at leading-edge nodes. But how can we take full advantage of the drop-in approach they offer, allowing designers to easily include existing designs at older nodes, IP, and chiplets from outside sources? The OCP believes that an open chiplet economy is the way to go. It will serve the needs of chiplet creators, ASIC designers, and those providing support such as design tools, test facilities, and professional services. Such an economy requires standards, tools, and best practices. The OCP is already pursuing projects that standardize design models, help establish 3rd party testing, improve supply chain methods, define best practices for assembly, and create a standard high-performance, low-power die-to-die interface. The open chiplet economy will benefit large and small organizations alike, and will create huge opportunities for economic growth worldwide.

Many of the Chiplet Summit exhibitors are on SemiWiki, so I will definitely be there. 2024 will be a big semiconductor growth year, and that will include conference attendance, in my opinion.

Also Read:

How Disruptive will Chiplets be for Intel and TSMC?

Synopsys Geared for Next Era’s Opportunity and Growth

Will Chiplet Adoption Mimic IP Adoption?



Luc Burgun: EDA CEO, Now French Startup Investor
by Lauro Rizzatti on 01-22-2024 at 6:00 am


When we last saw Luc Burgun’s name in the semiconductor industry, he was CEO and co-founder of EVE (Emulation and Verification Engineering), creator of the ZeBu (Zero Bugs) hardware emulator. EVE was acquired by Synopsys in 2012.

After the acquisition, Luc moved out of EDA and became an investor. Join me as I catch up with Luc and learn more about his activities and investments and what’s interesting to him today.

What did you do once EVE became part of Synopsys?
After the acquisition, Synopsys offered me an opportunity to join the team. Even though Synopsys is a great company to work for, two years later I craved a change. An opportunity to join a startup designing FPGA-based market data processing systems for the financial market as its CEO accelerated my departure. NovaSparks, that’s the startup’s name, is still in business and growing. Recently, we entered the APAC market, opening and staffing an office in Bangkok, Thailand.

As you indicated in the intro, I’m also an investor.

Did you consider doing another startup?
I must say that the idea resonated with me at the time, but the NovaSparks opportunity cut short my planning.

Is there one investment area currently more interesting to you than others?
I am in favor of diversification; I do not put all my eggs in one basket. Roughly speaking, I split my investments evenly across three buckets: real estate, private equity and startups. For private equity, I consult with my network of financial advisors. Over the years, I established a network of trusted advisors in the private equity community who possess a broad and deep knowledge of the market.

As for investing in startups, my focus is on French small enterprises mostly doing B2B in semiconductor, AI and financial trading.

What made you decide to invest in French startups?
Par-bleu!* I am a Frenchman (smiling).

On a more serious note, I have always thought that France enjoys first-class engineers by virtue of a top-notch education from excellent universities. However, high-tech investment is a high-risk proposition that fits well with Silicon Valley but not with France.

Semiconductor development, and AI even more, requires rather substantial investments before reaching a return on investment, and sometimes institutional investors don’t see that. They may make an initial investment, but then they realize that it will be necessary to continue to invest and they drop out.

My success at EVE was valuable training for me and I learned a lesson. When I invest, I am in for a long run.

*English translation: Of course!

Where is your investment focus now?
So far, I have invested in several high-tech startups, mostly in the semiconductor business. Not all have been successful, but on average they have generated a substantial return on investment. Some of them are still in business, boosting my expectations for a higher payoff down the road.

Is there a company that stands out? Why?
Among the active startups, obviously, NovaSparks is my #1. Then I look forward to VSORA, a startup that devised an exciting and novel semiconductor architecture ideal for processing a spectrum of leading-edge AI applications. It has been implemented on two families of devices.

The Tyr family addresses autonomous driving (AD) vehicles at Levels 4 and 5; the Jotunn family delivers generative AI (GenAI) acceleration. Both applications are demanding in terms of computing power, measured in multiple Petaflops. High throughput in absolute terms is just one of several critical requirements. As critical is the efficiency of the processing cores. Today, the most popular AI computing core is the GPU, created decades ago to accelerate graphics rendering. When applied to AI algorithmic acceleration, the GPU efficiency drops dramatically. In processing AI algorithms like transformers, the GPU efficiency hovers around 1%. The VSORA architecture is 50x more efficient. Other attributes include low latency and low power consumption. For edge application, low cost is essential.

Why do you consider VSORA such an important investment?
Because I believe in their creation and in the team behind it. I have known the team since 2002 when EVE’s headquarters shared the same building with DibCom, the precursor to VSORA.

To put my trust in perspective, let me highlight the main attributes of the VSORA architecture.

The Tyr device boasts up to 1,600 Teraflops at efficiencies of 60% or more. It can process the most advanced AD algorithms like transformers and retentive nets, realizing perception stage contextual awareness in less than 20msec. The Tyr1 has a peak power consumption of only 10W.

The Jotunn8 generative AI accelerator delivers up to six Petaflops, with efficiency in the range of 50% for very large and complex LLMs like GPT-4, consuming a maximum of 180 Watts.

VSORA’s attributes have been confirmed in early customer evaluations.

That is only part of what makes a successful product. Another is the unique VSORA development software, built from the ground up along with the creation of the hardware. Porting new complex algorithms like incremental transformers onto the VSORA computing processors is a straightforward process. Users only deal with the algorithmic language, never having to bother with low level code such as RTL. The tight integration of the software with the hardware optimizes the hardware resources based on customer profiling without manual tuning and simplifies the entire design process, reducing cost and time to market.

A VSORA device can be deployed rapidly and efficiently with highly competitive overall performance.

What advice do you offer startup founders?
As we say in French, I like “to bring the stone to the building.” My advice is twofold. First, I like to coach and motivate startup founders, especially in time of stress. Second, I am available to help them by getting involved in some very specific projects when necessary. It could be business, marketing, legal, HR, Finance or even M&A. Any aspect where founders are not comfortable.

Also Read:

Long-standing Roadblock to Viable L4/L5 Autonomous Driving and Generative AI Inference at the Edge

The Corellium Experience Moves to EDA

EDA Product Mix Changes as Hardware-Assisted Verification Gains Momentum



Podcast EP204: A Broad View of Design Architectures and the Role of the NoC with Arteris’ Michal Siwinski
by Daniel Nenni on 01-19-2024 at 10:00 am

Dan is joined by Michal Siwinski, Chief Marketing Officer at Arteris. He brings more than two decades of technology-based strategy, marketing and growth acceleration. Prior to joining Arteris, he served as Corporate Vice President of Marketing and Business Development at Cadence, where through his leadership the company’s brand and digital image were transformed. Mr. Siwinski directed strategy and marketing for Cadence’s entry into the intellectual property, embedded and system domains. Prior to Cadence, he worked at Verplex Systems on new product innovation and at Mentor Graphics on IP and SoC design services.

In this far-reaching discussion, Dan explores new chip design architectures with Michal. The unique design requirements for trends such as AI, inference, and multi-die design are discussed in detail. The pivotal role of specialized IP such as the Arteris NoC is also discussed, with an assessment of future trends and impact.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.



2024 Outlook with Chris Morrison of Agile Analog
by Daniel Nenni on 01-19-2024 at 6:00 am


Agile Analog focuses on providing analog and mixed-signal IP (Intellectual Property) solutions for the semiconductor industry. They specialize in developing analog and mixed-signal designs that can be integrated into various electronic devices and systems. Composa™ is their unique technology that automatically generates analog IP to your exact specifications. Applications include HPC, IoT, AI, automotive and aerospace.

Tell us a little bit about yourself and your company.
I am an electronics engineer with over 15 years’ experience of working in the semiconductor industry, delivering innovative analog, digital, power management and audio solutions. In that time I have been fortunate to have had some really interesting technical, product and project roles, including during my 10 years at Dialog Semiconductor (acquired by Renesas). Now, I am the Director of Product Marketing at Agile Analog, the customizable analog IP company.

At Agile Analog, we are transforming the world of analog IP with our highly configurable, multi-process analog IP technology. The company has developed a unique way to automatically generate analog IP that meets the customer’s exact specifications, for any foundry and on any process, from legacy nodes right up to the leading edge. We provide a wide range of novel analog IP solutions and subsystems, covering data conversion, power management, IC monitoring, security and always-on IP. Applications include data centers/HPC (High Performance Computing), IoT, AI, quantum computing, automotive and aerospace.

What was the most exciting high point of 2023 for your company?
2023 was a busy year for Agile Analog. There were several significant company milestones, including product launches for our 12-bit ADC and RISC-V subsystem. There were also important industry partner announcements – when we joined the Intel Foundry Services Accelerator IP Alliance Program in March and then the TSMC Open Innovation Platform (OIP) IP Alliance Program in September.

If I had to single out just one exciting high point for Agile Analog in 2023, it would be becoming a TSMC OIP member. This was a great team achievement and it has opened up many new doors for us. For example, enabling us to take part in the TSMC OIP Ecosystem Forums last September and October. Collaborating closely with the major foundries means that we can better serve our growing array of global customers, especially those in sectors like HPC (High Performance Computing) working on the most advanced nodes.

What was the biggest challenge your company faced in 2023?
2023 was a challenging year for the semiconductor industry as a whole, with supply chain disruptions, component shortages, staff layoffs, the global economic downturn and geopolitical turmoil. As a result of these complex issues there was a lack of confidence across the industry. This impacted on most organisations in our market, particularly in the first half of 2023. By the end of 2023 however, and coming into 2024, at Agile Analog we have seen a really strong pick up with a lot of new projects.

How is your company’s work addressing this biggest challenge?
Over the last 12 months, the Agile Analog team has continued to develop our product portfolio and strengthen partner relationships, whilst the industry has slowly regained its confidence. Our core mission is to reduce chip design complexity, risk and cost, enabling a faster time-to-market for the very latest semiconductor devices.

As our innovative Composa™ platform automatically generates customizable and process agnostic analog IP, analog circuits can be designed more quickly and to a higher quality. Our digitally wrapped solutions simplify the semiconductor integration process. Using our technology, chip designers can add analog features to provide product differentiation without needing to worry about specialist analog engineers or high costs. This will help to accelerate innovation in semiconductor design.

In 2023 we saw great advances within our Composa tool, with multiple customer deliveries generated and verified through our Composa engine. The advances here allow us to generate transistor sized schematics in a matter of minutes. In 2024 we will continue to extend the functionality of our Composa tool, in particular adding layout functionality, initially around component placement and then on to full routing.

What do you think the biggest growth area for 2024 will be, and why?
Analysts are predicting that there will be a return to growth in the semiconductor industry in 2024. I expect that this will be driven in part by increased demand for HPC, especially with continued growth in ICs for AI. At some point we must see some recovery in the consumer space too!

Away from the more mainstream areas there is also mounting interest in quantum computing and quantum sensing technology. In 2024 we will begin to see more companies try and take these solutions out of the science labs and work towards commercializing the technology. This is largely thanks to the huge amounts of government investment worldwide in this field.

How is your company’s work addressing this growth?
There is growing demand for our highly configurable, high-quality and high-performance analog IP products – especially our data conversion, security, and power management solutions. In terms of quantum computing technology, one of the reasons that the Agile Analog team worked on the sureCore CryoCMOS project last year was to gain more experience in this area. We are working hard to extend our product portfolio further to support the increasing range of innovative applications.

In 2024 we will have a particular focus on building out our data converter roadmap, to bring higher-resolution and higher-data-rate converters to market, and on delivering our chip health monitoring, security, and power management IP on the latest nodes.

What conferences did you attend in 2023 and how was the traffic?
I was on the road quite a lot in 2023! I attended an interesting range of industry events across the world – including the TSMC Symposium in Amsterdam in May, the RISC-V Summit Europe in Barcelona in June, the GSA Forum in Munich in June, and then DAC (Design Automation Conference) in Santa Clara in July.

During the second half of the year we focused more on foundry events – such as the TSMC OIP Ecosystem Forums in Santa Clara in September and in Amsterdam in October that I mentioned earlier. The event world seemed to be beginning to come back to life in 2023, with increased traffic compared to 2022, so here’s hoping that this momentum continues in 2024.

Will you attend conferences in 2024? Same or more?
In 2024, it is likely that we will maintain our focus on strengthening our foundry relationships by participating in more of the foundry related events. The first of these being the new Intel IFS event in San Jose in February. This year I will be joined on the road by a new colleague, Christelle Faucon, our new VP of Sales. Together we will spread the word about the benefits of our unique analog IP technology to accelerate the adoption of our analog IP products across the globe.

Also Read:

Agile Analog Partners with sureCore for Quantum Computing

Agile Analog Visit at #60DAC

Counter-Measures for Voltage Side-Channel Attacks



Non-EUV Exposures in EUV Lithography Systems Provide the Floor for Stochastic Defects in EUV Lithography
by Fred Chen on 01-18-2024 at 10:00 am


EUV lithography is a complicated process with many factors affecting the production of the final image. The EUV light itself doesn’t directly generate the images, but acts through secondary electrons which are released as a result of ionization by incoming EUV photons. Consequently, we need to be aware of the fluctuations of the electron number density as well as the scattering of electrons, leading to blur [1,2].

In fact, these secondary electrons need not come from direct EUV absorption in the resist either. Secondary electrons can come from absorption underneath the resist, which includes a certain amount of defocus. Moreover, there is an EUV-induced plasma in the hydrogen ambient above the resist [3]. This plasma can be a source of hydrogen ions, electrons, as well as vacuum ultraviolet (VUV) radiation [4,5]. The VUV radiation, the electrons and even the ions constitute separate blanket resist exposure sources. These outside sources of secondary electrons and other non-EUV radiation all basically lead to non-EUV exposures of resists in EUV lithography systems.

Defocused images have reduced differences between maximum and minimum dose levels, and also add an offset to the minimum dose level (Figure 1). Thus, when incorporated with the EUV-electron dose profile, the overall image is more sensitive to stochastic fluctuations, since the defocused doses are everywhere closer to the printing threshold. The blanket exposures from the EUV-induced plasma further increase sensitivity to stochastic fluctuations at the minimum dose regions.

Figure 1. Defocus reduces the peak-valley difference and adds an offset to the minimum dose level. This tends to increase vulnerability to stochastic fluctuations.

Thus, stochastic defect levels are expected to be worse when the contributions from these non-EUV sources are included. The effect is equivalent to a reduced incident EUV dose combined with an extra background electron dose.
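As a rough numerical illustration of that equivalence (my own toy model; the dose and background values are assumptions loosely echoing Figures 2 and 3, not the author's actual simulation), one can Poisson-sample a 30 nm pitch exposure with and without a uniform non-EUV background and compare the nominally unexposed troughs after the same 3×3 pixel smoothing:

```python
# Toy stochastic-exposure model (illustrative only; parameter values are assumptions).
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(1)
px = 0.6                                   # nm per pixel, as in the figures
nx = ny = 100                              # 60 nm x 60 nm field
x = (np.arange(nx) + 0.5) * px
pitch = 30.0                               # nm

profile = 0.5 * (1 + np.cos(2 * np.pi * x / pitch))    # idealized line/space image
mean_img = 40.0 * np.tile(profile, (ny, 1))            # ~40 e/pixel at line center (assumed)
background = 33 * px * px                               # 33 e/nm^2 background -> ~12 e/pixel

for label, bg in [("EUV only", 0.0), ("with non-EUV background", background)]:
    counts = rng.poisson(mean_img + bg)                 # per-pixel shot noise
    smoothed = uniform_filter(counts.astype(float), size=3)   # 3x3 rolling average
    troughs = smoothed[:, np.cos(2 * np.pi * x / pitch) < -0.95]
    print(f"{label}: trough mean {troughs.mean():.1f} e/px, sigma {troughs.std():.1f} e/px")

# The background lifts the nominally unexposed troughs toward the printing threshold,
# so random excursions above threshold (stochastic defects) become more likely there,
# regardless of where the threshold is set.
```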

Figure 2. 30 nm pitch, 30 mJ/cm2 absorbed, 3 nm blur, without non-EUV sources. Pixel-based smoothing (rolling average of 3×3 0.6 nm x 0.6 nm pixels) is applied. Numbers plotted are electrons per 0.6 nm x 0.6 nm pixel.

Figure 3. 30 nm pitch, 40 mJ/cm2 absorbed, 3 nm blur, 33 e/nm^2 from non-EUV sources. Pixel-based smoothing (rolling average of 3×3 0.6 nm x 0.6 nm pixels) is applied. Numbers plotted are electrons per 0.6 nm x 0.6 nm pixel.

Figures 2 and 3 show that including non-EUV exposure sources can lead to prohibitive stochastic defects, regardless of where the printing threshold is set in the resist development process. In particular, the nominally unexposed regions are more vulnerable to the non-EUV exposure sources. The nominally exposed regions, on the other hand, are more sensitive to the dose levels and blur. The non-EUV exposure sources therefore contribute to providing a floor for stochastic defect density.

Thus, it is necessary to include the electrons emitted from underneath the resist as well as the radiation from the EUV-induced plasma as exposure sources in EUV lithography systems.

References

[1] P. Theofanis et al., Proc. SPIE 11323, 113230I (2020).

[2] Z. Belete et al., J. Micro/Nanopattern. Mater. Metrol. 20, 014801 (2021).

[3] J. Beckers et al., Appl. Sci. 9, 2827 (2019).

[4] P. De Schepper et al., J. Micro/Nanolith. MEMS MOEMS 13, 023006 (2014).

[5] P. De Schepper et al., Proc. SPIE 9428, 94280C (2015).

This article first appeared in LinkedIn Pulse: Non-EUV Exposures in EUV Lithography Systems Provide the Floor for Stochastic Defects in EUV Lithography

Also Read:

Application-Specific Lithography: Sense Amplifier and Sub-Wordline Driver Metal Patterning in DRAM

BEOL Mask Reduction Using Spacer-Defined Vias and Cuts

Predicting Stochastic Defectivity from Intel’s EUV Resist Electron Scattering Model