
What if China doesn’t want TSMC’s factories but wants to take them out?

by admin on 07-02-2024 at 10:00 am

Insights into the Semiconductor Industry and the Semiconductor Supply Chain

I am not an expert in geopolitical issues, but lately, my research has begun to worry me. As I was preparing to be interviewed for Chinese media, I was in my “Chinese Zone”, adapting to what is palatable to a Chinese audience. Maybe that mode of thinking provoked the thought:

Who would benefit from eradicating the entire Taiwanese Semiconductor industry?

What if China is not deterred by potentially mined TSMC factories and ASML kill switches? What if the reunification plan is based on eradicating the TSMC factories and the Semiconductor supply chain in Taiwan and beyond?

For the first time, I can see the outline of a military invasion plan that could significantly benefit China, but the implications are scary. While I have no indication that this is actually about to happen, it fits within “Strong Man” thinking. This article describes that scenario and its implications.

The Three Reunifications Scenario

My upbringing in a small, peaceful country devoid of geopolitical ambitions during the Cold War has shaped my understanding of these issues. Living less than 200km from the Warsaw Pact border, I witnessed the system before and after the wall fell. The system didn’t crumble with the wall. It retreated to Russia, reorganising under the guise of democracy and economic reform. I experienced this firsthand during my extensive travels in Russia, not just the “civilised” parts.

I am not writing this in my mother tongue, and only a few readers will comprehend what it means to be Danish. As a small nation, we are not imperial and have no need for nationalism; we need cooperation with other countries, beginning with the coal and steel union that brought peace to the parts of Europe that formed the EU.

Although not an expert, I feel qualified to give my view of the geopolitical situation, and I have no problem with others disagreeing with my assessment.

The background

After the fall of the Soviet Union, a new US-led, rules-based world order began. It was based on alliances and cooperation and supported civil rights, free speech, and democracy. It was the birth of globalism.

The Semiconductor supply chain evolved and grew in locations where it made the most sense, and nobody interjected or tried to dictate this development. Most US semiconductor companies were happy to eliminate the hassle of making chips that could be made better and cheaper in Asia.

The world is moving from the rule of law to the law of rulers

That order is now breaking down. The US standing in the world has deteriorated due to long wars with little planning for the aftermath and the increasing division of US society. Trust in elections and institutions is at an all-time low. The more the US tries to dictate the world order, the less it succeeds, as the USA has become unpredictable to the rest of the world. The outcome of a US election can now create two very different scenarios for the world order, and the election itself will be disputed no matter what. A few votes in a swing state, a Florida judge or a Supreme Court decision can decide the election.

World order and Strong men

The breakdown of the rules-based world order got plenty of help from the outside. Democracy and free speech are not Strong Men’s favourite dishes. It goes against the dictator’s playbook, which has three chapters:

  • The Enemy Within – I alone can fix it
  • Prosperity – I am the system, and it is good for you
  • The External Enemy – I am the only one who can protect you

The enemy within is the tool for gaining power: the breakdown of trust in politicians, institutions and society in general. It culminates when political opponents are hated more than external “enemies”.

For many years, Putin had a “deal” with the Russians: if they stayed out of politics, he would make them more prosperous, and he did. From the late nineties and for some 15 years on, the standard of living increased dramatically. But this was built on an organised kleptocracy that stole from the people and eventually ran out of steam.

Enter the external enemy. It is time to make Russia great again, first by covert little green men operations and later the full-scale invasion of Ukraine. As this did not go well, the enemy was changed from Ukraine to the West. “We are fighting the collective forces of the West.”

The overarching objective of a Strong Man is to keep power after securing it. Self-preservation is key. If only a few per cent of the population starts to protest, you are in trouble. Stability is your friend.

Where Putin is deep in chapter 3 and has transformed the entire Russian society into a war economy, President Xi is likely at the end of chapter 2. The massive infrastructure and real estate investments are starting to look hollow, and economic growth is slowing. It’s time to make China great again.

Everyone is aware of China’s increasing military ambitions. Its claim to atolls in the South China Sea and naval conflicts with all its neighbours point to a new geopolitical reality.

The no-limits partnership between Russia and China and the latest treaty between North Korea and Russia opens up the possibility of a potentially devastating conflict to make Russia, Korea and China reunified and great again.

Who controls the past controls the future:
who controls the present controls the past.
George Orwell, 1984

A vital element of the Strong Man’s storyline is the past. Everything was better, and we were stronger and a larger unified nation. If you have the ultimate power, you can make insurrections, nuclear accidents, and squares disappear. The past is no stranger to a strong man.

Ruler of the past (what is left)

While Western politicians come and go, China and Xi understand how to play the long game. The semiconductor industry has been a focus area for decades, while Western corporations cannot see beyond the quarterly boundary.

I slept relatively well at night, believing China couldn’t conquer Taiwan and preserve the Semiconductor industry – a scenario without winners that would set the world back 15 years or more.

I realised that a destructive reset could become a long-term advantage in a long game. If Xi were willing to turn back the clock 15 years, China could emerge as the winner of the past, of whatever was left.

Prelude

The Three Reunifications Scenario might not be the original plan, but it could have developed as a result of Russia’s invasion of Ukraine. This is the prelude: China has observed the overwhelming Russian war machine being degraded by drones and precision weapons from the West. Other invasions are unlikely to differ, so any attempt to preserve part of a conquered industry is an illusion. Destruction is inevitable.

Diversion

Surprisingly, China has not commented much on the Russia-North Korea defense pact. Why is China not uncomfortable with Little Rocket Man getting better rockets? Maybe because the next part of the plan is a North Korean attack on South Korea. This would further stretch the West’s already dwindling military resources. China could lean back and say: not our problem. While it is unlikely that North Korea can conquer South Korea, it can destroy a lot of the South Korean industry.

Finale

The West’s involvement in two wars of attrition forms the basis for an invasion of Taiwan. There is a limit to how many conflicts the US military and its Western allies can fight simultaneously. That might allow China to invade Taiwan without much, or any, direct involvement from outside forces. With part of the US political establishment becoming more isolationist, the West would likely be unable to handle three military flashpoints at a time.

Although I knew the tensions between Ukrainians and Russians first-hand, I was convinced Russia would not invade – I could not see the logic (maybe strongmen don’t use logic). I hope I am also wrong about the Three Reunifications Scenario. Unfortunately, I can see the logic in it from a Chinese perspective.

The current Manufacturing View:

Not all companies have all their manufacturing within their legal jurisdiction, but that is the exception rather than the rule.

The distribution of semiconductor Property, Plant, and Equipment (PPE) gives a good overview of manufacturing capacity from a geographic perspective. Chinese privately owned (in practice, state-backed) companies are adding an estimated 35B$ a year. These companies specialise in Memory, Micro, CPU, and mobile semiconductors.

While PPE includes other assets, these are very small for a Semiconductor Manufacturing Company. The Fabs and equipment dominate PPE. It can also be argued that PPE is higher per dollar of revenue in the US than in China and that advanced manufacturing takes more PPE value. These are all fair arguments to consider. Still, PPE is a much more solid number than most in the industry.

We divide manufacturing capacity into three categories:

  1. Integrated Device Manufacturing (IDM)
  2. Semiconductor Foundry
  3. Mixed Manufacturing

Traditionally, all semiconductor companies were IDMs that manufactured and sold branded products. With the emergence of TSMC, companies could choose to outsource manufacturing and go fabless. Some companies have chosen the mixed model and retained some manufacturing capacity while outsourcing the rest.

Property, Plant and Equipment $M, by Country of Incorporation, March 31st, 2024

The USA is still dominant, although some US-owned capacity is physically located in other jurisdictions. Intel has Fabs in Ireland, Israel, and Malaysia, but most of its capacity is in the US. From a PPE perspective, Intel is not getting as much value from its manufacturing assets as TSMC is. This is a function of Intel’s subpar manufacturing decisions, not least around EUV equipment. Despite these discrepancies, we still believe that a PPE analysis is solid and a source of good insights.

The PPE analysis shows that most of the chip manufacturing capacity is concentrated around the East China Sea, and 50% is in potential “Reunification” zones.

China’s 12% share of global PPE might sound low, but it is growing at an incredible pace. Despite the US-led embargo, China still buys nearly half of the Western semiconductor tools sold, amounting to a roughly 31B$ annual run rate plus an additional 4B$ of Chinese tools. The embargo and Chips Act have only accelerated this development.

With these numbers, the total Chinese PPE is projected to pass 100B$ within a year. That near-50% growth will give China an estimated 18% of global capacity.

If the semiconductor manufacturing capacity of South Korea and Taiwan is excluded, China will have a third of the global capacity by the end of the year. For every year that passes, China will gain more capacity than even the US.

With the exclusion, the total capacity, based on PPE, will decline from 550B$ to 300B$.
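The scenario’s arithmetic can be checked in a few lines. This is a back-of-the-envelope sketch using the estimates given above; all figures are approximate and in billions of dollars.

```python
# Back-of-the-envelope check of the PPE scenario using the article's estimates
# (all figures in $B and approximate).
global_ppe = 550                              # total semiconductor PPE today
china_ppe_now = 0.12 * global_ppe             # China's ~12% share -> ~66
annual_tool_buys = 31 + 4                     # Western tool run rate + Chinese tools
china_ppe_next = china_ppe_now + annual_tool_buys

print(round(china_ppe_next))                  # -> 101, "passing 100B$ within a year"
print(round(china_ppe_next / global_ppe, 2))  # -> 0.18, the estimated 18% share

remaining = 300                               # world PPE with Taiwan and Korea excluded
print(round(china_ppe_next / remaining, 2))   # -> 0.34, roughly a third of capacity
```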

Revenue versus Capacity

As global semiconductor revenue has averaged around 550B$ over the last couple of years, and the combined PPE is also around 550B$, it can be assumed that 1$ of PPE generates 1$ of annual revenue. This might be too crude for accountants, but it works for this scenario.

Excluding Taiwanese and South Korean manufacturing, the world would be set back 15 years from a manufacturing standpoint. China would command 1/3rd of the capacity at the end of this year. This share would grow significantly every year any military action is delayed. Time is on China’s side.

How this will be devastating to the US

Based on the PPE analysis, the US would be a significant winner in the Three Reunifications Scenario, but there are some complications. The first is that the US capacity is concentrated in one company, Intel, which is not in the best of shapes. As with the other US semiconductor companies, Intel’s business heavily depends on the supply chain in Asia, especially the high-tech supply chain of Taiwan, Korea and Japan.

The destruction of the Semiconductor Supply Chain

The destruction will not only impact semiconductor manufacturing but also the semiconductor supply chain. A Korean conflict could impact US manufacturing positively, while the supply chain implications would be minimal. A Taiwanese conflict, however, would be negative for the US. For the large US fabless companies it would be catastrophic, as the US relies heavily on the high-tech Taiwanese supply chain.

The smaller Japanese and European Semiconductor industries are more sheltered and self-sufficient from a supply chain perspective and would likely fare better than the US.

China, however, would emerge as the winner. The Chinese semiconductor industry follows the deep tradition of building the entire supply chain locally, from raw goods to finished products.

As semiconductor materials manufacturing is not as PPE intensive as semiconductor manufacturing, it is more relevant to look at revenue:

According to SEMI research, in 2023 more than 35% of all semiconductor materials production outside Taiwan and South Korea was in China.

Taking out most of the leading-edge manufacturing and supply chain would leave the Chinese semiconductor industry in the sweet spot, ready to serve the world. As the Chinese are experts in value chains, they are not likely to sell their semiconductors directly to the West, but they will be more than happy to sell them wrapped in products. Your next electronic products – phones, PCs, and cars – might be Chinese, though you would have to wait until AI is back on the agenda, as the Semiconductor clock would have been turned back a few years.

The scenario would serve many of Xi’s goals: reunification, however ugly; an external enemy humiliated by its non-intervention; and China emerging as the number one country in semiconductors (with mature nodes turned leading edge), holding most of the world’s high-tech manufacturing.

As I started by saying: Fortunately, this is just a scenario….

Also Read:

Blank Wafer Suppliers are not Totally Blank

What’s all the Noise in the AI Basement?

Ncredible Nvidia


Automotive Designs Have No Room for Error!

by Daniel Nenni on 07-02-2024 at 6:00 am

Automotive designs demand a high level of fault tolerance, and one of the methods to achieve this is to use error correcting codes (ECC). This Wikipedia page ECC Memory gives a flavor, though that article concentrates on memory and we are interested in wider applications using a form of forward error correction. This technique can be applied both to memories (to detect and/or correct data storage errors) and to interconnect buses (to detect and/or correct transmission errors). There are various levels of protection possible, and for automotive the choice of SECDED (Single Error Correction, Double Error Detection) is popular. There are, however, multiple ways to implement this, such as choices of word size and of the particular error correcting code. There are trade-offs between efficiency (the number of extra bits required by the coding), degree of protection, and latency to consider.
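To make the SECDED idea concrete, here is a minimal sketch in Python, assuming an 8-bit data word and a textbook extended Hamming code (12 Hamming bits plus one overall parity bit). This is an illustrative toy, not any particular IP vendor’s implementation; real automotive IPs pick their own word sizes and parity equations.

```python
# Minimal SECDED sketch: extended Hamming code over an 8-bit data word.
# 13-bit codeword = 12 Hamming bits (parity at power-of-two positions)
# plus one overall parity bit at index 0 for double-error detection.

PARITY_POSITIONS = [1, 2, 4, 8]
DATA_POSITIONS = [p for p in range(1, 13) if p not in PARITY_POSITIONS]

def encode(data: int) -> list:
    """Encode 8 data bits into a 13-bit SECDED codeword."""
    bits = [0] * 13
    for i, pos in enumerate(DATA_POSITIONS):           # scatter data bits
        bits[pos] = (data >> i) & 1
    for p in PARITY_POSITIONS:                         # even parity per coverage group
        bits[p] = sum(bits[i] for i in range(1, 13) if i & p) & 1
    bits[0] = sum(bits[1:]) & 1                        # overall parity bit
    return bits

def decode(bits: list):
    """Return ('ok'|'corrected', data) or ('double_error', None)."""
    syndrome = 0
    for p in PARITY_POSITIONS:                         # recompute each parity group
        if sum(bits[i] for i in range(1, 13) if i & p) & 1:
            syndrome |= p
    overall = sum(bits) & 1
    if syndrome and overall:                           # single error: syndrome names the bit
        bits = bits.copy()
        bits[syndrome] ^= 1
        status = "corrected"
    elif syndrome:                                     # groups fail but overall parity is even
        return "double_error", None
    else:                                              # clean, or the overall bit itself flipped
        status = "corrected" if overall else "ok"
    data = 0
    for i, pos in enumerate(DATA_POSITIONS):           # gather data bits back
        data |= bits[pos] << i
    return status, data
```

Flipping any single bit of a codeword decodes back to the original data with status "corrected"; flipping two bits is flagged as "double_error" rather than silently mis-corrected.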

At first glance, the selection of automotive-qualified IP incorporating ECC features would seem to be a boon for the designer of an automotive system on chip. It conjures the idea that one can just buy the parts and plug them together. However, there are many factors to consider, and traps for the unwary. The use of ‘end-to-end’ ECC for a data route between two IP blocks is attractive in terms of simplicity and efficiency, but only works if the IPs at each end support the same code and operate at the same word size. Unfortunately, they often don’t. Furthermore, even when the two ends of a data route are compatible, the Network on Chip (NoC) that routes data between them may combine or split the data transactions into larger or smaller word sizes for good reasons of address alignment, network performance and/or protocol conversion.

An additional challenge is ensuring that the various IPs use the same equations. SECDED can be implemented with different sets of parity equations, so the two ends must be adapted while preserving the encoding/decoding equations – a problem when the codes are owned by different IP providers. This forces the use of dedicated bridges that know the encoding and can decode and correct.
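The incompatibility can be illustrated with a toy example. The two vendors and their parity-coverage masks below are invented for illustration: both protect the same 8 data bits with equally strong parity, but with different equations, so the codewords disagree and a bridge must fully decode one format before re-encoding the other.

```python
# Toy illustration of the bridging problem: two hypothetical SECDED-style
# schemes over the same 8 data bits, differing only in which data bits each
# parity bit covers. The masks are invented for illustration.

def encode(data, masks):
    """Append one even-parity bit per coverage mask to the 8 data bits."""
    data_bits = [(data >> i) & 1 for i in range(8)]
    return data_bits + [bin(data & m).count("1") & 1 for m in masks]

VENDOR_A_MASKS = [0x0F, 0x33, 0x55, 0xFF]   # hypothetical coverage groups
VENDOR_B_MASKS = [0xF0, 0xCC, 0xAA, 0xFF]   # same data, different equations

# Same payload, incompatible codewords -> a NoC bridge must decode and re-encode.
print(encode(0x01, VENDOR_A_MASKS) == encode(0x01, VENDOR_B_MASKS))  # -> False
```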

Whilst that is what a NoC is supposed to do, this can eliminate the possibility of end-to-end ECC altogether. In those circumstances, it may be necessary to encode and decode the ECC protection several times in different ways as the data makes its way around the SoC. This adds a great deal of complexity, not only to implementation, but to verification. On the plus side, it also adds more protection: with multiple independent stages of error correction, you can cope with more errors.

This is just one of the many complexities that have to be considered when designing ultra-complex custom chips for automotive applications. We have years of experience in this area with skilled engineers who know exactly how to design automotive chips that will ensure that IP blocks work together as described above. We have just taped out such a design for a Tier 1 automotive company so, if you want an automotive design where all the errors are correctly corrected, contact us now using this LINK.

Also Read

Sondrel’s Drive in the Automotive Industry

Transformative Year for Sondrel

Sondrel Extends ASIC Turnkey Design to Supply Services From Europe to US


Career in EDA Versus Chip Design: Solving the Dilemma

by Jai Pollayil on 07-01-2024 at 6:00 am

Chip design and Electronic Design Automation (EDA) are two sides of the same coin in the semiconductor industry. Both fields are critical for developing the advanced integrated circuits (ICs) that power our modern world. This article explores the differences between a career in chip design and EDA, drawing on my personal experience transitioning from chip design to leading the global Application Engineering team in Ansys’ Semiconductor division.

My Journey from Chip Design to EDA

I began my career in the year 2000 as a circuit design engineer at Alliance Semiconductor (now part of ON Semiconductor). There, I focused on designing memory chips using circuit schematic/layout capture tools and spice simulators. In this role, the emphasis was on the engineer’s skillset rather than an in-depth understanding of EDA tools.

Transitioning to digital IC design at Texas Instruments (TI) exposed me to the significant reliance on EDA tools in the digital design domain. The success of a design heavily depended on both the engineer’s expertise and the capabilities of the EDA tool itself. This realization sparked my interest in the EDA world.

The Allure of EDA

While at TI, I interacted with EDA Application Engineers who played a crucial role in helping chip designers achieve optimal results. Witnessing their expertise, I recognized that for chip backend designers, true mastery required not only design skills but also a deep understanding of the underlying EDA tools. This realization paved the way for my move to the EDA industry.

The Advantages of an EDA Career
  • Technical Exposure: Working in EDA as an application or product engineer offers exposure to advanced technologies like BSPDN and SPR well before they reach chip designers. You become part of a larger ecosystem that shapes the future of semiconductor technology.
  • Cross-Team Collaboration: EDA companies like Ansys foster a collaborative environment where chip design, package design, and board design are integrated, unlike the compartmentalized structure often found in semiconductor companies. This collaboration is becoming essential for designing complex multi-die/3D IC systems.
  • Business Acumen: As an application engineer, you’re closer to the revenue stream, collaborating with sales teams and witnessing the direct impact of your efforts on the company’s growth.
  • Work Culture and Benefits: EDA companies typically foster a more customer-focused work culture, with opportunities for travel and a better work-life balance compared to the chip design industry with its highly demanding tapeout deadlines.
  • Industry Stability: The EDA industry boasts excellent compensation and benefits, with a stable and resilient revenue model as the business deals in EDA are mostly based on multi-year contracts. The growing number of chip design companies and the increasing complexity of ICs further fuel the demand for EDA tools.
  • The Future of EDA: With the rise of Artificial Intelligence and Machine Learning, the role of EDA tools is becoming ever more crucial. Chip designers are relying more heavily on these tools, making the future of the EDA industry exceptionally bright.

Conclusion

Choosing between a career in chip design and EDA depends on your individual preferences. If you enjoy a hands-on chip design experience, areas like analog circuit design might be a better fit. But if you crave exposure to cutting-edge technologies, business insights, and a collaborative work environment, then EDA offers a compelling path. The increasing interdependence between these two fields creates exciting opportunities for those considering a career in either domain.

Also Read:

Synopsys Accelerates Innovation on TSMC Advanced Processes

SoC Power Islands Verification with Hardware-assisted Verification

Anirudh Fireside Chats with Jensen and Cristiano


Podcast EP232: The Evolution of Yield Learning and Silicon Debug with Marc Hutner

by Daniel Nenni on 06-28-2024 at 10:00 am

Dan is joined by Marc Hutner. Marc has been innovating in the areas of design, test, DFT and data analytics for more than 20 years. In June of 2023, he joined the Siemens EDA Tessent group as the product director of Silicon Learning, enabling how silicon data is applied to yield improvement and silicon debug. Previously, he worked for proteanTecs as senior director of product marketing and Teradyne as a system/silicon architect.

Marc explains how yield learning and silicon debug have evolved in the era of high complexity SoCs and multi-die systems. It turns out understanding how to harness the huge volume of test data available is a big part of a successful strategy. Marc discusses AI, ML and other techniques to improve yield insights that can result in millions of dollars of savings.

He describes the importance of people, processes, and technology and how they all relate to each other and the larger ecosystem for silicon production. He discusses some of the innovations at Siemens EDA and their impact.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


VLSI Technology Symposium – Intel describes i3 process, how does it measure up?

by Scotten Jones on 06-28-2024 at 6:00 am

At the VLSI Technology Symposium this week Intel released details on their i3 process. Over the last four nodes Intel has had an interesting process progression. In 2019, 10nm finally entered production with both high performance and high-density standard cells. 10nm went through several iterations, eventually resulting in i7, a high-performance cell only process. When we characterize process density, we always talk about the highest density logic standard cell: 10nm achieved just over 100 million transistors per millimeter squared (MTx/mm2), while i7 in 2022 only achieved approximately 64 MTx/mm2 because it only had high performance cells. i4 entered production in 2023 and is once again a high-performance cell only process, achieving approximately 130 MTx/mm2. Finally, i3 will enter production in 2024 on multiple Intel products, providing both high performance and high-density cells. The high-density cells achieve approximately 148 MTx/mm2.

The key dimensions for the processes are compared in figure 1.

Figure 1. Process Key Dimensions Comparison.

In figure 1 the values for 10nm and i7 are actual values measured by TechInsights on production parts; the i4 and i3 values are from the VLSI Technology papers on i4 [1] and i3 [2]. The cell height for i3 of 210nm is for high density cells; there is also a 240nm height high performance cell with the same density as the i4 process. The 240nm height high performance cells are 3 fin devices, the same as i4, and the 210nm high density cells are 2 fin devices with wide metal zero.

Figure 2 presents the density changes between the processes in graphical form.

From 32nm through 10nm Intel accelerated from 2.0x to 2.4x and then to 2.7x density improvements, but as is the case with other companies pushing the leading edge, i3 is a less than 2x density jump.

Figure 2. Intel Process Density Comparison.
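The sub-2x claim is easy to check against the approximate MTx/mm2 figures quoted above (the values are this article’s approximations, not exact measurements):

```python
# Node-over-node density ratios from the approximate MTx/mm2 figures above.
# i7 and i4 are high-performance-cell-only processes; 10nm and i3 headline
# densities are for their high-density cells.
density_mtx_mm2 = {"10nm": 100, "i7": 64, "i4": 130, "i3": 148}

# The jump from 10nm-class high-density cells to i3 is well under 2x:
print(round(density_mtx_mm2["i3"] / density_mtx_mm2["10nm"], 2))  # -> 1.48

# The i4 -> i3 step is smaller still:
print(round(density_mtx_mm2["i3"] / density_mtx_mm2["i4"], 2))    # -> 1.14
```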

Figure 3 is from the Intel presentation and presents more details on the i4 to i3 process shrink.

Figure 3. i4 to i3 Process Shrink.

The i3 process will offer multiple variants targeted at different applications.

  • i3 base process and i3-T with TSVs targeted at client, server and base die for chiplet applications.
  • i3-E offers native 1.2 volt I/O devices, deep N-wells, and long channel analog devices, and is targeted at chipsets and storage applications.
  • i3-PT targets high performance computing and AI with 9μm pitch TSVs and hybrid bonding.

Figure 4 summarizes the process variants.

Figure 4. i3 Process Variants.

i3 features:

  • Smaller M2 pitch than i4.
  • Better fin profile.
  • Utilizes dipoles to set threshold voltages, i4 does not use dipoles. Dipoles improve gate oxide reliability.
  • Offers 14, 18, and 21 metal layer options (counts include metal 0).
  • 4 threshold voltages: ULVT, LVT, SVT, HVT.
  • Contact optimization to provide less overlap capacitance.
  • More effective EUV usage, i4 was Intel’s first EUV process, i3 EUV processes are less complex.
  • Lower line resistance and capacitance than i4.
  • 5x lower leakage at the same drive current as i4.
  • Increased frequency and drive current with no hot carrier increase.
  • Interconnect delay is now approximately half of overall delay and the base process has better RC delay, the PT process is even better.
  • At the same power i3 HD cells provide 18% better performance than i4 HP cells.

Figure 5 presents the interconnect pitches for the 14, 18, and 21 metal options.

Figure 5. Interconnect Pitches.

Figure 6 illustrates the improvement in interconnect RC delay.

Figure 6. Interconnect RC Delay.

And finally, figure 7 illustrates the 18% performance improvement over i4.

Figure 7. Interconnect Delay Improvement.

During an analyst briefing question-and-answer session, Intel disclosed that the channels are all silicon – no silicon germanium channels. Also, i4 designs have been ported to i3, and Intel is seeing PPA improvements on the same designs.

i3 is currently in high volume manufacturing with multiple Intel products.

i3 clearly represents a significant improvement over i4.

Comparisons to competitors

i3 is a significant improvement over i4 but how does it compare to competitors?

TechInsights has analyzed the density, performance, and cost of i3 versus Samsung and TSMC processes. That analysis is available in the TechInsights platform here (free registration required).

Conclusion

Intel’s i3 process is a significant step forward from Intel’s i4 process with better density and performance. Intel’s i3 process is a more competitive foundry process than previous generations. Cost is more in-line with other foundry processes, density is slightly lower than Samsung 3nm and much lower than TSMC 3nm, but it has the best performance of the “3nm” processes.

Also Read:

What’s all the Noise in the AI Basement?

The Case for U.S. CHIPS Act 2

Intel is Bringing AI Everywhere


Three New Circuit Simulators from Siemens EDA

by Daniel Payne on 06-27-2024 at 10:00 am

The week before DAC I had the privilege of taking a video call with Pradeep Thiagarajan – Product Manager, Simulation, Custom IC Verification at Siemens EDA – to get an update on new simulation products. I’ve been following Solido for years now and knew that they were an early adopter of ML for Monte Carlo simulations with SPICE users. Using generative AI with LLMs has become quite popular, with vendors like OpenAI, Google and Microsoft all updating their product offerings. This trend is driving semiconductor design starts, increasing system complexity and rising semiconductor costs, all while our universities are not attracting enough students to become engineers. Many of my EDA and semiconductor peers are now at retirement age. So, AI has the promise of helping meet these challenges by improving productivity.

Over the years the software tools at Siemens EDA have infused AI technology where it makes sense:

  • Emulation, Prototyping – Veloce
  • Digital verification – Questa
  • Custom IC verification – Solido
  • DFT – Tessent
  • Place & Route Floor planning – Aprisa
  • DRC, LVS, DFM – Calibre
  • PCB design exploration – HyperLynx
  • PCB design – Xpedition

The news is that for Custom IC verification, there are three new product announcements under the name of Solido Simulation Suite. Let me show you where these new simulators fit into the product family.

Solido Simulation Suite has three new technologies with descriptive product names to fit different requirements:

  • Solido SPICE
  • Solido LibSPICE
  • Solido FastSPICE

The motivation for adding three new circuit simulators is to meet the growing need from 7nm and smaller nodes for higher performance and capacity while maintaining accuracy. Yes, the existing SPICE tools AFS and Eldo continue to serve customers and will remain supported and enhanced.

Three New Circuit Simulators

The Solido Design Environment was launched in 2023 at DAC, followed by the Solido Characterization Suite, and then the Solido IP Validation for QA was announced in May, so this news of three new simulators continues the progress at Siemens EDA. The Solido R&D headcount has doubled in just the past 3 years to bring all these advancements to life. Solido Sim AI is a technology inside of each new simulator to further accelerate the many internal computations, like: netlist parsing, model evaluation, partitioning, and matrix solving. The transistor models for 2nm and 3nm nodes are quite complex now, so using acceleration helps reduce run times.

For full SPICE accuracy, engineers would run Solido SPICE; for smaller designs and library characterization runs, Solido LibSPICE; and for the largest designs, including memories, Solido FastSPICE is the best choice. These simulators also integrate nicely with other Siemens EDA tools: mPower for EM/IR analysis, Calibre PERC for ESD analysis, Calibre 3DThermal for 3D IC electro-thermal analysis, and Tessent Defectsim for analog fault analysis.

Looking at customer circuits, Solido SPICE showed a 2X to 30X verification speedup at full SPICE accuracy, Solido LibSPICE delivered 2.3X to 5.5X speedups across a variety of library cell runs, and Solido FastSPICE touted speed improvements ranging from 13.8X up to 68X. Early customer endorsements were noted from tier-one semiconductor companies Silicon Laboratories and Samsung Electronics, along with a foundry endorsement from Intel Foundry.

Summary

The challenges of designing an SoC or chiplet at advanced nanometer process nodes continue to grow, demanding higher-capacity circuit simulation and integration with other analysis tools. Siemens EDA has just launched three new circuit simulators that span a spectrum from full SPICE accuracy to library characterization to the largest netlists, and early customer results show dramatic speed improvements while maintaining accuracy. Integration with tools for EM/IR, ESD and 3D IC analysis makes these new simulators more valuable. Most EDA vendors launch a new circuit simulator once every 3-5 years, but launching three new circuit simulators at one time is something I've never seen done before, so kudos to the development teams at Siemens EDA for pulling this feat off.



Podcast EP231: Details of the New Solido Simulation Suite with Sathish Balasubramanian
by Daniel Nenni on 06-27-2024 at 8:00 am

Dan is joined by Sathishkumar Balasubramanian. Sathish currently leads the product management and marketing organization for the Custom IC Verification (CICV) division at Siemens. He is an experienced product leader with more than 20 years of experience in the EDA industry.

Sathish's focus is on bringing value to the semiconductor ecosystem through innovative solutions. He is proficient at scaling product portfolio growth and expanding market share and revenue through a relentless focus on data-based execution and thought leadership. Prior to Siemens, Sathish held various product management, strategic business development and corporate development roles at Cadence Design Systems and Synopsys.

Sathish describes a major announcement being made at DAC for a new Solido Simulation Suite. This represents a new, AI-powered circuit simulation capability to address the special requirements of advanced designs such as those driven by AI technology.

Sathish provides details of the three new capabilities that are part of the Solido Simulation Suite. The first is Solido SPICE, a foundry-certified circuit simulator that provides significant speedup compared to other SPICE simulators. The second is Solido FastSPICE, which employs AI partitioning and multiresolution technology to deliver orders-of-magnitude speedup. And the third, Solido LibSPICE, is a purpose-built simulator focused on the special needs of foundation IP, ensuring robust performance for new foundation IP designs.

Sathish explains how these new offerings are integrated into the overall design flow to address all the requirements for advanced design verification.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Siemens Provides a Complete 3D IC Solution with Innovator3D IC
by Mike Gianfagna on 06-27-2024 at 6:00 am


Heterogeneous multi-die integration is gaining more momentum all the time. The limited roadmap offered by Moore's Law monolithic, single-die integration has opened the door to a new era of more-than-Moore heterogeneous integration. The prospects offered by this new design paradigm are exciting, and the entire ecosystem is jumping in to bring it all closer to reality. Standards to help make chiplets more widely available, new materials to increase density, and a host of design technologies to make it all work are underway. While all this promises to deliver an integrated design capability, the elephant in the room is where to start. High-quality, well-integrated up-front planning at the system level is a necessity to ensure success for the next wave of designs. Siemens Digital Industries Software recently announced a comprehensive new approach to 3D IC design, and they seem to have gotten it right regarding how to scope the problem for success. Let's examine how Siemens provides a complete 3D IC solution with Innovator3D IC.

What Problem Needs Solving?

Keith Felton

I recently had the opportunity to chat with Keith Felton, product marketing manager at Siemens for High-Density Advanced Packaging (HDAP) solutions. Keith has a long history of solving advanced design and packaging challenges.

Keith explained that there is indeed a lot of work going on to address the challenges of tasks such as place and route for multi-die heterogeneous designs. All of that is quite important, but Keith pointed out that early feasibility planning and analysis of the system, before implementation begins, is a critical step that needs to be addressed first. The questions that must be answered before implementation begins include:

  • What are the system thermal considerations?
  • Can I get the right level of power to all parts of the system?
  • How will the substrate and the overall package behave under typical and extreme operation?

This is just a summary of a much longer list of questions that must be addressed early in the design flow and at the system level. This is really the only way to avoid downstream re-work that can have a substantial negative impact. Keith explained that part of the innovation here is to build a digital twin model of the system early. Using this model, a design cockpit can be created that gives forward visibility to all downstream tools, allowing tradeoffs to be assessed and roadblocks avoided before detailed implementation begins.

This made perfect sense to me. Let’s look at some of the details of the announcement.

How Siemens Provides a Complete 3D IC Solution with Innovator3D IC

Innovator3D IC delivers the fastest and most predictable path for planning and heterogeneous integration of ASICs and chiplets using the latest 2.5D and 3D semiconductor packaging technology platforms and substrates. The technology provides a unified cockpit for design planning, prototyping and predictive multi-physics analysis. This cockpit constructs a power, performance, area (PPA) and cost-optimized digital twin of the complete semiconductor package assembly, which in turn drives implementation, multi-physics analysis, mechanical design, test, signoff, and release to fabrication and manufacturing through a managed and secure design IP digital thread conduit.

Innovator3D IC is architected around the system technology co-optimization (STCO) methodology process developed by IMEC. STCO is utilized throughout prototyping and planning, design, sign-off, and manufacturing hand-off, concluding with comprehensive verification and reliability assessment.

The figure below summarizes the broad set of capabilities delivered by Innovator3D IC.

Innovator3D IC Heterogeneous Integration Cockpit

Although the cockpit is directly integrated with the extensive Siemens Xcelerator technology portfolio, it supports the integration of third-party point solutions, recognizing that customers may have third-party tools in their current design flows that they wish to continue using. Innovator3D IC also makes extensive use of AI technology for co-optimization, as shown in the figure below.

Innovator3D IC AI Infused Co Optimization

Industry standards support is also an important part of the overall solution. A key area is the commitment to and support for the growing 3Dblox™ standard, which enables EDA tool interoperability, bringing the benefits of improved productivity and efficiency to end users and customers of 3D IC system-level designs.

It is also important to ensure frictionless adoption and consumption of existing and new die-to-die interface IP, such as UCIe and BoW. The Open Compute Project Chiplet Design Exchange Working Group (OCP CDX) has enabled direct consumption of standardized chiplet models that will be provided by the emerging commercial chiplet ecosystem.

Predictive multiphysics analysis is also an important part of the solution. During prototyping and planning it is critical to evaluate the performance of all design scenarios before committing to implementation. Innovator3D IC integrates directly with power, signal, thermal, and mechanical stress analyses so that a design scenario can be evaluated quickly, and any issues explored and resolved prior to detailed design implementation. This shift-left approach prevents costly and time-consuming downstream rework and sub-optimal results.

To Learn More

According to the announcement, Innovator3D IC is expected to be available later in 2024. You can learn more about Siemens' Innovator3D IC software here. You'll find a lot of useful information there, including a very informative brochure. You can read the complete press release here. And that's how Siemens provides a complete 3D IC solution with Innovator3D IC.



New EDA Tool for 3D Thermal Analysis
by Daniel Payne on 06-26-2024 at 10:00 am


An emerging trend in IC design is the growing use of chiplets and even 3D IC designs, as the disaggregated approach has economic and performance benefits over a single SoC. There are thermal challenges with chiplets and 3D IC designs, which means thermal analysis has become more important. I just spoke with Michael White, Sr. Director in the Calibre group at Siemens EDA, to get an update on their newest product, Calibre 3DThermal.

3D IC cross-section

The emphasis with Calibre 3DThermal is on enabling shift-left: helping IC designers get through analysis and verification more efficiently by doing early feasibility analysis of their IP, chiplet, SoC and package, eliminating surprises at the end of a project. This approach allows a team to start thermal analysis quite early, even in the concept phase with very few details, just to get the analysis process started. Siemens EDA has an array of tools spanning IC, package and systems, and now these tools can communicate through thermal analysis.

Siemens EDA thermal flows

This is another example of EDA enabling multi-physics analysis, as thermal issues also impact power, stress, timing and variation. Calibre 3DThermal has been designed to be easy to learn and use. The Simcenter Flotherm tool has been used for years in package and system thermal analysis, and with 3DThermal, design teams can pass information back and forth from inside the package outwards to the system. As a design progresses and more details become available, annotated SPICE netlists are sent to Solido and other circuit simulators.

Early feasibility analysis helps design teams make decisions about floor planning, gauge the impact of using heatsinks or adding thermal TSVs, and see how close they are to meeting power, thermal and timing goals. Data used in Flotherm can include an embedded, abstracted model of the package, even encrypted to hide any sensitive details or trade secrets.
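The kind of back-of-envelope check this early feasibility analysis automates can be sketched with a simple series thermal-resistance stack. All values below are hypothetical; the tool itself solves full 3D conduction rather than a one-dimensional stack:

```python
# Junction temperature from a series thermal-resistance stack
# (hypothetical values, one-dimensional approximation).
power_w = 15.0           # chiplet power dissipation, watts
r_junction_case = 0.5    # degC per watt
r_case_heatsink = 0.2
r_heatsink_air = 1.8     # a better heatsink lowers this term
ambient_c = 45.0
t_junction = ambient_c + power_w * (r_junction_case + r_case_heatsink + r_heatsink_air)
# 45 + 15 * 2.5 = 82.5 degC, to compare against the junction temperature limit
```

Swapping in a heatsink with lower thermal resistance and re-running the sum is exactly the sort of what-if that is cheap at the concept phase and expensive after implementation.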


Calibre 3DThermal to Flotherm

Inside the 3DThermal tool is an optimized version of the Flotherm solver for even better capacity when analyzing large IC designs. The 3DThermal tool can be used by a package engineer, systems designer or IC designer to perform analysis. Engineers add details like LEF/DEF and GDS/OASIS files. Fast and accurate results are made easier through automatic gridding, automatic time-step generation and automatic chip thermal model creation. The 3Dblox language started by TSMC is also supported.

3DThermal Screenshots

UMC and their customers collaborated with Siemens EDA during the development of Calibre 3DThermal.

Summary

It's a busy week at DAC, and Siemens EDA has just announced another addition to the growing Calibre family of tools with their new 3DThermal product, enabling chiplet and 3D IC designers to start thermal analysis early, then proceed throughout the design process to work with package and systems engineers to meet thermal, power and timing goals. Multi-physics analysis is enabled with this approach, allowing teams to shift-left on tough problems. Expect to see announcements from the major foundries on their support of Calibre 3DThermal.

Read the press release from Siemens EDA online.



Novelty-Based Methods for Random Test Selection. Innovation in Verification
by Bernard Murphy on 06-26-2024 at 6:00 am


The effectiveness of randomized testing at improving coverage declines as total coverage improves. Attacking stubborn coverage holes can be helped by adding learned, novelty-based guidance to random test selection. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month's pick is Using Neural Networks for Novelty-based Test Selection to Accelerate Functional Coverage Closure, published at 2023 IEEE AITest. The authors are from Bristol University (UK) and SiFive.

Randomized tests already benefit from ML methods to increase coverage in lightly covered regions of the state space. However, they struggle with coverage holes where there are no or few representative tests from which learning can be derived. This paper suggests learned methods to select novel tests from a test pool based on each test's dissimilarity from tests already simulated.

Paul’s view

AI again this month, this time AI to guide randomized simulation rather than to root-cause bugs. In commercial EDA, AI-driven random simulation is hot and beginning to deploy at scale.

This paper focuses on an automotive RADAR signal processing unit (SPU) from Infineon. The SPU has 265 config registers and an 8,400-event test plan. Infineon tried 2 million random assignments of values to the config registers to cover their test plan.

The authors propose using a NN to guide config register values to close coverage faster. Simulations are run in batches of 1,000. After each batch the NN is re-trained and used to select, from a test pool of 85k configs, the next batch of 1,000 configs that it scores highest. Configs that are more different ("novel") from previously simulated configs score higher. The authors try three NN scoring methods:

  • Autoencoder: NN determines only novelty of the config. The NN is a lossy compressor/decompressor for config register values. The 265 values for a config are compressed down to 64 (as trained by configs simulated so far) then expanded back to 265 (as trained same way). The bigger the error in decompression the more “novel” that config is.
  • Density: NN predicts coverage from config register values. The novelty of a new config is determined by inspecting hidden nodes in the NN and comparing to the values of these nodes for previously simulated configs. The bigger the differences the more novel that config is.
  • Coverage: NN predicts coverage from config register values. A final layer is added to the NN with only one neuron, trained to compute a novelty score as a weighted sum of predicted coverage over 82,000 cover events. The weight of each event is based on its rarity – events rarely hit by configs simulated so far are weighted higher.
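The autoencoder method is simple enough to sketch. Below, PCA via SVD stands in for the paper's trained NN autoencoder (the paper uses a real neural network, and the pool here is downsized from 85k configs), but the selection logic is the same: compress, reconstruct, and treat reconstruction error as novelty:

```python
import numpy as np

def fit_linear_autoencoder(configs, k=64):
    # PCA via SVD as a linear stand-in for the paper's NN autoencoder:
    # keep the top-k principal directions of the configs simulated so far.
    mean = configs.mean(axis=0)
    _, _, vt = np.linalg.svd(configs - mean, full_matrices=False)
    return mean, vt[:k]                    # k x 265 "encoder" rows

def novelty_scores(pool, mean, basis):
    # Compress each candidate config to k dims, reconstruct it, and use
    # the reconstruction error (MSE) as its novelty score.
    centered = pool - mean
    recon = centered @ basis.T @ basis
    return ((centered - recon) ** 2).mean(axis=1)

rng = np.random.default_rng(0)
simulated = rng.normal(size=(1000, 265))   # configs simulated so far
pool = rng.normal(size=(5000, 265))        # candidate pool (85k in the paper)
mean, basis = fit_linear_autoencoder(simulated)
scores = novelty_scores(pool, mean, basis)
next_batch = pool[np.argsort(scores)[-1000:]]   # most novel 1,000 configs
```

Configs the compressor has "seen the likes of" reconstruct well and score low; configs unlike anything simulated so far reconstruct poorly and get picked for the next batch.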

Results are intriguing: the coverage-NN achieves the biggest improvement, around a 2.13x reduction in simulations needed to hit 99% and 99.5% coverage. However, it's quite noisy, and repeating the experiment 10 times reduces the gain to 1.75x. The autoencoder-NN is much more stable, achieving 1.87x best case and a matching 1.75x on average, even though it doesn't consider coverage at all! The density-NN is just bad all over.

Great paper, well written, would welcome follow-on research.

Raúl’s view

This is about using neural networks to increase functional coverage by finding "coverage holes". In previous blogs we reviewed the use of ML for fault localization (May 2024), simulating transient faults (March 2024), verifying SW for cyber-physical systems (November 2023), generating Verilog assertions (September 2023), code review (July 2023), detecting and fixing bugs in Java (May 2023), and improving random instruction generators (February 2023): a wide range of functional verification topics tackled by ML!

The goal is to choose tests generated by a constrained random test generator, favoring "novel" tests on the assumption that novel tests are more likely to hit different functional coverage events. This has been done before with good results, as explained in section II. The authors build a platform called Neural Network based Novel Test Selector (NNNTS). NNNTS picks tests in a loop, retraining three different NNs for three different similarity criteria. These NNs have 5 layers with 1-512 neurons in each layer. The three criteria are:

  • Calculates the probability of a coverage event being hit by the input test
  • Reduces an input test into lower dimensions and then rebuilds the test from the compressed dimensions. The mean squared difference that expresses the reconstruction error is considered as Novelty Score.
  • Assumes that for a simulated test, if a coverage event hit by the test is also often hit by other simulated tests, then the test is very similar to the other tests in that coverage-event dimension. The overall difference of a simulated test in the coverage space is the sum of the difference in each coverage-event dimension.
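The rarity-weighting idea behind the coverage-based criterion can be illustrated in a few lines. This toy sketch (my numbers, not the paper's network) weights each cover event by the inverse of how often it has been hit so far, then scores candidates by the weighted sum of events they are predicted to hit:

```python
import numpy as np

# Toy rarity weighting: rarely hit events get high weight, and a candidate
# test's score is the weighted sum of the events it is predicted to hit.
hits = np.array([[1, 0, 1],        # rows: simulated tests
                 [1, 0, 0],        # cols: cover events
                 [1, 1, 0]])
hit_rate = hits.mean(axis=0)       # event 0 is common, events 1 and 2 rare
weights = 1.0 / (hit_rate + 1e-6)  # rare events weigh more
predicted = np.array([[0, 1, 1],   # predicted coverage of two candidates
                      [1, 0, 0]])
scores = predicted @ weights       # the candidate hitting rare events wins
```

The candidate that targets the rarely hit events scores far higher than the one re-hitting the common event, which is exactly the bias the selector wants.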

They test against a signal processing unit of an ADAS system. The production project consumed six months of simulating ~2 million constrained random tests on almost 1,000 machines and EDA licenses. Each test costs 2 hours of simulation on average, there is some manual intervention, and in the end 85,411 tests are generated.

In the experiment, 100 tests from all generated tests are randomly picked to train NNNTS, and then 1,000 tests are picked at a time before retraining, until reaching a coverage of 99% and 99.5%. This is repeated many times to get statistics. Density does the worst, saving on average 22% over random selection of tests to achieve 99% coverage and 14% to achieve 99.5%. Autoencoder and Coverage perform similarly, saving on average about 45% to reach 99% and 40% to reach 99.5% coverage.
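Paul's simulation-count reductions and Raúl's percentage savings are roughly two views of the same results, related by savings = 1 - 1/speedup; for example, a 1.75x average reduction corresponds to about 43% fewer simulations:

```python
def savings_from_speedup(speedup):
    # Fraction of simulations saved when the same coverage is reached
    # with 1/speedup of the original simulation count.
    return 1.0 - 1.0 / speedup

print(round(savings_from_speedup(1.75), 2))  # 0.43
print(round(savings_from_speedup(2.13), 2))  # 0.53
```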

This work is impressive, as it can reduce the time and cost of functional verification by 40% (in this example, six months, 1,000 machines, EDA licenses and people), though the paper does not specify the cost of running NNNTS. The paper reviewed in February 2023 achieved a 50% improvement on a simpler test case with a different method (DNNs were used to approximate the output of the simulator). I think enhancing and speeding up coverage in functional verification is one of the more promising areas for the application of ML, as this paper shows.

Also Read:

Using LLMs for Fault Localization. Innovation in Verification

A Recipe for Performance Optimization in Arm-Based Systems

Anirudh Fireside Chats with Jensen and Cristiano