
What’s New with Cadence Virtuoso?
by Daniel Payne on 04-19-2023 at 10:00 am

Virtuoso Place and Route

It was back in 1991 that Cadence first announced the Virtuoso product name, and here we are 32 years later with the product alive and doing quite well. Steven Lewis from Cadence gave me an update on something new that they call Virtuoso Studio, and it’s all about custom IC design for the real world. In those 32 years we’ve seen the semiconductor process march along Moore’s Law from 600nm planar CMOS, scaling down to the FinFET era below 22nm, and reaching GAA at the 3nm node. Clearly the EDA tool demands have changed, as smaller nodes brought on new physical effects that needed to be modeled and simulated to ensure first-silicon success.

The focus of Cadence Virtuoso Studio is to help IC designers take on present-day challenges in six areas:

  • Increased process complexity
  • Handling 10,000s of circuit simulations
  • Design automation and circuit migration
  • Heterogeneous integration
  • AI
  • Sign-off, in-design verification and analysis

The Virtuoso ADE (Analog Design Environment) allows circuit engineers to explore their analog, mixed-signal and RFIC designs through schematic capture and circuit simulation. The architecture of Virtuoso ADE has been revamped for better job control, reduced RAM usage, and faster simulations by using the cloud. As one example, the RAM required to run Spectre on 10,000s of simulations was reduced from 420MB down to just 18MB for simulation monitoring, while the RAM for expression evaluations decreased from 420MB to just 280MB.

Updates to the Virtuoso Layout Suite include four choices of place and route technology, each suited to the unique task at hand through the Virtuoso environment:

Four P&R Technologies

DRC and LVS runs are part of physical verification, and running these in batch mode, fixing and repeating, leads to long development schedules. In-design verification allows the interactive use of DRC and LVS while working on an IC layout, so feedback on what to change is quickly highlighted, accelerating productivity. A layout designer using Virtuoso Layout Suite benefits from in-design verification using the Pegasus DRC and LVS technology.

Chiplets, 2.5D and 3D packaging span the traditionally separate realms of the PCB, package and IC design domains. Virtuoso Studio enables the co-design and verification of packages, modules and ICs across these domains.

Looking into the near future you can expect to see details emerge about how AI is being applied to automatically go from an analog schematic into layout based on machine learning and specifications. These auto-generated trial layouts will further speed up a very labor-intensive process. A second area where AI will be applied is migrating custom analog IP to a new process node. Stay tuned.

Analog IP migration

Early customers of Virtuoso Studio include Analog Devices for the co-design of IC and package, leading-edge IC consumer designs at MediaTek, and AI-based process migration at Renesas.

Summary

Virtuoso Studio has put some impressive new features into release 23.1 that IC design teams can start using to be more productive. The Virtuoso infrastructure has changed to meet the challenges of Moore’s Law, campaigns with 10,000s of circuit simulations are practical, RFIC and 2.5D/3D module co-design are supported, in-design DRC/LVS verification takes much less time, and AI is being applied to automate analog tasks.

Related Blogs


Electronics Production in Decline
by Bill Jewell on 04-19-2023 at 6:00 am

Unit Change Electronics 2023

Shipments of PCs and smartphones were weak in 2022 and continue to decline in 2023. For the first quarter of 2023, IDC estimated PC shipments dropped 29% from a year earlier. This follows a 28% year-to-year decline in 4Q 2022. For the year 2022, PC shipments declined 16% from 2021, the largest year-to-year decline in the history of PCs. The outlook for the rest of 2023 is not encouraging, with Gartner forecasting a 12% decline in year 2023 PC shipments. The PC market collapsed after the end of the boom during the COVID-19 pandemic. Global economic uncertainty is contributing to the current PC market weakness.

IDC estimated smartphone shipments in 4Q 2022 dropped 18% from a year earlier, resulting in an 11% decline in year 2022 shipments, the largest decline ever. Smartphones rebounded from a 7% decline in 2020 (driven by pandemic-related production slowdowns) to 6% growth in 2021. As with PCs, the current economic uncertainty is impacting smartphone shipments. DigiTimes estimates 1Q 2023 smartphone shipments dropped 13% from a year ago. IDC projects a 1% decline in smartphone shipments for the year 2023.

The weakness in PCs and smartphones is reflected in production data from China. Although some electronics manufacturing has shifted out of China in the last few years, China still accounts for about two-thirds of smartphone production (according to Counterpoint Research) and the vast majority of PC production. China’s three-month-average change versus a year ago (3/12) for PCs turned negative in April 2022 and the decline has been greater than 20% for the last three months ending in February 2023. Mobile phone production change (primarily smartphones) was negative for seven of the last nine months, with the decline in the last two months greater than 10%. Total China electronics production measured in local currency (yuan) showed 3/12 change turning negative in January 2023, the first decline since the early months of the pandemic in 2020.
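
For readers who want to reproduce the 3/12 metric, here is a minimal sketch of the calculation, a three-month moving average compared with the same average a year earlier. The monthly production numbers below are purely illustrative, not actual data.

```python
import pandas as pd

# Hypothetical monthly electronics production index (illustrative values only).
production = pd.Series(
    [102.0, 98.5, 101.3, 104.2, 107.8, 103.1, 99.4, 100.2, 105.6, 108.9, 104.3, 101.7,
     97.2, 95.8, 99.1, 100.5, 103.2, 98.7, 96.4, 97.9, 101.2, 103.8, 100.1, 98.3],
    index=pd.period_range("2021-03", periods=24, freq="M"),
)

# 3/12 change: three-month moving average versus the same average a year earlier.
three_month_avg = production.rolling(3).mean()
change_3_12 = (three_month_avg / three_month_avg.shift(12) - 1.0) * 100

print(change_3_12.dropna().round(1))
```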

Countries which have benefited from electronics production moving out of China are also showing a slowdown. Malaysia and Taiwan both reported strong electronics production growth in most of 2022, with 3/12 change mostly above 20% and approaching 30% in several months. In the latest data, 3/12 change dropped below 10% in January for Taiwan and in February for Malaysia. Vietnam 3/12 change was over 20% in 2Q 2022 but has been decelerating each month since June 2022. Vietnam’s 3/12 change turned negative at minus 1% in February 2023, the same as China. In March 2023, Vietnam was at minus 5%.

The more mature electronics manufacturing regions have not been as affected by the slowdown in PCs and smartphones. The United States, Japan, United Kingdom and the 27 countries of the European Union (EU 27) are less dependent on consumer electronics. Electronics production in these countries is largely industrial, automotive, communications infrastructure and enterprise computing. However, many of these countries are experiencing moderating growth. The EU 27 3/12 change in electronics production ranged primarily between 10% and 20% for most of 2022. In January 2023, the 3/12 change dropped to 7%. The U.S. experienced a moderate acceleration in 3/12 growth through most of 2022, increasing from 3% in January 2022 to over 8% in the last three months of 2022. U.S. growth has been decelerating in 2023, dropping below 6% in February. In contrast, Japan and UK electronics production was in decline for much of 2022. Japan’s 3/12 turned positive in October 2022 and was 4% in February 2023. The UK 3/12 turned positive in October 2022 at 0.9%. After a 0.7% decline in January 2023, the UK 3/12 bounced back to 0.8% in February 2023.

As stated in our February 2023 Semiconductor Intelligence newsletter, the outlook for semiconductors in 2023 is bleak. In addition to weak end demand in many electronics markets, many semiconductor companies are dealing with excess inventory and pricing pressures. Despite a few bright spots such as automotive (March 2023 newsletter), the overall semiconductor market will not recover until end demand of key end equipment such as PCs and smartphones reverses its decline.

Also Read:

Automotive Lone Bright Spot

Bleak Year for Semiconductors

CES is Back, but is the Market?


Mitigating the Effects of DFE Error Propagation on High-Speed SerDes Links
by Kalar Rajendiran on 04-18-2023 at 10:00 am

Pre and Post FEC BER as FEC Matrix size Reduces

As digital transmission speeds increase, designers use various techniques to improve the signal-to-noise ratio at the receiver output. One such technique is the Decision Feedback Equalizer (DFE) scheme, commonly used in high-speed Serializer-Deserializer (SerDes) circuits to mitigate the effects of channel noise and distortion. The DFE scheme relies on decisions about the levels of previous symbols (high/low) to correct the current symbol. This allows the DFE to account for distortion in the current symbol that is caused by the previous symbols.

However, DFE error propagation can occur when feedback signals are incorrect. Following are some of the situations that contribute to DFE error propagation. DFE circuits operate by using feedback to equalize the received signal, but this feedback can also amplify noise and distortion in the signal. In some cases, the feedback can overemphasize certain frequencies, leading to an increase in noise at those frequencies and an increase in the Bit Error Rate (BER). The mechanism also relies on accurate timing to make decisions about the incoming data. If there are timing errors in the feedback loop, these errors can propagate and cause additional errors in the received data. Nonlinear distortion in the transmission channel can also cause DFE circuits to make incorrect decisions about the received data. These errors can then propagate through the feedback loop and cause additional errors in the data. As the DFE scheme makes decisions based on previous decisions, errors in the feedback loop accumulate over time.
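
To make the propagation mechanism concrete, here is a minimal one-tap NRZ DFE sketch. It is my own illustration with made-up channel values, not the whitepaper's model: once a symbol is decided incorrectly, the wrong value is fed back, the residual ISI doubles for the next symbol, and a single noise hit can seed a short burst.

```python
import random

# Minimal 1-tap DFE sketch for NRZ symbols (+1/-1). The channel has one
# post-cursor tap h1 that the DFE cancels using its own previous decision.
h0, h1, noise_sigma, dfe_tap = 1.0, 0.45, 0.25, 0.45
random.seed(1)

tx = [random.choice((-1, 1)) for _ in range(100_000)]
prev_sym, prev_dec = 1, 1
errors = propagated = 0

for sym in tx:
    rx = h0 * sym + h1 * prev_sym + random.gauss(0.0, noise_sigma)
    # The DFE subtracts the ISI it *believes* the previous symbol caused.
    dec = 1 if (rx - dfe_tap * prev_dec) >= 0 else -1
    if dec != sym:
        errors += 1
        if prev_dec != prev_sym:      # previous decision was already wrong
            propagated += 1           # error propagated through the feedback
    prev_sym, prev_dec = sym, dec

print(f"BER ~ {errors / len(tx):.4f}; {propagated} errors immediately followed another error")
```

Counting how often an error immediately follows another error gives a crude view of the burst behavior that the whitepaper's statistical solver models rigorously.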

As noted above, DFE error propagation can lead to increased BER and reduced signal integrity. Increased BER in turn leads to data errors and reduced system performance. Reduced signal integrity results in increased jitter and reduced eye height, leading to errors in data transmission. As a result, DFE error propagation can significantly impact the performance of high-speed SerDes circuits and must be carefully managed to ensure reliable data transmission.

But existing statistical simulation methods cannot properly consider DFE feedback, and time-domain simulations become impractical for low error probabilities. A whitepaper by Siemens EDA presents a statistical solver that can find bit error ratio or symbol error ratio in the presence of isolated and burst DFE errors. The solver can accurately consider transmit and receive jitter, crosstalk aggressors, noise, and other impairments, and is useful in choosing forward error correction (FEC) schemes and parameters. The paper defines the essential building blocks of the statistical solver, including the main elements of statistical analysis, the convolution term for DFE feedback, the symbol error probability matrix, and the flow to find BER/SER metrics. It also discusses the use of a modified iteration process to find the probability distribution of symbol error groups and presents experimental results of the statistical solver.

The following are some excerpts from the whitepaper.

Building statistical eye that includes DFE errors

The method is formulated as a Markov chain whose transition operator transforms known error probabilities into new error probability matrices measured from the eye diagram. The process involves building a statistical eye diagram from which error probabilities are calculated. The iterations continue until successive error probability matrices become equal up to machine precision. The iterations are consistent and converge to the same solution regardless of the initial settings. Two examples are given to illustrate the convergence of the iterations with the statistical solver in the loop. The first example is the simulation of a 200GBASE-CR4 link, while the second is a CEI VSR channel with a 4-tap DFE.
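
As a scalar caricature of that fixed-point idea (my own sketch; the actual solver iterates a full symbol error probability matrix against the statistical eye), the steady-state error probability must be self-consistent with the probability that the previous decision, fed back through the DFE, was already wrong:

```python
import math

def qfunc(x):
    # Gaussian tail probability Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Toy link: main cursor h0, one post-cursor h1 cancelled by a 1-tap DFE,
# Gaussian noise sigma. All values are illustrative, not from the whitepaper.
h0, h1, sigma = 1.0, 0.45, 0.18

p_err_good_fb = qfunc(h0 / sigma)  # previous decision correct: ISI fully cancelled
p_err_bad_fb = 0.5 * (qfunc((h0 - 2 * h1) / sigma) + qfunc((h0 + 2 * h1) / sigma))

# Fixed-point iteration: the error probability must agree with the chance
# that the previous symbol was itself in error.
p = p_err_good_fb                       # initial guess, ignoring propagation
for i in range(100):
    p_new = (1.0 - p) * p_err_good_fb + p * p_err_bad_fb
    if abs(p_new - p) < 1e-18:
        break
    p = p_new

print(f"error probability ignoring propagation: {p_err_good_fb:.3e}")
print(f"steady-state error probability with DFE feedback: {p:.3e} (after {i} iterations)")
```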

Choosing FEC parameters

The size of FEC needed to correct the error groups can be determined by analyzing the probability distribution of error groups found from statistical simulation.

The simulations with FEC demonstrate the importance of knowing the burst error distributions for the proper choice of FEC parameters while keeping the FEC-induced latency to a minimum. The statistical analysis results can be relied upon for the purpose of FEC parameter optimization for a large variety of channels.
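
To make the connection concrete, here is a minimal sketch (my own illustration, not the whitepaper's method) of how a symbol error probability feeds an FEC sizing estimate for a Reed-Solomon code that corrects up to t symbols per codeword. It assumes independent symbol errors purely for simplicity; the whitepaper's point is that DFE bursts break exactly this assumption, which is why the burst-length distribution from the statistical solver should drive the calculation instead.

```python
from math import comb

def post_fec_codeword_error(n, t, p_sym):
    # Probability that a codeword of n symbols contains more than t symbol
    # errors, assuming independent symbol errors with probability p_sym.
    p_ok = sum(comb(n, k) * p_sym**k * (1 - p_sym)**(n - k) for k in range(t + 1))
    return 1.0 - p_ok

# Example: an RS(544,514)-class ("KP4") code corrects up to t = 15 symbols per codeword.
p_sym = 1e-4
for t in (7, 15):
    print(f"t = {t:2d}: post-FEC codeword error ~ {post_fec_codeword_error(544, t, p_sym):.2e}")
```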

Summary

The whitepaper presents a novel statistical simulation method that considers the effect of DFE error propagation in SerDes links. Simulation speeds are sufficient to make this approach a routine part of design processes that require multiple channel compliance evaluations and FEC parameter optimization. You can download the entire whitepaper from here.

Also Read:

Hardware Root of Trust for Automotive Safety

Siemens EDA on Managing Verification Complexity

Siemens Keynote Stresses Global Priorities


Can Attenuated Phase-Shift Masks Work For EUV?
by Fred Chen on 04-18-2023 at 6:00 am


Normalized image log-slope (NILS) is probably the single most essential metric for describing lithographic image quality. It is defined as the slope of the log of intensity, multiplied by the linewidth [1]: NILS = w · d(log I)/dx = (w/I) · dI/dx. Essentially, it gives the % change in width for a given % change in dose. This is particularly critical for EUV lithography, where stochastic variations in dose occur naturally.
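
As a quick numerical illustration (my own sketch, using an idealized cosine image rather than a real aerial-image simulation), NILS can be evaluated directly from a sampled intensity profile:

```python
import numpy as np

# Illustrative aerial image: a dark line of nominal width w on pitch p,
# modelled as a raised cosine (not a real lithography simulation).
p, w = 80.0, 40.0                             # nm
x = np.linspace(-p / 2, p / 2, 4001)
I = 0.6 - 0.5 * np.cos(2 * np.pi * x / p)     # intensity, darkest at x = 0

# NILS = w * d(log I)/dx evaluated at the nominal feature edge x = w/2
dlogI_dx = np.gradient(np.log(I), x)
edge = np.argmin(np.abs(x - w / 2))
print(f"NILS at the line edge: {abs(w * dlogI_dx[edge]):.2f}")
```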

A dark feature against a bright background has a higher NILS than a bright feature against a dark background. The reason is that the intensity in the denominator is much lower for the dark feature than for the bright feature. In this case, the NILS can also be made sufficiently high, e.g., > 2, with a sufficiently large mask bias, i.e., a dark feature size on the mask larger than 4x the targeted wafer dark feature size. However, if the dark feature on the mask is too large compared to the spacing between features, then too little light reaches the wafer. This means a longer exposure time is needed to accumulate a sufficient number of absorbed photons. A way around this is to use an attenuated phase-shift mask, or attPSM for short. The dark feature actually transmits part of the light through the mask while imparting a phase shift of 180 degrees. Both the transmission (or reflectivity in the case of EUV) and the phase are adjusted by the material and thickness of the dark feature on the mask.

Figure 1. The same NILS requires much longer exposure with a binary (T=0) mask than a 6% attPSM. This is based on a 4-beam image of a dark island feature (width w, pitch p) in an expected quadrupole illumination scenario.

In Figure 1, we see that with the same NILS, the log-slope curves are similar in shape, but the attPSM with less bias than the binary mask allows more light to get to the wafer, so that a long exposure is not needed.
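
As a much cruder 1D stand-in for the quadrupole, 4-beam imaging behind Figure 1 (my own simplification, not the article's model), the sketch below compares the image formed by the 0th and ±1st diffraction orders of a line/space pattern for a binary mask and a 6% attPSM at the same mask bias. The shifted, partially transmitting feature weakens the 0th order and strengthens the ±1st orders, which steepens the edge.

```python
import numpy as np

def edge_nils_and_peak(pitch, line, T):
    # Idealized coherent image from the 0 and +/-1 diffraction orders of a
    # line/space mask: background amplitude 1, dark line amplitude -sqrt(T)
    # (180-degree phase). Returns NILS at the nominal edge and the peak intensity.
    t = -np.sqrt(T)
    duty = line / pitch
    a0 = 1.0 + (t - 1.0) * duty                   # 0th-order amplitude
    a1 = (t - 1.0) * duty * np.sinc(duty)         # +/-1st-order amplitude
    x = np.linspace(-pitch / 2, pitch / 2, 4001)
    I = np.maximum((a0 + 2.0 * a1 * np.cos(2.0 * np.pi * x / pitch)) ** 2, 1e-12)
    dlogI_dx = np.gradient(np.log(I), x)
    edge = np.argmin(np.abs(x - line / 2))
    return abs(line * dlogI_dx[edge]), I.max()

for T, name in ((0.0, "binary"), (0.06, "6% attPSM")):
    nils, peak = edge_nils_and_peak(pitch=80.0, line=40.0, T=T)
    print(f"{name:10s}: edge NILS {nils:5.2f}, peak intensity {peak:4.2f}")
```

The absolute numbers from this coherent toy model are not meaningful; the point is only that, bias for bias, the shifted feature steepens the edge, so an attPSM can hit a given NILS target with less bias and hence deliver more light to the wafer.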

With the advantage of using an attPSM made clear, let’s turn now to why EUV hasn’t implemented one yet. A fundamental difference between an EUV mask and a DUV mask is that while there is only a single pass of light through the DUV mask, the EUV mask has two passes of light through the pattern layer, and in between passes the light propagates through a multilayer, which tends to absorb more light at higher angles [2].

Figure 2. While a DUV mask (left) is treated as a thin pattern layer, an EUV mask (right) is treated as two pattern layers separated by an absorbing layer, i.e., the multilayer.

Consequently, the phase shift (also no longer targeted at 180 degrees, but over 200 degrees [2]) is distributed over multiple layers and not easily tailored by adjusting one layer’s thickness. Moreover, the known candidate materials are hard to process with good control. Materials like ruthenium and molybdenum oxidize easily. A few nanometers’ change of thickness can add tens of degrees of phase shift [3]. The different individual wavelengths within the 13.2-13.8 nm range also have significantly different phase shifts as well as reflectivities from the multilayer [4]. Despite these complicating factors, designing attPSMs for EUV continues to be a topic of ongoing investigation [5].

References

[1] C. A. Mack, “Using the Normalized Image Log-Slope,” The Lithography Expert, Microlithography World, Winter 2001: http://lithoguru.com/scientist/litho_tutor/TUTOR32%20(Winter%2001).pdf

[2] C. van Lare, F. Timmermans, J. Finders, “Mask-absorber optimization: the next phase,” J. Micro/Nanolith. MEMS MOEMS 19, 024401 (2020).

[3] I. Lee et al., “Thin Half-tone Phase Shift Mask Stack for Extreme Ultraviolet Lithography,” https://www.euvlitho.com/2011/P19.pdf

[4] A. Erdmann et al., “Simulation of polychromatic effects in high NA EUV lithography,” Proc. SPIE 11854, 1185405 (2021).

[5] A. Erdmann, H. Mesilhy, P. Evanschitzky, “Attenuated phase shift masks: a wild card resolution enhancement for extreme ultraviolet lithography?,” J. Micro/Nanopattern. Mater. Metrol. 21, 020901 (2022).

This article first appeared in LinkedIn Pulse as Phase-Shifting Masks for NILS Improvement – A Handicap For EUV?

Also Read:

Lithography Resolution Limits: The Point Spread Function

Sino Semicap Sanction Screws Snugged- SVB- Aftermath more important than event

Resolution vs. Die Size Tradeoff Due to EUV Pupil Rotation

KLAC- Weak Guide-2023 will “drift down”-Not just memory weak, China & logic too


Podcast EP155: How User Experience design accelerates time-to-market and drives design wins
by Daniel Nenni on 04-17-2023 at 10:00 am

Dan is joined by Matt Genovese. Matt founded Planorama Design, a user experience design professional services company that designs complex, technical software and systems to be simple and intuitive to use while reducing internal development and support costs. Staffed with seasoned engineers and user experience designers, the company is headquartered in Austin, Texas.

Matt explains how Planorama Design helps hardware companies create simple, intuitive user experiences for the software they ship with their products. Matt explains the process used and the substantial business benefits their customers are seeing.

You can also learn more at the LIVE WEBINAR Matt will conduct on April 25 at 9AM Pacific time, entitled The ROI of User Experience Design: Increase Sales and Minimize Costs. You can register for the webinar HERE.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


AI Assists PCB Designers
by Daniel Payne on 04-17-2023 at 6:00 am


Generative AI is all the rage, with systems like ChatGPT, Google Bard and DALL-E being introduced with great fanfare in the past year. The EDA industry has also been keen to adopt the trend of using AI techniques to assist IC engineers across many disciplines. Saugat Sen, Product Marketing at Cadence, did a video call with me to explain what they’ve just announced: Allegro X AI. The history of Cadence includes both IC and systems-design EDA tools, plus verification IP and design services.

By using generative AI-driven PCB design in Allegro X AI there are three big goals:

  • Better and Faster Hardware Design
  • Improved PCB Designer Productivity
  • PCB Design Driven by Physics Engines

The typical PCB design flow has several sequential steps, where the most time-consuming parts are manual placement, followed by manual routing.

PCB Steps

With Allegro X AI the electrical engineer specifies design constraints and runs the tool, which automates board placement, power and ground routing, plus critical signal routing.

Allegro X AI tool flow

Just like ADAS in the automotive world is bringing new automation levels to the driving experience, constraint-driven PCB design offers a shift-left time savings for electronic systems. Allegro X AI does not replace PCB designers, rather it makes the team of electrical engineer plus PCB designer more productive, saving valuable time to market.

Providing these new automation levels to the PCB design flow requires the compute power found in the cloud. As one example, a PCB design that required 3 days for human-based placement now takes only 75 minutes with Allegro X AI while producing a 14% better wire length metric.

Global Placement comparison

A second PCB example that Saugat showed me was for global placement and the automation results were impressive:

  • 20 minute runtime with Allegro X AI versus 3 human days
  • 0 opens, 0 DRCs, with 100% timing passes
  • Wire length improved by 3% using AI

In this new, AI-powered approach, the PCB designer still needs to complete detailed routing, but stay tuned for future improvements. With Allegro X AI an electrical engineer can quickly look at the feasibility of a PCB design, without using a layout designer resource. The learning curve for this new feature is quick, so expect to explore results on the very first day of use. Expect to use this technology on small to medium-sized PCB projects to start out with. The built-in engines for SI/PI analysis operate quickly, ensuring that your design meets timing and reliability constraints.

In the official press release you can read quotes from three companies that had early access to Allegro X AI.

Summary

PCB design is changing for the better through AI techniques found in tools like Allegro X AI from Cadence. You can expect benefits like better and faster hardware design, as an electrical engineer can explore the PCB design space more quickly, even giving the PCB designer a layout starting point. PCB designers become more productive, as placement and critical net routing become automated, freeing them up to complete the detailed routing task. Using constraints and built-in analytics for SI/PI is a physics-based approach, which helps produce a more optimal PCB design compared to fully manual methods.

Cadence engineered all of this AI technology in-house, and you just need to contact your local account manager to get started with Allegro X AI.

Related Blogs


Podcast EP154: The Future of Custom Silicon – Views From Alphawave Semi’s Sudhir Mallya
by Daniel Nenni on 04-14-2023 at 10:00 am

Dan is joined by Sudhir Mallya, Senior Vice President of Corporate Marketing at Alphawave Semi. He is based in Silicon Valley and has over 25 years of experience at leading global semiconductor companies with executive positions in engineering, marketing, and business unit management. His experience spans custom silicon and application-specific products across multiple domains including data centers, networking, storage, and edge-computing.

Dan explores the future of custom silicon with Sudhir. Key drivers, including data center, automotive and edge/IoT, are discussed. The tradeoffs between custom and off-the-shelf design are also explored, along with the importance of interface standards and the design challenges that lie ahead.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Multi-Die Systems: The Biggest Disruption in Computing for Years
by Daniel Nenni on 04-14-2023 at 6:00 am

SNUG Panel 1

At the recent Synopsys Users Group Meeting (SNUG) I had the honor of leading a panel of experts on the topic of chiplets. The discussion was based on a report published by the MIT Technology Review Insights in cooperation with Synopsys. This is a very comprehensive report (12 pages) that is available online HERE.

Here is the preface of the MIT paper:

“Multi-die systems define the future of semiconductors” is an MIT Technology Review Insights report sponsored by Synopsys. The report was produced through interviews with technologists, industry analysts, and experts worldwide, as well as a cross-industry poll of executives. Stephanie Walden was the writer for this report, Teresa Elsey was the editor, and Nicola Crepaldi was the publisher. The research is editorially independent, and the views expressed are those of MIT Technology Review Insights. This report draws on a poll of the MIT Technology Review Global Insights Panel, as well as a series of interviews with experts specializing in the semiconductor industry and chip design and manufacturing. Interviews occurred between December 2022 and February 2023.

The focus of a panel like this is the participants. During our lunch prior to the panel I learned quite a bit:

Simon Burke, AMD Senior Fellow, has 25+ years of experience building chips, starting with HPC vendor Silicon Graphics. He then moved to AMD, then to Xilinx, and back to AMD through the acquisition. An amazing journey, with both depth of knowledge and a great sense of humor. Simon and AMD are leaders in the chiplet race, so there was a lot to be learned here.

John Lee, Head of Electronics, Semiconductors and Optics at Ansys, is a serial entrepreneur. I met John when Avant! bought his Signal Integrity company in 1994. Synopsys then bought Avant! in 2002 and John became R&D Director. John then co-founded Mojave Design, which was acquired by Magma in 2004. John left Magma after they were acquired by Synopsys and later founded Gear Design, a big data platform for chip design, which was acquired by Ansys in 2015. John is one of my favorite panelists; he says it like it is.

Javier DeLaCruz has 25+ years of experience, including a long stint with one of my favorite companies, eSilicon. Javier works for Arm handling advanced packaging technology development and architecture adaptation, including 2.xD and 3D systems. Arm is everywhere, so this is a big job.

Francois Piednoel is Chief Architect at Mercedes-Benz, but prior to that he spent 20 years at Intel, so he knows stuff. Francois is also a member of UCIe and a jet pilot. He actually owns a jet, and as a pilot myself I could not be more impressed. Francois was part of the MIT chiplets paper mentioned above as well, so he is a great resource.

Dr. Henry Sheng is Group Director of R&D in the EDA Group at Synopsys, where he currently leads engineering for 3DIC, advanced technology and visualization. He has over 25 years of R&D experience in EDA, where he has led development across the spectrum of digital implementation, including placement, routing, optimization, timing, signal integrity and electromigration. He has previously led efforts on EDA enablement and collaborations for emerging silicon technology nodes. Henry knows EDA.

Dan Kochpatcharin is the Head of Design Infrastructure Management Division at TSMC. Dan is a 30+ year semiconductor professional with 25 years at foundries. For the past 15 years Dan has been instrumental in the creation of the TSMC OIP. Today he leads the OIP Ecosystem Partnerships: 3DFabric Alliance, IP Alliance, EDA Alliance, DCA, Cloud Alliance, and VCA. Dan K, as we call him, knows the foundry business inside and out. I always talk to Dan whenever I can.

Here is the abstract for the panel:

The new era of multi-die systems is an exciting inflection point in the semiconductor industry. From high-performance and hyper-disaggregated compute systems to fully autonomous cars and ultra-high-definition vision systems, multi-die chip designs will transform computing possibilities, driving many new innovations, expanding existing markets and paving the way for new ones. Critical to fueling this momentum is the coherent convergence of innovations across the semiconductor industry by EDA, IP, chiplet, foundry and OSAT leaders. But what’s really happening inside the companies driving what can have one of the biggest impacts on system design and performance in a very long time?

Join this panel of industry leaders who are at the forefront of shaping the multi-die system era. Many have already made the move or are making key contributions to help designers achieve multi-die system success. Listen to their insights, their views on how multi-die system approaches are evolving, and what they see as best practice. Hear about the near, medium, and long-term future for multi-die innovation.

Here are the questions I asked:

Why Multi-Die System and Why Now?
  1. Mercedes: What is driving the change, and what is multi-die system offering you?
  2. AMD: How do you see the trend to multi-die at AMD and what is the key driver?
  3. Synopsys: Are we seeing other markets move in this direction?
  4. TSMC: How are you seeing the overall market developing?
It Takes a Village?
  1. Arm: How are companies like Arm viewing the multi-die opportunity and how does something like multi-die impact the day-to-day work for designers and system architects working with Arm?
  2. Ansys: How is the signoff flow evolving and what is being done to help mitigate the growing signoff complexity challenge?
  3. Synopsys: What other industry collaborations, IP, and methodologies are required to address the system-level complexity challenge?
It’s Just the Beginning?
  1. TSMC: Which technologies are driving the multi-die growth trend and how do you see these technologies evolving over time?
  2. AMD: When do you foresee true 3D – logic-on-logic – entering the arena for AMD, and what kind of uplift would it offer compared to Infinity Fabric-style connectivity solutions?
  3. Synopsys: How are the EDA design flows and the associated IP evolving and where do customers want to see them go?
Designing Multi-Die Systems?
  1. Mercedes: How is the multi-die design challenge being handled at Mercedes and is it evolving in lock-step – true HW/SW co-design – with these ongoing software advancements?
  2. AMD: What methodology advancements would you like to see across the industry to make system development more efficient? And what kind of impact does multi-die system design have on block designers?
  3. Ansys: How is the increased learning curve for these multi-physics effects being addressed?
  4. Arm: How is the Arm core design flow evolving to absorb these new degrees of freedom?
  5. Synopsys: How is EDA ensuring that designers can achieve the entitlement promised in their move to multi-die?
  6. TSMC: How is TSMC working with EDA and OSAT partners to simplify the move to multi-die design?
The Long Term?
  1. Mercedes: How is Mercedes approaching the long-term reliability challenge?
  2. TSMC: How is TSMC dealing with process reliability and longevity for these expanding use cases?
  3. Ansys: What is the customer view of the reliability challenge?
  4. Synopsys: Do you see multi-die system as a significant driver for this technology moving forward?

(I can cover the answers to these questions in a second blog)

Summary

The answers to most of the questions are covered in the MIT paper, but here are a couple of points that rang true to me:

Chiplets are truly all about the ecosystem. So many companies could be involved, especially for chip designers that are using commercial chiplets, so where is the accountability? Dan K. made a great point about working with TSMC because they are the ecosystem experts and the buck stops with the wafer manufacturer. The TSMC ecosystem really is like the semiconductor version of Disneyland, the happiest place on earth.

Another point that was made, which was a good reminder for me, is that we are at the beginning of the chiplet era and the semiconductor industry moves very fast. Either you harness the power of chiplets or the power of chiplets will harness you, in my opinion.

Also Read:

Taking the Risk out of Developing Your Own RISC-V Processor with Fast, Architecture-Driven, PPA Optimization

Feeding the Growing Hunger for Bandwidth with High-Speed Ethernet

Takeaways from SNUG 2023

Intel Keynote on Formal a Mind-Stretcher


Hardware Root of Trust for Automotive Safety
by Daniel Payne on 04-13-2023 at 10:00 am


Traveling by car is something I take for granted, and I just expect my trips to be safe, yet our cars increasingly use dozens of ECUs, SoCs and millions of lines of software code that together present a target for hackers and system failures. The Automotive Safety Integrity Levels (ASIL) are designated by the letters A, B, C and D, where the ISO 26262 standard defines ASIL D as addressing the highest degree of automotive hazard. The reliability metrics for an automotive system are the Single Point Fault Metric (SPFM) and the Latent Fault Metric (LFM).

Siemens EDA worked together with Rambus on a functional safety evaluation for automotive using the RT-640 Embedded Hardware Security Module, covering about 3 million faults and reaching ISO 26262 ASIL-B certification by achieving an SPFM > 90% and an LFM > 60%. The two Siemens tools used for functional safety evaluations were:

  • SafetyScope
    • Failures In Time (FIT)
    • Failure Mode Effect and Diagnostic Analysis (FMEDA) – permanent and transient faults, fault list
  • KaleidoScope
    • Fault simulation on the fault list
    • Fault detected, not detected, not observed

The Rambus RT-640 is a hardware security co-processor for automotive use, providing the root of trust, meeting the ISO 26262 ASIL-B requirements. Architectural blocks for the RT-640 include a RISC-V secure co-processor, secure memories and cryptographic accelerators.

Rambus RT-640 Root of Trust

Your automotive SoC would add an RT-640 to provide secure execution of authenticated user apps, stop tampering, provide secure storage, and thwart side-channel attacks. Software cannot even reach critical tasks like key derivation, which are done in hardware. All of the major processor architectures are supported: Intel, RISC-V and Arm.

Security warranties and hardware cryptographic accelerators are supported, plus there is protection against glitching and over-clocking.

For the functional safety evaluation there was a manually defined fault list for signals covered by the provided safety mechanism. SafetyScope then reported the estimated FMEDA metrics, giving an initial idea of the core’s safety level. Modules that didn’t affect the core’s safety or were not safety critical were pruned from the fault list.

The Fault Tolerant Time Interval (FTTI) tells the tool how long to look for a fault to propagate before an alarm is raised. FTTI impacts fault simulation run times, so a balance is required. Based on experimentation, the maximum concurrent fault number was set between 600 and 1,000 faults. A two-step fault campaign approach was used to get the best results in the least amount of time.

Unclassified faults were faults not injected and not observed, so to reduce the number of non-injected faults they used two reduction methods:

  • Bus-simplification – when faults on one or more bits of a bus are detected, the safety mechanism is shown to work, so faults on the remaining bits of the bus are also considered detected.
  • Duplication-simplification – all faults not injected or observed that are part of a duplicated module are classified as detected.

Both permanent and transient fault campaigns were run on the RT-640 co-processor, taking some 12 days to complete when run on an IBM LSF HPC environment with parallel execution. The estimated SPFM numbers came from the first run of SafetyScope.

RT-640 fault campaign results

These fault campaign results exceed the ISO 26262 requirements of SPFM > 90% and LFM > 60% for ASIL-B certification.
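
For reference, the two metrics are simple ratios over the safety-related fault population. The sketch below shows the arithmetic with illustrative counts, not the campaign's actual numbers, and the mapping from simulator fault classes to residual and latent faults is methodology-specific.

```python
# Sketch of the ISO 26262 metric arithmetic (illustrative counts only).
total_safety_related = 2_950_000   # faults in safety-related logic
residual_faults      =   212_000   # dangerous faults not covered by a safety mechanism
latent_faults        =   650_000   # multiple-point faults neither detected nor perceived

spfm = 1.0 - residual_faults / total_safety_related
lfm = 1.0 - latent_faults / (total_safety_related - residual_faults)

print(f"SPFM = {spfm:.1%} (ASIL-B requires > 90%)")
print(f"LFM  = {lfm:.1%} (ASIL-B requires > 60%)")
```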

Summary

Siemens and Rambus showed a methodology to evaluate the RT-640 co-processor with nearly 3 million faults, reaching a total SPFM of 91.9% and an LFM of 75%, exceeding the requirements of the ASIL-B safety level for automotive applications. This is good news for electronic systems used in cars, assuring drivers that their travels are safer, drama-free and resistant to hacking efforts. Using a hardware root of trust like the Rambus RT-640 makes sense for safety-critical automotive applications, and the fault campaign results confirm it.

Read the complete 11 page white paper on the Siemens site.

Related Blogs

 


Siemens EDA on Managing Verification Complexity
by Bernard Murphy on 04-13-2023 at 6:00 am

2023 DVCon Harry Foster

Harry Foster is Chief Scientist in Verification at Siemens EDA and has held roles in the DAC Executive Committee over multiple years. He gave a lunchtime talk at DVCon on the verification complexity topic. He is an accomplished speaker and always has a lot of interesting data to share, especially his takeaways from the Wilson Research Group reports on FPGA and ASIC trends in verification. He segued that analysis into a further appeal for his W. Edwards Deming-inspired pitch that managing complexity demands eliminating bugs early. Not just shift-left but a mindset shift.

Statistics/analysis

I won’t dive into the full Wilson Report, just some interesting takeaways that Harry shared. One question included in the most recent survey was on the use of data mining or AI. 17% of reporting FPGA projects and 21% of ASIC projects said they had used some form of solution around their tool flows (not counting vendor-supplied tool features). Most of these capabilities, he said, are home-grown. He anticipates an opportunity to expand such flow-based solutions as we do more to enable interoperability (think log files and other collateral data which are often input to such analyses).

Another thought-provoking analysis was on first-silicon successes. Over 20 years of reports this is now at its lowest level, 24%, where typical responses over that period have hovered around 30%. Needing respins is of course expensive, especially for designs in 3nm. Digging deeper into the stats, about 50% of failures were attributable to logic/functional flaws. Harry said he found a big spike in analog flaws – almost doubling – in 2020. Semiconductor Engineering held a panel to debate the topic; panelists agreed with the conclusion. Even more interesting, the same conclusion held across multiple geometries; it wasn’t just a small feature size problem. Harry believes this issue is more attributable to increasing integration of analog into digital, for which design processes are not yet fully mature. The current Wilson report does not break down analog failure root causes. Harry said that subsequent reports will dig deeper here, and also into safety and security issues.

Staffing

This is a perennial hot topic, also covered in the Wilson report. From 2014 to 2022, survey respondents report a 50% increase in design engineers but a 144% increase in verification engineers over the same period, now showing as many verification engineers on a project as design engineers. This is just for self-identified roles. The design engineers report they spend about 50% of their time on verification-related tasks. Whatever budget numbers you subscribe to for verification, those numbers are growing much faster than design as a percentage of total staffing budget.

Wilson notes that the verification to design ratio is even higher still in some market segments, more like 5 to 1 for processor designs. Harry added that they are starting to see similar ratios in automotive design which he finds surprising. Perhaps attributable to complexity added by AI subsystems and safety?

The cost of quality

Wilson updates where time is spent in verification, now attributing 47% to debug. Nearly half of the verification budget is going into debug. Our first reaction to such a large amount of time being spent in debug is to improve debug tools. I have written about this elsewhere, especially on using AI to improve debug throughput. This is indeed an important area for focus. However, Harry suggests we should also turn to lessons from W. Edwards Deming, the father of the quality movement. An equally important way to reduce the amount of time spent in debug is to reduce the number of bugs created. Well duh!

Deming’s central thesis was that quality can’t be inspected into a product. It must be built in by reducing the number of bugs you create in the first place. This is common practice in fabs, OSATs and frankly any large-scale manufacturing operation. Design out and weed out bugs before they even get into the mainstream flow. We think of this as shift-left, but it is actually more than that: trapping bugs not just early in the design flow but at RTL check-in, through static and formal tests applied as a pre-check-in signoff. The same tests should also be run in regression, but for confirmation, not to find bugs that could have been caught before starting a regression.

A larger point is that a very effective way to reduce bugs is to switch to a higher-level programming language. Industry wisdom generally holds that new code will commonly contain between 15 and 50 bugs per 1,000 lines of code. This rate seems to hold independent of the language or level of abstraction. Here’s another mindset shift: in 1,000 lines of new RTL you can expect 15 to 50 bugs. Replace that with 100 lines of a higher-level language implementing the same functionality and you can expect only ~2 to 5 bugs, a significant reduction in the time you will need to spend in debug.

Back to Wilson on this topic: 40% of FPGA projects and 30% of ASIC projects claim they are using high-level languages. Harry attributes this relatively high adoption to signal processing applications, where HLS has seen enthusiastic uptake and which are increasingly important these days (video, audio, sensing). But SystemC isn’t the only option. Halide is popular, at least in academia, for GPU design. As architecture becomes more important for modern SoCs, I can see the trend extending to other domains through other domain-specific languages.

Here is a collection of relevant links: Harry’s analysis of the 2022 verification study, the Wilson 2022 report on IC/ASIC verification trends, the Wilson 2022 report on FPGA verification trends, and a podcast series on the results.