Addressing Reliability and Safety of Power Modules for Electric Vehicles
by Kalar Rajendiran on 10-23-2024 at 10:00 am

Cadence Power Module Design Process

As electric vehicles (EVs) gain widespread adoption, safety, reliability, and efficiency are becoming increasingly important. A crucial component in ensuring these aspects is the power module (PM), which manages the energy flow between the EV battery and the motor. The design of these power modules must not only meet the high-performance demands of modern EVs but also address key challenges such as thermal management, electromagnetic interference, and long-term reliability. To tackle these challenges effectively, an integrated design approach is necessary.

Cadence recently published a whitepaper that addresses this topic and presents an integrated design methodology.

Power Module Design Challenges

As mechanical power demands and fast-charging capabilities increase, power modules must handle higher energy loads, increasing the risk of failures. Poor thermal management, electromigration, warpage, and electromagnetic interference (EMI) are just a few of the challenges that can compromise power module reliability. Additionally, as electric vehicles operate at varying voltages and temperatures, designing compact and efficient power modules that can withstand these conditions is essential to ensure the vehicle’s longevity and safety.

Traditional design processes often fall short because they rely on real-world testing at later stages of development. When issues are discovered at this stage, the cost of fixing them through redesigns can be significant. The new methodology for power module design focuses on early-stage simulations and analyses to avoid these late-cycle issues.

Circuit Analysis and Schematic-Driven Package Design

A reliable power module design starts with circuit analysis, using tools like Cadence PSpice to simulate both digital and analog components. Digital simulations focus on factors like jitter, while analog simulations analyze gain, input impedance, and other performance metrics. By simulating current, voltage, power dissipation, and operating temperatures, designers can ensure reliable thermal performance and preemptively address potential failure points.

These simulations also inform other key analyses, such as warpage and life expectancy estimations through failure modes, effects, and diagnostic analysis (FMEDA). Ensuring that the schematic-driven package design is correct from the start, through layout versus schematic (LVS) checks, helps streamline the placement of components and connections within the power module. This early-stage precision eliminates potential errors and ensures the reliability of the final design.

Parasitic Extraction and Temperature-Aware Simulation

One of the hidden challenges in power module design is stray inductances, which can cause unwanted electromagnetic radiation and disrupt nearby electronic systems. To address this, the design methodology includes parasitic extraction of bondwires using 3D-quasistatic electromagnetic (EM) simulations. These simulations help identify and mitigate potential risks from electromagnetic interference (EMI), which could otherwise affect the vehicle’s electronic control unit (ECU) and compromise occupant safety.

Parasitic elements are highly temperature-dependent, and their effects on performance can vary significantly with operating temperature and frequency. The methodology incorporates temperature-aware parasitic extraction to account for these variations and ensure accurate calculations for power and temperature. This approach enables the design team to refine the schematic by adjusting discrete components and compensating for parasitic effects, ensuring functional reliability and reducing time-to-market delays.
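
To make the temperature dependence concrete, here is a simplified, first-order sketch (not the Cadence extraction flow): it scales a bondwire's DC resistance with temperature and estimates its partial self-inductance from geometry using Rosa's formula. The material constants and dimensions are assumed values for a generic aluminum bondwire.

```python
import math

def bondwire_resistance(length_m, radius_m, temp_c,
                        rho_20=2.82e-8, alpha=0.0039):
    """DC resistance of a round wire, scaled linearly with temperature.
    rho_20: aluminum resistivity at 20 C (ohm*m); alpha: temperature coefficient (1/C)."""
    rho = rho_20 * (1.0 + alpha * (temp_c - 20.0))
    area = math.pi * radius_m ** 2
    return rho * length_m / area

def bondwire_inductance(length_m, radius_m):
    """Partial self-inductance of a straight round wire (Rosa's formula)."""
    mu0_over_2pi = 2.0e-7  # H/m
    return mu0_over_2pi * length_m * (math.log(2.0 * length_m / radius_m) - 0.75)

# Example: 1 mm long, 25 um diameter wire evaluated at 25 C and 130 C
for t in (25.0, 130.0):
    r = bondwire_resistance(1e-3, 12.5e-6, t)
    l = bondwire_inductance(1e-3, 12.5e-6)
    print(f"{t:5.1f} C: R = {r*1e3:.2f} mOhm, L ~ {l*1e9:.2f} nH")
```

Even this crude model shows roughly a nanohenry per millimeter of wire and a resistance that climbs noticeably between room temperature and peak operating temperature, which is why extraction at a single nominal condition is not enough.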

Thermal Simulation and Electromigration

Effective thermal management is critical for the long-term reliability of power modules in EVs, as these modules can reach temperatures of 130°C or more. Traditional thermal analysis tools often overestimate power levels, leading to over-designed, costlier packages. By tightly coupling electrical and thermal environments in the design phase, the proposed methodology enables a more accurate thermal simulation based on actual operating conditions. This ensures that power modules are designed to handle peak power without being unnecessarily over-engineered, reducing both cost and weight.
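
To see why coupling the electrical and thermal domains matters, here is a minimal self-consistency sketch (an illustration, not the whitepaper's methodology): conduction loss in a power MOSFET rises with junction temperature because R_DS(on) does, so power and temperature must be iterated to a fixed point rather than evaluated once at ambient. The device and thermal parameters below are assumed, illustrative values.

```python
def settle_junction_temperature(i_rms, rdson_25, tc_rdson, r_th_ja, t_ambient,
                                max_iter=50, tol=0.01):
    """Fixed-point iteration: P = I^2 * Rdson(Tj), Tj = Tamb + Rth * P."""
    tj = t_ambient
    for _ in range(max_iter):
        rdson = rdson_25 * (1.0 + tc_rdson * (tj - 25.0))   # temperature-scaled Rdson
        p_loss = i_rms ** 2 * rdson                          # conduction loss only
        tj_new = t_ambient + r_th_ja * p_loss                # junction-to-ambient thermal path
        if abs(tj_new - tj) < tol:
            return tj_new, p_loss
        tj = tj_new
    return tj, p_loss

# Illustrative values: 100 A rms, 2 mOhm at 25 C, +0.4 %/C, 0.5 C/W, 85 C ambient
tj, p = settle_junction_temperature(100.0, 2e-3, 0.004, 0.5, 85.0)
print(f"Junction settles near {tj:.1f} C while dissipating {p:.1f} W")
```

Evaluating the loss only at ambient would underestimate both the dissipation and the junction temperature; evaluating it at a pessimistic worst case leads to the over-designed packages the article mentions.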

High current densities in power modules can lead to electromigration, where metal atoms in interconnects are displaced due to high currents. This can cause functional failures over time. By analyzing current densities in the design phase, the methodology helps designers mitigate these risks and prolong the lifespan of the power module, ensuring long-term vehicle safety.
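
Electromigration risk is commonly characterized with Black's equation, which relates median time-to-failure to current density and temperature. The sketch below uses assumed fitting constants purely to show the trend described above: higher current density and higher temperature both shorten interconnect life.

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def black_mttf(j_a_per_cm2, temp_c, a_const=1e11, n=2.0, ea_ev=0.8):
    """Black's equation: MTTF = A * J^-n * exp(Ea / (k*T)).
    a_const, n, and Ea are process-dependent fitting parameters (assumed here)."""
    t_kelvin = temp_c + 273.15
    return a_const * j_a_per_cm2 ** (-n) * math.exp(ea_ev / (K_BOLTZMANN_EV * t_kelvin))

# Relative comparison of the same trace at different stress conditions
base = black_mttf(1e6, 105.0)
for j, t in [(1e6, 105.0), (1e6, 130.0), (2e6, 130.0)]:
    print(f"J = {j:.0e} A/cm^2, T = {t:.0f} C -> relative MTTF {black_mttf(j, t) / base:.3f}")
```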

Mechanical Strain and Warpage Analysis

Mechanical stress can also lead to power module failure, especially when there are large differences in the coefficient of thermal expansion (CTE) between different materials used in the module. This can cause warpage or bending, which leads to component failure. Warpage analysis identifies potential deformation risks and allows designers to implement corrective measures, such as strategic placement of redundant components. This significantly improves the reliability and safety of power modules.
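
The driving term behind warpage is the thermal-mismatch strain between stacked materials. A first-order check, shown below with assumed CTE values for copper and an AlN ceramic substrate, is simply the CTE difference times the temperature excursion; even a fraction of a percent of strain, cycled thousands of times, can fatigue solder joints and attach layers.

```python
def thermal_mismatch_strain(cte_a_ppm, cte_b_ppm, delta_t_c):
    """First-order mismatch strain between two bonded materials over a temperature swing."""
    return abs(cte_a_ppm - cte_b_ppm) * 1e-6 * delta_t_c

# Assumed values: copper baseplate (~17 ppm/C) on an AlN ceramic substrate (~4.5 ppm/C),
# cycled from -40 C to 130 C (delta T = 170 C)
strain = thermal_mismatch_strain(17.0, 4.5, 170.0)
print(f"Mismatch strain ~ {strain * 100:.3f} %  ({strain:.2e})")
```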

Proposed Integrated Design Methodology

Cadence’s proposed design flow for power modules in EVs focuses on co-optimizing both the die and the package to ensure efficient thermal performance at the design frequency and operating temperature. This integrated approach allows for greater safety and reliability in high-power systems, including EVs, by addressing potential risks early in the design process. By incorporating a “shift left” methodology, which emphasizes identifying and solving problems in the early stages of design, this process minimizes the risk of costly respins and unforeseen failures later in the product cycle.

The comprehensive design flow (refer to Figure below), which integrates advanced simulation, parasitic-aware analysis, and thermal management tools, offers a one-stop Cadence solution for power module development. This streamlined process not only accelerates time-to-market but also ensures that EV power modules meet stringent safety and performance requirements.

For more details, including the complete analysis of a Press-Fit Full Bridge MOSFET Power Module, access the entire whitepaper here.

Summary

An integrated approach to power module design for electric vehicles not only enhances efficiency but also ensures the safety and reliability of these critical components. By addressing key concerns such as electromigration, parasitic effects, thermal management, and mechanical strain early in the design process, manufacturers can prevent late-stage failures and costly respins. The proposed “shift left” design methodology minimizes risks and reduces development costs, ensuring that power modules deliver reliable performance throughout their operating life. By implementing this methodology, manufacturers can develop power modules that contribute to safer, more efficient, and more reliable electric vehicles for the future.

Also Read:

SI and PI Update from Cadence on Sigrity X

Advanced Audio Tightens Integration to Implementation

Hearing Aids are Embracing Tech, and Cool


Shaping Tomorrow’s Semiconductor Technology IEDM 2024
by Scotten Jones on 10-23-2024 at 6:00 am

IEDM 2024 SFO

Anyone who has read my articles about IEDM in the past knows I consider it a premier conference covering developments in leading-edge semiconductor process technology. The 2024 conference will take place in San Francisco from December 7th through 11th.

Some highlights of this year’s technical program are:

AI  – Lots of artificial intelligence-related semiconductor research will be presented this year. For example, since AI computing is more memory-centric than other methods, there will be an entire special Focus Session (session #3) devoted to memory technologies for AI.  Also, both of the Short Courses on Sunday, Dec. 8 will deal with AI; see the news release below for more details.  Two of the keynote talks will do the same; again, more details are below.  China’s Tsinghua University will describe a dense, fast and energy-efficient 3D chip for computing-in-memory in paper #5.3, and there are many other AI-related talks throughout the program, too.

TSMC’s new industry-leading 2nm Logic Platform – TSMC will provide details of its new manufacturing process (paper #2.1), which the researchers say has already passed initial qualification tests.

Intel  – paper #2.2 describes extremely scaled transistors, and Intel paper #24.3 shows record performance for transistors with channels that are just one atom thick.

A look back at 70 years of IEDM history – Since the conference began in 1955, IEDM has been where the world’s experts go to report groundbreaking advances in transistors and other devices that operate based on the movements or other qualities of electrons. To celebrate the conference’s 70th anniversary, there will be a special Focus Session (#28) highlighting historic advances which we now take for granted in logic, memory, interconnects, imaging and other areas.

Advanced packaging is another way forward – Packaging has become critically important to the semiconductor industry, because the benefits to be gained from advanced packaging are rivaling those which can be obtained by the traditional approach of integrating more functions onto a single chip, following Moore’s Law. You put together different types of chips, made from different technologies, in various vertical or horizontal configurations, and interconnect them in fast, dense and energy-efficient ways, and voila, you can create appealing solutions for AI and other fast-growing applications.  Of course, it’s not so simple, and another special Focus Session (#21) will delve into cutting-edge advances and future requirements in chip and package design for high-performance computing and advanced AI applications.

Power transistors for a more electrified, sustainable society – Another special Focus Session (#33) will highlight advances and future trends in power semiconductor technologies, which are crucial for sustainable electronics-based energy solutions.

Brain/electronics interfaces – Special Focus Session #14 will cover advanced neural interface technologies and their potential to revolutionize human/electronic interactions for both medical and technological applications.

To register or read more about the conference please go HERE.

Also Read:

Webinar: When Failure in Silicon Is Not an Option

TSMC 16th OIP Ecosystem Forum First Thoughts

TSMC OIP Ecosystem Forum Preview 2024


SI and PI Update from Cadence on Sigrity X
by Daniel Payne on 10-22-2024 at 10:00 am

Sigrity X

Signal Integrity (SI) and Power Integrity (PI) issues are critical to analyze to ensure the proper operation of PCB systems and IC packages, yet the computational demands of EDA tools can force engineers to analyze only the signals they deem critical, instead of the entire system. Cadence has overcome this SI/PI analysis limitation by using distributed simulation, where all of the cores in a machine are used and the workload is also distributed across multiple machines. Sigrity X is the Cadence platform that uses distributed simulation for SI/PI analysis of PCB and IC package designs.

Sigrity entails multiple EDA tools, each aimed at a specific task.

  • IBIS Modeling – AMI Builder
  • Transistor to Behavioral model conversion – T2B
  • Interconnect extraction – XcitePI
  • EM field solver – PowerSI, XtractIM
  • Serial/Parallel link analysis – SystemSI
  • Finite Difference Time-Domain Analysis – SPEEDEM
  • Power Integrity – OptimizePI, Sigrity PowerDC
Sigrity Tools

The database used in Sigrity X has migrated into a single .spd file, instead of being spread out over multiple files, simplifying your workflow.

Sigrity uses a single .spd file

Runtime improvements with Sigrity X are about 10X, with the same accuracy level. For the PowerSI comparison, a package plus PCB simulation improved by 12X. XtractIM run on an InFO package showed a 15.1X speedup. On an FC-BGA package simulation the improvement was 7.18X. OptimizePI simulation was 8.56X faster using Sigrity X.

Sigrity X speedups

Run times depend on how many cores are used, so here’s a table showing the scalability comparisons. Users decide how many machines and cores to use for each run.
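
How much of that speedup a given run actually sees depends on how well the workload parallelizes. As a rough mental model (not a description of Cadence's scheduler), Amdahl's law bounds the speedup from adding cores or machines when some fraction of the job remains serial; the sketch below assumes a 5% serial fraction purely for illustration.

```python
def amdahl_speedup(n_cores, serial_fraction):
    """Upper bound on speedup with n_cores when serial_fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

for cores in (1, 4, 8, 16, 32, 64):
    print(f"{cores:3d} cores -> at most {amdahl_speedup(cores, 0.05):5.2f}x")
```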

PCB designers using Allegro canvas can run Sigrity analysis for topological extractions, crosstalk and reflections, IR drop, impedance, coupling and return path simulations. For system-level simulations here’s an example flow using Sigrity and Clarity tools.

Sigrity and Clarity tool flow

DDR5 simulations and data-dependent measurements are done with Sigrity X by using accurate interconnect models, plus accurate modeling of transceiver equalization.

DDR5 analysis with Sigrity

Analyzing a board and package power delivery network (PDN) is accomplished using Sigrity, Clarity 3D Solver and Voltus IC Power integrity in a flow.

Power domain analysis

Case Studies

Setting up a Sigrity Simulation starts with dragging and dropping a .brd file into the window, then selecting which nets should be simulated. Ports are generated based on the selected nets, then compute resources are selected.

DDR simulation results

Continuing the DDR example, a testbench is created, a circuit simulator is selected, and time-domain results are viewed. A channel report is run and the results are compared against the expectations defined by the standard; in this case the eye mask passes at 4.4 Gbps.

Eye mask plot

A power domain analysis was run on a system consisting of building blocks, subcircuits, S-parameter blocks, ideal elements, layout elements, VRMs, Voltus blocks, plus an IC block. The IC block has a PWL source for the current. Target impedance is then simulated. The impedance at all eight nodes (IC2:IC9) is examined and compared to the target impedances.

Power domain analysis flow

Some of the eight impedances peak above the target at frequencies above 100MHz, so the designers can reduce these impedances to meet the power ripple specification.
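
The pass/fail criterion in this kind of analysis is usually a target impedance derived from the allowed supply ripple and the worst-case transient current. The sketch below shows that check in its simplest form; the rail numbers and impedance profile are assumptions, not values from the Cadence example.

```python
def target_impedance(v_rail, ripple_pct, i_transient):
    """Z_target = allowed ripple voltage / worst-case transient current."""
    return v_rail * (ripple_pct / 100.0) / i_transient

def failing_points(freqs_hz, z_ohms, z_target):
    """Return the frequencies where the simulated PDN impedance exceeds the target."""
    return [f for f, z in zip(freqs_hz, z_ohms) if z > z_target]

# Assumed rail: 0.75 V, 3% ripple budget, 20 A transient -> ~1.1 mOhm target
zt = target_impedance(0.75, 3.0, 20.0)
freqs = [1e6, 1e7, 1e8, 2e8, 5e8]
z_sim = [0.4e-3, 0.7e-3, 0.9e-3, 1.6e-3, 2.2e-3]   # made-up impedance profile
print(f"Target: {zt*1e3:.2f} mOhm, violations at: {failing_points(freqs, z_sim, zt)}")
```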

Eight port impedances

A final simulation is run after the impedance changes; when the eight nodes are examined, the ripples now overlap each other. Both impedance and power ripple results have been analyzed properly at the system level.

Eight nodes simulated

Summary

SI/PI analysis is improved by the capabilities of Sigrity X and the Clarity 3D Solver, using distributed simulation. Your PCB and IC package design teams will have increased confidence that each new project will meet requirements through SI/PI analysis.

Read the complete 18-page White Paper on Sigrity X.

Related Blogs


Advanced Audio Tightens Integration to Implementation
by Bernard Murphy on 10-22-2024 at 6:00 am

You might think that in the sensing world all the action is in imaging and audio is a backwater. While imaging features continue to evolve, audio innovations may be accelerating even faster to serve multiple emerging demands: active noise cancellation, projecting a sound stage from multiple speakers, 3D audio and ambisonics, voice activated control, breakthrough to hear priority inputs, and in-cabin audio communication to keep driver attention on the road. Audio is especially attractive to product builders, providing more room for big quality and feature advances with premium pricing ($2k for premium headphones as an example). Such capabilities are already feasible but depend on advanced audio algorithms which must run on an embedded DSP. Today that option can add considerable complexity and cost to product development.

What makes advanced audio difficult?

Most audio algorithm developers work in MATLAB/ Simulink to design their algorithms. They build and profile these algorithms around predefined function blocks and new blocks they might define. When done, MATLAB/ Simulink outputs C-code which can run on a PC or a DSP, though it will massively underexploit the capabilities of modern DSPs and will fail to meet performance and quality expectations for advanced audio.

A team of DSP programming experts is needed to get past this hurdle, to optimize the original algorithm to take full advantage of the DSP’s unique strengths. For example, vectorized processing will pump through streaming audio much faster than scalar processing, but effective vectorization demands expert insight into when and where it can be applied. Naturally this DSP software team will need to work with the MATLAB/ Simulink algorithm developer to make sure intent is carried through faithfully into DSP code implementation. Equally they must also work together to validate that the DSP audio streams match the audio streams from the original algorithm with sufficient fidelity.
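
The gap between generated C code and hand-optimized DSP code is easiest to see with a toy example. The NumPy sketch below is not HiFi DSP code; it simply contrasts a sample-by-sample gain-and-mix loop with the same operation expressed over whole vectors, which is the style a SIMD/VLIW audio DSP needs in order to reach its rated throughput.

```python
import numpy as np

def mix_scalar(dry, wet, gain):
    """Sample-by-sample processing, the shape of naive generated C code."""
    out = np.empty_like(dry)
    for i in range(len(dry)):
        out[i] = gain * dry[i] + (1.0 - gain) * wet[i]
    return out

def mix_vectorized(dry, wet, gain):
    """Whole-block (vector) processing, the style SIMD DSPs exploit."""
    return gain * dry + (1.0 - gain) * wet

block = np.random.randn(256).astype(np.float32)    # one 256-sample audio block
effect = np.random.randn(256).astype(np.float32)
assert np.allclose(mix_scalar(block, effect, 0.7),
                   mix_vectorized(block, effect, 0.7), atol=1e-6)
```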

MathWorks supports rich libraries to build sophisticated audio algorithms, making it the design platform of choice for serious audio product builders. Yet this automation gap between design and implementation remains a serious drawback, both in staffing requirements and in competitive time to market.

Streamlining the link from design to implementation

Cadence and MathWorks (the company behind MATLAB and Simulink) have partnered over several years to accelerate and simplify the path from algorithm development to implementation and validation, without requiring a designer to leave the familiar MathWorks environment. This they accomplish through a toolbox Cadence calls the Hardware Support Package (HSP), which together with MATLAB and Simulink provides an integrated flow to drive optimized implementation prototyping, verification, and performance profiling, all from within the MathWorks framework.

Through this facility, HSP will map MATLAB/ Simulink function blocks and code replacement candidates to highly optimized DSP equivalents (this mapping could be one-to-one, one-to-many, or many-to-one). Additionally, it will map functions supported in the HSP Nature DSP library (trig, log, exponentiation, vector min/max/stddev). These mapping steps can largely eliminate the need for DSP software experts, except perhaps for specialized functions developed during algorithm development. Equivalents for these, where required, can be built by a DSP expert and added to the mapping library.

Integration can also handle dynamic mode switching. If you want to support multiple audio processing chains in one platform – voice command pickup, play a songlist, or phone communication, each with their own codecs – you must manage switching between these chains as needed. The XAF Audio Framework will handle that switching. This capability can also be managed from within the MATLAB/ Simulink framework.

Once a prototype has been mapped it can be compiled and validated through a processor-in-loop (PIL) engine based on the HiFi DSP instruction set simulator. All these steps can be launched from within the MathWorks framework. An algorithm developer can then feed this output together with original algorithm output into whatever comparison they consider appropriate to assess the quality of the implementation. Wherever a problem is found, it can be debugged during PIL evaluation, again from within the MathWorks framework with additional support as needed from the familiar gdb debugger.

Finally, the HiFi toolkit also supports performance profiling (provided as a text output), which an algorithm developer can use to guide further optimization.

What about AI?

AI is playing a growing role in audio streams, in voice command recognition and active noise cancellation (ANC) for example. According to Prakash Madhvapathy (Product Marketing and Product Management Director for the Audio and Voice Product Line at Cadence), AI-based improvements in ANC are likely to be the next big driver for consumer demands in earbuds, headphones, in car cabins and elsewhere, making ANC enhancements a must-have for product developers.

HiFi DSPs provide direct support for AI acceleration compatible with audio streaming speeds. The NeuroWeave platform provides all the necessary infrastructure to consume networks in standard formats such as ONNX or TFML, with a goal, Prakash tells me, of ensuring “no code” translation from whatever standard format model you supply to mapping onto the target HiFi DSP. Support for integrating the model is currently outside the HSP integration.

Availability

The Hardware Support Package integration with MathWorks is available today. No-code AI support through NeuroWeave is available for HiFi 1s and HiFi 5s IPs today.

This integration looks like an important time to market accelerator for anyone working in advanced audio development. You can learn more about the HSP/MathWorks integration in this blog from Cadence and from the MathWorks page on the integration.


Unlocking SoC Debugging Challenges: Paving the Way for Efficient Prototyping
by Daniel Nenni on 10-21-2024 at 10:00 am

As chip design complexity increases, integration scales expand, and time-to-market pressures grow, design verification has become increasingly challenging. In multi-FPGA environments, the complexity of design debugging and verification escalates further, making it difficult for traditional debugging methods to meet the demands for high performance and efficiency. This makes efficient debugging solutions critical for successful prototype verification. Today, we’ll explore common debugging methods, ranging from basic to advanced approaches.

Why is Prototyping Important

With the growing complexity of large-scale integrated circuits, chip verification faces immense time and cost pressures. Historically, designers relied on simulation or silicon tape-out for validation, a process that is both time-consuming and costly. Prototyping allows for testing real-world scenarios before tape-out, ensuring the reliability and stability of functional modules while assessing performance. This not only shortens time-to-market but also enables early demonstrations to customers in support of pre-sales activities. Additionally, prototyping significantly reduces costs by enabling driver development ahead of silicon availability. Once silicon is ready, applications can be seamlessly integrated with drivers developed during prototyping, which accelerates SoC deployment.

Prototyping offers a unique speed advantage, outperforming software simulation and hardware emulation in verification turnaround times, though historically it has lacked robust debugging capabilities. While software simulation is user-friendly, it is slower and more suitable for small-scale or module-level verification. Hardware emulation is better suited for larger designs with strong debugging capabilities, while prototyping stands out for its speed. As complexity increases, users increasingly rely on debugging tools provided by FPGA vendors, which can be limited in scope. Here we’ll discuss how to overcome these debugging challenges and introduce S2C’s advanced debugging solutions.

S2C’s Prodigy prototyping solution offers a comprehensive and flexible debugging platform equipped with a complete toolchain that includes real-time control software (Player Pro-RunTime), design debugging software (Player Pro-DebugTime), the Multi-Debug Module (MDM) for deep debugging, and ProtoBridge co-simulation software. These tools significantly enhance user efficiency, catering to the diverse requirements of S2C’s broad customer base with capabilities that may not be available from other vendors.

What Are the Common Debugging Techniques

With prototype verification, debugging aims to pinpoint and resolve design issues, ensuring system functionality. In complex SoC designs, engineers must ensure that issues are debuggable, minimizing time spent troubleshooting during development. When users first deploy their designs on an FPGA, they often encounter various failures which could stem from network issues within the FPGA prototype, design errors, or compilation problems like timing errors due to design partitioning or pin multiplexing. Effective debugging and monitoring tools are essential for confirming hardware functionality and verifying that all modules operate as expected. This often requires using external or embedded logic analyzers to identify the root cause of issues.

Common debugging methods include basic I/O debugging, AXI bus transaction monitoring, signal-level debugging, and protocol-based debugging. While many users rely on embedded logic analyzers from FPGA vendors during prototype bring-up, these tools may become resource-intensive and harder to manage when dealing with complex multi-FPGA designs.

S2C offers a comprehensive set of debugging solutions, ranging from simple to advanced, meeting the diverse needs of engineers during prototype verification and ensuring a smoother debugging process.

-Basic I/O Debugging:

While FPGA vendors provide various signal monitoring tools such as VIO IP cores, signal editors, and probes typically accessed through JTAG, S2C’s solution extends I/O debugging capabilities. It integrates multiple basic I/O interfaces—such as push buttons, DIP switches, GPIOs, and UARTs—directly into the prototype, offering intuitive user interactions. Additionally, S2C’s Player Pro software enhances remote diagnostic capabilities through virtual interfaces, making the debugging process more efficient.

-Bus Transaction Debugging:

For complex SoC designs, debugging AXI bus transactions is highly effective, especially now that AXI has become a standard protocol. S2C’s ProtoBridge solution uses PCIe to offer up to 4GB/s of high-bandwidth AXI transaction bridging. It includes an AXI-bridged RTL interface for seamless design integration, along with PCIe drivers and APIs to support software-based stimulus development. With built-in Ethernet debugging (~10Mbps), ProtoBridge enables rapid read/write access to memory-mapped AXI slaves, meeting low-bandwidth debugging needs.

– Signal-Level Debugging:

Signal-level debugging is one of the most commonly used methods in prototype verification, involving the extraction of internal signals for issue diagnosis. S2C’s Player Pro excels in this area, allowing designers to easily map internal signals to I/Os. A range of expansion cards offers additional flexibility, including pin connections, 3.3V voltage conversion, and extra interfaces for push buttons, switches, and external logic analyzers, making real-time testing more efficient.

-In-System Protocol Debugging:

When FPGA prototypes interact with real-world data, protocol-based debugging becomes essential. S2C supports over 90 expansion cards and reference designs to facilitate system-level testing across various protocols. Custom solutions are also available to optimize testing and debugging for a smooth prototyping experience.

-Deep Logic Analysis:

For users requiring in-depth debugging or dealing with multi-FPGA setups, challenges often include memory capacity for signal storage and the need for cross-FPGA debugging. S2C addresses these issues with the MDM Pro, which supports concurrent signal probing across up to 8 FPGAs and includes cross-trigger functionality. Equipped with 64GB of DDR4 memory, MDM Pro captures up to 16K signals in 8 groups without the need for FPGA recompilation. Pre-built into S2C’s Quad 10M and Quad 19P Logic Systems, MDM Pro offers intuitive trigger settings similar to those in FPGA vendor tools, ensuring a smooth transition for engineers. It supports both IP and compile modes, providing the flexibility to match various design flows.
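
As a rough back-of-envelope (assuming one bit stored per probed signal per sample and ignoring timestamps, compression, and control overhead, none of which S2C specifies here), 64 GB of DDR4 behind 16K probes corresponds to tens of millions of captured samples:

```python
def approx_trace_depth(memory_bytes, probed_signals, bits_per_signal=1):
    """Very rough sample depth: memory divided by the width of one stored sample."""
    bytes_per_sample = probed_signals * bits_per_signal / 8
    return int(memory_bytes / bytes_per_sample)

depth = approx_trace_depth(64 * 2**30, 16 * 1024)
print(f"~{depth / 1e6:.0f} M samples of trace depth")  # roughly 33-34 M samples under these assumptions
```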

Conclusion:

Prototyping offers significant performance advantages over software simulation and hardware emulation. This has driven demand for advanced debugging solutions to maximize the benefits of prototyping. As one of the earliest providers of prototyping tools, S2C enhances productivity and efficiency with a comprehensive range of debugging methods. These solutions are particularly effective in addressing the challenges of multi-FPGA environments, helping engineers accelerate verification and shorten time-to-market.

For more information: https://www.s2cinc.com/

Also Read:

Evolution of Prototyping in EDA

S2C Prototyping Solutions at the 2024 Design Automation Conference

Accelerate SoC Design: Addressing Modern Prototyping Challenges with S2C’s Comprehensive Solutions (II)


Analog Bits Builds a Road to the Future at TSMC OIP
by Mike Gianfagna on 10-21-2024 at 6:00 am

The TSMC Open Innovation Platform (OIP) Ecosystem Forum has become the industry benchmark when it comes to showcasing industry-wide collaboration. The extreme design, integration and packaging demands presented by multi-die, chiplet-based design have raised the bar in terms of required collaboration across the entire supply chain. World-class development and collaboration were on display at the recent event, which was held in Santa Clara on September 25, 2024. A critical technology required for success is enabling IP, in particular for sensing and power management.  Analog Bits showcased substantial capabilities here. Let’s examine some of the work presented to see how Analog Bits builds a road to the future at TSMC OIP.

IP Development Progress

Analog Bits discussed some of the unique challenges that advanced chip and multi-die designs present. Multi-domain sensing was discussed, along with the additional challenge of non-uniform thermal distributions. Real-time monitoring is another requirement. In the face of all this, calibration complexity, voltage supply noise, and crosstalk must all be dealt with as well.

Analog Bits’ portfolio of on-die sensing IP was presented, including:

  • PVT Sensors – integrated and pinless
  • Power on reset and over current detection macros
  • Power supply detectors that include:
    • Fast glitch detection
    • Synchronized droop detection with filtering and differential sensing

The benefits of a comprehensive on-die sensing IP portfolio were also discussed. At the top of the list is improved power efficiency. A good approach here also prevents overheating and minimizes thermal stress. The overall benefits of enhanced reliability and improved yield also come into play.

Power management is also a key benefit. Things like voltage scalability, voltage spike, and droop protection are examples. Better integration that results in space savings is an added benefit.

Analog Bits presented a significant amount of silicon data based on a TSMC N3P test chip. The graphic at the top of this post is an overview of what’s on this chip. There were many impressive results to show. Here is a list of some of them:

  • Temperature linearity and precision for the High-Accuracy Thermometer
  • Linearity and precision for the high-accuracy Voltage Sensor
  • Measured trigger voltage vs. threshold and untrimmed threshold accuracy for the Droop Detector
  • An overview of Low-Dropout (LDO) regulator development

Regarding the LDO, here is a summary of the program:

  • First LDO modules proven in silicon
  • Latest N3 test-chip taped out Q2 2024
  • Packaging and initial bring up Q1 2025
  • Automotive planned for mid-2025

Here is an example of the data presented. The plot is showing Voltage Sensor accuracy with the following parameters: VDDA: 1.2V, VDD: 0.75V, Corner: TT.

Voltage Sensor Accuracy

IP Collaboration Progress

OIP is all about ecosystem collaboration, so Analog Bits teamed with Arm to deliver an impressive presentation entitled, Optimized Power Management of Arm CPU Cores with Integrated Analog Bits Power Management and Clocking IP’s. The presenters were Lisa Minwell, Director of Technology Management at Arm, and Alan Rogers, President at Analog Bits.

The once-in-a-generation transformation occurring in digital infrastructure was discussed. Complexity increases in data center SoCs, coupled with AI deployment, have made energy efficiency a central issue. It was pointed out that advanced chip and chiplet-based designs in 3nm and 2nm are integrating many Arm Neoverse cores.

The need to manage power to these cores at a granular level is becoming increasingly important. The traditional methods of using off-chip LDOs and power sensors no longer scale. A new approach is needed.

The work Analog Bits and Arm have done on several integrated power management and clocking IPs was presented. Arm customers can readily use these solutions in N3P and soon in N2P. LDO regulator IPs were also discussed to efficiently manage the large absolute and dynamic current supplies to Arm CPU cores.

A case study of how CPU cores seamlessly integrate with Analog Bits’ LDO and Power Glitch Detector IPs, along with integrated clocking capabilities, was also presented. The implications of this work are substantial for advanced data center applications.

To Learn More

I have presented some of the highlights of Analog Bits’ presence at TSMC OIP. There is a lot more to the story, and you can find out more about Analog Bits’ industry impact on SemiWiki here. You can also check out the company’s website here. And that’s how Analog Bits builds a road to the future at TSMC OIP.



Podcast EP254: How Genealogy Correlation Can Uncover New Design Insights and improvements with yieldHUB’s Kevin Robinson
by Daniel Nenni on 10-18-2024 at 10:00 am

Dan is joined by Kevin Robinson, yieldHUB’s Vice President of Operations and Sales. With over 23 years of experience as a test engineer in the semiconductor industry, Kevin brings a wealth of knowledge and dedication to his dual role. At yieldHUB, Kevin leads both the sales and operations teams, playing a crucial role in delivering top-notch experiences to UK and European customers.

In this informative discussion, Dan explores with Kevin what genealogy correlation is and how this technique can be used to analyze and optimize yield and performance. By comparing and correlating observed data with expected results based on fab parameters, complex and subtle relationships can be found.

These relationships may not be clear at first, but after applying complex correlation analysis to massive data sets with yieldHUB’s automated technology, new relationships between the process, the design, and associated design sensitivities can be quickly uncovered that can have a significant impact on overall yield and performance.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Dr. Mehdi Asghari of SiLC Technologies
by Daniel Nenni on 10-18-2024 at 6:00 am

Dr. Mehdi Asghari

Mehdi is a serial entrepreneur with a track record of success. He is currently CEO and co-founder of SiLC. SiLC is his third silicon photonics startup, focusing on advanced imaging solutions that enable machines to see like humans. His previous startup, Kotura, where he was CTO and SVP of Engineering, developed communication solutions for data center applications. Kotura was acquired by Mellanox in 2013, where Mehdi continued to serve as VP of Silicon Photonics for over 4 years. Prior to that, Mehdi was the VP of R&D at Bookham, the first company to ever commercialize silicon photonics. Bookham focused on sensing and telecom applications and had a successful IPO in 2000 with a valuation of ~$7B.

Mehdi is one of the early pioneers of the silicon photonics industry, with over 25 years of commercialization experience in this space. Prior to that he spent 10 years in the III/V industry.

Tell us about your company.

On a mission to enable machines to see like humans, SiLC Technologies is a silicon photonics innovator delivering coherent vision and chip scale FMCW LiDAR solutions. SiLC has developed the industry’s first fully integrated coherent LiDAR chip. Our breakthrough 4D+ Eyeonic Chip integrates all photonics functions needed to enable a coherent vision sensor, offering a tiny footprint while addressing the need for low cost and low power. SiLC’s innovations are targeted to robotics, mobility, perimeter security, industrial automation and other leading markets. SiLC was founded in 2018 by silicon photonics industry veterans with decades of commercial product development and manufacturing experience.

What problems are you solving?

AI is all the rage now, but its scope to date is limited to the non-physical world. As the author Joanna Maciejewska said, “I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.” Beyond the desire for more creative and recreational time, the continued economic growth of industrialized countries depends on addressing the severe labor force shortage caused by the persistent (~1% per year) decline in the working-age population, driven by decades of falling birth rates.

To solve this problem, we need AI to take on a physical form and be capable of performing dexterous and human-like work. Even more critically, we need to be able to deploy the technology safely and cost effectively. While AI and Robotics technologies are relatively mature and capable of supporting such a goal, the vision technology to support machine autonomy and human-like activity is missing. The human eye works by partial processing of images in the eye with a direct linkage to our muscles to enable hand-eye coordination. This facilitates prediction-based movements in real-time and gives us dexterity and the ability to perform complex tasks rapidly and without conscious thinking. Existing machine vision solutions lack this ability as they focus on storage of images rather than real-time processing of images.

What application areas are your strongest?

SiLC is currently active in three key market sectors. The first is industrial and robotics, which aims to enable the autonomy revolution ahead of us. The second market, which is of strategic importance, is the C-UAS (Counter-Unmanned Aerial Systems, or drones) market, where we enable detection, tracking and classification of drones. The third market is mobility/automotive. Honda’s investment in SiLC is a testament to our capabilities and differentiation in this market.

What keeps your customers up at night?

Our customers need a solution that provides reliable and accurate depth and velocity data regardless of lighting conditions and without the need for expensive and power-hungry computation. They also need to be able to position these systems without concern for multi-user interference (imagine going blind every time you looked someone in the eye or saw their reflections). Coherent imaging can address these issues and additionally offer precise depth and velocity information. Our solutions are eye safe, immune to multi-user interference and offer an order of magnitude performance improvement.
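
For readers unfamiliar with how a coherent (FMCW) sensor gets both quantities at once: the transmitted frequency is chirped, and the beat between transmitted and received light encodes range, while the Doppler component encodes radial velocity. The sketch below shows the textbook triangular-chirp arithmetic with assumed parameters; it illustrates the principle only and is not SiLC's signal processing.

```python
C = 3.0e8  # speed of light, m/s

def fmcw_range_velocity(f_beat_up, f_beat_down, chirp_bandwidth_hz, chirp_time_s, wavelength_m):
    """Triangular FMCW: split the up/down-chirp beat frequencies into range and Doppler terms."""
    f_range = 0.5 * (f_beat_up + f_beat_down)      # range-proportional component
    f_doppler = 0.5 * (f_beat_down - f_beat_up)    # Doppler component (positive = closing target)
    slope = chirp_bandwidth_hz / chirp_time_s       # chirp slope, Hz/s
    distance = C * f_range / (2.0 * slope)
    velocity = wavelength_m * f_doppler / 2.0
    return distance, velocity

# Assumed 1550 nm source, 4 GHz chirp over 10 us, beat tones of 25.4 MHz and 28.0 MHz
d, v = fmcw_range_velocity(25.4e6, 28.0e6, 4e9, 10e-6, 1.55e-6)
print(f"Range ~ {d:.2f} m, radial velocity ~ {v:.2f} m/s (closing)")
```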

What does the competitive landscape look like and how do you differentiate?

There are companies that aimed to get a product together quickly by purchasing telecom-grade components to make a full system. While this allows for faster time to market, it does not provide a viable path to full commercialization, as cost and scaling are very challenging. A coherent imaging system is a very complex optical apparatus requiring many high-performance optical components to work together. A crucial factor in commercializing such a complex platform is the integration of photonics, which ensures the robustness and cost-effective implementation of the required photonic circuits. A photonics integration platform is needed. The integration platforms available today, however, are designed for data communication and lack the advanced performance required to achieve the system-level demands of a coherent imaging solution, which is orders of magnitude more challenging.

What new features/technology are you working on?

Our integration platform is rather unique and has been developed to enable the optical performance needed for coherent imaging applications. With our fully integrated chips, we are already capable of meeting the stringent requirements of our key markets, delivering world-leading performance levels. But we are just beginning. We are pushing on multiple fronts, developing new technologies that enhance our play in key markets. On the Industrial and Robotics side, we are working on technologies that allow us to offer even higher resolution at longer working distances. For C-UAS products we are developing technologies that enable us to achieve even longer ranges, providing more reaction time to respond to perceived threats. For mobility/automotive, we have a longer-term perspective and are focused on developing technologies that allow us to meet their demanding requirements for cost, power, size, performance, and reliability.

How do customers normally engage with your company?

SiLC is a semiconductor player that sells critical components rather than full systems. As such, our customers are typically system integrators and end users who have the internal capability to build their own systems. This allows our customers to add more value and ultimately produce a more cost-effective and application-optimized product. It enables us to focus on our core competencies, maintain higher margins and generate greater aggregate volumes. This strategy helps us to create the right economy of scale to offer better prices.

While our focus is on selling components, we also engage in full system design and optimization. This allows us to provide our customers with fully functioning reference designs and development kits as well as the knowledge and experience needed to support their development efforts. Our development kits are the first step in our engagement process with customers. These are designed to be flexible and multi-purpose with a user friendly interface, enabling access to data at almost every point in the system.

Also Read:

CEO Interview: Doug Smith of Veevx

CEO Interview: Adam Khan of Diamond Quanta

Executive Interview: Michael Wu, GM and President of Phison US


Prioritize Short Isolation for Faster SoC Verification
by Ritu Walia on 10-17-2024 at 10:00 am

Improve productivity by shifting left LVS
In modern semiconductor design, technology nodes continue to shrink and the complexity and size of circuits increase, making layout versus schematic (LVS) verification more challenging. One of the most critical errors designers encounter during LVS runs are shorted nets. Identifying and isolating these shorts early in the process is crucial to meeting deadlines and ensuring a high-quality design. However, isolating shorts in early design cycles can be a time-consuming and resource-intensive task because the design is “dirty” with numerous shorted nets.

To tackle this challenge, designers need an LVS solution for rapid short isolation that enhances productivity by addressing shorts early in the design flow. This article explores the key difficulties designers face with short isolation, and a novel solution that integrates LVS runs with a debug environment to make the verification process faster and more efficient.

The challenge of shorted nets in LVS verification

Design size, component density, and advanced nodes like 5 nm and below all contribute to the growing complexity of SoC designs. With layouts containing billions of transistors, connectivity issues like shorted nets can proliferate. Shorts can occur between power/ground networks or signal lines and may result from misalignment, incorrect placement, or simply the close proximity of electrical connections in densely packed areas of the chip.

As shown in industry conference surveys, the number of shorts in “dirty” early-stage designs has skyrocketed as process nodes have shrunk, leading to an increased need for comprehensive short isolation (Figure 1). While earlier nodes like 7 nm might see a manageable number of shorts, modern 5 nm designs can produce over 15,000 short paths that need to be investigated, analyzed, and corrected. Identifying the specific short paths causing the issue is not just difficult—it’s overwhelming.

Figure 1. Short path analysis statistics from industry conference surveys show the increase in shorts across process nodes.

Traditional LVS verification and short debugging approaches require designers to switch between a graphical user interface (GUI) for short inspection and a command-line environment for LVS reruns, resulting in longer design cycle times and less efficient workflows. Furthermore, manually inspecting and debugging each short path is an incredibly tedious process, especially when designers need to pinpoint shorts in hierarchical designs where components and interconnects are densely packed across multiple layers.

Debugging shorts: common pitfalls

The key challenges designers face during short isolation and debugging include:

  • Locating the exact short path: Each short is composed of multiple paths, and identifying the specific path responsible for the short can be time-consuming.
  • Extended LVS cycle times: Running a full LVS verification after each short fix significantly lengthens the process.
  • Tedious visual inspection: Manually inspecting and analyzing short paths across the entire chip layout can take several days, especially in large, complex designs.

With these challenges in mind, having an efficient short isolation solution can drastically improve the speed and accuracy of the LVS process.
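
Conceptually, isolating a short is a path-finding problem: the layout's connectivity (polygons, vias, pins) forms a graph, and the offending short is the path that joins two nets that should be disjoint. The toy sketch below uses breadth-first search to recover the shortest such path; it illustrates the idea only and says nothing about Calibre's internal algorithms, and all the node names are invented.

```python
from collections import deque

def shortest_short_path(connectivity, label_a, label_b):
    """BFS over a layout connectivity graph to find the shortest path joining two net labels."""
    queue = deque([[label_a]])
    visited = {label_a}
    while queue:
        path = queue.popleft()
        if path[-1] == label_b:
            return path                      # the short path to inspect and fix
        for nxt in connectivity.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None                              # the nets are not actually shorted

# Toy connectivity: VDD and VSS joined through one bad via (hypothetical names)
graph = {
    "VDD": ["M3_seg7"], "M3_seg7": ["VIA2_143", "M3_seg8"],
    "VIA2_143": ["M2_seg19"], "M2_seg19": ["VSS"], "M3_seg8": [],
}
print(shortest_short_path(graph, "VDD", "VSS"))
# -> ['VDD', 'M3_seg7', 'VIA2_143', 'M2_seg19', 'VSS']
```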

A comprehensive solution for interactive short isolation

To address these challenges, Siemens EDA has developed the Calibre RVE Interactive Short Isolation (ISI) flow, which integrates short analysis directly into the Calibre RVE environment. This solution allows designers to quickly identify and debug shorts without leaving the familiar layout viewing and debugging interface.

The flow lets designers visualize short paths in their design layouts after running LVS verification. With the addition of the “SI” keyword (short isolation) in the Mask SVDB Directory statement of the rule file, designers can isolate and inspect shorts in real time. The flow automatically highlights shorted segments in the layout and organizes them in an intuitive tree view, making it easier to manage and debug shorts (Figure 2).

Figure 2. Older Summary View of the shorted paths for each texted-short in Calibre RVE (top). Updated comprehensive Summary View of the shorted paths for each texted-short in Calibre RVE (bottom).

The ability to simulate short fixes without making actual changes to the design layout is a key feature. Designers can perform virtual fixes, verify them, and save the results in a separate database. This means they can debug multiple short paths simultaneously, reducing the overall LVS cycle time and minimizing disruptions to their workflow.

Benefits of short isolation in early-stage design

By running partial LVS checks targeted at specific nets, designers can quickly isolate and fix shorts on power/ground or signal nets, significantly reducing the number of shorts before running a full LVS signoff extraction.

With the integration of LVS runs into a graphical debug environment, designers no longer need to switch between different tools for verification and debugging. Instead, they can invoke LVS runs directly from the debug GUI (Figure 3). This push-button feature allows for quick, targeted LVS runs, with options for multithreading and distributed processing to further accelerate runtimes.

Figure 3. Designers can quickly prioritize and fix critical shorts using the Calibre RVE background verify shorts functionality.

This short isolation flow helps designers simulate short fixes and verify them without requiring full-chip LVS runs. This targeted, parallel processing reduces overall verification time, allows for early identification of critical issues, and helps design teams stay on schedule.

Boosting designer productivity with integrated short isolation

The tight integration between Calibre tools enables a much more efficient LVS process by providing a unified toolset for short isolation, debugging, and verification. Designers can now:

  • Run targeted partial LVS checks for shorts without waiting for full-chip LVS runs.
  • Perform interactive short isolation and virtual fixes in the same environment.
  • Automatically update results in the debug interface, eliminating the need for manual context switching.
  • Leverage parallel processing and multithreading options to speed up debugging.

This seamless flow significantly reduces the time spent on short isolation and debugging, enabling designers to focus on optimizing other aspects of their design.

Conclusion: Faster SoC verification with early short isolation

As SoC designs become larger and more complex, early-stage short isolation and verification are critical to keeping projects on schedule. By allowing designers to simulate short fixes and verify them in parallel, this flow helps reduce the number of full LVS iterations required, leading to shorter design cycles and improved productivity. With the combined LVS and debug environments, design teams can tackle the most critical LVS violations early, ensuring higher-quality designs and faster time to market.

To learn more about how Calibre nmLVS Recon can streamline your verification process, download the full technical paper here: Siemens EDA Technical Paper.

Ritu Walia is a product engineer in the Calibre Design Solutions division of Siemens EDA, a part of Siemens Digital Industries Software. Her primary focus is the development of Calibre integration and interface tools and technologies.

Also Read:

Navigating Resistance Extraction for the Unconventional Shapes of Modern IC Designs

SystemVerilog Functional Coverage for Real Datatypes

Automating Reset Domain Crossing (RDC) Verification with Advanced Data Analytics


The Perils of Aging, From a Semiconductor Device Perspective
by Mike Gianfagna on 10-17-2024 at 6:00 am

We‘re all aware of the challenges aging brings. I find the older I get, the more in touch I feel with those challenges.  I still find it to be true that aging beats the alternative. I think most would agree. Human factors aside, I’d like to discuss the aging process as applied to the realm of semiconductor device physics. Here, as with humans, there are degradations to be reckoned with. But, unlike a lot of human aging, the forces causing the problems can be better understood and even avoided. There is a recent high-profile news story regarding issues with the 13th and 14th generation of the Intel ‘Raptor Lake’ core processors. After a fair amount of debugging and analysis, the observed problems highlight the perils of aging from a semiconductor device perspective. Let’s look at what happened, and what it means going forward.

What Went Wrong?

Back in August, PC Magazine reported that unstable 13th and 14th Gen Intel Core processors are raising lots of concerns for desktop owners. The article went on to say that:

An unusual number of the company’s latest 14th Gen “Raptor Lake Refresh” chips, which debuted late in 2023, are proving to be prone to crashes and blue screens. Intel’s older 13th Gen “Raptor Lake” processors are, similarly, showing the same distressing traits.

What was particularly vexing was the incidence of stability issues so early in the life of these chips. And the fact that not everyone was seeing the problems, and further the problems were not always in the same form or frequency. News such as this about a part that sees widespread use can cause a lot of angst.

Root Cause Analysis

After much analysis, research and code updates, Intel has homed in on the root cause and developed a plan. Dubbed the Vmin Shift Instability issue, Intel traced the problem to a clock tree circuit within the IA core which is particularly vulnerable to reliability aging under elevated voltage and temperature. These conditions can lead to a duty cycle shift of the clocks and the observed system instability.

Intel has identified four operating scenarios that lead to the observed issues. In a recent communication from the company, details of these four scenarios and mitigation plans were published. The company is releasing updated documentation, microcode, and BIOS to modify the clock/supply voltage behavior, so the rapid aging behavior is mitigated. Intel is working with its partners to roll out the relevant BIOS updates to the public.

This issue manifested in the desktop version of the part. Intel also affirmed that both the Intel Core 13th and 14th Gen mobile processors and future client product families, including the codenamed Lunar Lake and Arrow Lake families, are unaffected by the Vmin Shift Instability issue.

These fixes are taking a substantial amount of resources, both to mitigate the problem and deal with the impact in the market.

How to Avoid Problems Like This

This is a highly visible example of what happens when clock trees go out of specification, particularly when N- and P-channel devices age differently, leading to asymmetrical changes in clock signals. As these performance shifts accumulate, circuits stop working reliably or stop working completely.
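
A toy model makes the failure mode concrete. If NBTI degrades the PMOS devices in a clock buffer chain more than HCI degrades the NMOS devices, rising edges slow down more than falling edges, and the duty cycle drifts away from 50% as the shifts accumulate down a deep clock tree. The numbers below are invented solely to show the mechanism; they are not Intel or Infinisim data.

```python
def duty_cycle_after_aging(period_ps, stages, rise_delta_ps, fall_delta_ps, duty_start=0.50):
    """Accumulate per-stage rise/fall delay shifts down a buffer chain and return the new duty cycle."""
    high_time = duty_start * period_ps
    # Each stage's slower rising edge delays the start of the high phase;
    # the less-degraded falling edge delays its end by a smaller amount.
    high_time += stages * (fall_delta_ps - rise_delta_ps)
    return high_time / period_ps

# Assumed 250 ps period (4 GHz), 12 buffer stages, +0.8 ps/stage on rise, +0.2 ps/stage on fall
for years, scale in [(0, 0.0), (1, 1.0), (3, 2.2)]:   # made-up aging progression
    duty = duty_cycle_after_aging(250.0, 12, 0.8 * scale, 0.2 * scale)
    print(f"after ~{years} yr of stress: duty cycle ~ {duty * 100:.1f} %")
```

Even a fraction of a picosecond of asymmetric shift per stage, multiplied across a deep clock distribution, is enough to erode timing margins that were signed off assuming fresh silicon.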

The solution to such aging degradation lies in the use of precise, high-resolution analysis tools throughout the design process. It turns out there is a company that targets the identification and mitigation of clock anomalies. Infinisim’s ClockEdge® solution offers a powerful approach, simulating how clock signals degrade over time due to factors like Negative Bias Temperature Instability (NBTI) and Hot Carrier Injection (HCI). By performing comprehensive aging simulations across entire clock domains over multiple PVT corners, Infinisim’s technology allows designers to predict signal degradation and mitigate its impact, effectively extending the operational lifespan of high-performance clocks.

My gut tells me Intel’s problems could have been tamed with tools like this before they reached the field. By identifying potential clock aging failures early, Infinisim’s solutions reduce the risk of expensive field failures and costly silicon re-spins. Their proven track record demonstrates how they enable customers to achieve exceptional design robustness before tape-out, providing fast, accurate analysis that helps optimize performance without compromising reliability.

You can learn more about clock aging and Infinisim’s approach in this blog on SemiWiki. And that’s a look at the perils of aging from a semiconductor device perspective.

Also Read:

A Closer Look at Conquering Clock Jitter with Infinisim

Afraid of mesh-based clock topologies? You should be

Power Supply Induced Jitter on Clocks: Risks, Mitigation, and the Importance of Accurate Verification