
Mixed Signal Verification is Growing in Importance

by Bernard Murphy on 09-07-2023 at 6:00 am


I have historically avoided mixed signal topics, assuming they decouple from digital and can be left to the experts. That simple view no longer holds water. Analog and digital are becoming more closely linked through control loops and datapaths, requiring a careful balancing act between simulation performance, accuracy, and overall metric-driven verification.

Improvements in support for this area are not a nice-to-have. A 2020 Wilson survey reported a significant jump in respins attributed to analog problems. Worse yet, system customers are now demanding unified metric data for coverage, safety, and power. We’ll need to jump out of our digital comfort zone to better understand full system verification challenges and solutions. My thanks to Shekar Chetput (R&D Group Director in Xcelium/Mixed Signal) and Paul Graykowski (Director Product Marketing), both at Cadence, for guiding me along this path. 😊

Application drivers and the mixed signal challenge

Sensors of all types require digital support to gather calibration and drift compensation data. Calibration is also a factor for IO interfaces; DDR provides a familiar example. RF for 5G/6G must support multiple bands and hybrid beamforming, again administered from the digital domain. Battery management systems, essential for EVs, handle sophisticated charging and usage behaviors such as preconditioning, fast charge, top-off and battery protection, all (you guessed it) overseen digitally.

Medical implants, held to very high standards of safety and reliability, now offer wireless communication; their sensing and actuation functions must also be verified against body models (RC networks). Non-volatile memory cells handle multiple voltages and need support circuitry for read, programming, and wear/error detection. Even digital design depends on power management ICs (PMICs) supplying multiple voltages under digital control, supervising complex power management scenarios. These now extend to high voltage management in EVs.

Common to all objectives is the need to find a balance between the analog/RF world, where SPICE models are high accuracy but very low cycles/second, and the digital world with very high cycles/second throughput but very low analog accuracy (0/1 for voltage and no concept of current or impedance). Co-simulation is the obvious answer, but you can’t just bolt together low accuracy/high performance and high accuracy/low performance. These need intelligent interfaces.

Finding the right balance

First, make SPICE run faster and digital simulation more accurate. Cadence speeds up SPICE through the Spectre FX simulator, in which portions of the circuit can run in any of four modes, from full analog accuracy through progressively abstracted modes that preserve some level of accuracy while trading away complete precision.

To improve accuracy in digital, a first step is Verilog-AMS/SystemVerilog wreal support: a real-number signal good enough for simple interfaces. Something closer to analog modeling is possible through real number models (RNM) supported by the Verilog/SV nettype, where a signal is modeled as a voltage, current, and impedance structure, allowing resolution between connected nets. Cadence provides an RNM nettype called EEnet (Electrically Equivalent net). With EEnets it is possible to build a meaningful behavioral model that runs tests orders of magnitude faster while approaching SPICE-level accuracy in some use cases.
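The resolution idea behind an RNM nettype can be illustrated with a small sketch (Python here, purely for illustration; this is not Cadence's actual EEnet implementation): each driver carries a voltage and an output impedance, and connected drivers resolve to a conductance-weighted node voltage.

```python
# Illustrative model of real-number-net resolution: each driver is a
# Thevenin source (voltage, output impedance). The resolved node voltage
# is the conductance-weighted average of the driver voltages -- the same
# idea an RNM nettype resolution function applies to connected nets.
from dataclasses import dataclass

@dataclass
class Driver:
    voltage: float   # Thevenin voltage (V)
    r_out: float     # output impedance (ohms)

def resolve(drivers):
    """Resolve N parallel Thevenin drivers into one node voltage."""
    g_total = sum(1.0 / d.r_out for d in drivers)
    return sum(d.voltage / d.r_out for d in drivers) / g_total

# A strong 1.8 V driver (10 ohm) against a weak 0 V pull-down (10 kohm):
v = resolve([Driver(1.8, 10.0), Driver(0.0, 10_000.0)])
print(f"resolved node voltage: {v:.4f} V")  # close to 1.8 V
```

The strong driver wins, but not completely: the weak pull-down tugs the node slightly below 1.8 V, which a pure 0/1 digital model could never express.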

Together Spectre FX and RNM/EEnet models provide a spectrum of possibilities for modeling. A full behavioral wreal or EEnet model can be very useful in architectural design to explore different options without getting too bogged down in detail. When models are available, Shekar tells me this use case is now attracting a lot of attention.

In more detailed verification mix and match is often ideal: RNM for certain analog blocks for speed, and SPICE level where accuracy is important, such as in sensitivity analyses against supply voltage and temperature variations, beyond the scope of RNM analyses.

Building models

This all sounds great, but where do these models come from? The basic nettype is flexible but very low-level, requiring significant investment from an analog designer unfamiliar with SystemVerilog. Cadence provides an EEnet standard library of common base circuits (think capacitors, diodes, inductors, MOS devices), plus a test library of mixed signal modules showing examples of how these components are used. Designers can build more complex components schematically from these building blocks.

Shekar tells me that, thanks to work with customers from the very early days of EEnet, this base library is very stable and has recently been released as part of the Xcelium Mixed-Signal App. Cadence is now building and reviewing several mid-level components (think PLLs, voltage regulators, ADCs, DACs). Their customers are also building their own mid-level components, extending even further to more complex functions. It seems necessity is driving progress rather than waiting for pre-packaged libraries.

A quick digression on standardization since significant effort goes into building models. Lu Dai (Accellera chair) told me at DVCon this year that the Accellera mixed signal working group is very active, and that demand from users is intense. Cadence is a participant and has an established reputation in this domain, so I assume their releases are likely to be close to whatever is finally agreed in the standard. Lu warned however that some SV mixed signal update requests are moving slowly since the SV standard is now under IEEE where updates are infrequent. Accellera are considering workarounds.

Testbench automation, assertions, coverage, etc.

Higher simulation throughput is always an important goal but mixed signal verification teams need more. They want the automation their digital peers routinely use and have been lobbying hard to get these extended to mixed signal. A UVM-AMS working group under Accellera is already underway to this end. A standard is not yet released and is also wrestling with scheduling problems, but they are on the right track.

In the meantime, designers and verifiers serve these needs through proprietary flows; I imagine these too are tracking the evolving standard. Cadence supports metric driven verification across digital and analog through UVM testbenches, regular assertions and complex mixed signal assertions, together with randomization. For pure analog, coverage and other status can be imported from the Virtuoso ADE Verifier into vManager.

In summary, there have been significant advances in mixed signal verification and there is hope for progress through standardization. Mixed-signal verification truly is becoming a first-class partner with digital verification. You can get more information about the Xcelium Mixed Signal app HERE, the Spectre FX simulator HERE and a useful webinar on mixed signal HERE.


Advancing Semiconductor Processes with Novel Extreme UV Photoresist Materials

by Rupesh Yelhekar on 09-06-2023 at 10:00 am


Introduction

The ever-growing demand for faster, smaller, and more efficient electronic devices has fueled the semiconductor industry’s relentless pursuit of innovation. One crucial technology at the heart of semiconductor manufacturing is Extreme Ultraviolet Lithography (EUVL), which achieves smaller feature sizes with higher resolution, leading to miniaturized devices. Researchers and companies across the globe are focusing on developing novel Extreme UV (EUV) photoresist materials that support EUVL patterning at nanometer-scale resolutions and improve the performance of semiconductor devices. [1], [2], [3]

Lithography is a crucial step in semiconductor fabrication, where patterns are transferred onto wafers to create integrated circuits and other microstructures. Traditional lithography relies on deep ultraviolet light, but as integrated circuits reach single-digit nanometer scales, EUV lithography becomes imperative. EUV light operates at wavelengths around 13.5 nanometers, enabling the printing of significantly smaller features with high precision. EUV photoresists are light-sensitive materials used in the semiconductor manufacturing process, particularly in advanced lithography techniques. These materials must withstand high-energy EUV photons and provide high-resolution patterning capabilities. Developing EUV photoresist materials poses several challenges: the resists must be highly sensitive to the short wavelength, they must achieve the high resolution essential for producing intricate patterns below 3 nm, line-edge roughness must be minimized, and outgassing (contamination) must be controlled to keep production running. [2], [4], [5]

These innovative materials are often classified as Chemically Amplified Photoresists (CARs), Non-Chemically Amplified Resists, Inorganic EUV Photoresists, and Hybrid EUV Photoresists, depending on their formulations or compositions. When exposed to EUV light, they undergo chemical or physical changes, enabling the accurate transfer of patterns onto surfaces. [2], [6]

Understanding Extreme UV Photoresist Material

EUV photoresist materials are light-sensitive substances that undergo chemical changes when exposed to high-energy EUV photons. EUV photons generate photoacids from a photosensitive compound. This acid catalyzes a deprotection reaction in the resist polymer, making it more soluble in the developer solution. The amplified reaction enhances sensitivity and enables high-resolution patterning. As semiconductor nodes advance to smaller scales, maintaining resolution, sensitivity, and pattern fidelity becomes more complex and challenging. Research is ongoing to develop new materials, mechanisms, and processing techniques to address these challenges and enable further miniaturization. [2], [4], [6]

Exhibit 1: Types of Extreme UV Photoresist Materials

  • Chemical Amplified Resist: Chemically amplified photoresists are the most commonly used EUV photoresists. They employ a photoacid generator (PAG) that produces acid upon exposure to EUV photons. This acid catalyzes a chemical reaction in the resist, leading to the dissolution of the exposed regions during the development process. CARs are known for their high sensitivity, making them suitable for low-dose EUV exposure and improving throughput during semiconductor manufacturing. They may find applications in optical devices, displays, and advanced packaging. [2], [7], [8]
  • Inorganic EUV Photoresists: Inorganic photoresist materials with different EUV absorption coefficients and high etch resistance are important for solving some existing problems, so many researchers have begun to study inorganic materials for photoresists. These materials differ from organic CARs in that they are composed of inorganic materials, such as metal oxides or metal-containing compounds. One reported approach applies a metal oxide system with acrylic acid as the organic ligand to EUV lithography. Inorganic photoresists are expected to offer higher thermal stability and reduced outgassing than their organic counterparts. They may find applications in extreme environments or specialized semiconductor processes. [2], [7], [8]
  • Non-chemically Amplified Resists: Unlike CARs, non-chemically amplified photoresists do not rely on acid-catalyzed reactions. Instead, they directly undergo a photolytic reaction upon EUV exposure, resulting in a change in solubility. These materials often require higher doses of EUV light for patterning and are being explored for specific applications and process requirements. [2], [7], [8]
  • Hybrid EUV Photoresists: Hybrid EUV photoresists combine organic and inorganic elements to leverage the advantages of both material types. In one reported approach, the resins used for purification after the ligand exchange reaction are polystyrene resins functionalized with tertiary amines, piperidine, and dimethylamine. These materials aim to provide enhanced sensitivity, resolution, and thermal stability, addressing some of the limitations of purely organic or inorganic photoresists. [2], [7], [8]

Exhibit 2: Extreme UV Photoresist Materials: Publication Trend over the Last Five Years

Key Challenges in EUV Photoresist Development
  • EUV Sensitivity: Sensitivity is one of the key challenges of EUV lithography; developing and optimizing photoresist materials that effectively absorb and react with EUV light to produce precise patterns on semiconductor wafers is difficult. EUV photons are scarce and expensive, necessitating photoresist materials with high sensitivity to achieve an adequate throughput of 100 to 120 wafers per hour during manufacturing.[9], [10]
  • Resolution and LER: As feature sizes shrink, maintaining high resolution without excessive line-edge roughness (LER) becomes problematic. An important potential source of LER for EUV resists is photon shot noise, due to the high photon energy. The LER challenge involves minimizing irregularities or roughness along the edges of the developed photoresist lines that form the transistor features. Excessive LER can lead to variations in transistor performance and reduced chip yield. Manufacturers need to optimize the photoresist formulation and process conditions to achieve an LER of 2 nm, but at a sensitivity of only 70 mJ/cm², with smoother, more precise edges on the transistor features. [2], [9]
  • Outgassing: The outgassing problem in EUV lithography refers to releasing volatile organic compounds (VOCs) or other materials from the photoresist during exposure to EUV light. These outgassed materials potentially contaminate the surrounding environment, including the optics and mirrors used in the EUV lithography equipment. Contamination reduces equipment performance and production yield, alongside increasing maintenance requirements. Controlling and minimizing outgassing is critical to maintaining the reliability and efficiency of the entire EUV lithography process.[9], [11]
  • Thermal Stability: EUV exposure generates considerable heat, demanding stable photoresist materials under high-energy conditions. Many applications demand coatings with excellent thermal stability. Most commercially available removers rapidly dissolve resist layers after thermal loads of up to 130°C. [9], [12]
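To make the shot-noise point concrete, here is a back-of-envelope sketch (Python, with assumed round numbers: a 13.5 nm photon carries roughly 92 eV) of how photon counts per square nanometer, and hence relative Poisson noise, scale with dose:

```python
# Back-of-envelope photon shot-noise estimate for EUV exposure.
# Assumptions (illustrative, not from the article): one 13.5 nm photon
# carries ~92 eV, and we count photons landing in a 1 nm^2 pixel.
import math

EV = 1.602e-19            # joules per electron-volt
E_PHOTON = 92.0 * EV      # energy of one 13.5 nm EUV photon (J)

def photons_per_nm2(dose_mj_per_cm2):
    """Photons delivered per square nanometer at a given dose."""
    dose_j_per_nm2 = dose_mj_per_cm2 * 1e-3 / 1e14   # 1 cm^2 = 1e14 nm^2
    return dose_j_per_nm2 / E_PHOTON

def relative_shot_noise(dose_mj_per_cm2):
    """Poisson relative fluctuation 1/sqrt(N) in a 1 nm^2 pixel."""
    return 1.0 / math.sqrt(photons_per_nm2(dose_mj_per_cm2))

for dose in (20, 70):
    n = photons_per_nm2(dose)
    print(f"{dose} mJ/cm^2 -> {n:.1f} photons/nm^2, "
          f"shot noise ~{100 * relative_shot_noise(dose):.0f}%")
```

At 20 mJ/cm² only about a dozen photons land in each square nanometer, so the pixel-to-pixel fluctuation is tens of percent; raising the dose (lowering sensitivity) reduces the noise, which is exactly the sensitivity/LER trade-off described above.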
Promising Advancements in Novel EUV Photoresist Materials
  • High Sensitivity, Low Dose Materials: Researchers are exploring innovative chemically amplified photoresists that react strongly to EUV photons even at lower doses, improving throughput to 100 wafers per hour and reducing manufacturing costs. [10], [13], [14]
  • Improved Resolution and LER Control: Novel materials such as chemically amplified and inorganic resists are designed to mitigate LER while maintaining high-resolution patterning capabilities. Advanced resist compositions and unique polymer structures play a vital role in achieving higher sensitivity to EUV light, improving contrast, and reducing LER below 2 nm. [13], [14]
  • Reduced Outgassing: The development of low-outgassing photoresists ensures cleaner EUV exposure, resulting in a higher yield and improved semiconductor device reliability. Reducing outgassing is crucial to maintaining the cleanliness and integrity of the EUV lithography process, which is highly sensitive to contaminants. Semiconductor manufacturers collaborate closely with material suppliers and equipment manufacturers to ensure that the photoresists and other materials used in the EUV lithography process meet stringent outgassing requirements and contribute to producing high-quality semiconductor devices.[11], [13], [14]
  • Thermal Stability Solutions: To tackle the thermal challenges of EUV lithography, researchers are engineering materials with enhanced thermal stability, allowing more prolonged exposure times without compromising performance. [12], [13], [14]

Exhibit 3:

Collaboration and Future Prospects

Developing and optimizing novel EUV photoresist materials requires collaboration between semiconductor manufacturers, material suppliers, and research institutions. The semiconductor industry’s pursuit of next-generation devices relies on the continual advancement and refinement of EUV lithography technology. [14], [18]

The successful implementation of novel EUV photoresist materials will unlock numerous possibilities for semiconductor technology. Smaller and more powerful devices will revolutionize various sectors, including data centers, healthcare, automotive, and artificial intelligence. The impact is not limited to traditional computing: smaller feature sizes allow semiconductor manufacturers to produce chips with higher transistor density, improved performance, and lower power consumption. These advances also enhance the capabilities of semiconductor devices, enabling the production of advanced processors, memory devices, and sensors that drive technological innovation across industries. [14], [18], [19]

Conclusion

Novel Extreme UV photoresist materials represent a crucial stepping-stone in the relentless drive to enhance semiconductor technology. The ability to print ever smaller and more precise features on semiconductor wafers is vital to meeting the demands of the digital age. Collaborative research and development in this field promise a bright future for the semiconductor industry, ensuring the continuous evolution of electronic devices that empower and enrich our lives.

Developing novel EUV photoresist materials requires collaboration among material scientists, chemists, physicists, and engineers. Material suppliers, semiconductor manufacturers, and research institutions work in tandem to design, characterize, and test these materials under demanding EUV exposure conditions.

The field of EUV lithography and photoresist development is continuously evolving. Researchers are exploring a wide range of material innovations, including inorganic resists, nanostructured materials, and hybrid polymers. As the semiconductor industry strives for even greater levels of miniaturization and performance, the pursuit of novel EUV photoresist materials remains an active area of research and innovation.

Authors:

Rupesh Yelhekar
Principal Consultant
Hi-tech Operations

Vikas Konkimalla
Senior Consultant
Hi-tech Operations

Sources
  1. https://ieeexplore.ieee.org/document/9286794
  2. https://spie.org/news/6883-novel-concept-for-extreme-uv-photoresist-materials?SSO=1
  3. https://sci-hub.se/10.1109/IWAPS51164.2020.9286794
  4. https://www.eenewseurope.com/en/irresistible-materials-ramps-production-of-euv-photoresist-2/
  5. https://www.nature.com/articles/srep09235
  6. https://www.shinetsu.co.jp/en/products/electronics-materials/photoresist/
  7. https://www.sciencedirect.com/science/article/abs/pii/S1369702123001785
  8. https://colab.ws/articles/10.1016/j.mattod.2023.05.027
  9. https://www.osti.gov/servlets/purl/1039917
  10. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7466712/
  11. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4646569/
  12. https://www.allresist.com/thermostable-photoresists/#:~:text=Many%20applications%20demand%20coatings%20with,with%20high%20shape%20accuracy%20(Fig
  13. https://www.sciencedirect.com/science/article/abs/pii/S0032386123003506
  14. https://www.tandfonline.com/doi/abs/10.1080/08940886.2019.1634431?journalCode=gsrn20
  15. https://www.digitimes.com/news/a20230626PD203/euv-photoresist-chemical-materials-asml.html
  16. https://www.sciencedirect.com/science/article/abs/pii/S0010854523002965#:~:text=Consequently%2C%20recent%20investigations%20in%20EUV,contain%20typically%20one%20metal%20center
  17. https://www.businesswire.com/news/home/20220802005747/en/Inpria-Co-Developing-Metal-Oxide-Resist-with-SK-hynix-to-Reduce-Complexity-of-Patterning-for-Next-Generation-DRAM
  18. https://www.mdpi.com/2073-4360/11/12/1923
  19. https://chemistry-europe.onlinelibrary.wiley.com/doi/10.1002/ejic.201900745
Also Read:

Modeling EUV Stochastic Defects with Secondary Electron Blur

Enhanced Stochastic Imaging in High-NA EUV Lithography

Application-Specific Lithography: Via Separation for 5nm and Beyond


Rad Hard Circuit Design and Optimization for Space Applications

by Daniel Payne on 09-06-2023 at 6:00 am


The Brazilian Ministry of Science and Technology (MCTIC) has a research unit, Renato Archer Information Technology Center (CTI), and two of their IC engineers presented at the MunEDA User Group meeting this May on the topic of designing Latching Current Limiter (LCL) circuits for space applications with RHBD (radiation-hardened by design) techniques.

Small satellites operate in a hostile environment but do not have adequate shielding due to their low weight requirement. Therefore, electronic circuits must use techniques for radiation-hardening by process (RHBP) or by design (RHBD).

The LCL is a smart power switch that protects an electronic system by isolating faults in power distribution and limiting the current. This protects space-based electronics in a satellite from radiation effects such as:

  • Total Ionizing Dose (TID)
  • Displacement Damage (DD)
  • Single Event Effects (SEE)
  • Single Event Latch-up (SEL)

In their MUGM2023 contribution, the authors showed three LCL designs and how MunEDA WiCkeD contributed to their RHBD circuit optimization:

LCL smart power switch diagram

In order to mitigate radiation effects by design (RHBD), several decisions were made:

  • Trench isolation and guard rings to reduce leakage currents
  • Replace standard MOS transistor layout with enclosed-layout transistors (ELT) to eliminate drain-source leakage caused by the accumulation of charges in the oxide region (TID)
  • Device redundancy for increased fault tolerance
  • Use MunEDA WiCkeD to optimize the LCL circuits’ performance and robustness against parameter and process variation

Two circuit versions were designed, one with standard transistors and the other using Radiation Hardened By Design (RHBD) techniques.


Redundant structure with Enclosed Layout Transistor (ELT)

EDA tools used were Cadence Virtuoso for schematics, MunEDA WiCkeD for optimization, and Cadence Assura or Siemens Calibre for DRC/LVS. The advantages of using WiCkeD for this project were:

  • Shorter design time and engineering effort
  • All critical blocks of the LCL optimized for performance and robustness: operational amplifiers, bandgap voltage reference circuits, current references, comparators, drivers, current sensors, control loop, timers, telemetry circuits, thermal shutdown, and voltage regulators.
  • Large circuit control loop optimization: limiting the short-circuit peak current made an inductor next to the power transistor necessary, introducing stability issues that had to be resolved by optimizing the whole control loop with WiCkeD
  • Improved accuracy
  • Made the Rad Hard By Design version (with ELT and redundancy) perform like the standard transistor version
  • All LCL circuits worked as designed

Test chips for the two LCL approaches were fabricated and then tested. The RHBD circuit performed similarly to the standard transistor circuit, so both LCL circuits worked properly under different loads and power transistors. The short-circuit peak current was eliminated, keeping the entire system safe. They plan to perform radiation tests to validate the hardening circuit techniques used.

Summary

Using a RHBD approach was shown to be effective in an LCL circuit to eliminate short-circuit peak currents, keeping an electronic system safe from failure modes while operating in space. Both the standard transistor and ELT approaches were optimized using the MunEDA WiCkeD tool, as part of a multi-vendor EDA tool flow.

Related Blogs


Extending RISC-V for accelerating FIR and median filters

by Don Dingee on 09-05-2023 at 10:00 am


RISC-V presents a unique opportunity for designers to extend the microarchitecture with custom instructions. One possible application is digital signal filtering using finite impulse response (FIR) or median filters, potential algorithms for carrier demodulation schemes in communications systems like 5G. Codasip application teams studied the potential for accelerating FIR and median filters on their L31 RISC-V core, documenting their design process for extending RISC-V and the execution results in their latest white paper.

A brief overview of FIR and median filters

FIR and median filters seek to remove noise from an input signal using a set of N time-domain samples of the input. Neither filter uses internal feedback. A reasonable number of samples fit in a RISC-V register array, shifting on each sample clock with the oldest sample leaving the array and the newest one entering it at the other end.

The FIR filter draws its name from its finite time to settle to zero (if the input signal is zero for at least N consecutive samples). Samples in the order received are weighted by multiplication with filter coefficients, and a summation obtains the output. Using the standard RISC-V instruction set results in 2N-1 instructions, 2N memory loads, and approximately N comparisons and jumps depending on for-loop coding.
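The arithmetic above can be sketched in a few lines of Python (the coefficients here are an arbitrary illustration, not taken from the white paper):

```python
# Direct-form FIR filter as described above: weight the last N samples
# by filter coefficients and sum. The sample window shifts on each new
# sample -- newest in at one end, oldest out at the other.
def fir_step(window, coeffs):
    """One output sample; window[0] is the newest sample."""
    return sum(c * x for c, x in zip(coeffs, window))

def fir_filter(samples, coeffs):
    n = len(coeffs)
    window = [0.0] * n               # register array, oldest sample last
    out = []
    for s in samples:
        window = [s] + window[:-1]   # shift: newest enters, oldest leaves
        out.append(fir_step(window, coeffs))
    return out

# A 4-tap moving average (all taps 0.25) smoothing a step input:
print(fir_filter([0, 0, 4, 4, 4, 4], [0.25] * 4))
# -> [0.0, 0.0, 1.0, 2.0, 3.0, 4.0]
```

The inner loop makes the instruction count visible: N multiplies plus N-1 adds per output sample, matching the 2N-1 arithmetic instructions noted above, and once the input is zero for N samples the output settles back to zero, hence "finite impulse response."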

The median filter relies on sorting a sample set instead of multiplication. The sequence of sorted elements results in a median – the sample in the middle – taken as the output. Again, using standard RISC-V instructions, a sort usually takes about N·logN arithmetic comparisons and roughly the same number of memory operations.
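The sort-based approach described above can be sketched as follows (Python, purely illustrative):

```python
# Median filter over a sliding window of N (odd) samples: sort the
# window and take the middle element as the output.
def median_step(window):
    s = sorted(window)        # ~N*logN comparisons, as noted above
    return s[len(s) // 2]     # middle element of the sorted window

def median_filter(samples, n):
    window = [0] * n
    out = []
    for x in samples:
        window = [x] + window[:-1]   # shift: newest in, oldest out
        out.append(median_step(window))
    return out

# A single impulse (the 9) is rejected outright, while the later
# step edge to 5 passes through with only a short delay:
print(median_filter([0, 0, 9, 0, 0, 5, 5, 5], 3))
```

This noise-spike rejection without smearing edges is what makes median filters attractive next to FIR smoothing.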

Custom hardware blocks for FIR (left) and Median (right) digital filters. Source: Codasip

Codasip teams investigated the premise that custom instructions for each filter could speed up normal sample-by-sample execution once each FIFO fills with N samples – the “filtering window.” Coincidentally, both filters can be accelerated in three RISC-V clock cycles, though with different parallelized execution steps:

  • For the FIR filter, the first cycle fetches filter coefficients, the second multiplies and sums, and the third writes the result back to the register file.
  • For the median filter, the first cycle removes the oldest sample from the sort and shifts to close the gap, the second positions the newest sample and shifts bigger samples before inserting it, and the third pulls the median from the new sort.
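The median update steps can be mirrored in software by keeping the window in sorted order and doing a remove/insert/read-middle sequence per sample – a rough Python analogue of the three cycles described (the hardware performs the shifts in parallel; `bisect` does them sequentially here):

```python
# Software analogue of the three-cycle median update described above:
# (1) remove the oldest sample from the sorted window, (2) insert the
# newest sample at its sorted position, (3) read the middle element.
from bisect import insort, bisect_left
from collections import deque

class MedianWindow:
    def __init__(self, n, fill=0):
        self.fifo = deque([fill] * n, maxlen=n)   # samples in arrival order
        self.sorted = sorted(self.fifo)           # same samples, sorted

    def push(self, x):
        oldest = self.fifo[0]                     # evicted when x enters
        del self.sorted[bisect_left(self.sorted, oldest)]   # step 1
        insort(self.sorted, x)                              # step 2
        self.fifo.append(x)
        return self.sorted[len(self.sorted) // 2]           # step 3

w = MedianWindow(5)
print([w.push(x) for x in [3, 1, 4, 1, 5, 9, 2, 6]])
```

Keeping the window sorted incrementally avoids the full N·logN re-sort on every sample, which is the same insight the custom hardware exploits.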

Basics of extending RISC-V with CodAL

The starting point for the investigation is the stock Codasip L31 core, shown below with a block for custom extensions. In the white paper, Codasip describes creating four custom instructions for accelerating FIR and median filters: one setting up the FIR filter flow, one setting up the median filter flow, one setting FIR coefficients, and one clearing the FIFO.


L31 block diagram. Source: Codasip

Codasip’s CodAL language simplifies the description of processor features in a compact syntax similar to C++. Codasip Studio converts CodAL markup to RTL so designers can work in a more familiar high-level language. The “fir_push” instruction represented in CodAL is the simpler of the two examples; the white paper has a complete discussion on the more complex “med_push” instruction and constructing a cycle-accurate state machine model for both instructions.

CodAL custom instruction implementation for the FIR filtering flow. Source: Codasip

Using CodAL, the FIR filter fits in about 150 lines of code, and the median filter is slightly larger at about 160 lines – entirely describing hardware resources and ready-to-simulate cycle-accurate instructions. By comparison, using Verilog, the FIR filter is 670 lines of code, and the median filter is 1180, without automatic compiler and simulator awareness.

Extending RISC-V for substantial PPA savings

Accelerating FIR and median filters is only part of an application, but the results show how it is possible to take on crucial routines by extending RISC-V microarchitecture. We’ll save the specifics for the white paper – in short, a performance increase of greater than 27x for either the FIR or median filter is seen in this approach, with only a 37% increase in L31 core area for the FIR, less for the median filter. Again, this is achievable without resorting to hand-coded RTL and dealing with compiler and simulator extensions – CodAL handles those details automatically.

To get the whole story, download the Codasip technical paper (registration access for full text):

Finite Impulse Response (FIR) and Median Filter accelerators in CodAL


Fitting GPT into Edge Devices, Why and How

by Bernard Murphy on 09-05-2023 at 6:00 am

It is tempting to think that everything GPT-related is just chasing the publicity bandwagon and that articles on the topic, especially with evidently impossible claims (as in this case), are simply clickbait. In fact, there are practical reasons for hosting at least a subset of these large language models (LLMs) on edge devices such as phones, especially for greatly improved natural language processing. At the same time the sheer size of such models, routinely associated with massive cloud-based platforms, presents a challenge for any attempt to move an LLM to the edge. Making the transition to mobile GPT requires some pretty major innovation.

Why is mobile GPT worth the effort?

When Siri and similar capabilities appeared, we were captivated – at first. Talking to a machine and having it understand us seemed like science fiction become fact. The illusion fell apart as we started to realize their “understanding” is very shallow and we’re reduced to angrily repeating variants of a request in the hope that at some point the AI will get it right. Phrase-level recognition (rather than word-level) helps somewhat but is still defeated by the flexibility of natural language. As a user your question will probably be most effective if you know the training phrases in advance, rather than if you ask a natural question. This hardly rises to the level of natural language processing.

LLMs have proven very successful in understanding natural language requests effectively through brute force. They learn across worldwide datasets and use transformer methods for learning based on self-attention algorithms, recognizing common patterns in natural language unconstrained by proximity of keywords or pre-determined phrase structures. In this way they can extract and rephrase intent, or some suggested refinements of intent, from a natural language request. This is much more like NLP and can be valuable quite independent of ability to retrieve factual references from the internet. But it is still running in the cloud.

How can mobile GPT be accomplished?

More capable smart speakers do some local processing beyond the basic voice pickup (speech recognition, tokenization) then pass the real understanding problem back to the cloud. There are proposals that a similar hybrid approach could be used from our phones to upgrade NLP quality. But that has the usual downsides – latency and privacy concerns. It would be greatly preferable to keep all or most of the compute on the edge device.

Do we really need the same huge model used in the cloud? GPT-4 has about a trillion parameters, wildly infeasible to fit in a mobile application. In 2021 Google DeepMind published a significant advance, their Retrieval-Enhanced Transformer (Retro), which breaks away from storing facts in the model, recognizing that these can be retrieved directly from pure text data or searches rather than from model weights. (There are some wrinkles in doing this effectively but that is the basic principle.) This change alone can reduce model size to a few percent of the original size, getting closer but still bulky for a handheld device.

Going lower requires pruning and quantization. Quantization is already familiar from mapping trained CNN or other models to edge devices. You selectively replace floating-point weights with fixed point, down to 8-bit, 4-bit or even 2-bit, while constantly checking accuracy of results to ensure you didn’t go too far. Together with compression and decompression, the more fine-grained the quantization the smaller you can make the model; inference also becomes faster and lower power because DDR activity can be reduced. Pruning is a step in which you test the sensitivity of results to selectively replacing weights with zeroes. For many NLP models a significant percentage of the weights are really not that important. With an effective sparsity handler, such pruning further reduces effective size and further increases performance. How much improvement these techniques can deliver depends on the AI platform.
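A minimal sketch of both steps, using uniform symmetric quantization and magnitude pruning as generic stand-ins for what a production toolchain would do (real flows add per-channel scales, calibration data and accuracy checks):

```python
import numpy as np

def quantize(weights, bits):
    """Uniform symmetric quantization of float weights to fixed point."""
    levels = 2 ** (bits - 1) - 1               # e.g. 127 for 8-bit
    scale = np.abs(weights).max() / levels
    q = np.round(weights / scale).astype(np.int8)
    return q, scale                            # dequantize as q * scale

def prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

rng = np.random.default_rng(1)
w = rng.normal(size=1000).astype(np.float32)

q, scale = quantize(w, bits=8)
error = np.abs(w - q * scale).max()  # worst-case rounding error, <= scale/2

w_sparse = prune(w, sparsity=0.5)    # roughly half the weights become zero
```

The "checking you didn't go too far" step in the text corresponds to re-running validation with `q * scale` in place of `w` and backing off the bit width or sparsity level if accuracy drops.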

CEVA NeuPro-M for Mobile GPT

The NeuPro-M NPU IP is a family of AI processor engines designed for embedded applications. The pre-processing software that maps a model to these engines can reduce effective model size by up to 20:1, bringing the total size of a Retro-compressed LLM down to around a billion parameters, comfortably within the capacity of a modern edge AI engine such as the NeuPro-M IP.

NeuPro-M cores come in configurations with 1 to 8 parallel engines. Each engine provides a set of accelerators, including a true sparsity module to optimize throughput for unstructured sparsity, a neural multiplier for attention or softmax computations, and a vector processing unit to handle any special-purpose customization an application might need.

Cores share a common L2 memory and can run streams in parallel, for higher throughput and especially to parallelize time-consuming softmax normalization computations with attention steps, effectively eliminating the latency overhead normally incurred by normalization. Ranging from 4 TOPS/core up to 256 TOPS/core, NeuPro-M can deliver over 1200 TOPS.

If we want to replace Siri and similar voice-based apps, we need to add voice recognition and text to speech for a fully voice-centric interface. CEVA already has voice pickup in WhisPro, and speech recognition on input and text to speech on output can each be handled by small transformers which can run on the NeuPro-M. So you can build a full voice-based pipeline on this platform, from voice input to recognition and intelligent response, to speech output.

If you really want your phone to write a detailed essay on a complex topic from a voice prompt, it may still need to turn to the internet to retrieve factual data from which it can generate that essay. More realistically, in many cases (find a restaurant, find a movie on the TV, tell me the weather in Austin) it will probably only need to go to the internet for that last piece of data, no longer needing that step to accurately understand your question. Pretty cool.

You can learn more at CEVA’s ChatGPT Mobile page.


Interface IP in 2022: 22% YoY growth still data-centric driven

by Eric Esteve on 09-04-2023 at 10:00 am


We showed in the 2022 “Design IP Report” that the wired interface IP category is a growing share of the total IP market, a trend confirmed year after year. The interface IP category has moved from an 18% share in 2017 to 25% in 2022.

During the 2010s, the smartphone was the strong driver for the IP industry, pushing the CPU and GPU categories and interface protocols like LPDDR, USB and MIPI. Since 2018, and again in 2022, the new drivers are data-centric applications, including servers, datacenters, wired and wireless networking and emerging AI. All these applications share the need for ever-higher bandwidth, in terms of both speed and volume. This translates into high-speed memory controllers (DDR5, HBM or GDDR6) and faster releases of interface protocols (PCIe 5, 400G and 800G Ethernet, 112G SerDes). We think this trend will continue through the 2020s. It can be illustrated by comparing TSMC revenues by platform in 2022 and 2020: HPC has grown from 33% to 41% while smartphone has declined from 47% to 33%!

As usual, IPnest has made a five-year forecast (2023-2027) by protocol and computed the CAGR for each (picture below). As you can see in the picture, most of the growth is expected to come from three categories: PCIe, memory controller (DDR) and Ethernet & D2D, exhibiting five-year CAGRs of 19.2%, 18.8% and 22.3% respectively. This should not be surprising, as all these protocols are linked with data-centric applications! If we consider that the weight of the Top 5 protocols was $1440 million in 2022, the value forecasted for 2027 is $3500 million, a CAGR of 18%.
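The CAGR arithmetic is easy to check; using the Top 5 figures quoted above, the standard formula returns roughly 19%, in the same range as the survey's reported 18% (the survey may weight protocols differently):

```python
def cagr(start, end, years):
    """Compound annual growth rate: the constant yearly rate that
    turns `start` into `end` over `years` years."""
    return (end / start) ** (1 / years) - 1

# Top 5 protocols: $1440M (2022) -> $3500M (2027), per the survey
rate = cagr(1440, 3500, 5)
print(f"{rate:.1%}")  # -> 19.4%
```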

Conclusion

Synopsys has built a strong position on every protocol and in every application, enjoying more than 55% market share, through strategic acquisitions since the early 2000s and by offering integrated solutions, PHY and controller. We still don’t see any competitor in a position to challenge the leader.

In 2020 we saw the emergence of Alphawave Semi, building a strong position in the high-end interface IP segment (thanks to PAM4 DSP SerDes) and creating a “Stop-for-Top” strategy, in opposition to Synopsys’ “One-Stop-Shop”. If we consider that this high-end segment, strongly driven by HPC (including datacenter, AI, storage, etc.), is expected to grow considerably during the 2020s, Alphawave Semi could enjoy 25% market share of this $3 billion sub-segment by 2027, making revenue of $600 to $800 million realistic. At that time Synopsys revenues could be close to the $2 billion range for interface IP alone.

In 2023, we think a major strategy change will unfold during the decade. IP vendors focused on high-end IP architectures will try to develop a multi-product strategy and market ASICs, ASSPs and chiplets derived from leading IP (PCIe, CXL, memory controller, SerDes…). Some have already started, like Credo, Rambus and Alphawave. Credo and Rambus already see significant revenue from ASSPs, but we will have to wait until 2025, at best, to see measurable results from chiplets. The interesting question is whether Synopsys or Cadence will adopt this new strategy, or wait until its success has been proven before making a decision (by acquisition if they want to join this multi-product strategy).

This is the 15th version of the survey, started in 2009 when the interface IP category market was $250 million (versus $1650 million in 2022), and we can affirm that the five-year forecasts have stayed within a +/- 5% error margin!

IPnest predicts in 2023 that the interface IP category will be in the $3750 million range in 2027 (+/- $200 million), and this forecast is realistic.

If you’re interested in this “Interface IP Survey”, released in July 2023, just contact me:

eric.esteve@ip-nest.com .

Eric Esteve from IPnest

Also Read:

Design IP Sales Grew 20.2% in 2022 after 19.4% in 2021 and 16.7% in 2020!

Interface IP in 2021: $1.3B, 22% growth and $3B in 2026

Stop-For-Top IP Model to Replace One-Stop-Shop by 2025


Former TSMC President Don Brooks

by Daniel Nenni on 09-04-2023 at 6:00 am


Don Brooks is well known to many long-time semiconductor insiders, like myself, but most SemiWiki readers have probably never heard of him. Don is a semiconductor legend and here is his story. It will be in two parts since he had a big impact on the semiconductor industry and TSMC. From 1991 to 1997 Don served as President of TSMC and helped grow the nascent company into what it is today, the world’s largest semiconductor foundry with a market capitalization of $500B.

Don Brooks passed away in 2013 and here is the story from his memorial. If you read between the lines you can get a real sense of who Don really was: a very intelligent, driven semiconductor professional of the highest caliber, absolutely.

Don graduated from Sunset High School in 1957 and was a key player on their basketball team, which won the City Championship his senior year. Don attended Tarleton State College on a basketball scholarship his freshman year. He married his high school sweetheart in 1958 and enrolled in SMU under a co-op program with Texas Instruments.

He happened to be assigned to TI’s Research Lab during a time when Jack Kilby invented/developed the integrated circuit. Consequently, his entire 25-year career at TI focused on the commercialization and production of semiconductors. He rapidly rose through the ranks of TI’s management and became the youngest man ever to be promoted to Senior Vice President at Texas Instruments. Under his leadership TI developed a reputation as the world’s leading supplier of MOS memories.

In 1983 he became President & CEO of Fairchild Industries in Mountain View, CA. He founded KLM Capital in 1988 and served as its Chairman for years. Don joined TSMC as President in 1991. During his tenure as President, TSMC returned to profitability, and grew to become the world’s largest independent semiconductor fabrication company.

Morris Chang, Founder and Chairman of TSMC, had these words to say about Don’s tenure as President of the company: “Since his arrival in 1991 Don Brooks has provided dramatic leadership that built TSMC into the world’s most successful dedicated foundry.” TSMC grew at an average annual rate of 54% during Don’s time as President and achieved record profits.

Following TSMC, he was a board member of United Microelectronics Corporation of Taiwan (NYSE: UMC and TSE: 2303) and previously served as its President and co-Chief Executive Officer from 1997-1999. From what I understand, Morris Chang and Don had a disagreement (broken promise) and moving across the street to UMC was Don’s way of resolving it.

In addition to Don’s success as a senior executive, he also had significant success as a private investor including, but not limited to:

Don was the first outside investor in Silicon Labs (NASDAQ: SLAB), one of the premier success stories of the Austin high-tech boom, and he was the first outside investor in Broadcom (NASDAQ: BRCM), one of the most successful startup semiconductor companies of all time.

One thing that impresses me about Don and other semiconductor legends is their dedication to family. In my opinion, fifty plus year marriages show true character, compassion, and the ability to compromise. My father’s parents were married for 72 years, I saw it first hand, something I aspire to.

There is a lot more to Don’s TSMC story of course and that is what I will cover in Part II.

Also Read:

How Taiwan Saved the Semiconductor Industry

Morris Chang’s Journey to Taiwan and TSMC

How Philips Saved TSMC

The First TSMC CEO James E. Dykes


Podcast EP179: An Expert Panel Discussion on the Move to Chiplets

by Daniel Nenni on 09-01-2023 at 10:00 am

Dan is joined by a panel of experts to discuss chiplets and 2.5/3D design. The panelists are: Saif Alam, Vice President of Engineering at Movellus Inc.; Tony Mastroianni, Advanced Packaging Solutions Director at Siemens EDA; and Craig Bishop, CTO of Deca Technologies.

In this spirited and informative discussion the panel explores the move to chiplets. Why it’s happening now and who can benefit from the trend are discussed in detail, along with considerations for ecosystem management, design methodology, the role of standards and addressing the risks associated with this new design style.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


The Incredible Journey of Analog Bits Through the Eyes of Mahesh Tirupattur

by Mike Gianfagna on 09-01-2023 at 6:00 am


If you’ve designed a chip with analog content (and who hasn’t), you know Analog Bits. Along the way, you likely met Mahesh. If you are a lover of fine wines, you probably know Mahesh quite well. More on that later. I got the opportunity to speak with him recently about what he’s been up to, both now and over the past few years. It’s a story about a love for technology and a love for wine. If you believe that wine is an art form, then the statement “life imitates art” is very relevant to what follows. Read on to learn about the incredible journey of Analog Bits through the eyes of Mahesh Tirupattur.

Wine, and Life Imitating Art

Just like in the high-tech world of Silicon Valley, there are many M&A transactions occurring in Napa Valley and beyond. Private equity firms are acquiring and consolidating many of the large wineries we’ve come to know and love over the years. While the details of these transactions are often not public, I do know a few facts about some of the larger ones, thanks to my love for wine and the connections I’ve made along the way.

The model for several of these deals is quite unique. A private equity firm will acquire a controlling interest in a winery and then essentially do nothing, allowing the original creativity to flow unhindered. The message to the owners is simple – we love your wine and the brand you’ve built. Please continue to do what you do regarding making your product. We’ll worry about the operational details.  And when you’re ready to step down, just call us and we’ll be ready to take over. Until then, keep focused on your passion.

This laissez faire acquisition strategy from the wine industry has found its way to other transactions as well. Case in point being the acquisition of Analog Bits by SEMIFIVE that occurred about a year and half ago. As covered on SemiWiki here, there is a variety of business models and foundry relationships that comprise the combination of these two companies. SEMIFIVE is the pioneer of platform-based SoC design, working with customers to implement innovative ideas into custom silicon in the most efficient way. The company has a close relationship with Samsung Foundry. Analog Bits is the leader in developing and delivering low-power integrated clocking, sensors and interconnect IP that are pervasive in virtually all of today’s semiconductors. The company has not only developed IP on the Samsung process, but it also has a close and growing relationship with TSMC.

One approach (and a common one) would be to combine the operations of both companies into one model with one set of relationships. That would cause a significant ripple effect in one or both of these company’s businesses, and not a good ripple effect. Rather than do that, SEMIFIVE took a page out of the winery acquisition playbook being used in Napa Valley and elsewhere.

Analog Bits continues to operate as an independent entity, but now as part of a larger enterprise. The company continues to do the things it loves to do, providing critical enabling IP that its customers need. Dan Nenni summarized it well in the SemiWiki post:

To me this acquisition is another 1+1=3. SEMIFIVE gets a strong IP base in North America plus foundry and customer relationships that have been silicon proven for 20+ years. Analog Bits gets the ability to scale rapidly and increase the depth and breadth of their IP offering.

I mentioned a connection between Mahesh and wine earlier. It turns out he is quite an accomplished Sommelier as well as a technologist, completing three of the four levels that pave the way to Master Sommelier. While there is still more road ahead for Mahesh to achieve this ultimate title, his progress in the face of also building a very successful IP business is noteworthy. There are 269 Master Sommeliers in the world today. This is truly a rare achievement. Mahesh has also become an expert in the making of Sake, which he claims is far more complex and nuanced than wine.

Perhaps this is the topic of a future blog post or podcast.

The Road Ahead

During my discussions with Mahesh, it was quite clear that he was happy with the outcome of the acquisition. The ability to continue to operate independently, continuing to do what he loves with the backing of a larger enterprise feels good. I can imagine the winemakers that were part of the Napa Valley acquisitions saying the same thing.

He talked about the great position Analog Bits enjoys in the development of purpose-built IP blocks for various high-growth markets. The track record and customer-focused nature of the company make this a great match. Mahesh talked about many new market opportunities. One interesting one is power management and spike detection. With so many cores and power domains in advanced designs, often fueled by AI, power spikes have become a very real liability. Analog Bits is developing on-chip IP to sense and manage these events.

Overall, Analog Bits is becoming more “sticky” for advanced designs thanks to their broad catalog and excellent track record. According to Mahesh, the future is bright and a larger operation at Analog Bits seems likely. And that’s just part of the incredible journey of Analog Bits through the eyes of Mahesh Tirupattur.

 


ISO 21434 for Cybersecurity-Aware SoC Development

by Kalar Rajendiran on 08-31-2023 at 10:00 am


The automotive industry is undergoing a remarkable transformation, with vehicles becoming more connected, automated, and reliant on software. While these advancements promise convenience, comfort and efficiency to the consumers, the nature and complexity of the technologies also raise concerns for functional safety and security. The ISO 26262 standard was established for ensuring a systematic approach to functional safety in the automotive industry. This standard provides a comprehensive framework for managing functional safety throughout the entire product development lifecycle, including concept, design, implementation, production, operation, maintenance, and decommissioning. It offers guidance on hazard analysis, risk assessment, safety goals, safety mechanisms, and verification and validation processes to ensure that electronic systems function as intended and maintain safety even in the presence of faults or errors.

The ISO 26262 standard addresses impact to safety due to faults and failures. What about addressing factors such as cybersecurity? The soaring adoption of electronics in the automotive sector has led to a corresponding expansion in the cybersecurity threat landscape. As vehicles become more connected and reliant on software-driven functionality, the attack surface expands significantly. This convergence of technological advancement and risk underscores the critical importance of cybersecurity-aware development practices. Road vehicles rely heavily on communication between components and external systems, making them susceptible to various cyber risks. Over-the-Air (OTA) software updates dramatically increase cybersecurity risks. Hackers could potentially manipulate sensor data, compromise vehicle control systems, or gain unauthorized access to sensitive personal information. The ISO/SAE 21434 Road Vehicles—Cybersecurity Engineering standard was established to address the security challenges posed by cyberthreats to road vehicles.

Synopsys has recently published a whitepaper that delves into the ISO 21434 driven best practices for cybersecurity-aware SoC development. Anyone involved in the development and post-production support of automotive related products and systems would find this whitepaper very informative. Following are some excerpts.

Key Aspects of ISO 21434

The ISO 21434 standard provides a structured approach to identifying, assessing, and mitigating cybersecurity risks throughout the development of automotive products, including components like SoCs. This comprehensive framework builds upon similar principles of ISO 26262 to address the cybersecurity dimension. The alignment between these two standards not only streamlines the integration of cybersecurity practices but also establishes a common vocabulary, ensuring seamless adaptation for organizations already compliant with ISO 26262.

Organizational Responsibilities

ISO 21434 follows in the footsteps of ISO 26262 by delineating roles and responsibilities across various stages of product development. This includes the commitment of executive management, the establishment of standardized roles between suppliers and supply chain entities, the creation of distinct phases within the product life cycle, and the formulation of Threat Analysis and Risk Assessment (TARA) processes equivalent to Hazard Analysis and Risk Assessment (HARA) in ISO 26262.

Cybersecurity Risk Assessment and Management

Cybersecurity hinges on a thorough assessment of a product’s inherent risks and its vulnerabilities when deployed. Four critical factors govern the severity of a cybersecurity risk, enabling an informed approach to risk mitigation. These four key factors are the Threat Scenario, Impact, Attack Vector, and Attack Feasibility. Together, these factors determine the potential harm, enabling a structured evaluation of the risk’s impact and the need for intervention. In essence, the Threat Scenario and its Impact gauge potential damage, the Attack Vector factor maps how an attack could be executed, while the Feasibility factor evaluates the ease of enacting the attack. ISO 21434 offers techniques for calculating the risk score from these four factors and elucidates a structured approach for fostering a proactive stance against cyberattacks.
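To make the four-factor idea concrete, here is a deliberately simplified scoring sketch. The rating scales, weights and thresholds below are invented for illustration; they are not the rating tables ISO 21434 actually defines, which combine impact and attack feasibility ratings through the standard's own risk matrices:

```python
# Hypothetical illustration only: ISO 21434 specifies its own impact and
# attack-feasibility rating tables; this toy scheme just shows the shape
# of combining the factors into a risk level.
IMPACT = {"negligible": 0, "moderate": 1, "major": 2, "severe": 3}
FEASIBILITY = {"very_low": 0, "low": 1, "medium": 2, "high": 3}

def risk_level(impact, feasibility):
    """Combine impact and attack feasibility into a coarse risk bucket."""
    score = IMPACT[impact] + FEASIBILITY[feasibility]
    if score >= 5:
        return "critical"     # demands immediate mitigation
    if score >= 3:
        return "elevated"     # mitigate or formally accept the risk
    return "acceptable"

print(risk_level("severe", "high"))    # worst case -> "critical"
print(risk_level("moderate", "low"))   # -> "acceptable"
```

The structure mirrors the standard's logic: a severe impact that is easy to exploit demands intervention, while a minor impact behind an infeasible attack path may be formally accepted.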

Security by Design

The Secure Development Lifecycle (SDL) process championed by Microsoft to address cybersecurity permeates all facets of product development. SDL orchestrates a number of measures during the design phase to safeguard products against potential vulnerabilities. At the heart of this phase lies the mandate to generate concrete evidence affirming the integration of the secure practices the team has been trained in. This evidence encompasses a spectrum of reviews and metrics, from security design reviews and verification plan assessments to privacy design reviews. Tools such as Synopsys Coverity and Black Duck play pivotal roles, generating code coverage and composition analysis reports. These reports help gauge the codebase’s maturity while flagging vulnerabilities in third-party components.

Collaboration and Communication

In the interconnected world of today’s product development, cybersecurity cannot operate in isolation. A collaborative approach is imperative, demanding a cohesive and cybersecurity-aware handshake between every link in the supply chain. The collaborative mindset guides the development cycles, necessitating an ongoing flow of cybersecurity information among supply chain entities.

Cybersecurity agreement in supply chain

Continuous Monitoring and Updating

Continuously monitoring products for known vulnerabilities, and updating them, ensures cybersecurity from the product’s release to its decommissioning. Post-release support is a focal point in SDL’s continuum, mandating the specification of requirements for post-production security controls. This preparation equips the product to navigate the complexities of its operational environment and supply chain.

Summary

Given the surge in electronics adoption in road vehicles and the evolving landscape of cyberattack threats, customers are demanding cybersecurity assurances. Cybersecurity impacts every level of the automotive supply chain, starting with semiconductor SoCs. For component suppliers, embracing standardized cybersecurity principles and processes becomes a strategic imperative to remain competitive in the dynamic automotive market. By adhering to these evolving industry standards, suppliers can not only address growing cybersecurity concerns but also meet mounting customer expectations for robust cybersecurity assurance.

During development of complex SoCs, partnering with an IP supplier with a structured ISO 21434 development platform minimizes cybersecurity risks and ensures the highest levels of success. Synopsys develops IP products per the ISO 21434 standard and rigorously follows the cybersecurity policies, processes and procedures promulgated in the standard. The company deploys cybersecurity teams through all levels of the organization.

Cybersecurity teams through all levels of an organization

For more details, visit Accelerate Your Automotive Innovation page.

You can access the entire whitepaper here.

Also Read:

Key MAC Considerations for the Road to 1.6T Ethernet Success

AMD Puts Synopsys AI Verification Tools to the Test

WEBINAR: Why Rigorous Testing is So Important for PCI Express 6.0