Webinar: Enhance Productivity with Machine Learning in the Analog Front-End Design Flow
by Daniel Payne on 03-23-2023 at 6:00 am


Analog IC designers can spend way too much time and effort re-using old, familiar, manual iteration methods for circuit design, just because that’s the way it’s always been done. Circuit optimization is an EDA approach that can automatically size all the transistors in a cell, by running SPICE simulations across PVT corners and process variations, to meet analog and mixed-signal design requirements. Sounds promising, right?

So which circuit optimizer should I consider using?

To answer that question there’s a webinar coming up, hosted by MunEDA, an EDA company started back in 2001, and it’s all about their circuit optimizer named WiCkeD. Inputs are a SPICE netlist along with design requirements such as gain, bandwidth, and power consumption. Outputs are a sized netlist that meets or exceeds the design requirements.

Analog Circuit Optimization

The secret sauce in WiCkeD is how it builds up a Machine Learning (ML) model through a Design of Experiments (DOE) to calculate the worst-case PVT corner, find the transistor geometry sensitivities, and even calculate the On-Chip Variation (OCV) sensitivities. This approach creates and updates a non-linear, high-dimensional ML model from simulated data.

Having an ML model enables the tool to solve the optimization challenge, then do a final verification by running a SPICE simulation. The tool iterates automatically until all requirements are met. That sounds much faster than the old manual iteration methods. Training the ML model is fully automatic, and quite efficient.
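
To make the loop concrete, here is a minimal, hypothetical sketch of a DOE-plus-surrogate-model sizing iteration in Python. The spice_sim stand-in, the quadratic model, and all numbers are illustrative assumptions; they do not represent WiCkeD’s actual algorithms.

```python
# Hypothetical DOE + surrogate-model sizing loop (illustration only).
import numpy as np

rng = np.random.default_rng(0)

def spice_sim(widths):
    """Stand-in for a SPICE run at the worst-case PVT corner.
    Returns a synthetic figure of merit; a real flow launches a simulator."""
    w1, w2 = widths
    return 40.0 - (w1 - 3.0) ** 2 - (w2 - 1.5) ** 2   # toy response surface

def features(w):
    w1, w2 = w[:, 0], w[:, 1]
    return np.column_stack([np.ones_like(w1), w1, w2, w1**2, w2**2, w1 * w2])

# 1. Design of Experiments: sample candidate transistor widths and simulate them
samples = rng.uniform(low=[1.0, 0.5], high=[6.0, 3.0], size=(20, 2))
merits = np.array([spice_sim(w) for w in samples])

for _ in range(5):
    # 2. Fit a cheap quadratic surrogate (the "ML model") to the simulated data
    coeffs, *_ = np.linalg.lstsq(features(samples), merits, rcond=None)

    # 3. Optimize on the surrogate (dense grid search keeps the sketch simple)
    g1, g2 = np.meshgrid(np.linspace(1, 6, 60), np.linspace(0.5, 3, 60))
    cand = np.column_stack([g1.ravel(), g2.ravel()])
    best = cand[np.argmax(features(cand) @ coeffs)]

    # 4. Verify the surrogate's pick with a real simulation, then update the model
    samples = np.vstack([samples, best])
    merits = np.append(merits, spice_sim(best))

print("best widths:", samples[np.argmax(merits)], "merit:", merits.max())
```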

Circuit designers will also learn:

  • Where to use circuit optimization
  • What types of circuits are good to optimize
  • How much value circuit optimization brings to the design flow

Engineers at STMicroelectronics have used the circuit optimization in WiCkeD, and MunEDA discusses their specific results in time savings and in meeting requirements. Power amplifier company Inplay Technologies presented circuit optimization results at the DAC 2018 conference.

Webinar Details

View the webinar replay by registering online.

About MunEDA
MunEDA provides leading EDA technology for analysis and optimization of yield and performance of analog, mixed-signal and digital designs. MunEDA’s products and solutions enable customers to reduce the design times of their circuits and to maximize robustness and yield. MunEDA’s solutions are in industrial use by leading semiconductor companies in the areas of communication, computer, memories, automotive, and consumer electronics. www.muneda.com.

Related Blogs


Narrow AI vs. General AI vs. Super AI
by Ahmed Banafa on 03-22-2023 at 10:00 am


Artificial intelligence (AI) is a term used to describe machines that can perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. AI is classified into three main types: Narrow AI, General AI, and Super AI. Each type of AI has its unique characteristics, capabilities, and limitations. In this article, we will explain the differences between these three types of AI.

Narrow AI  

Narrow AI, also known as weak AI, refers to AI that is designed to perform a specific task or a limited range of tasks. It is the most common type of AI and is widely used in various applications such as facial recognition, speech recognition, image recognition, natural language processing, and recommendation systems.

Narrow AI works by using machine learning algorithms, which are trained on a large amount of data to identify patterns and make predictions. These algorithms are designed to perform specific tasks, such as identifying objects in images or translating languages. Narrow AI is not capable of generalizing beyond the tasks for which it is programmed, meaning that it cannot perform tasks that it has not been specifically trained to do.
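
As a toy illustration of that train-then-predict loop (my example, not from the article), the sketch below uses scikit-learn to fit a classifier to labeled digit images; it performs that single narrow task well and has no notion of anything else.

```python
# Minimal example of a narrow, task-specific ML model: a digit classifier.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)           # 8x8 grayscale digit images, labeled 0-9
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=2000)     # learn patterns from the training data
model.fit(X_tr, y_tr)
print("accuracy on unseen digits:", round(model.score(X_te, y_te), 3))

# Feed it anything other than an 8x8 digit image and it has no concept of the
# task at all; that is the "narrow" in Narrow AI.
```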

One of the key advantages of Narrow AI is its ability to perform tasks faster and more accurately than humans. For example, facial recognition systems can scan thousands of faces in seconds and accurately identify individuals. Similarly, speech recognition systems can transcribe spoken words with high accuracy, making it easier for people to interact with computers.

However, Narrow AI has some limitations. It is not capable of reasoning or understanding the context of the tasks it performs. For example, a language translation system can translate words and phrases accurately, but it cannot understand the meaning behind the words or the cultural nuances that may affect the translation. Similarly, image recognition systems can identify objects in images, but they cannot understand the context of the images or the emotions conveyed by the people in the images.

General AI  

 General AI, also known as strong AI, refers to AI that is designed to perform any intellectual task that a human can do. It is a theoretical form of AI that is not yet possible to achieve. General AI would be able to reason, learn, and understand complex concepts, just like humans.

The goal of General AI is to create a machine that can think and learn in the same way that humans do. It would be capable of understanding language, solving problems, making decisions, and even exhibiting emotions. General AI would be able to perform any intellectual task that a human can do, including tasks that it has not been specifically trained to do.

One of the key advantages of General AI is that it would be able to perform any task that a human can do, including tasks that require creativity, empathy, and intuition. This would open up new possibilities for AI applications in fields such as healthcare, education, and the arts.

However, General AI also raises some concerns. The development of General AI could have significant ethical implications, as it could potentially surpass human intelligence and become a threat to humanity. It could also lead to widespread unemployment, as machines would be able to perform tasks that were previously done by humans. Here are a few examples often described as steps toward General AI (though by the strict definition above, all of them are still Narrow AI):

1.    AlphaGo: A computer program developed by Google’s DeepMind that is capable of playing the board game Go at a professional level.

2.    Siri: An AI-powered personal assistant developed by Apple that can answer questions, make recommendations, and perform tasks such as setting reminders and sending messages.

3.    ChatGPT: a natural language processing tool driven by AI technology that allows you to have human-like conversations and much more with a chatbot. The language model can answer questions, and assist you with tasks such as composing emails, essays, and code.

Super AI

Super AI refers to AI that is capable of surpassing human intelligence in all areas. It is a hypothetical form of AI that is not yet possible to achieve. Super AI would be capable of solving complex problems that are beyond human capabilities and would be able to learn and adapt at a rate that far exceeds human intelligence.

The development of Super AI is the ultimate goal of AI research. It would have the ability to perform any task that a human can do, and more. It could potentially solve some of the world’s most pressing problems, such as climate change, disease, and poverty.

Possible examples from movies: Skynet (The Terminator), VIKI (I, Robot), and Jarvis (Iron Man).

Challenges and Ethical Implications of General AI and Super AI

The development of General AI and Super AI poses significant challenges and ethical implications for society. Some of these challenges and implications are discussed below:

  1. Control and Safety: General AI and Super AI have the potential to become more intelligent than humans, and their actions could be difficult to predict or control. It is essential to ensure that these machines are safe and do not pose a threat to humans. There is a risk that these machines could malfunction or be hacked, leading to catastrophic consequences.
  2. Bias and Discrimination: AI systems are only as good as the data they are trained on. If the data is biased, the AI system will be biased as well. This could lead to discrimination against certain groups of people, such as women or minorities. There is a need to ensure that AI systems are trained on unbiased and diverse data.
  3. Unemployment: General AI and Super AI have the potential to replace humans in many jobs, leading to widespread unemployment. It is essential to ensure that new job opportunities are created to offset the job losses caused by these machines.
  4. Ethical Decision-making: AI systems are not capable of ethical decision-making. There is a need to ensure that these machines are programmed to make ethical decisions, and that they are held accountable for their actions.
  5. Privacy: AI systems require vast amounts of data to function effectively. This data may include personal information, such as health records and financial data. There is a need to ensure that this data is protected and that the privacy of individuals is respected.
  6. Singularity: Some experts have raised concerns that General AI or Super AI could become so intelligent that they surpass human intelligence, leading to a singularity event. This could result in machines taking over the world and creating a dystopian future.

Narrow AI, General AI, and Super AI are three different types of AI with unique characteristics, capabilities, and limitations. While Narrow AI is already in use in various applications, General AI and Super AI are still theoretical and pose significant challenges and ethical implications. It is essential to ensure that AI systems are developed ethically and that they are designed to benefit society as a whole.

Ahmed Banafa, author of the books:

Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Quantum Computing

Also Read:

Scaling AI as a Service Demands New Server Hardware

10 Impactful Technologies in 2023 and Beyond

Effective Writing and ChatGPT. The SEMI Test


Intel Keynote on Formal a Mind-Stretcher
by Bernard Murphy on 03-22-2023 at 6:00 am


Synopsys has posted on the SolvNet site a fascinating talk given by Dr. Theo Drane of Intel Graphics. The topic is datapath equivalency checking. It might sound like just another Synopsys VC Formal DPV endorsement, but you should watch it anyway. This is a mind-expanding discussion of the uses of and considerations in formal, which will take you beyond the routine user-guide kind of pitch into more fascinating territory.

Intellectual understanding versus sample testing

Test-driven simulation in all its forms is excellent and often irreplaceable in verifying the correctness of a design specification or implementation. It’s also easy to get started. Just write a test program and start simulating. But the flip side of that simplicity is that we don’t need to fully understand what we are testing to get started. We convince ourselves that we have read the spec carefully and understand all the corner cases, but it doesn’t take much compounded complexity to overwhelm our understanding.

Formal encourages you to understand the functionality at a deep level (at least if you want to deliver a valuable result). In the example above, a simple question – can z ever be all 1’s – fails to demonstrate an example in a billion cycles on a simulator. Not surprising, since this is an extreme corner case. A formal test provides a specific and very non-obvious example in 188 seconds and can prove this is the only such case in slightly less time.
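
The flavor of that result is easy to reproduce with an off-the-shelf SMT solver. The sketch below is a toy stand-in (a made-up 8-bit datapath and the z3-solver Python package, not the design or the DPV run from the talk): the solver either hands back a concrete witness for "z is all 1's" or proves that no such input exists.

```python
# Toy "can z ever be all 1's?" query with an SMT solver (not the talk's design).
from z3 import BitVec, BitVecVal, Solver, sat

a = BitVec("a", 8)
b = BitVec("b", 8)
z = a * b + 1                       # some small, made-up datapath expression

s = Solver()
s.add(z == BitVecVal(0xFF, 8))      # ask: can the 8-bit output be all 1's?

if s.check() == sat:
    m = s.model()
    print("witness found: a =", m[a], ", b =", m[b])
else:
    print("proved: z can never be all 1's")
```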

OK, formal did what dynamic testing couldn’t do, but more importantly you learned something the simulator might never have told you: that there was only one possible case in which that condition could happen. Formal helped you better understand the design at an intellectual level, not just as a probabilistic summary across a finite set of test cases.

Spec issues

Theo’s next example is based on a bug vending machine (so called because when you press a button you get a bug). This looks like a pretty straightforward C to RTL equivalence check problem, C model on the left, RTL model on the right. One surprise for Theo in his early days in formal was that right-shift behavior in the C-model is not completely defined in the C standard, even though gcc will behave reasonably. However, DPV will complain about a mismatch in a comparison with the RTL, as it should. Undefined behavior is a dangerous thing to rely on.
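
To make that hazard concrete, here is a small illustration in Python (the talk’s models are C and RTL; these helper functions are my own): the same 8-bit two’s-complement pattern shifted right gives different answers under arithmetic (sign-extending) and logical (zero-filling) semantics, exactly the kind of reference-versus-implementation mismatch DPV flags.

```python
# Two plausible readings of "x >> 2" on a negative 8-bit value disagree.
def arithmetic_shr(value, n, bits=8):
    """Sign-extending shift, what gcc typically does for a signed C operand."""
    mask = (1 << bits) - 1
    signed = value - (1 << bits) if value & (1 << (bits - 1)) else value
    return (signed >> n) & mask

def logical_shr(value, n, bits=8):
    """Zero-filling shift, what a naive RTL implementation might do."""
    return (value & ((1 << bits) - 1)) >> n

x = 0b10010110                      # -106 as an 8-bit two's-complement pattern
print(f"arithmetic: {arithmetic_shr(x, 2):08b}")   # 11100101
print(f"logical:    {logical_shr(x, 2):08b}")      # 00100101
```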

Spec comparison between C and RTL comes with other hazards, especially around bit widths. Truncation or loss of a carry bit in an intermediate signal (#3 above) are good examples. Are these spec issues? Maybe a gray area between spec and implementation choices.

Beyond equivalence checking

The primary purpose of DPV, it would seem, is to check equivalence between a C or RTL reference and an RTL implementation. But that need is relatively infrequent, and there are other useful ways such a technology might be applied, if a little out of the box. First, a classic in the implementation world: I made a change, fixed a bug; did I introduce any new bugs as a result? A bit like SEQ checking after you add clock gating. Reachability analysis on block outputs may be another useful application in some cases.

Theo gets even more creative, asking trainees to use counterexamples to better understand the design, solve Sudokus or factorize integers. He acknowledges DPV may be an odd way to approach such problems but points out that his intent is to break the illusion that DPV is only for equivalence checking. Interesting idea and certainly brain-stretching to think through such challenges. (I confess I immediately started thinking about the Sudoku problem as soon as he mentioned it.)

Wrap up

Theo concludes with a discussion on methodologies important in production usage, around constraints, regressions and comparisons with legacy RTL models. Also the challenges in knowing whether what you are checking actually matches the top-level natural language specification.

Very energizing talk, well worth watching here on SolvNet!


eFPGA goes back to basics for low-power programmable logic
by Don Dingee on 03-21-2023 at 10:00 am

Renesas ForgeFPGA Evaluation Board features Flex Logix EFLX 1K low-power programmable logic tile

When you think “FPGA,” what comes to mind? Massive, expensive parts capable of holding a lot of logic but also consuming a lot of power. Reconfigurable platforms that can swallow RTL for an SoC design in pre-silicon testing. Big splashy corporate acquisitions where investors made tons of money. Exotic 3D packaging and advanced interconnects. But probably not inexpensive, small package, low pin count, low standby power parts, right? Flex Logix’s eFPGA goes back to basics for low-power programmable logic that can take on lower cost, higher volume, and size-constrained devices.

Two programmable roads presented a choice

At the risk of dating myself, my first exposure to what was then called FPGA technology was back when Altera brought out their EPROM-based EP1200 family in a 40-pin DIP package with its 16 MHz clock, 400 mW active power and 15 mW standby power. It came with a schematic editor and a library of gate macros. Designers would draw their logic, “burn” their part, test it out, throw it under a UV lamp and erase it if it didn’t work, and try again.

Soon after, a board showed up in another of our labs with some of the first Xilinx FPGAs. These were RAM-based instead of EPROM-based – bigger, faster, and reprogrammable without the UV lamp wait or removing the part from the board. The logic inside was also more complex, with the introduction of fast multipliers. These parts could not only sweep up logic but could also be used to explore custom digital signal processing capability with rapid redesign cycles.

That set off the programmable silicon arms race, and a bifurcation developed between the PLD – programmable logic device – and the FPGA. Manufacturers made choices, with Altera and Xilinx taking the high road of FPGA scalability and Actel, Lattice, and others taking the lower road of PLD flexibility for “glue logic” to reduce bill-of-materials costs.

eFPGA shifts the low-power programmable logic equation

All that sounds like a mature market, with a high barrier to entry on one end and a more commoditized offering on the other. But what if programmable logic was an IP block that could be designed into any chip in this fabless era – including a small, low-power FPGA? That would circumvent the barrier (at least in the low and mid-range offerings) and commoditization.

Flex Logix took on that challenge with the EFLX 1K eFPGA Tile. Each logic tile has 560 six-input look-up tables (LUTs) with RAM, clocking, and interconnect. Arraying EFLX tiles gives the ability to handle various logic and DSP roles. But its most prominent features may be its size and power management.

Fabbed in TSMC 40ULP, the EFLX 1K tile fits in 1.5 mm² and offers power-gating for deep sleep modes with state retention – much more aggressive than traditional PLDs. EFLX 1K also has production-ready features borrowed from FPGAs. It presents AXI or JTAG interfaces for bitstream configuration, readback circuitry enabling soft error checking, and a test mode with streamlined vectors improving coverage and lowering test times.

See the chip in the center of this next image? That’s a ForgeFPGA from Renesas in a QFN-24 package, based on EFLX 1K IP, which Renesas offers at sub-$1 price points in volume. Its standby target current checks in at less than 20 µA. Smaller size, lower cost, and less power open doors previously closed to FPGAs. The lineage of ForgeFPGA traces back to Silego Technology, then to Dialog Semiconductor, acquired by Renesas in 2021.

Renesas brings the Go Configure IDE environment, putting a graphical user interface on top of the Flex Logix EFLX compiler. It supports mapping ForgeFPGA pins, compiling Verilog, and generating a bitstream, and it includes a lightweight logic analyzer.

Among the pre-built application blocks for the ForgeFPGA, Flex Logix’s Geoff Tate points out an interesting one: a UART. Creating a UART in logic isn’t all that difficult, but it turns out that everyone has gone about it differently, and it’s just enough logic to be more than a couple of discrete chips. A ForgeFPGA is a chunk of reconfigurable logic that can solve that problem, allowing one hardware implementation to adapt quickly to various configurations.

ForgeFPGA is just one example of what can be done with the Flex Logix EFLX 1K eFPGA Tile. Flex Logix can adapt the IP for various process nodes, and the mix-and-match tiling capability offers scalability. It achieves new lows for low-power programmable logic and allows chip makers to differentiate solutions in remarkable ways. For more info, please visit:

Flex Logix EFLX eFPGA family

Also Read:

eFPGAs handling crypto-agility for SoCs with PQC

Flex Logix: Industry’s First AI Integrated Mini-ITX based System

Flex Logix Partners With Intrinsic ID To Secure eFPGA Platform


Lithography Resolution Limits: The Point Spread Function
by Fred Chen on 03-21-2023 at 6:00 am


The point spread function is the basic metric defining the resolution of an optical system [1]. A focused spot will have a diameter defined by the Airy disk [2], which is itself a part of the diffraction pattern, based on a Bessel function of the 1st kind and 1st order J1(x), with x being a normalized coordinate defined by pi*radius/(0.5 wavelength/NA), with NA being the numerical aperture of the system. The intensity is proportional to the square of 2J1(x)/x. The intensity profile is the point spread function, since it is the smallest possible defined pattern that can be focused by a lens (or mirror). The full-width at half-maximum (FWHM) is closely estimated by 0.5 wavelength/NA. DUV patterns are often much smaller than this size (down to ~0.3 wavelength/NA) and are thus required to be dense arrays and use phase-shifting masks [3].

In the context of EUV lithography, there are 0.33 NA systems and 0.55 NA systems with 20% central obscuration. The latter requires a modification of the point spread function by subtracting the point spread function corresponding to the obscured portion. For a 20% central obscuration, this means subtracting 0.4 J1(0.2x)/x, i.e., the intensity is proportional to the square of [2J1(x)/x – 0.4 J1(0.2x)/x]. The point spread functions for 0.33 NA and 0.55 NA EUV systems are plotted below.

Point spread functions for 0.33 NA and 0.55 NA EUV systems

The 0.55 NA system has a narrower FWHM, ~12.5 nm vs ~21 nm for 0.33 NA. However, the larger NA goes out of focus faster for a given defocus distance due to larger center-to-edge optical path differences [4]. Moreover, experimentally measured EUV point spread functions [5] indicated much lower contrast than expected from a ~22 nm FWHM point spread function for a 13.5 nm wavelength 0.3 NA system. This can be attributed to aberrations but also significantly includes relatively long-range effects specific to the resist, which can be attributed to photoelectrons and secondary electrons resulting from EUV absorption [6].
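
For readers who want to reproduce these curves, here is a small numpy/scipy sketch of the formulas quoted above. It is an idealized calculation of the optical point spread function only; it does not model the resist, electron blur, or aberration effects discussed here.

```python
# Point spread functions for 13.5 nm EUV: 0.33 NA vs 0.55 NA with 20% obscuration.
import numpy as np
from scipy.special import j1

wavelength = 13.5  # nm

def psf(r_nm, na, obscuration=0.0):
    """Normalized intensity point spread function versus radius (nm)."""
    x = np.pi * r_nm / (0.5 * wavelength / na)
    x = np.where(x == 0, 1e-12, x)               # avoid 0/0 at the center
    amp = 2 * j1(x) / x
    if obscuration:
        amp = amp - 2 * obscuration * j1(obscuration * x) / x
    return (amp / amp.max()) ** 2

def fwhm(na, obscuration=0.0):
    r = np.linspace(0.0, 40.0, 40001)
    inten = psf(r, na, obscuration)
    return 2 * r[inten >= 0.5].max()             # half-max radius of the main lobe

print(f"0.33 NA FWHM ~ {fwhm(0.33):.1f} nm")                     # compare with ~21 nm above
print(f"0.55 NA, 20% obscured FWHM ~ {fwhm(0.55, 0.2):.1f} nm")  # compare with ~12.5 nm above
```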

As indicated earlier, spot sizes smaller than the point spread function are possible only for dense pitches, with a lower pitch limit of 0.7 wavelength/NA. For random logic arrangements on interconnects, however, pitches have to be much larger, and so line cuts, for example, are still limited by the point spread function. On current 0.33 NA EUV systems, for example, it can be seen that the point spread function already covers popularly targeted line pitches in the 28-36 nm range. So, in fact, the edge placement from overlay and CD targeting, compounded by the spread of the secondary electrons [6,7], looks prohibitive. No wonder, then, that SALELE (Self-Aligned Litho-Etch-Litho-Etch) has been the default technique, even for EUV [8-11].

References

[1] https://en.wikipedia.org/wiki/Point_spread_function

[2] https://en.wikipedia.org/wiki/Airy_disk

[3] Y-T. Chen et al., Proc. SPIE 5853 (2005).

[4] A Simple Model for Sharpness in Digital Cameras – Defocus, https://www.strollswithmydog.com/a-simple-model-for-sharpness-in-digital-cameras-defocus/

[5] J. P. Cain, P. Naulleau, and C. Spanos, Proc. SPIE 5751 (2005).

[6] Y. Kandel et al., Proc. SPIE 10143, 101430B (2017).

[7] F. Chen, Secondary Electron Blur Randomness as the Origin of EUV Stochastic Defects, https://www.linkedin.com/pulse/secondary-electron-blur-randomness-origin-euv-stochastic-chen

[8] F. Chen, SALELE Double Patterning for 7nm and 5nm Nodes, https://www.linkedin.com/pulse/salele-double-patterning-7nm-5nm-nodes-frederick-chen

[9] R. Venkatesan et al., Proc. SPIE 12292, 1229202 (2022).

[10] Q. Lin et al. Proc. SPIE 11327, 113270X (2020).

[11] Y. Drissi et al., “SALELE process from theory to fabrication,” Proc. SPIE 10962, 109620V (2019).

This article first appeared in LinkedIn Pulse: Lithography  Resolution Limits: The Point Spread Function

Also Read:

Resolution vs. Die Size Tradeoff Due to EUV Pupil Rotation

Multiple Monopole Exposures: The Correct Way to Tame Aberrations in EUV Lithography?

Application-Specific Lithography: Sub-0.0013 um2 DRAM Storage Node Patterning


Checklist to Ensure Silicon Interposers Don’t Kill Your Design
by Dr. Lang Lin on 03-20-2023 at 10:00 am


Traditional methods of chip design and packaging are running out of steam to fulfill growing demands for lower power, faster data rates, and higher integration density. Designers across many industries – like 5G, AI/ML, autonomous vehicles, and high-performance computing – are striving to adopt 3D semiconductor technologies that promise to be the solution. The tremendous growth in 2.5D and 3D IC packaging technology has been driven by high-profile early adopters delivering high-bandwidth, low-latency products.


Benefits of 2.5D and 3D Technology

This trending technology meets the demands of enclosing all functionality in one sophisticated IC package, enabling engineers to meet aggressive high-speed and miniaturization goals. In 3D-IC packaging, dies are stacked vertically on top of each other (e.g. HBM), while 2.5D packaging places bare die (chiplets) next to each other. The chiplets are connected through a silicon interposer and through-silicon vias (TSVs). This makes for a much smaller footprint and eliminates bulky interconnects and packaging which can significantly impede data rate and latency performance. Heterogeneous integration is another benefit of silicon interposers, enabling engineers to place memory and logic with different silicon technologies in the same package, reducing unnecessary delays and power consumption. Integrating different chips designed in their most appropriate technology nodes provides better performance, cost, and improved time to market when compared to monolithic SoC designs on advanced technology nodes. Monolithic SoCs take longer to design and validate, contributing to increased cost and time to market.

The implementation of silicon interposers allows for more configurable system architectures, but it also poses additional multiphysics challenges like thermal expansion and electromagnetic interference, along with new design and production issues.

Challenges of 2.5D and 3D Design

Silicon interposers are a successful and booming advancement in IC packaging technology. This technology will soon replace the traditional methods of chip design. Combining different functional blocks and memory within the same package provides high speed and improved performance for advanced design technologies. But the new considerations with interposers impose unfamiliar challenges, and designers must understand the power integrity, thermal integrity, and signal integrity interactions between the chiplet dies, the interposer, and the package. System simulation becomes an integral factor for the expected performance of the IC package.

Interposers act as a passive layer with a coefficient of thermal expansion that matches that of the chiplets, which explains the popularity of silicon for interposers. Nevertheless, it doesn’t eliminate the possibility of thermal hot spots and joule heating problems within the design. Interposers are supported by placing them on an ordinary substrate with a different thermal expansion coefficient, which contributes to increased mechanical stress and interposer warpage. That’s where the designer should be worried about the reliability of the system as this stress can easily crack some of the thousands of microbump connections.

Silicon interposers provide significantly denser I/O connectivity allowing higher bandwidth and better use of die space. But as we know, nothing comes for free. Multiple IPs in the same package require multiple power sources, constituting a complex power distribution network (PDN) within the package itself. The PDN runs throughout the entire package and is always vulnerable to power noise leading to power integrity problems. Analyzing the voltage distribution and current signature of every chip in the IC system with an interposer is important for ensuring power integrity.  Routing considerable amounts of power through the vertical connections between elements creates more problems for power integrity. These include TSVs and C4 bumps, as well as tiny micro-bumps, and hybrid bonding connections. Last but not least, many high-speed signals are routed among the chips and interposer which can easily fall victim to electromagnetic coupling and crosstalk. Electromagnetic signal integrity, also for high-speed digital signals, must be on your verification list when designing an IC package with interposer. This technology is a cost-effective, high-density, and power-efficient technique but is still susceptible to EM interference, thermal, signal and power integrity issues.

Figure 2: Block diagram of Multiphysics analysis of multi-die system

Power Integrity:  

Power is the most critical aspect of any IC package design. Everything around the package design is driven by the power consumed by chips within the IC package. Every chip has a different power requirement, which leads to requirements for the power delivery network. The PDN also has a critical role in maintaining the power integrity of the IC package by minimizing voltage drop (IR drop) and avoiding electromigration failures. The best way to achieve power integrity is to optimize the power delivery network by simulating the fluctuating current at each IC and the parasitics of the passive elements that make up the PDN. It becomes more complicated with an interposer since chips are connected through the interposer. Power and ground rails routed through the interposer impose new challenges when analyzing power integrity. But it is not the only issue. Electromigration issues come hand in hand with PI problems. The current density in each piece of geometry must be modeled and should be below the maximum limit supplied by the foundry. Joule heating of the microbumps and wires has a significant impact on the maximum allowable current density, which implies a degree of thermal simulation for maximum accuracy.
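
As a back-of-the-envelope illustration of that kind of analysis, the sketch below solves a three-node DC ladder for IR drop with made-up numbers; production tools such as RedHawk-SC solve networks with millions of nodes and time-varying currents.

```python
# Simplified DC IR-drop sketch: three chiplet supply nodes fed through a
# resistive interposer/package ladder (illustrative numbers, not a real PDN).
import numpy as np

vdd = 0.75                            # regulator output voltage (V)
r_seg = 2e-3                          # series resistance per PDN segment (ohm)
i_load = np.array([3.0, 5.0, 2.0])    # per-chiplet current draw (A)

# Nodal analysis G @ v = i for the ladder: VRM -> n0 -> n1 -> n2
g = 1.0 / r_seg
G = np.array([[2 * g, -g,    0.0],
              [-g,    2 * g, -g ],
              [0.0,   -g,     g ]])
i = np.array([g * vdd, 0.0, 0.0]) - i_load     # source inflow minus load currents

v = np.linalg.solve(G, i)
for k, vk in enumerate(v):
    print(f"chiplet {k}: V = {vk:.3f} V, IR drop = {(vdd - vk) * 1e3:.1f} mV")

# A real flow also checks per-segment current density against the foundry's
# electromigration limits, including Joule-heating derating.
```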

Ansys RedHawk-SC and Totem can extract the most accurate chip power model to understand the power behavior of chips in a full-system context. If you don’t yet have the chip layout model at the prototyping stage, create an estimated CPM (chip power model) using Ansys RedHawk tools to anticipate the physics at the initial level. Thermal and power analysis shouldn’t be a signoff step, but an ongoing process, because making last-minute changes in the design might not work.

Figure 3: Power Integrity Analysis using Ansys RedHawk-SC Electrothermal

Thermal Integrity:  It is extremely important to understand the thermal distribution in the interposer design to regulate thermal integrity. Power and signal integrity alone might not save your design from thermal runaway or local thermal failure. With multiple chips close together in a 2.5D package, the hotter chiplet might heat up the nearby chiplets and change their power profile, possibly leading to yet more heating. Heat is dissipated from the chips to the interposer and further through TSVs to the substrate, which heats up the entire package. To avoid stress and warpage due to differential thermal expansion, designers should understand the thermal profile of every chip and interposer in the design. These maps will give insight into the thermal distribution across the IC package, allowing the designer to determine thermal coupling among chips through the interposer.
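
The chip-to-chip coupling described above can be sketched with a simple thermal-resistance matrix; the numbers below are purely illustrative assumptions, and a real CTM-based flow resolves the full spatial and transient detail.

```python
# Toy thermal-coupling sketch: a resistance matrix maps chiplet power to
# temperature rise, with off-diagonal coupling through the shared interposer.
import numpy as np

t_ambient = 45.0                       # degrees C
power = np.array([8.0, 2.5, 1.0])      # W dissipated per chiplet

# R_th[i, j]: temperature rise at chiplet i per watt dissipated in chiplet j (C/W)
R_th = np.array([[4.0, 1.2, 0.6],
                 [1.2, 5.0, 1.1],
                 [0.6, 1.1, 6.0]])

temps = t_ambient + R_th @ power
print("chiplet temperatures (C):", np.round(temps, 1))

# Raising chiplet 0's power heats its neighbors too, which can shift their
# leakage and power profile, the feedback loop described above.
```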

Power dissipation is, of course, driven by activity. Ansys PowerArtist is an RTL power analysis tool that is integrated with RedHawk-SC Electrothermal to generate the most accurate chip thermal models (CTMs) based on ultra-long, realistic activity vectors produced by hardware emulators. By assembling the entire 3D-IC system including chip CTMs, interposer, package, and heat sink, Ansys RedHawk-SC Electrothermal gives the designer an accurate thermal distribution and an understanding of the thermal coupling between chiplets and the interposer. Monitoring temperature gradients needs to start early in the IC package design; the sooner the better. The complete front-to-back flow gives clear insight into the thermal distribution over time for the entire package, making your design more reliable.

Figure 4: Different parameter extractions for Silicon Interposer Design

Signal Integrity:  In the IC package, high-speed signals are transmitted from one die to another through an interposer at very high bit rates. The signals are closely spaced and also relatively long (compared to on-chip routing), which makes them vulnerable to electromagnetic interference (EMI) and coupling (EMC). Even digital designers need to follow high-speed design guidelines to maintain signal integrity. The only way to control EMC/EMI is with fast, high-capacity electromagnetic solvers that extract a coupled electromagnetic model including chiplets, signal routing through the interposer, and system coupling effects. With Ansys RaptorH and HFSS it is easy to analyze all these elements in a single, large model and meet the desired goal of a clean eye diagram. HFSS and Ansys Q3D can also be used to extract RLC parasitics and provide visualization of the electromagnetic fields and scale up to system-level extraction beyond the interposer.

Learn more about challenges and solutions for 3D-IC and interposers.

Semiconductor Design and Simulation Software | Ansys

Ansys RedHawk-SC Electrothermal Datasheet

Thermal Integrity Challenges and Solutions of Silicon Interposer Design | Ansys

Also Read:

HFSS Leads the Way with Exponential Innovation

DesignCon 2023 Panel Photonics future: the vision, the challenge, and the path to infinity & beyond!

Exponential Innovation: HFSS


Samtec Lights Up MemCon
by Mike Gianfagna on 03-20-2023 at 6:00 am


Every conference and trade show that Samtec attends is better for the experience. Samtec has a way of bringing exciting and innovative demos and technical presentations to any event they attend. I personally have fond memories of exhibiting next to Samtec at an early AI Hardware Summit at the Computer History Museum in Mountain View, CA. At the time I was at eSilicon, and we had developed an eye-popping long-reach communication demo with our SerDes and Samtec’s cables. We ran that demo with a cable that connected our two booths – very long reach in action. I don’t think I’ve ever seen a demo span more than one trade show booth since then. The subject of this post is Samtec’s attendance at MemCon, which is also being held at the Computer History Museum. Samtec overall, and Matt Burns, technical marketing manager, in particular, will be working their magic on March 28 and 29 this year. Let’s see how Samtec lights up MemCon.

MemCon, Then and Now

Thanks to Paul McLellan and his Breakfast Bytes blog, I was able to get some early history of MemCon. Those who have been at the semiconductor and EDA game for a while will remember Denali, an early IP company that focused on memory models. Denali decided to get some more visibility for the company and its offerings, so around 2001 they held the first MemCon at the Hyatt Hotel in the Bay Area. So, this was the birth of the show. The historians among us will also fondly remember the Denali Party, probably the best social event ever held at the Design Automation Conference.

Today, MemCon is managed by Kisaco Research. I have some personal experience with this organization. While at eSilicon, we were one of the early participants at the previously mentioned AI Hardware Summit. Under their leadership, Kisaco Research grew this event from a humble and small beginning to one of the premier events in AI for the industry. All this from a location in London. Their reach is substantial, and they are working their magic for MemCon as well.

Expected audience at MemCon

Memories have become a critical enabling technology for many forward-looking applications. Some of the areas of focus for MemCon include AI/ML, HPC, datacenter and genomics. The list is actually much longer. The expected audience at MemCon covers a lot of ground. This is clearly an important conference – registration information is coming.

Samtec at MemCon

At its core, Samtec provides high-performance interconnect solutions for customers and partners. Samtec’s high-speed board-to-board connectors, high-speed cables, mid-board and panel optics, precision RF, flexible stacking, and micro/rugged components route data from a bare die to an interface 100 meters away, and all interconnect points in between. For the memory and storage sector, niche applications require niche interconnect solutions, and that is Samtec’s specialty.

You can learn more about what Samtec does on their SemiWiki page here.

If you’re headed to MemCon, definitely stop by the Samtec booth. You will find talented, engaging staff and impressive demonstrations. Samtec’s own Matt Burns will also be presenting an informative talk on Wednesday March 29 at MemCon:

2:10 PM – 2:35 PM

How Flexible, Scalable High-Performance Interconnect Extends the Reach of Next Generation Memory Architectures

So, this is how Samtec lights up MemCon. If you haven’t registered yet for the show, you can register here. Use SAMTECGOLD15 at check-out to save 15%.


Podcast EP148: The Synopsys View of High-Performance Communication and the Role of Chiplets
by Daniel Nenni on 03-17-2023 at 10:00 am

Dan is joined by John Swanson, who is the HPC Controller & Datapath Product Line Manager in the Synopsys Solutions Group. John has worked in the development and deployment of verification, integration, and implementation tools, IP, standards, and methodologies used in IP-based design for over 25 years at Synopsys.

Dan explores the future of high-performance computing with John: what is required for success, and what challenges designers and applications face to get to 1.6T Ethernet leveraging 224 GbE, including FEC, cabling, and standardization.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CTO Interview: Dr. Zakir Hussain Syed of Infinisim
by Daniel Nenni on 03-17-2023 at 6:00 am


Zakir Hussain is a co-founder of Infinisim and brings over 25 years of experience in the Electronic Design Automation industry. He was at Simplex Solutions, Inc. (acquired by Cadence) from its inception in 1995 through the end of 2000. He has published numerous papers on verification and simulation and has presented at many industry conferences. Zakir obtained his Master’s degree in Mechanical Engineering and a PhD in Electrical Engineering from Duke University.

What is the Infinisim backstory?
Infinisim, Inc is a privately funded EDA company founded by industry luminaries with over 50 years of combined expertise in the area of design and verification. Infinisim customers are leading edge semiconductor companies and foundries that are designing high-performance SoC, AI, CPU and GPU chips.

Infinisim has helped customers achieve unprecedented levels of confidence in design robustness prior to tape-out. Customers have been able to eliminate silicon re-spins, reduce chip design schedules, and dramatically improve product quality and production yield.

What market segments are you targeting?
The simple answer is leading-edge complex SoCs and multi-gigahertz CPU, GPU, and domain-specific chips (bespoke silicon) with high core counts, but we look at three distinct capabilities that our tools provide:

SoC Clock Analysis
Our leading edge clock analysis solution helps customers accurately verify timing, detect failures, and optimize performance of the clock.

Clock Jitter Analysis
Our specialized jitter analytics solution helps customers accurately compute power supply induced jitter of clock domains.

Clock Aging Analysis
Our Clock Aging Analysis helps customers accurately determine the operational lifetime of power-sensitive clocks.

For SoC clock analysis, the leading edge mobile SoC market is a great example where power consumption is critical. For aging, Automotive and other mission critical markets (especially at 6nm and below) where the product lifespan is 5 or more years. For Jitter, high frequency and high performance chips like CPUs, GPUs and large bespoke silicon with lots of cores that require strict accuracy.

What keeps your customers up at night? 
Tape-out confidence keeps everyone up at night! The cost of tape-out is very high below 7nm so errors cannot slip by. For example: timing related rail to rail failures and duty cycle distortion (DCD). Also jitter and aging analysis.

Customers are always looking for a competitive advantage over what everyone else is doing. Tightening margins on performance, power, and area is a risky proposition without a sign-off proven clock analysis tool. Clocks are critical, some designs are all about the clocks and guard banding your way out of complexity hurts the competitive positioning of your product. Especially if your competitor is already working with Infinisim.

What makes your product unique?
The founding Infinisim team had many years of experience with IR drop analysis on very large power grids. Fast SPICE did not work since you had to choose between accuracy and speed, so a custom SPICE engine was developed. Rather than approach the general SPICE market, the Infinisim tool set and methodology was developed specifically for clock tree analysis. This is a critical difference between Infinisim and general-purpose EDA tools.

Clock is a unique problem and requires a unique tool. Infinisim has a special-purpose simulator designed specifically for SoC clock analysis, clock jitter analysis, and clock aging analysis. Speed AND capacity AND full SPICE accuracy are the focus, so there is no trade-off as with traditional simulators.

What’s next for the company?
Three things:

1) We are working closely with customers on increasing accuracy, speed, and capacity for the new FinFET nodes. Infinisim is a sign-off tool from 14nm down to 5nm, and 3nm is in process. The complexity of chips is increasing, so this will be a never-ending challenge for clocks.

2) We are working closely with customers and foundries on GAA processes which will require a new set of capabilities. FinFET models are public domain and very accessible. GAA models are proprietary and will be tied closely to foundries versus EDA tool companies. GAA models are much more complicated, with more equations due to changing conductance, capacitance, and more non-linear effects at the device level.

3) We are collaborating with customers and cloud providers on a cloud based Infinisim solution.

How do customers engage with Infinisim?
Customers generally approach us with a clock problem. Since Infinisim is a single solution, evaluations are fairly easy using a targeted approach on customer circuits. For more information or a customer engagement you can reach us at http://infinisim.com/.

Also Read:

Clock Aging Issues at Sub-10nm Nodes

Analyzing Clocks at 7nm and Smaller Nodes

Methodology to Minimize the Impact of Duty Cycle Distortion in Clock Distribution Networks


Must-attend webinar event: How better collaboration can improve your yield
by Daniel Nenni on 03-16-2023 at 10:00 am

YieldHub Webinar

In today’s rapidly evolving semiconductor industry, the demand for high-quality and reliable semiconductors at a reasonable cost is increasing. This is why world-class yield management has become more and more important for fabless semiconductor companies and IDMs.

In a must-attend event, yieldHUB will be hosting a webinar in partnership with SemiWiki that will cover why good collaboration is essential in yield management. This will be relevant for both startups and large-scale fabless semiconductor companies and IDMs.

yieldHUB’s experts will provide valuable insights into how your team can work together seamlessly by sharing data and insights to optimize yield and improve production processes.

Teams should be able to share data, collaborate on projects, and make data-driven decisions quickly. Collaboration is key in technology development because it enables individuals to work together effectively, combining their skills, knowledge, and resources to create innovative, high-quality, and high-yielding products.

One of the challenges that yieldHUB’s experts will talk about is the expertise that is often distributed across the world within large-scale companies. When there is disrupted communication, this can have an impact on yield.

Register Here

The webinar is scheduled for March 28, 10am (PST), and it will cover the following topics:

  1. What is collaboration in the context of yield management?
  2. Best collaboration practices in yield management.
  3. How about security?
  4. Why Yield Management Systems (YMS) should evolve.
  5. Examples of companies that have used collaboration to their advantage.

The webinar is suitable for anyone involved in the yield management process within the semiconductor industry.

Attendees will have the chance to submit questions prior to the webinar and get to know two of yieldHUB’s leaders during this presentation. Register now and take the first step towards improving your collaboration efforts.

Register Here

About yieldHUB
yieldHUB was founded by Limerick resident John O’Donnell who studied electrical engineering at UCC and spent more than 17 years at a leading semiconductor company before starting yieldHUB. Fast forward to today and he’s running a company with a platform that’s used by thousands of product and test engineers around the world.

yieldHUB is an all-in-one software platform that was designed by engineers for engineers. It helps semiconductor companies by cleansing data at scale and producing insights that can detect flaws in wafers (which are then cut into tiny microchips) during the manufacturing process.

 Also Read:

It’s Always About the Yield

The Six Signs That You Need a Yield Management System

yieldHUB – Helping Semiconductor Companies be More Competitive