Advanced electro-thermal simulation sees deeper inside chips
by Don Dingee on 03-29-2023 at 6:00 am

Advanced electro-thermal simulation in Keysight PathWave ADS

Heat and semiconductor reliability are inversely related. Below the rated junction temperature, every 10°C rise in steady-state temperature cuts predicted MOSFET life in half. Yet heat densities keep climbing as devices move into harsher environments like smartphones, automotive electronics, and space-based electronics, while reliability expectations remain high. Higher frequencies and increased power delivery, enabled by newer semiconductor materials and packaging schemes, push thermal behavior even harder. Accurate reliability prediction is only possible with advanced electro-thermal simulation – an emphasis for several Keysight teams working with PathWave Advanced Design System (ADS).
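
As a back-of-the-envelope illustration of that rule of thumb, the derating math fits in a few lines of Python; the 85°C reference temperature used below is an arbitrary assumption for the example, not a figure from Keysight.

```python
# Illustrative only: the "life halves per 10 degC" rule of thumb cited above.
def relative_lifetime(tj_c: float, tj_reference_c: float = 85.0) -> float:
    """Predicted MOSFET lifetime relative to operation at the reference temperature."""
    return 0.5 ** ((tj_c - tj_reference_c) / 10.0)

for tj in (85, 95, 105, 115):
    print(f"Tj = {tj} degC -> {relative_lifetime(tj):.2f}x reference lifetime")
```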

Keysight’s advanced thermal analysis capability stretches back to its acquisition of Gradient Design Automation and its HeatWave thermal solver in 2014. ADS Electro-Thermal Simulation adapted and integrated HeatWave to co-simulate with industry-leading RF circuit simulators in ADS. Gradient’s former VP of Engineering, Adi Srinivasan, now a Product Owner and Principal Engineer for electro-thermal simulation at Keysight, still works today with the technology he helped pioneer. “Coupling is now at least as important as self-heating,” says Srinivasan, and like EM simulation, the co-simulation must span time and frequency domains.

RF designers widely use the ADS Electro-Thermal Simulator, which has a long history of solving complex intra-die RF thermal problems, to avoid thermal hazards and improve design quality. A PCB-level thermal solver, PE-Thermal, is creating similar problem-solving opportunities for power electronics designers. Increasing the reach of thermal simulation for more customers and larger domains like modules is a priority; thermal modeling in foundry process design kits (PDKs) is a strong driver of broader adoption.

III-V and silicon foundries picking up the thermal pace

Foundries are seeing increased demand for rich-model PDKs as customers crank up more simulations in their EDA workflows. Behind the scenes, Keysight has been working aggressively to evangelize the need for advanced electro-thermal simulation and asking foundries to consider adding thermal models ready for the W3050E PathWave ADS Electro-thermal Simulator to their PDK support.

“We’ve recently seen thermal-enabled PDK announcements from III-V foundries like GCS (Global Communication Semiconductors) in the US, and AWSC (Advanced Wireless Semiconductor) and Wavetek in Taiwan,” says Kevin Dhawan, III-V Foundry Program Manager at Keysight. These foundries are leaders in technology used in power and RF electronics, and Dhawan noted that other III-V foundries such as WIN, UMS, and OMMIC also offer PDKs for ADS Electro-Thermal Simulator. Additionally, Dhawan says several high-volume silicon foundries have PDK support for ADS Electro-Thermal Simulator but have yet to make specific announcements.

“Customers are asking about a single solution for both EM and electro-thermal effects, but the domains are a bit different,” Dhawan observes. “EM focuses more on back-end layout, metallization, and bias, while electro-thermal needs to model additional heat from the semiconductor devices themselves.” ADS supports several types of PDKs: ADS-native PDKs, ADS-Virtuoso interoperable PDKs, and interoperable PDKs (iPDKs) based on OpenAccess and customizable via Python and Tcl. Dhawan says to check with the foundries for the latest on ADS Electro-Thermal Simulator-ready PDKs.

Incredible heat densities in power electronics

Power converter designers are chasing extremely dense “bricks” for applications like aerospace, automotive, and device chargers. This chart shows why more designers are turning to wide-bandgap processes like GaN and SiC: switching speeds are higher, and power delivery increases.

“With tens of kilowatts or more in play, things tend to heat up,” says Steven Lee, Power Electronics Product Manager for Keysight EDA. “Traditional workflows and tools like SPICE don’t consider detailed temperature profiles or post-layout thermal impacts.” The result is often a hardware prototyping surprise, where measurements find power transistors pushed beyond their rated junction temperature, triggering an expensive late part change and a re-spin.

Thermal simulation that accounts for device models, packaging, board materials, and layout prevents these problems. “Hot transistors next to each other can couple via metal traces and planes and magnify heating problems,” Lee points out. First-pass thermal effects discovered quickly via simulation can validate component selection and guide packaging and layout adjustments before committing to hardware.
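
To make the coupling point concrete, here is a minimal sketch of the kind of thermal resistance network a solver ultimately evaluates; the resistance and power values are invented for illustration, and the off-diagonal terms stand in for coupling through shared traces and planes.

```python
# Minimal sketch (hypothetical values): junction temperatures of two nearby
# power transistors modeled with a thermal resistance matrix, where the
# off-diagonal terms capture coupling through shared traces and planes.
import numpy as np

t_ambient = 50.0               # degC, assumed board ambient
power = np.array([3.0, 2.5])   # W dissipated by each transistor (assumed)

# R[i][j] = temperature rise at device i per watt dissipated in device j (degC/W)
r_theta = np.array([
    [12.0, 4.0],   # self-heating of device 0, coupling from device 1
    [4.0, 14.0],   # coupling from device 0, self-heating of device 1
])

t_junction = t_ambient + r_theta @ power
print(t_junction)  # [96.0, 97.0] degC -- coupling alone adds 10-12 degC here
```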

PE-Thermal implements a thermal resistance editor and extraction tool for more robust temperature modeling capability. “With a design already in the ADS workspace, designers can select the components for thermal analysis, tune models as necessary, and simulate with PE-Thermal in a few minutes,” explains Uday Ramalingam, R&D engineer at Keysight. “Ultimately, designers will be able to go inside a transistor and explore package properties – maybe they have the right part but not the right package for their context.”

“Just like the EM solver in ADS takes metallization into account, the electro-thermal solver does too, and enhancements in future ADS releases will take us all the way to complete board-level layout thermal effects automatically,” Lee wraps up.

An RF design example of advanced electro-thermal simulation

Reliability prediction tools are only as good as the temperature data that goes into them. Accurate temperature measurements on physical hardware require excruciating setups that slow design cycles, and they still don’t preclude a re-spin surprise if bad results appear.

EnSilica (Abingdon, UK) is a fabless chipmaker delivering RF, mmWave, mixed-signal, and high-speed digital designs for customers in automotive, communications, and other applications. Keysight PathWave ADS and the PathWave ADS Electro-Thermal Simulator are taking the company from a practice of embedding and measuring many on-chip temperature sensors to fully simulating thermal effects with high accuracy.

A Ka-band transceiver project, implemented in an automotive-certified 40nm CMOS foundry process, is EnSilica’s first foray into virtual thermal analysis. An interesting wrinkle: another chip on the board, next to the RF transceiver, creates 3°C of boundary heating on one edge, clearly visible on the right side of the heat map produced by the ADS Electro-Thermal Simulator.

Results from the ADS electro-thermal simulation were within 0.7°C of actual measurements (with simulations slightly higher, an excellent conservative result), increasing confidence in meeting 10-year reliability goals. During thermal resistance modeling, EnSilica also found improvements in chip layout and package bumping that lowered operating temperatures in the final product.

Seeing deeper inside chips to avoid design hazards and to enhance packages and layouts is a powerful justification for advanced electro-thermal simulation. Keysight’s ability to fit into multi-vendor workflows means high-accuracy thermal analysis is available to more design teams. Please visit these resources for the EnSilica case study and further background on Keysight’s ADS solutions, including the ADS Electro-Thermal Simulator.

Design and simulation environment: PathWave Advanced Design System

Thermal simulation add-on: W3050E PathWave Electro-Thermal Simulator

Webinar: Using Electro-Thermal Simulation in Your Next IC Design

Video: Using Electro-Thermal Simulation in ADS 2023

Case study: Predicting Ka-band Transceiver Thermal Margins, Wear, and Lifespan


The Rise of the Chiplet
by Kalar Rajendiran on 03-28-2023 at 10:00 am

Open Chiplet Economy

The emergence of chiplets is an inflection point in the semiconductor industry. The potential benefits of a chiplet-based approach to implementing electronic systems are not in question. Chiplets, smaller pre-manufactured components that can be combined to create larger systems, offer benefits such as increased flexibility, scalability, and cost-effectiveness compared to monolithic integrated circuits. However, chiplets also present new challenges in design, integration, and testing. The technology is still in flux, and many unknowns need to be addressed over the coming years. The success of chiplets will depend on factors such as manufacturing capabilities, design expertise, and the ability to integrate chiplets into existing systems.

While sophisticated packaging and interconnect technologies have been receiving a lot of press, many other aspects are critical too. Designing chiplet-based systems requires a different mindset and skillset than traditional chip design, and many more things need to come together to enable a chiplet-based economy. This was the focus of a recently held webinar titled “The Rise of the Chiplet.” The webinar was moderated by Brian Bailey, Technology Editor/EDA from SemiEngineering.com. The panelists were Nick Ilyadis, Sr. Director of Product Planning, Achronix; Rich Wawrzyniak, Principal Analyst ASIC & SoC, Semico Research Corp; and Bapi Vinnakota, OCP ODSA Project Lead, Open Compute Project Foundation.

The composition of the panel allowed the audience to hear a market perspective, a product perspective, and a collaborative community perspective on designing efficiency into solutions.

What is needed for chiplet adoption

For chiplet adoption, the industry needs to worry not just about die-to-die interfaces and packaging technology but about the whole chiplet economy.

For example, how do you describe a chiplet before building it in order to achieve efficient modularity? The standard things to include in a chiplet’s physical description are area, orientation, thermal map, power delivery, bump maps, and so on. This physical part description is very important when integrating chiplets from multiple vendors, and OCP is beginning work with JEDEC to create a standard JEP30 part model to physically describe a chiplet. Other areas that need to be addressed include how to handle known-good-die (KGD) in business contracts, how to accomplish architecture exploration, and how to manage business logistics.

Various workgroups within OCP are focusing on many of these areas and more, and are making downloadable worksheets and templates available for designers. For example, designers can download a worksheet that helps them compare a chiplet-based design to a monolithic design on design costs and manufacturing costs. When it comes to chiplet interfaces, Bunch of Wires (BoW), for example, may be the choice for some applications, while Universal Chiplet Interconnect Express (UCIe) may be the right one for others. Tools are also available for comparing the various die-to-die interfaces in the marketplace.
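
To give a flavor of what such a worksheet-style comparison captures, here is a purely hypothetical back-of-the-envelope sketch (not the OCP worksheet) built on a simple Poisson die-yield model; every number in it is an illustrative assumption. Real worksheets also fold in packaging yield, die-to-die interface overhead, and known-good-die test costs.

```python
# Hypothetical, back-of-the-envelope comparison only -- not the OCP worksheet.
# Contrasts one large monolithic die with the same silicon split across
# several smaller chiplets, using a simple Poisson yield model.
import math

def die_yield(area_mm2: float, defect_density_per_mm2: float = 0.001) -> float:
    """Poisson yield model: fraction of good dies for a given die area."""
    return math.exp(-defect_density_per_mm2 * area_mm2)

def cost_per_good_die(area_mm2: float, cost_per_mm2: float = 0.10) -> float:
    """Silicon cost of one good die, amortizing the yield loss."""
    return (area_mm2 * cost_per_mm2) / die_yield(area_mm2)

monolithic = cost_per_good_die(800.0)            # one 800 mm^2 die
chiplets = 4 * cost_per_good_die(200.0) + 10.0   # four 200 mm^2 dies plus an
                                                 # assumed packaging/KGD-test adder
print(f"monolithic: ${monolithic:.0f}, chiplet-based: ${chiplets:.0f}")
```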

The following table shows the various areas that need to be addressed.

Another important question that needs to be understood and addressed is whether all the chiplets in a product need to come from the same process corner. Do chiplets need to be marketed under different speed grades the way memories are? If some chiplets come from fast corners and others from slow corners, what kinds of issues will arise during system simulation and in the field?

As chiplet technology continues to evolve, companies will be experimenting with different approaches to incorporating chiplets into their products.

eFPGA-Based Chiplet

Embedded FPGA (eFPGA) has been gaining a lot of traction in the monolithic ASIC world. An eFPGA-based chiplet can extend the eFPGA benefits to a full chiplet-based system. Achronix, a leader in the FPGA solutions space, offers eFPGA IP-based chiplets to deliver the following benefits:

  • A unique production solution (different SKUs)
  • Support for different process technologies in cases where the optimal process technology for the ASIC is not optimal for an embedded FPGA
  • Reuse of the FPGA chiplet across multiple generations of products versus having it in just one monolithic device

Summary

Chiplets offer a promising new direction for the semiconductor industry. The winning solutions will be determined over the coming years; how many years depends on whom you ask. To listen to the entire webinar, check here. The panelists also fielded a number of audience questions that you may find of interest.

Also Read:

Achronix on Platform Selection for AI at the Edge

WEBINAR: FPGAs for Real-Time Machine Learning Inference

WEBINAR The Rise of the SmartNIC


Speculation for Simulation. Innovation in Verification
by Bernard Murphy on 03-28-2023 at 6:00 am

This is an interesting idea, using hardware-supported speculative parallelism to accelerate simulation, with a twist requiring custom hardware. Paul Cunningham (Senior VP/GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Chronos: Efficient Speculative Parallelism for Accelerators. The authors presented the paper at the 2020 Conference on Architectural Support for Programming Languages and Operating Systems and are from MIT.

Exploiting parallelism using multicore processors is one option for applications where parallelism is self-evident. Other algorithms might not be so easily partitioned but might benefit from speculative execution exploiting intrinsic parallelism. Usually, speculative execution depends on cache coherence, a high overhead especially for simulation. This method bypasses the need for coherence by physically localizing task execution to compute tiles based on the target read-write object, ensuring that conflicts can be detected locally, without global coherence management. Tasks can execute speculatively in parallel; any detected conflict can be unrolled from a task through its child tasks and then re-executed without needing to stall other threads.

One other point of note here. This method supports delay-based simulation, unlike most hardware acceleration techniques.

Paul’s view

Wow, what a wonderful high-octane paper from MIT! When asked about parallel computation I immediately think about threads, mutexes, and memory coherency. This is of course how modern multi-core CPUs are designed. But it is not the only way to support parallelization in hardware.

This paper proposes an alternative architecture for parallelization called Chronos that is based on an ordered queue of tasks. At runtime, tasks are executed in timestamp order and each task can create new sub-tasks that are dynamically added to the queue. Execution begins by putting some initial tasks into the queue and ends when there are no more tasks in the queue.

Tasks in the queue are farmed out to multiple processing elements (PEs) in parallel – which means Chronos is speculatively executing future tasks before the current task has completed. If the current task invalidates any speculatively executed future tasks then the actions of those future tasks are “undone” and they are re-queued. Implementing this concept correctly in hardware is not easy, but to the outside user it’s beautiful: you just code your algorithm as if the task queue is being executed serially on a single PE. No need to code any mutexes or worry about deadlock.
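
For readers who like code, a rough sketch (mine, not the authors’) of that programming model might look like the following: tasks carry a timestamp and a target object id, are popped in timestamp order, and may enqueue child tasks. Chronos’s hardware runs future tasks speculatively, but it must preserve exactly these serial semantics.

```python
import heapq, itertools

_seq = itertools.count()   # tie-breaker so equal timestamps never compare task functions

def run(initial_tasks):
    """Serial reference semantics of a timestamp-ordered task queue."""
    queue = [(ts, next(_seq), obj, fn, args) for ts, obj, fn, args in initial_tasks]
    heapq.heapify(queue)
    state = {}                                   # object id -> that object's state
    while queue:
        ts, _, obj, fn, args = heapq.heappop(queue)
        children = fn(ts, state.setdefault(obj, {}), *args)
        for c_ts, c_obj, c_fn, c_args in children:       # dynamically created sub-tasks
            heapq.heappush(queue, (c_ts, next(_seq), c_obj, c_fn, c_args))

# Toy task: an inverter evaluation that schedules its fanout one timestamp later.
def eval_gate(ts, gate_state, new_input, fanout):
    gate_state["out"] = not new_input
    return [(ts + 1, g, eval_gate, (gate_state["out"], [])) for g in fanout]

run([(0, "g0", eval_gate, (False, ["g1", "g2"]))])
```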

The authors implement Chronos in SystemVerilog and compile it to an FPGA. Much of the paper is devoted to explaining how they have implemented the task queue and any necessary unrolling in hardware for maximum efficiency. Chronos is benchmarked on four algorithms well suited to a task-queue-based architecture. Each algorithm is implemented two ways: first using a dedicated algorithm-specific PE, and second using an off-the-shelf, open-source 32-bit embedded RISC-V CPU as the PE. Chronos performance is then compared to multi-threaded software implementations of the algorithms running on an Intel Xeon server with a similar price tag to the FPGA being used for Chronos. Results are impressive – Chronos scales 3x to 15x better than using the Xeon server. However, comparing Table 3 to Figure 14 makes me worry a bit that most of these gains came from the algorithm-specific PEs rather than the Chronos architecture itself.

Given this is a verification blog, I naturally zoomed in on the gate-level simulation benchmark. The EDA industry has invested heavily in trying to parallelize logic simulation, and it has proven difficult to see big gains beyond a few specific use cases. This is mainly because the performance of most real-world simulations is dominated by load/store instructions missing in the L3 cache and going out to DRAM. There is only one testcase benchmarked in this paper, and it is a tiny 32-bit carry save adder. If you are reading this blog and would be interested in doing some more thorough benchmarking, please let me know – if Chronos can truly scale well on real-world simulations it would have huge commercial value!

Raúl’s view

The main contribution of this paper is the Spatially Located Ordered Tasks (SLOT) execution model, which is efficient for hardware accelerators that exploit parallelism and speculation, and for applications that generate tasks dynamically at runtime. Dynamic parallelism support is essential for simulation, and speculative synchronization is an appealing option, but coherency overhead is too high.

SLOT avoids the need for coherence by restricting each task to operate on (write to) a single object, and it supports ordered tasks to enable multi-object atomicity. SLOT applications are ordered, dynamically created tasks characterized by a timestamp and an object id. Timestamps specify ordering constraints; object ids specify the data dependences, i.e., tasks are data-dependent if and only if they have the same object id (if there is a read dependency, the task can be executed speculatively). Conflict detection becomes local (without complex tracking structures) by mapping object ids to cores or tiles and sending each task to where its object id is mapped.

The Chronos system was implemented in the AWS FPGA framework as a system with 16 tiles, each with 4 application-specific processing elements (PEs), running at 125MHz. This system is compared with a baseline consisting of a 20-core/40-thread 2.4 GHz Intel Xeon E5-2676v3, chosen specifically because its price is comparable to that of the FPGA (approx. $2/hour). Running a single task on one PE, Chronos is 2.45x faster than the baseline. As the number of concurrent tasks increases, the Chronos implementation scales to a self-relative speedup of 44.9x on 8 tiles, corresponding to a 15.3x speedup over the CPU implementation. They also compared an implementation based on general-purpose RISC-V cores rather than application-specific PEs; the PEs were 5x faster than the RISC-V cores.

I found the paper impressive because it covers everything from a concept to the definition of the SLOT execution model to the hardware implementation and a detailed comparison with a traditional Xeon CPU for four applications. The effort is substantial: Chronos is over 20,000 lines of SystemVerilog. The result is a mean speedup of 5.4x (across the four applications) over software-parallel versions, due to more parallelism and more use of speculative execution. The paper is also worth reading for its application to non-simulation tasks; it includes three examples.


Power Delivery Network Analysis in DRAM Design
by Daniel Payne on 03-27-2023 at 10:00 am

My IC design career started out with DRAM design back in 1978, so I’ve kept an eye on the developments in this area of memory design to note the design challenges, process updates, and innovations along the way. Synopsys hosted a memory technology symposium in November 2022, and I had a chance to watch a presentation from SK hynix engineers Tae-Jun Lee and Bong-Gil Kang. DRAM chips have reached high capacities and fast data rates of 9.6 gigabits per second, as in the recent LPDDR5T announcement on January 25th. Data rates can be limited by the integrity of the Power Delivery Network (PDN), yet analyzing a full-chip DRAM with the PDN included slows simulation down too much.

The peak memory bandwidth per x64 channels has shown steady growth across several generations:

  • DDR1, 3.2 GB/s at 2.5V supply
  • DDR2, 6.4 GB/s at 1.8V supply
  • DDR3, 12.8 GB/s at 1.5V supply
  • DDR4, 25.6 GB/s at 1.2V supply
  • DDR5, 51.2 GB/s at 1.1V supply

A big challenge in meeting these aggressive timing goals is controlling the parasitic IR drop caused by the IC layout of the DRAM array. Shown below is an IR drop plot, where red marks the areas of highest voltage drop, which in turn slow the performance of the memory.

IR drop plot of DRAM array

The extracted parasitics for an IC are saved in the SPF file format. Adding the PDN parasitics to a SPICE netlist causes the circuit simulator to slow down by a factor of 64X, while the PDN adds 3.7X more parasitic RC elements than the signal parasitics alone.

At SK hynix, they came up with a pragmatic approach to reducing simulation run times when using the PrimeSim™ Pro circuit simulator on SPF netlists that include the PDN, based on three techniques:

  1. Partitioning of the netlist between Power and other Signals
  2. Reduction of RC elements in the PDN
  3. Controlling simulation event tolerance

PrimeSim Pro uses partitioning to divide up the netlist based upon connectivity, and by default the PDN and other signals would combine to form very large partitions, which in turn slowed down simulation times too much. Here’s what the largest partition looked like with default simulator settings:

Largest partition, default settings

An option in PrimeSim Pro (primesim_pwrblock) was used to cut down the size of the largest partition, separating the PDN from other signals.

Largest partition, using option: primesim_pwrblock

The extracted PDN in SPF format had too many RC elements, which slowed down circuit simulation run times, so an option called primesim_postl_rcred was used to reduce the RC network, while at the same time preserving accuracy. The RC reduction option was able to decrease the number of RC elements by up to 73.9%.

Circuit simulators like PrimeSim Pro use matrix math to solve for the currents and voltages in the netlist partitions, so runtime is directly related to matrix size and to how often a voltage change requires recalculation. The simulator option primesim_evtgrid_for_pdn reduces the number of times a matrix needs to be solved when there are only small voltage changes in the PDN. In the chart below, the purple X marks show each point in time when matrix solving of the PDN was required by default, while the white triangles show each point in time when matrix solving occurs with the option enabled. The white triangles occur much less frequently than the purple X’s, enabling faster simulation.

Power Event Control, using option: primesim_evtgrid_for_pdn

A final PrimeSim simulator option used to reduce runtimes was primesim_pdn_event_control=a:b, which works by applying an ideal power source for a:b, resulting in fewer matrix calculations for the PDN.

Combining all of the PrimeSim options produced a 5.2X simulation speed-up.

Summary

Engineers at SK hynix have been using both the FineSim and PrimeSim circuit simulators for analysis of their memory chip designs. Using four options in PrimeSim Pro has provided sufficient speed improvements to allow full-chip PDN analysis with SPF parasitics included. I expect that Synopsys will continue to innovate and improve their circuit simulator family in order to meet the growing challenges of memory chips and other IC design styles.


Siemens Keynote Stresses Global Priorities
by Bernard Murphy on 03-27-2023 at 6:00 am

Space Perspective

Dirk Didascalou, Siemens CTO, gave a keynote at DVCon, raising our perspective on why we do what we do. Yes, our work in semiconductor design enables the cloud and 5G and smart everything, but these technologies push progress for a select few. What about the big global concerns that affect us all: carbon, climate, COVID, and conflict? He made the point that industry collectively has a poor record against green metrics: a 27% contributor to carbon, more than 33% of energy consumption, and less than 13% of products recycled.

We need industries globally, for food and clothing, energy, health, education, and opportunity. Returning to a pastoral way of life isn’t an option, so we must help industries become greener while adapting faster to demands and constraints that are evolving rapidly amid geopolitical instability. Add in demographic ageing, which is relentlessly chipping away at the pool of critical skills in manufacturing. Siemens aims at these global challenges by helping industries become more efficient, more automated, and more nimble through digital transformation.

Optimizing industry

Manufacturing industries are very process driven. For conventional production flows, global optimization on the fly – reworking flows or product mixes – is very difficult. Improvements in these contexts are more commonly limited to local optimizations, tweaking the process recipe where possible. Global optimization through trial-and-error experiments is simply not practical. Auto manufacturers ran into exactly this problem: intrinsic inflexibility in the Henry Ford manufacturing model. To their credit, they are already adjusting, often with Siemens’ help.

Digital transformation allows industries to model whole product lines digitally and experiment with options. Not only to model but also to plan how to adapt those lines quickly in real life, and to plan for predictive maintenance. This is the digital twin concept, though going far beyond the familiar autonomous car training example. Here Dirk is talking about a digital twin to model a continuous, context-driven process for business through manufacturing.

Siemens is itself a manufacturing company. They have a factory in southern Germany producing many of the products they use to help other companies with their automation goals. The Amberg site manufactures 17 million products a year from a portfolio of 1,200. Each day they must reconfigure the factory 350 times to serve many different types of orders. Siemens put their own digital transformation advice and products to work in this factory, delivering 14X productivity improvements on products with 2X the complexity in the same factory with the same number of people. The World Economic Forum has named that site one of their lighthouse factories.

What difference does this make to the big goals and to what we do? Siemens doesn’t need to produce 14X more products today. For the same product volume, those improvements drive lower energy consumption and therefore a lower carbon footprint. Digital transformation also minimizes the need for trial-and-error experiments in the real world, enabling faster turnaround with less waste to produce better, greener manufactured goods. And it allows for more flexibility in quickly switching product features and mixes. Consumers get exactly the options they want at a similar cost, from more eco-friendly manufacturing. All enabled by digital twin models, sensing, compute and communication technologies, and of course AI.

Real applications

One example is Space Perspective, a carbon-neutral spacecraft powered by a balloon! It can carry eight people in a 12 mile per hour ascent to 100,000 feet. The craft was designed completely digitally using Siemens Simcenter STAR-CCM+. Soon you won’t have to be a billionaire to go to space!

A more widely important example is vertical farming. 80 Acres Farms designed their indoor, vertically stacked farms using Siemens products. An 80 Acres farm can produce up to 300 times more food than a regular farm in the same footprint, using renewable energy, no pesticides, and 95% less water. These farms produce food locally to serve local needs, minimizing trucking costs and consequent impact on the environment.

Where does COVID fit into this story? Remember BioNTech? They produced the Pfizer vaccine, the first widely available shot. Designing the vaccine was a great accomplishment, but production then needed to be ramped to billions of doses in six months. That required more research on boosting immune response. Siemens products assisted with solutions to help simulate the impact of modeled molecular structures on immune response. A combination of simulations, AI, and results from clinical trials led to the vaccine many of us received, following a record development and production cycle for biotech.

Northvolt is another example. This is a Swedish company building lithium-ion batteries for EVs and other applications. This is a serious startup with $30 billion in funding, not a wishful one-off. Batteries are integral to making renewable energy more pervasive, but we hear lots of concerns about environmental issues in their manufacture. Northvolt’s mission is to deliver batteries with an 80% lower carbon footprint than those made in other factories, and they recycle material from used batteries into new products. These guys are committed. Again, the whole operation was designed digitally with Siemens – creation, commissioning, manufacture, deployment, and recycling.

There are more examples. Milling machines as a service – yes that’s a real thing. A German company offers a machine which can be de-featured to do just the basics, competing on price with cheap Asian counterparts. When needed you can pay for an upgrade, enabled naturally through an app, which will turn on a more advanced feature. Naturally there are multiple such features 😊.

Closer to home for automotive design, safety analysis and ML training through digital twins is enabled by Siemens EDA. Samsung presented later at the same conference on using Siemens Xcelerator tools to reduce functional safety iterations by 70% and to generate an integrated final validation report across the formal, simulation, and emulation engines they used for ISO 26262 certification.

An inspiring keynote. Next time a relative asks you what you do for a living, aim a little higher. Tell them you design products that ultimately drive greener manufacturing, faster response to pandemic crises, and (who knows) maybe ultimately more constructive approaches to resolving conflict.


Gordon Moore’s legacy will live on through new paths & incarnations
by Robert Maire on 03-25-2023 at 6:00 pm

Gordon Moore RIP

-Gordon Moore’s passing reminds us of how far we have come
-One of many pioneers of chip industry-but most remembered
-The most exponential & ubiquitous industry of all times
-“No exponential is forever”- Gordon Moore was an exponential

Remembering Gordon Moore

He will be remembered most for his observation (some say prediction) of the exponential growth of semiconductor technology.

This could further be described as cost per transistor or cost per square inch of silicon or whatever metric you use to describe the basic value of the semiconductor industry.

He was much, much more than the “Moore’s Law” observation that bears his name, as he was one of the “traitorous eight” who left Shockley Semiconductor to form Fairchild Semiconductor and later went on to found Intel with Shockley cohort Robert Noyce and Fairchild cohort Andy Grove.

Birth of the semiconductor industry

Some would suggest that the semiconductor industry started 75 years ago with the invention of the transistor. I would suggest that the true start of the semiconductor industry was 65 years ago with the formation of Fairchild, which pioneered the “integrated circuit” that built and connected multiple transistors on a single piece of silicon and was the genesis of a new way of building devices at huge scale.

Had we continued to use discrete transistors, we would have not seen the exponential growth and instead would have just replaced vacuum tubes with smaller more efficient transistors without the potential for exponential growth.

Gordon Moore- Both Pioneer & Father to an industry

Gordon Moore (and many others) was there not just for the birth of the industry but, perhaps more importantly, for the development that set the industry on the trajectory of advancement and growth that later became the famous observation.

It would have been enough if he had just helped invent the integrated circuit at Fairchild. If he had just started Intel. If he had just made the observation behind “Moore’s Law.” But he did all this and much more: he helped create an entire industry that, 65 short years later, has its presence felt in virtually everything we touch and don’t touch today.

Semiconductors are like the unseen yet enabling oxygen we breathe every minute of every day

We rely on semiconductors in the most critical ways every second of our lives, yet they didn’t exist a mere 65 years ago. The semiconductor industry was created out of thin air to become the most pervasive, fastest-growing industry on the planet.

This is truly the legacy of Gordon Moore and others more so than any single phrase or observation.

“No exponential is forever”- Gordon Moore

We know in both math and everyday life and death that no true exponentials go on forever, and that is sadly the case with Gordon Moore himself. His legacy, however, will go on forever for both the myriad roles he played in the semiconductor industry and his philanthropic endeavors later in life.

We will miss a truly great person…….

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

Sino Semicap Sanction Screws Snugged- SVB- Aftermath more important than event

Report from SPIE- EUV’s next 15 years- AMAT “Sculpta” braggadocio rollout

AMAT- Flat is better than down-Trailing tool strength offsets memory- backlog up


Podcast EP149: The Corporate Culture of Axiomise with Laura Long
by Daniel Nenni on 03-24-2023 at 10:00 am

Dan is joined by Laura Long, Director of Business Development at Axiomise. She has over 15 years of experience in business development and has built strong expertise working with clients based or operating across the European Union, the UK, and the Americas.

Dan explores the corporate culture at formal verification company Axiomise with Laura. Inclusiveness, diversity and collaboration among other topics are discussed. Laura provides a view into the development environment at Axiomise and what impact these strategies can have on results.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Developing the Lowest Power IoT Devices with Russell Mohn
by Daniel Nenni on 03-24-2023 at 6:00 am

InPlay NanoBeacon Technology

Russell Mohn is the Co-Founder and Director of RF/AMS Design at InPlay Inc., and his team has been using WiCkeD from MunEDA for several years. We thought the rest of the world would like to learn about his experiences.

How did you get started in semiconductors and what brought you to InPlay?
I was initially drawn to analog and mixed-signal chip design because it seemed like a direct path to start using what I had learned in engineering school. I’ve stayed in the same field because there’s always something for me to learn and there are always interesting problems to solve, both of which I really enjoy. I like building things, and I’ve always been fascinated by all the fields that make the microelectronics industry possible: photolithography, material science, physics, robotics, chemistry, microscopy, not to mention all the algorithms, mathematics, and computer science that is pushing breakthroughs in the tools we use. It’s a field that keeps capturing my imagination in new ways. I like the idea of casting a design in a mask and having it produced nearly flawlessly millions of times over. I enjoy the pressure in trying to get it right the first time, and I take pride in the fact that there is a lot at stake. The feeling of getting a new part in the lab and seeing it work as designed is incredibly rewarding. And when there are problems, figuring them out is also rewarding.

I joined InPlay because our current CEO asked me to lead the RF and analog/mixed-signal design for InPlay’s chips at the end of 2016. I had worked with the other co-founders at my previous employer, which had gone through two acquisitions in the previous two years or so. I had a lot of respect for them and enjoyed working with them in the past. I always dreamed of starting my own company, so I thought it was a golden, albeit risky, opportunity. The team had a lot of complementary domain knowledge, and knowing the others were great in their fields gave me the confidence to join.

What does InPlay do?
InPlay is a fabless semiconductor company. We design and develop chips that enable wireless connectivity in applications that require low-latency, many devices, and low power … all at the same time. We are also enabling a new generation of active RFID smart sensors and beacons with our NanoBeacon product line. It doesn’t require firmware. The BOM is tiny. And power consumption is very low, so it can be powered by unique batteries and energy harvesting technologies.

What type of circuits do you design?
We design and develop all the necessary circuits for a radio transceiver. Some examples are low-noise amplifiers, mixers, programmable amplifiers, analog-to-digital converters, digital-to-analog converters, low-dropout regulators, phase-locked loops, and power amplifiers. We also design the power management circuitry necessary for the chip, which includes DC-DC converters, very low-power oscillators, references, and regulators.

Which MunEDA tools do you use?
We use WiCkeD and SPT.

How do you apply the MunEDA tools to your day-to-day job?
We’ve done some porting work over the past couple years. It was necessary with the foundry wafer shortage, especially for startup companies like us. Using SPT to get the schematics all ported over has been really helpful.

We also use WiCkeD for both optimization and for design centering over process/voltage/temperature variation. If the circuit is small enough, an opamp for example, after choosing the right topology, the optimizer can do the work of a designer to get the needed performance, all while keeping the design centered over PVT.

We’ve also used it for intractable RF matching/filtering tasks and for worst case analysis on startup issues for metastable circuits.

What value do you see from the MunEDA tools?
I see the MunEDA tools as basically another designer on my team. This is huge since we’re a small team, so the impact has been significant.

How about the learning curve?
MunEDA’s support is really great; they care about their customers, no matter how small. The learning curve is not too bad after some built-in tutorials. I see value from the tools every time I use them, from the first time, until now.

What advice would you give a circuit designer considering the MunEDA tools?
I would advise that they keep an open mind and really look at the resulting data. I think many designers would be happy with the amount of time they can save and the insight they can gain into the trade-offs in their designs.

Also Read:

Webinar: Post-layout Circuit Sizing Optimization

Automating and Optimizing an ADC with Layout Generators

Webinar: Simulate Trimming for Circuit Quality of Smart IC Design

Webinar: AMS, RF and Digital Full Custom IC Designs need Circuit Sizing


Mercedes, VW Caught in TikTok Blok
by Roger C. Lanctot on 03-23-2023 at 10:00 am

Thirteen years ago, General Motors announced the introduction of a voice-enabled integration of Facebook in its cars. The announcement reflected the irresistible urge to please consumers and lead the market.

Today, multiple car makers are introducing games, streaming video, and social media apps, the most prominent of which is TikTok – with a billion users across 150 countries, including 200M+ downloads in the U.S. alone. Automotive integration looks like a no-brainer – it is, but not in a good way.

Volkswagen and Mercedes are in the forefront of the movement, Volkswagen with its announced plans for its Harman Ignite app store and Mercedes with its Faurecia Aptoide-sourced app store. Both car companies would do well to look back to the original social media integrations of GM, Mercedes, and others – which included Twitter. It all sounded like a great idea at the time – Facebook and Twitter in the dash! – but very soon, as the British say, there was no joy.

It didn’t take a rocket scientist to perceive that social media is ill-suited for automotive integration – with the possible exception of rearseat use by passengers. Car companies tried to create automated links from navigation apps to Twitter – for posts indicating departures and arrivals – or to emphasize voice interaction, to no avail. It was soon clear that these apps simply didn’t belong.

The problem is that social media apps demand attention. Their entire business models are built on distraction. They simply don’t belong in cars.

TikTok has the added baggage of being a threat to privacy and national security in the eyes of many governments around the world. I’d argue connected cars are by definition a threat to privacy. Actually, based on the amount of CC-TV deployed around the world I’d say leaving your home is a threat to privacy.

TikTok appears to be a special case because of its ability to spread Chinese government propaganda and misinformation. In other words, it’s not enough that it is distracting and invading privacy, it may also invade and alter users’ political beliefs.

Car companies could not resist the Siren song of TikTok. They simply couldn’t ignore those billion users and included TikTok in their app stores. If ever there were a “red flag” moment in in-car app deployment, this is it.

With governments around the world having either already banned TikTok or with plans to do so, perhaps auto makers will take a hint. The Washington Post details the breadth of the growing official rejection of TikTok.

India – initially banned in 2020, permanent ban in January 2021
U.S. – government agencies have 30 days to delete TikTok from government-issued devices; dozens of state-level bans
Canada – banned on government-issued phones
Taiwan – banned on government devices since last December, considering nationwide ban
European Union – banned on government/staff devices
Britain – banned on government devices
Australia – banned on government staff devices
Indonesia – temporary ban in 2018, later lifted
Pakistan – various temporary bans
Afghanistan – banned in 2021 – but workarounds possible

As auto makers such as Volkswagen and Mercedes reconsider the wisdom of TikTok integration in cars, maybe they’ll rethink some of the other crazy stuff – or at least confine it to the rearseat or limit access to when vehicles are parked or charging. Angry Birds? Really, Mercedes?

It’s a good time to pause and rethink what we are putting into cars. Car makers have a history of wanting to integrate the latest and greatest tech in their cars, which explains the growing number of announcements regarding in-vehicle ChatGPT and Meta integrations. The good news is that these days, with over-the-air software update technology, apps can be removed as quickly as they can be deployed. Let’s hope so.

Within a year of its launch of Facebook in its dashboards General Motors changed course and dropped the plan. I think we can expect a similar outcome in this case.

Also Read:

AAA Hypes Self-Driving Car Fears

IoT in Distress at MWC 2023

Modern Automotive Electronics System Design Challenges and Solutions


Webinar: Enhance Productivity with Machine Learning in the Analog Front-End Design Flow
by Daniel Payne on 03-23-2023 at 6:00 am

Analog IC designers can spend way too much time and effort re-using old, familiar, manual iteration methods for circuit design, just because that’s the way it’s always been done. Circuit optimization is an EDA approach that can automatically size all the transistors in a cell, by running SPICE simulations across PVT corners and process variations, to meet analog and mixed-signal design requirements. Sounds promising, right?

So which circuit optimizer should I consider using?

To answer that question there’s a webinar coming up, hosted by MunEDA, an EDA company started back in 2001, and it’s all about their circuit optimizer named WiCkeD. Inputs are a SPICE netlist along with design requirements such as gain, bandwidth, and power consumption. The output is a sized netlist that meets or exceeds the design requirements.

Analog Circuit Optimization

The secret sauce in WiCkeD is how it builds up a Machine Learning (ML) model and runs a Design of Experiments (DOE) to calculate the worst-case PVT corner, find the transistor geometry sensitivities, and even calculate the On-Chip Variation (OCV) sensitivities. The approach creates and updates a non-linear, high-dimensional ML model from simulated data.

Having an ML model enables the tool to solve the optimization challenge, then do a final verification by running a SPICE simulation. Iterations are automated until all requirements are met. That sounds much faster than the old manual iteration methods. Training the ML model is automatic and quite efficient.
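
The webinar will cover the details; as a generic illustration of that loop – run a DOE, fit a model to the simulated data, optimize on the cheap model, then verify the winner with a real simulation and iterate – here is a minimal Python sketch. It is not MunEDA’s algorithm: the stand-in “simulator,” the quadratic surrogate, and the sampling strategy are all placeholder assumptions.

```python
import numpy as np

def fake_spice(widths):
    """Stand-in for a SPICE testbench; in practice this would be a corner sweep."""
    w1, w2 = widths
    return (w1 - 4.0) ** 2 + (w2 - 2.0) ** 2 + 0.1 * (w1 + w2)   # lower is better

def surrogate_optimize(x0, n_rounds=8, n_doe=16, spread=0.5):
    rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)

    def features(pts):                 # simple quadratic model: [w, w^2, 1]
        return np.hstack([pts, pts ** 2, np.ones((len(pts), 1))])

    for _ in range(n_rounds):
        # 1. Design of experiments: simulate a handful of sizings around the current point.
        doe = x + rng.uniform(-spread, spread, size=(n_doe, x.size))
        y = np.array([fake_spice(p) for p in doe])
        # 2. Fit/update the surrogate model from the simulated data.
        coeffs, *_ = np.linalg.lstsq(features(doe), y, rcond=None)
        # 3. Search many candidates on the cheap model (no simulation cost), then
        #    verify the predicted winner with one "real" simulation before moving.
        candidates = x + rng.uniform(-spread, spread, size=(1000, x.size))
        best = candidates[np.argmin(features(candidates) @ coeffs)]
        if fake_spice(best) < fake_spice(x):
            x = best
    return x

print(surrogate_optimize([1.0, 1.0]))   # walks toward the optimum near (4, 2)
```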

Circuit designers will also learn:

  • Where to use circuit optimization
  • What types of circuits are good to optimize
  • How much value circuit optimization brings to the design flow

Engineers at STMicroelectronics have used the circuit optimization in WiCkeD, and MunEDA talks about their specific results in time savings and improvements in meeting requirements. Power Amplifier company Inplay Technologies showed circuit optimization results from the DAC 2018 conference.

Webinar Details

View the webinar replay by registering online.

About MunEDA
MunEDA provides leading EDA technology for analysis and optimization of yield and performance of analog, mixed-signal and digital designs. MunEDA’s products and solutions enable customers to reduce the design times of their circuits and to maximize robustness and yield. MunEDA’s solutions are in industrial use by leading semiconductor companies in the areas of communication, computer, memories, automotive, and consumer electronics. www.muneda.com.
