Next Generation of Systems Design at Siemens
by Daniel Payne on 11-14-2024 at 8:00 am

Electronic systems design is filled with a wide range of tools used across IC packaging design, multi-board systems, design creation, physical implementation, electro-mechanical co-design, simulation & analysis, and new product introduction. Siemens has been offering tools in this flow for many years now, so I was able to meet by video with David Wiens, Product Marketing Manager, to get briefed on their next-generation release. The following three EDA tools have a new, unified GUI, along with cloud connectivity and AI smarts:

  • Xpedition – electronics system design for enterprise use
  • HyperLynx – high-speed system analysis and verification
  • PADS Professional – low-cost, integrated PCB design

The vision at Siemens is to enable integrated, model-based system engineering, so that teams of engineers working across the multiple domains of software, electrical, electronics and mechanical can collaborate throughout the design process. Industry trends reveal a workforce in transition, with a general shortage of engineers, mass electrification of industrial products, and volatility in the supply chain across the globe. We are now in a new era where AI techniques are being applied to the electronic design process, the cloud is used to connect the work of teams, and using EDA tools through intuitive GUIs improves productivity.

Next Generation

Across the new releases of the Xpedition, HyperLynx and PADS Professional tools, you quickly notice the consistent GUI, which has a modern look with more icons arranged in groups by function. Engineers will experience a short learning curve, making them more productive across the flow of these tools. Users can personalize how their icons are arranged, or even opt to go back to the classic look.

New, Unified GUI

As an engineer uses these tools, AI-infused predictive commands appear in the menu, based on usage patterns. Each customer sees their own predictive commands, based on their tool usage, and an expert can train a model and share it within the organization. Engineers can also use natural language to find new components for their system design. Simulations are optimized using predictive selection, so a design can be optimized without resorting to brute-force simulations across a large number of permutations, allowing you to explore the design space in a reasonable amount of time. SI/PI analysis on a large system can now run overnight, instead of taking hundreds of days.

Predictive Commands

This next generation of tools is also integrated with other Siemens products, like Teamcenter, NX and Simcenter, to support multi-domain design. There is partner PLM integration too, with Dassault and PTC. Model-based engineering happens through requirements decomposition and verification management in Xpedition.

Teams of engineers collaborate in real time using a cloud-connected environment, enabling easier design reviews, insight into supply chain availability, component research and sourcing, and even manufacturability assurance through DFM profile management. RoHS compliance can be met using supply chain insights from Supplyframe. IP integrity, accuracy and reliability are assured through managed access control based on each user’s role, permissions and geography.

Summary

Siemens has released new versions of Xpedition, HyperLynx and PADS Professional that sport a new, unified, modern GUI, making life more productive for PCB designers. AI features also benefit users by anticipating their next menu command and optimizing the number of simulations required. Collaboration is improved through cloud connectivity, making communication between team members faster. The PCB tools integrate throughout the systems design flow with both Teamcenter and NX software, enabling multi-domain design and analysis.

Samtec Paves the Way to Scalable Architectures at the AI Hardware & Edge AI Summit
by Mike Gianfagna on 11-14-2024 at 6:00 am

AI is exploding everywhere. We’ve all seen the evidence. The same thing is happening with AI conferences. The conference I will discuss here began in 2018 as the AI Hardware Summit. The initial venue was the Computer History Museum in Mountain View, CA. Like most things AI, this conference has grown substantially in a relatively short period of time. As you will notice, its name has grown, too, to encompass a larger mission. At the recent event, there was significant focus on scalability in the deployment of AI systems. Samtec was there to address this challenge head-on. I’ll provide a summary here of the company’s presence at the show and how Samtec paves the way to scalable architectures at the AI Hardware & Edge AI Summit.

Samtec’s Presentation

At the show, Matt Burns, global director of technical marketing at Samtec, presented Optimizing Data Routing in Copper & Optical Interconnects in Scalable AI Hardware Architectures. A rather long title, but there is a lot to address here.

Let’s take a look at some of the topics Matt covered.

AI Agents

Over the past few years, Gen AI has been a driver in the adoption of AI agents. ChatGPT was just a pivot point. Applications such as text-to-chat/audio/image/video are redefining the customer experience in many industries. The next revolution in AI capabilities will be using AI agents to supplement the user’s experience. The new “co-pilots” we are seeing from companies like Microsoft are good examples of this. Other examples improve code generation for simplicity and efficiency in real time for developers.

Enterprise AI

Similarly, Gen AI has been the driving force behind enterprise AI adoption. However, only a fraction of the Fortune 1,000 has really started implementing AI to improve processes internally.  As enterprises discover how to use AI foundation models or application-specific models with their own internal data, AI will then begin to impact the bottom line for innovative companies.  The hyperscalers are leading the charge, but other companies will eventually follow.

Increasing model sizes requires more compute, but . . .

AI models are growing in size and scale. ChatGPT uses GPT-3.5, which has 175 billion parameters. GPT-4 is rumored to approach 1 trillion parameters. Other models will soon approach 2 trillion parameters. Model sizes are growing exponentially every year. One GPU can’t handle all this.
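
To see why, here is a quick back-of-the-envelope sketch (my own, not from Matt’s talk), assuming FP16 weights and 80 GB of memory per accelerator:

```python
# Rough memory arithmetic: why one GPU can't hold a trillion-parameter model.
# Assumptions: FP16/BF16 weights (2 bytes each), 80 GB of HBM per GPU.
params = 1.0e12                  # ~1 trillion parameters
weights_gb = params * 2 / 1e9    # 2,000 GB of weights alone
gpus_min = -(-weights_gb // 80)  # ceiling division
print(f"{weights_gb:,.0f} GB of weights -> at least {gpus_min:.0f} GPUs, "
      "before counting activations, optimizer state, or KV cache")
```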

Literally, hundreds if not thousands of GPUs need to be linked to process the models in parallel. So, what’s the problem? AI compute performance is growing ~4.6x per year, but memory bus speeds are growing only ~1.3x per year and interconnect/fabric bus speeds only ~1.2x per year. Those are the bottlenecks. Routing high-speed protocols like HBMx, CXL, PCIe and others over optics is becoming the trend. Samtec demonstrated its CXL-over-optics solution at the show. The focus here is to position Samtec FireFly and Halo for some niche AI hardware applications.
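
Compounding those annual rates shows how quickly the gap widens; a simple sketch using the figures above:

```python
# Compound the stated annual growth rates over a few years
compute, memory, fabric = 4.6, 1.3, 1.2
for years in (1, 3, 5):
    print(f"{years} yr: compute outgrows memory by {(compute / memory) ** years:,.0f}x, "
          f"fabric by {(compute / fabric) ** years:,.0f}x")
```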

Insatiable data center demand, but how are we going to power them?

More GPUs mean more power. GPUs and other AI compute engines are approaching 2 kW per chip. That’s a lot of power. System architects need to figure out how to get massive power into a rack and chassis efficiently, in small form factors, at scale.

With these challenges as a backdrop, Matt presented the broad class of solutions for both copper and optical interconnect that Samtec offers. What is interesting about this show is that there are exhibits, but the footprint has always been limited to a table-top style of display. This keeps the focus on technology as opposed to fancy booth construction.

Samtec was at the show again this year, demonstrating its wide range of products for AI enablement.

Samtec booth at the show

To Learn More

If AI system scalability keeps you awake at night, Samtec can help. You can learn more about this unique company on SemiWiki here. And you can get an overview of Samtec’s AI capabilities here. You can even download a complete Artificial Intelligence/Machine Learning Solutions Guide here. As an aside, the conference is changing its name again. Next year’s event will be called AI Infra Summit. You can learn more about this change here.

And that’s how Samtec paves the way to scalable architectures at the AI Hardware & Edge AI Summit.


The Chips R&D Program Seeks to Accelerate Innovation
by Joseph Byrne on 11-13-2024 at 10:00 am

The CHIPS and Science Act has allocated $11 billion for semiconductor R&D, including for advanced packaging and AI-driven design. Companies should apply now.

In 2022, the United States enacted the $50 billion Chips and Science Act. Under the act, the National Institute of Standards and Technology (NIST), which is part of the US Department of Commerce, is administering $11 billion for research and development projects. Befitting its name, the Chips R&D effort seeks to foster innovation (research) and commercialization (development). A third goal is to nurture the workforce. Chips R&D targets four areas:

  1. The National Semiconductor Technology Center (NSTC), a public-private consortium to provide R&D facilities and equipment.
  2. The National Advanced Packaging Manufacturing Program (NAPMP).
  3. The Chips Manufacturing USA Institute to develop digital twin technologies for semiconductor manufacturing.
  4. Chips Metrology, which applies the science of measurement (a key part of NIST) to semiconductor materials, packaging, and production.

Funding Opportunities and Deadlines

NIST is doling out R&D awards in a notice of funding opportunity (NOFO) series. NOFOs from earlier this year target package substrates and establishment of the Chips Manufacturing USA Institute. Two open NOFOs include one applying artificial intelligence (AI) and autonomous experimentation (AE) to manufacturing and another targeting advanced packaging. In both cases, applicants’ first step is to submit a concept paper. Due dates are January 13, 2025, and December 20, 2024, for the AI/AE and packaging NOFOs, respectively, as Figure 1 shows. Local to NIST and having a writing background, I’m available to work with applicants on their submissions.

Figure 1. Timelines for Chips R&D packaging and AI/AE funding opportunities.

The Chips AI/AE for Rapid, Industry-Informed Sustainable Semiconductor Materials and Processes (Carissma) competition expects to disburse $100 million to organizations developing semiconductor materials. They must outperform existing materials and be better for the environment. The timeline is short—only five years for an investment to produce something the industry can test. Carissma requires the projects to be university led and apply AI/AE techniques.

Pushing Packaging Boundaries

Part of the NAPMP, the packaging NOFO will provide multiple awards totaling $1.55 billion and spans five R&D areas (RDAs in government lingo):

  1. Equipment, tools, processes, and process integration
  2. Power delivery and thermal management
  3. Connector technology, including photonics and radio frequency (RF)
  4. Chiplets ecosystem
  5. Codesign/electronic design automation (EDA)

Area Four indicates the NOFO’s thrust: extending the multi-die (and multidimensional) packaging technology found in products such as the AMD MI300X, Intel Ponte Vecchio, and Nvidia Blackwell. Examining this area also reveals the program’s vision and assumptions: thousands of wires will connect chiplets, packages will be ultra-large and house a thousand chiplets, and chiplets will be functionally and physically heterogeneous. It’s an unusual vision considering systems today contain tens or possibly hundreds of chips per chassis—not thousands of chips. For a few more details on the chiplet RDA, see my post at https://xpu.pub/2024/10/24/chips-act-packaging/.

The other four areas proceed from this vision. The first has two categories that applicants can address: either a specific step in the packaging flow or an end-to-end process linking the individual steps. The second offers four objectives that applicants can address, including actual power-delivery and thermal-management solutions and models.

Area Three addresses interpackage (not intrapackage) interconnect and covers three scales: less than 25 mm, less than 1 m, and less than 1 km. For the shortest distance, the goal is 100 Gb/s per channel and a shoreline bandwidth density of 10 Tb/s per mm. The latter parameter is the challenging one; 224 Gb/s SerDes are already in production. For the sub-meter and sub-kilometer scales, the minimum bandwidth is 100 Tb/s. A further challenge for all three distances is to achieve a 0.1 picojoule per bit ceiling. As Figure 2 shows, the interconnect among packages can be wired, wireless (RF), or photonic.

Figure 2. The interconnect area envisions scaling out designs, such as by employing wires, radios, and photonics to connect four thousand-chiplet packages. (Source: NIST)
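
It is worth unpacking what the sub-25 mm targets imply; a quick sketch of the arithmetic (my own reading of the numbers above):

```python
# Implications of the <25 mm interconnect targets
ch_rate_gbps    = 100   # Gb/s per channel
density_tbps_mm = 10    # Tb/s per mm of shoreline
energy_pj_bit   = 0.1   # pJ/bit ceiling

channels_per_mm = density_tbps_mm * 1000 / ch_rate_gbps  # 100 channels per mm
pitch_um        = 1000 / channels_per_mm                 # 10 um of shoreline each
power_mw_per_ch = energy_pj_bit * ch_rate_gbps           # 10 mW per channel
print(channels_per_mm, pitch_um, power_mw_per_ch)        # 100.0 10.0 10.0
```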

The final area is for software tools to aid design, security, resilience (e.g., fault tolerance), and integration, verification, and validation (IV&V). The EDA tools must handle designs employing any substrate, a thousand chiplets, 24-high chiplet stacks, a mixture of functionality (digital, analog, and optical), and various other components.

Write 10 Pages, Get $150 Million

It’s unusual for the United States to implement an industrial policy as directly interventionist as the Chips Act. Left alone, companies undoubtedly would develop technologies, but the Chips R&D effort is an opportunity for them to accelerate programs. Although applicants must be US-based and the act aims to bolster US manufacturing, operations and people can also be elsewhere. Recognizing this, attendees at the recent NAPMP NOFO conference came not only from American organizations but also from European and Asian companies with an affiliated US entity.

As noted above, the next step for those interested in the Carissma and NAPMP packaging NOFOs is to submit a concept paper. It must be no longer than 10 pages, making it possible to crank out in less than a month. It must broadly discuss the applicant’s project, including the technical area, execution plan, and budget. The Chips R&D review board will consider submissions on the basis of economic and national security, technical merit, project management, and ultimate impact. Applicants whose concepts are deemed meritorious must then submit a full application for final review. Grants will range up to $150 million, a significant sum for a large company and transformational for a smaller one. I urge US entities to apply. As noted above, I’m available to assist with concept papers and have the advantage of being local to NIST.

Joseph Byrne is an independent analyst and consultant. For more information, see xampata.com.



Tier1 Eye on Expanding Role in Automotive AI
by Bernard Murphy on 11-13-2024 at 6:00 am

The unsettled realities of modern automotive markets (BEV/HEV, ADAS/AD, radical views on how to make money) don’t only affect automakers. These disruptions also ripple down the supply chain, prompting a game of musical chairs, with each supplier aiming to maximize its chances of still having a chair (and a bigger chair) when the music stops. One area where this is very apparent is the tier immediately below the automakers (the Tier1s), who supply complete subsystems – electronics, mechanical and software – to be integrated directly into cars. They are making a play to offer more highly integrated services, as evidenced by a recent announcement from DENSO, the second largest of the Tier1s.

Zonal architecture (image courtesy of Jeff Miles)

More AI will drive more unified systems

There are plenty of opportunities for Tier1s around BEV/HEV power and related electronics (where DENSO also has a story), but here I want to focus on AI implications for automotive systems. AI systems are inevitably distributed around a car, but as capabilities advance, training and testing must comprehend the total system, which in piecemeal distributed systems will become increasingly impractical and may push towards unified supplier platforms. (There is talk of higher-speed communication shifting all AI to the center, but it’s not fast enough yet to meet that goal, and I worry about the power implications of shipping raw data from many sensors to the center.)

Take side mirrors as an example. Ten years ago the electronics for a side mirror were simple enough: just enough to control mirror orientation from a joystick in the driver’s armrest. But then we added cameras and AI to detect a motorbike or car approaching on the left or right, which at first simply flashed a light on the mirror housing to warn us not to change lanes. Now maybe we also want visual confirmation, outlining the vehicle in the side mirror or on the cabin console.

How much of that processing should be in (or near) the side mirror and how much in a zonal or even central controller? Questions like this don’t have pre-determined answers and depend very much on the total car system architecture, latency/safety requirements, communications speeds, and the software and AI models that are a part of that architecture. Is it possible to build a safe system when different suppliers are providing software, models, and hardware for the mirror, zonal controller and central controller? Yes in this limited context, but when this input is one of many on which ADAS or autonomous driving depends and the car crashes or hits a pedestrian, who is at fault?

OEMs already depend on Tier1s to deliver integrated and fully characterized subsystems, hardware and software combined. Perhaps now their scope should not be limited to modules. Distributed AI adds a new kind of complexity which ultimately must be proven in-system. Think about the millions or billions of miles which must be trained and tested in digital twins to provide high levels of confidence and safety. That’s difficult to commit to when the AI backbone components for sensing, edge NPUs, fusion, and safety systems are under the control of multiple suppliers. This objective seems more tractable when the whole system is under the control of one supplier. At least that’s how I think the Tier1s would see it.

DENSO and Quadric

DENSO announced very recently that they will acquire an intellectual property (IP) core license for Quadric’s Chimera general-purpose NPU (GPNPU) and that the two companies will co-develop IP for an in-vehicle semiconductor. This announcement is interesting for several reasons. First, it was initiated by DENSO, not by Quadric. Press releases from IP companies on license agreements are a dime a dozen, but DENSO had a larger goal: to signal that they are now getting into the semiconductor design game.

Second, DENSO has been an investor in Quadric for several years, tracking progress in NPU technologies along with a couple of other contenders. Now this upgrade from being simply an investor to being a licensee and co-developer is an important step forward for both companies.

The press release highlights DENSO’s expectation that the in-vehicle SoCs they build must be able to process large amounts of information at high speed. They are also attracted to the Chimera GPNPU’s ability to support DENSO adding their own AI capabilities in the future, which requires support for a wide variety of general-purpose operations. DENSO sees this profile as essential to support in-vehicle technologies and to accommodate AI advances in the future.

Feels like an important endorsement for Quadric. You can read the press release HERE.



Signal Integrity Basics
by Daniel Payne on 11-12-2024 at 10:00 am

PCB and package designers need to be concerned with Signal Integrity (SI) issues to deliver electronic systems that work reliably in the field. EDA vendors like Siemens have helped engineers with SI analysis using a simulator called HyperLynx, dating all the way back to 1992. Siemens even wrote a 56-page e-book recently, Signal Integrity Basics, so I’ll capture the essence of that in this blog.

Signal Integrity

A digital designer can start out by assuming that a signal has a perfectly shaped waveform, but when they measure that signal as it propagates along a PCB or package to some receiver, the signal shows analog distortions, like overshoot, plus a time delay to transit the interconnect.

Digital and analog waveforms

Overshoot comes from impedance mismatches and is followed by some ringing. Another waveform issue is Inter-Symbol Interference (ISI), where bits sent over a channel start to interfere with each other, making it hard for the receiver to resolve the correct data. Here’s what ISI looks like in a serial bit stream.

ISI effects in a serial bit stream

The bits are changing value so rapidly that the high and low levels never reach their proper values. The eye diagram for this channel has grown quite small, as the orange hexagon indicates, meaning that bit errors will be high.

Small eye diagram

Increasing the length of the channel or increasing the frequency of the channel will close the opening of the eye diagram.

Interconnects used in PCB designs always have delay, loss, and coupling, which impact signal integrity, so modeling the interconnect as a transmission line helps to understand and predict its behavior. The typical propagation velocity in a PCB is about 5.9 inches/ns. You can model a transmission line as a collection of resistors, inductors, and capacitors in order to simulate and predict signal fidelity.

Transmission line model
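
As a quick sanity check on that velocity figure, here is a small sketch (mine, not from the e-book), assuming a dielectric constant of about 4, typical of FR-4:

```python
import math

C_IN_PER_NS = 11.8  # speed of light in vacuum, ~11.8 inches/ns

def velocity_in_per_ns(dk_eff: float) -> float:
    """Propagation velocity for a given effective dielectric constant."""
    return C_IN_PER_NS / math.sqrt(dk_eff)

def trace_delay_ns(length_in: float, dk_eff: float) -> float:
    """One-way delay of a trace of the given length in inches."""
    return length_in / velocity_in_per_ns(dk_eff)

print(f"{velocity_in_per_ns(4.0):.1f} in/ns")         # 5.9, matching the text
print(f"{trace_delay_ns(6.0, 4.0):.2f} ns for 6 in")  # ~1.02 ns
```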

Two examples of transmission line types in PCB traces are microstrips and striplines. Delay and impedance along these transmission lines are impacted by the parameters of the particular trace. Delay along a microstrip is affected by the interconnect length, the dielectric constants, the height of the dielectric under the trace, and the width of the trace. Delay along a stripline is affected by the interconnect length and dielectric constant(s). The characteristic impedance, Z0, for both of these transmission line types is impacted by the dielectric constant(s), the height of the dielectric(s) around the trace, and the width of the trace.

Microstrip and stripline examples
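
To see how those parameters interact, the classic IPC-2141 closed-form approximation for surface microstrip is easy to code (a rough sketch only; field solvers such as those in HyperLynx are far more accurate):

```python
import math

def microstrip_z0(er: float, h_mil: float, w_mil: float, t_mil: float = 1.4) -> float:
    """IPC-2141 approximation for surface microstrip impedance in ohms.
    er: dielectric constant, h: dielectric height, w: trace width,
    t: trace thickness (1.4 mil is roughly 1 oz copper)."""
    return (87.0 / math.sqrt(er + 1.41)) * math.log(5.98 * h_mil / (0.8 * w_mil + t_mil))

# FR-4 (er ~= 4.2), 5 mil dielectric, 8 mil trace -> close to 50 ohms
print(f"{microstrip_z0(4.2, 5, 8):.1f} ohms")  # ~49.4
```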

These examples used uniform cross sections; however, changing the cross section of a trace introduces an impedance discontinuity, which in turn causes reflections in a signal. The idea is to minimize and manage discontinuities by:

  • Using short interconnect, relative to rise/fall times
  • Keeping consistent impedance along a trace
  • Avoiding or minimizing vias

In the following example there’s a 3.3V CMOS driver and receiver connected by a 50-ohm transmission line with a 10 ns delay from driver to receiver.

Reflections

The driver signal is shown in red, and it rises to 3V, short of the full 3.3V because of the driver’s output impedance. As the signal propagates along the transmission line it reaches the receiver, which has a high impedance and a reflection coefficient of 1, making the green signal reach 6V. A reflected 3V signal propagates back to the driver in 10 ns, where a reflection coefficient of -0.85 bumps the red driver signal. These reflections continue to bounce back and forth, changing the red and green voltages as ringing.
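
You can reproduce this ringing with a textbook lattice-diagram calculation; a minimal sketch below, where the ~4-ohm driver impedance is inferred from the stated -0.85 coefficient:

```python
def ringing(vs=3.3, z0=50.0, rs=4.0, gamma_rx=1.0, bounces=6):
    """Node voltages after each 10 ns one-way trip on a lossless line.
    vs: source swing, z0: line impedance, rs: driver output impedance,
    gamma_rx: receiver reflection coefficient (1.0 for a high-Z CMOS input)."""
    gamma_src = (rs - z0) / (rs + z0)  # ~ -0.85 for rs ~= 4 ohms
    wave = vs * z0 / (z0 + rs)         # ~3.0 V step launched by the driver
    v_drv, v_rx, at_rx = wave, 0.0, True
    print(f"t=  0 ns  driver   {v_drv:5.2f} V")
    for trip in range(1, bounces + 1):
        if at_rx:                      # wave arrives at the open receiver
            v_rx += wave * (1 + gamma_rx)
            wave *= gamma_rx
            print(f"t={trip * 10:3d} ns  receiver {v_rx:5.2f} V")  # ~6 V first
        else:                          # reflected wave returns to the driver
            v_drv += wave * (1 + gamma_src)
            wave *= gamma_src
            print(f"t={trip * 10:3d} ns  driver   {v_drv:5.2f} V")
        at_rx = not at_rx

ringing()  # voltages ring back and forth, settling toward 3.3 V
```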

Adding a series terminating resistor close to the source driver can mitigate the overshoot and ringing as shown below:

Series termination

Parallel termination configurations can also reduce overshoot and ringing.

Parallel termination

Multiple PCB traces placed in close proximity exhibit crosstalk, caused by capacitive and inductive coupling. Notice how the victim trace bounces around at the near end and the far end as the aggressor trace toggles.

Crosstalk example

Adding termination to the traces mitigates the bouncing, as does moving the traces farther apart; this holds for both microstrip and stripline configurations.

Differential Pairs

Another type of signal is the differential pair, which consists of two complementary signals, Vpos and Vneg:

Even mode is when both signals are the same, while odd mode has opposite values on the signals. Three termination examples are shown that produce a cleaner Vdiff signal.

Differential pair termination examples

Vias

A basic via is shown below; the signal via is shown in red, while the green vias are stitching vias that connect reference nets together between layers.

Via structure

Analysis of vias on a trace is done in the frequency domain using S-parameters and in the time domain using Time Domain Reflectometry (TDR). S21, the ratio of the signal out of port 2 to the signal injected into port 1, is called insertion loss. S-parameters have both magnitude and phase components. S11 is the return loss.

Via performance

S21, the insertion loss, has a dip at 15 GHz caused by the stub acting as a quarter-wave resonator.
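
The notch frequency follows directly from the stub length and the dielectric constant; a small sketch (assuming Dk of about 4):

```python
import math

def stub_notch_ghz(stub_len_mil: float, dk: float = 4.0) -> float:
    """Frequency where a via stub acts as a quarter-wave resonator,
    producing a notch in the insertion loss (S21)."""
    v = 11.8 / math.sqrt(dk)                      # ~5.9 in/ns in FR-4
    quarter_wave_ns = 4 * (stub_len_mil / 1000) / v
    return 1 / quarter_wave_ns                    # 1/ns = GHz

# A ~98 mil stub in Dk ~= 4 material resonates near 15 GHz, as in the plot
print(f"{stub_notch_ghz(98):.1f} GHz")
```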

The TDR plot is flat on the left and right, corresponding to the two 50-ohm traces, and in the middle there’s a bounce in the impedance caused by the via.

A number of modifications can be made to a layout that affect via performance. The impacts of a few of these are investigated in the eBook: the presence of non-functional pads, the size of antipads, and stub length.

Timing

PCB traces are characterized by timing parameters like edge rates and propagation delay from a driver to multiple loads, which then impact setup and hold times for digital circuits. Consider the time difference between the two signals of a differential pair, called skew, which is caused by mismatches in the trace layouts and changes the shape of Vdiff. As the skew increases, the edge rate of Vdiff slows down.

Differential skew example
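
A small numerical experiment (my own, assuming idealized 100 ps linear edges) shows the Vdiff edge rate degrading as skew grows:

```python
import numpy as np

t = np.linspace(0, 2, 2001)  # time axis in ns, 1 ps steps

def edge(t, t0, tr=0.1):
    """Idealized 0-to-1 V linear ramp starting at t0 with rise time tr (ns)."""
    return np.clip((t - t0) / tr, 0.0, 1.0)

for skew_ps in (0, 50, 100):
    vpos = edge(t, 1.0)
    vneg = 1.0 - edge(t, 1.0 + skew_ps / 1000.0)  # complement, delayed by skew
    vdiff = vpos - vneg                           # swings from -1 V to +1 V
    lo = t[np.searchsorted(vdiff, -0.6)]          # 20% point of the swing
    hi = t[np.searchsorted(vdiff, 0.6)]           # 80% point of the swing
    print(f"skew={skew_ps:3d} ps -> 20-80% Vdiff rise ~ {1000 * (hi - lo):.0f} ps")
```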

Increasing the skew also begins to close the eye diagram.

Differential skew eye diagram

Summary

The 56-page e-book from Siemens EDA does a thorough job of introducing signal integrity concepts, challenges and mitigation approaches to PCB professionals. High-speed digital design is challenging, but with understanding plus analysis, engineers can deliver reliable signals.

Read the entire e-book of Signal Integrity Basics, written by John Golding, Sr. AE Consultant, Siemens EDA.

My Conversation with Infinisim – Why Good Enough Isn’t Enough
by Mike Gianfagna on 11-12-2024 at 6:00 am

My recent post on a high-profile chip performance issue got me thinking. The root cause of the problem discussed there had to do with a clock tree circuit that was particularly vulnerable to reliability aging under elevated voltage and temperature. Chip aging effects have always gotten my attention. I’ve lived through a few of them in my career and they are, in a word, exciting. Perhaps frightening.

This kind of failure represents a ticking time bomb in the design. There are many such potential problems embedded in lots of chip designs. Most don’t ignite, but when one does, things can get heated quickly. I made a comment at the end of that post about Infinisim and how the company’s technology might have prevented the issue. I decided to dig into that topic a bit further to better understand the dynamics at play with clock performance. So, I reached out to the company’s co-founder and CTO. What I got was a master class in good design practices and good company strategy. I want to share my conversation with Infinisim and why good enough isn’t enough.

Who Is Infinisim?

You can learn more about Infinisim on SemiWiki here. The company provides a range of solutions that focus on accurate, robust full-chip clock analysis.

Several tools are available to achieve this result. One is SoC Clock Analysis, which helps to accurately verify timing, detect failures, and optimize performance of the clock in advanced designs. Another is Clock Jitter Analysis, which accurately computes power-supply-induced jitter of clock domains, a hard-to-trace effect that can cause many problems. And finally, Clock Aging Analysis helps to accurately determine the operational lifetime of power-sensitive clocks. It is this last tool that I believe could have helped with the chip issue discussed in my prior blog.

The tools offered by Infinisim use highly accurate and very efficient analysis techniques. The approach goes much deeper than traditional static timing analysis.

My Conversation With the CTO

Dr. Zakir H. Syed

I was able to spend some time speaking with Dr. Zakir H. Syed, co-founder and chief technology officer at Infinisim. Zakir has almost 30 years of experience in EDA. He was at Simplex Solutions (acquired by Cadence) from its inception in 1995 through the end of 2000. He has published numerous papers on verification and simulation and has presented at many industry conferences. Zakir holds an MS in Mechanical Engineering and a PhD in Electrical Engineering, both from Duke University.

Here are the questions I posed to Zakir and his responses.

It seems like Infinisim’s capabilities can provide the margin of victory for many designs. How are you received when you brief potential customers?

 Their response really depends on past experiences. If they’ve previously encountered issues—like anomalous clock performance, timing challenges, or yield problems—they tend to quickly see the value Infinisim brings and are eager to learn more. In my experience, these folks are few and far between, however.

This is a bit surprising. Why do you think this is the case?

It’s an interesting point. The issue isn’t that better performance isn’t desirable; it’s that there’s a general trend to accept less-than-optimal performance as the norm. Over time, parameters like timing, aging, jitter, yield, and voltage have been treated as “known quantities” and design teams rely on established margins to work within these expectations.

I’m beginning to see the challenge. If design teams are meeting the generally accepted parameters, why rock the boat?

Exactly. If the design conforms to the required margins, all is well. Designers are rewarded for meeting schedules. CAD teams are recognized for delivering an effective flow. And this continues until there is some kind of catastrophic failure. When that “ticking time bomb” goes off, suddenly every assumption is questioned, and a deep analysis begins.

I get your point. I wrote a blog recently that looked at a high-profile issue that was traced back to clock aging.

Yes, that issue could likely have been discovered with our tools before the chip was shipped to customers. In that case, aging effects came into play under certain operating conditions. Since N-channel and P-channel devices age differently, the result was a clock duty cycle that began to drift from the expected 50/50 ratio. Once the asymmetry became large enough, circuit performance began to fail.

So, what you don’t know can hurt you.

You’re right. But there’s also a bigger opportunity here. It’s not just about preventing catastrophic failures. Advanced nodes are costly, and we pay for that performance. By thoroughly examining circuit behavior across all process corners, we can leverage that investment to its fullest potential instead of leaving performance on the table with excessive margins. The same goes for yield, which directly impacts profitability. In today’s competitive chip design landscape, accepting less performance often means losing out on market share.

OK, the light bulb is going off. Now I see the bigger picture. Using tools like Infinisim’s doesn’t just prevent failures; it’s a strategic move toward maximizing profitability and competitiveness.

I think you’ve got it. When more people within a company—from engineers to executives—embrace this mindset, it leads to a stronger, more competitive organization. By challenging the status quo, companies can achieve more and realize their full potential.

To Learn More

You can learn more about the integrated flow offered by Infinisim here.  My conversation with Infinisim made it clear why good enough isn’t enough.


Build a 100% Python-based Design environment for Large SoC Designs
by Daniel Nenni on 11-11-2024 at 10:00 am

In the fast-evolving world of semiconductor design, chip designers are constantly on the lookout for EDA tools that can enhance their productivity, streamline workflows, and push the boundaries of innovation. Although Tcl is currently the most widely used language, it seems to be reaching its limits in the face of the growing complexity of chip designs. Under these conditions, Python appears to be the wisest choice among the programming languages and APIs available.

Today, Python is used more and more frequently, especially by young design engineers. Python offers a wide range of advantages. In terms of usability, its ease of debugging and execution speed open more possibilities compared to Tcl. What’s more, Python benefits from a very active community and a wide choice of open-source libraries. It therefore has a rightful place in EDA tool flows, and continuing to juggle dual Python/Tcl languages is counter-productive for design workflows.

1. One Unified Design Environment

Using Python for semiconductor design means working in a single, unified design environment. Indeed, Python gives engineers access to a wide range of libraries, design tools, and frameworks within a single ecosystem. This integration simplifies the design process significantly. Engineers can achieve their goals without having to switch from one language or platform to another. With all tools available in one place, the workflow becomes more cohesive and efficient, allowing for seamless transitions between design, reporting, simulation, and analysis.

2.  Ease of Learning and Use

Python’s simplicity and readability make it an excellent choice. Its straightforward syntax is easy to learn, allowing designers to focus on key concepts. Python also offers far more scripting possibilities than Tcl. This ease of use accelerates the learning curve, enabling engineers to quickly prototype and iterate on their designs.

3. Rich Ecosystem of Libraries and Tools

Python boasts a robust ecosystem filled with libraries specifically tailored for scientific computing, data analysis, and machine learning. Libraries such as NumPy, SciPy, and Pandas provide powerful tools for numerical computations, while TensorFlow and PyTorch can be leveraged for machine learning applications in chip design. This wide range of resources enables engineers to implement sophisticated algorithms and analyses without the hassle of integrating disparate tools.

Semiconductor chip design also involves analyzing large datasets and visualizing complex processes. Python provides robust libraries like Matplotlib and Seaborn for data visualization, helping engineers to better understand their designs and make data-driven decisions. This capability is crucial for optimizing chip performance and functionality.
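
As a trivial illustration (not from the article) of why this matters day to day, a timing report pulled into pandas becomes immediately queryable:

```python
import pandas as pd

# Hypothetical slack data, as might be exported from any timing tool
paths = pd.DataFrame({
    "path":  ["cpu/alu", "cpu/lsu", "ddr/phy", "usb/ctrl"],
    "slack": [0.12, -0.05, 0.30, -0.18],  # ns
})

violations = paths[paths["slack"] < 0].sort_values("slack")
print(violations)                             # worst violators first
print(f"WNS: {paths['slack'].min():.2f} ns")  # worst negative slack
```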

4. Adoption in Academia and Industry

Python is now commonly taught in higher education establishments. As a result, young engineers who are proficient in Python will be better prepared for the job market. Industry has also integrated Python into its design flows. Many companies now specifically look for candidates with Python skills, making it a valuable asset for career advancement.

5. Defacto’s SoC Compiler is 100% Python compliant

Defacto’s SoC Compiler provides full support for all of its capabilities through an object-oriented Python API. Defacto made the choice two decades ago to build its software with Python as a built-in API, and today this allows many users to benefit from the power of the language. Defacto estimates that more than 60% of its users have switched to its Python API. This switch has enabled top semiconductor companies to better integrate new SoC Compiler-based applications into their SoC design environments, to develop additional applications, to fit into corporate-wide decisions to use Python in EDA, and more.

Defacto engineers also provide close support to customers, helping them migrate from the Tcl API to the Python API and build custom Python-based applications.

Figure 1 – Defacto’s SoC Compiler flow

A typical case study using the Defacto Python API is RTL code generation with open-source libraries, to help generate and build a complete SoC at RTL.

Figure 2 below illustrates an example of a design environment for RTL code generation using open-source libraries (like Chisel) together with Defacto’s SoC Compiler, which provides the capabilities to edit and build top-level subsystems and SoCs. This one-stop, 100% Python-based design and debug environment increases design efficiency for SoC architects and RTL designers.

Figure 2: Python-based Integrated Design Environment
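
Purely to illustrate what an object-oriented, Python-native assembly flow can feel like, here is a hypothetical sketch; the module, class, and method names below are invented for this example and are not Defacto’s actual API:

```python
# Hypothetical API sketch, not Defacto's actual interface
from soc_compiler import Design  # invented module name

top = Design("my_soc_top")
cpu = top.instantiate("riscv_core", name="cpu0")    # pull IP from an RTL library
noc = top.instantiate("axi_interconnect", name="noc0")
top.connect(cpu.port("m_axi"), noc.port("s_axi0"))  # stitch buses at the top level
top.check()                                         # lint the assembled design
top.export_rtl("my_soc_top.v")                      # emit RTL for the downstream flow
```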

Python is the future of the EDA industry. With two decades of maturity in providing a Python API, Defacto’s SoC Compiler is a strong weapon for building next-generation SoC flows.

For more information about Defacto products, visit their website: https://defactotech.com/


Keysight EDA 2025 launches AI-enhanced design workflows
by Don Dingee on 11-11-2024 at 6:00 am

The upcoming Keysight EDA 2025 launch has three familiar tracks: RF circuit design, high-speed digital circuit design, and device modeling and characterization. However, this update features a common thread between the tracks – AI-enhanced design workflows. AI speeds modeling and simulation, opening co-optimization for complex designs. It also gives design teams more freedom to incorporate Keysight EDA tools into their workflows with Python customization. Here is a preview of what designers can expect, including some short videos on each of the three tracks, with more details to come in a multi-region, multi-track live and archived webinar event.

RF circuit designers move into a 3DHI co-design cockpit

Keysight Advanced Design System (ADS) is unmatched as the state-of-the-art platform for RF design and multi-domain co-simulation. Python scripting features already in ADS provide the capability for automating tasks and customizing the user interface. However, RF design complexity continues to grow, typified by the emergence of 3D heterogeneous integration (3DHI) techniques with dense multi-technology packaging.

Rising complexity creates a pressing need to insert RF designs into appropriate system contexts for simulation. However, workflows cannot tolerate the potential of spiraling simulation run times for comprehensive, realistic evaluations with more data points and swept parameters, which could force users to limit how frequently crucial RF simulations execute. Leaving unpredictable real-world effects undetected until physical prototypes is a poor choice.

Fortunately, it’s a choice ADS users won’t face. The previous phase of Keysight EDA research concentrated on broadening the analysis types in ADS, unifying measurement science with Keysight’s test and measurement instrumentation, and speeding simulations with innovative algorithms such as compact test signals, fast envelope techniques, and distortion EVM.

This new phase in Keysight EDA 2025 re-engineers the core simulation platform in ADS to provide external programmatic simulation control through an application programming interface (API), including Jupyter Notebook support. The API also enables new levels of Python customization, including user interfaces, importing layout or modeling data for simulation, creating visualizations for simulation results, and training artificial neural network (ANN) models. The newly re-engineered core delivers as much as 6x improvement in simulation times.

The result transforms ADS into a co-design cockpit where teams can efficiently manage multi-domain RF design and simulation in one open environment. This cockpit minimizes design manipulation while enabling comprehensive, accurate simulation as often as desired earlier in workflows. It also prepares ADS for future growth in RF design complexity and AI-driven command invocation. Floating license packs can set up multiple users for parallel basic analyses, a power user for high-performance specialized analysis, or any combination that makes sense for a design workflow.

High-speed digital circuit design gets enhanced crosstalk analysis

One of the most prominent 3DHI techniques is chiplets, with many teams interested in or pursuing designs based on the Universal Chiplet Interconnect Express (UCIe) specification. UCIe seeks to create an ecosystem where chiplets from different technology nodes can interoperate within a single package, and ongoing enhancements to the specification target optimized die-to-die signaling, improving performance.

Signal integrity is the biggest issue in achieving reliable UCIe designs. As interconnect speeds increase, signal integrity concerns are growing. Teams must carefully analyze UCIe designs, examining all metrics simultaneously to avoid the pitfall of optimizing one metric at the expense of degrading others. To make a comprehensive die-to-die interconnect layout and analysis possible, Keysight created Chiplet PHY Designer, an extension to ADS that provides UCIe simulation and enhanced analysis of the voltage transfer function (VTF) and forward clocking.

In the EDA 2025 update, one Keysight focus for chiplets is enhanced support for quarter data rate (QDR) clocking. Approved as an addition to the UCIe 2.0 specification in August 2024, QDR provides a path to lower UCIe clock rates, reducing design risk while still offering high-performance data transfer rates. Simulating QDR in ADS essentially repeats PHY analysis four times, once for each clock phase. AI enters the equation to help Chiplet PHY Designer visualize VTF crosstalk and VTF loss masks for different data rates and automatically model and optimize link design parameters for best results.

Device model re-centering improves speed by an order of magnitude

Creating process design kits (PDKs) for advanced semiconductors such as III-V technology can be tedious. Engineers try to fit a basic set of measurements into an existing model for a previous device in what is known as model re-centering. However, the fit is often less than ideal and may only work for a tightly bounded set of operating conditions, such as bias voltages or frequency ranges. If the application context changes, a new set of exhaustive measurements could take months. Without more measurement data, partially re-centered models can lack fidelity, leading to inaccurate simulation results.

Model re-centering fidelity is imperative with devices applied in more complex designs for wireless standards featuring higher-order modulation and broader bandwidths. Too much of a difference between simulations and measurements under parameter sweeps manifests as a significant risk of prototype failure.

The EDA 2025 update includes a refresh of Keysight IC-CAP with its ANN Toolkit leveraging AI to quickly re-center models spanning more parameters without exhaustive measurements, reducing the model re-centering process to hours instead of weeks and lowering the expertise required to obtain accurate modeling results.

Learn more at the Keysight EDA 2025 launch event

These are just some of the capabilities in Keysight EDA 2025. It’s also important to note that many EDA 2025 RF circuit design discussions apply to Cadence, Siemens, and Synopsys design platform users considering Keysight RFPro Circuit (with its similar next-generation core simulation technology) or ADS, depending on their workflow.

To help current and future users understand the latest enhancements in Keysight EDA 2025, including AI-enhanced design workflows for RFICs, chiplets, and PDKs, Keysight is hosting two live online launch events on December 3rd in European and American time zones. Designers can register for a track at either live event and view other tracks on demand.

See the Keysight EDA 2025 event page for more information and registration:

Keysight EDA 2025 Product Launch Event


Podcast EP260: How Ceva Enables a Broad Range of Smart Edge Applications with Chad Lucien
by Daniel Nenni on 11-08-2024 at 10:00 am

Dan is joined by Chad Lucien, vice president and general manager of Ceva’s Sensing and Audio Business Unit. Previously he was president of Hillcrest Labs, a sensor fusion software and systems company, which was acquired by Ceva in July 2019. He brings nearly 25 years of experience, having held a wide range of roles across software, hardware, and investment banking.

Dan explores the special requirements for smart edge applications with Chad. Both small, low-power embedded AI and more demanding edge applications are discussed. Chad describes the three pillars of Ceva’s smart edge support – Connect, Sense and Infer.

Dan explores the capabilities of the new Ceva-NeuPro™- Nano NPU with Chad. This is the smallest addition to the product line that focuses on hearable, wearable and smart home applications, among others. Chad explains the benefits of Ceva’s NPU line of IP for compact, efficient implementation of AI at the edge.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Bijan Kiani of Mach42
by Daniel Nenni on 11-08-2024 at 6:00 am

Bijan’s role includes overseeing all product groups, field engineering, customer support, strategy, sales, marketing, and business development. His previous appointments include VP of Marketing at Synopsys Design Group and CEO of InCA. He holds a PhD in Electrical Engineering.

Tell us about your company 
Mach42 is a verification acceleration company, a spinout from Oxford University delivering the next step-change in simulation acceleration. We are an early-stage software company developing state-of-the-art machine learning and artificial intelligence technology to simplify, automate, and accelerate simulation tasks. We leverage proprietary neural network technology to accelerate expensive calculations, and we do it with minimal data and high accuracy, providing orders-of-magnitude speedups in verification. Our platform is already delivering a substantial competitive advantage to our early customers.

The company’s innovative technology has been covered in scientific articles such as Nature Physics and Science Magazine. In May 2023, the company announced that First Light Fusion, the University of Oxford, the University of York, Imperial College London, and Mach42 will collaborate under a $16 million grant award from UK Research and Innovation’s Prosperity Partnership program (more details here). Our solution was selected to support this consortium.

We have offices in the UK and California. Our core R&D team is based in the UK, and our US office provides business and technical support. We closed our pre-Series A funding in September 2023, and the link below provides more details about the company’s vision, investors, and co-founders: Machine Discovery Secures $6 Million to Deliver AI Tools For Semiconductor Design (prnewswire.com).

Mach42 was previously known as Machine Discovery.

What problems are you solving? 
Our flagship product, the Discovery Platform, allows you to exhaustively explore the design space in minutes, enabling you to identify potential out-of-spec conditions. As a companion to SPICE engines, the Discovery Platform leverages our breakthrough AI technology for faster and exhaustive design verification.

What application areas are your strongest? 
The Discovery Platform has demonstrated its shift-left ROI benefits in multiple complex applications, including PMIC, SerDes, and RF designs. It delivers accurate and secure design representations to explore the entire space in minutes.

Applications:
– Quickly and efficiently explore the design space in minutes
– Generate an AI-powered model of your design
– Analyze chip, package, and board-level LRC effects
– Generate a secure model of your design to share with third parties

What does the competitive landscape look like and how do you differentiate?
Mach42 is the first to market with its AI-powered platform to accelerate complex verification tasks.

In the coming years, trillions of dollars of revenue will be generated from new product developments in the engineering market. With this growth, establishing early design insights using multi-physics simulation solutions will be vital to getting new products to market. We are uniquely positioned as a pure-play AI company serving the semiconductor industry.

What new features/technology are you working on? 
Our vision is to cut the semiconductor design development cycle in half by leveraging our proprietary artificial intelligence technology. Thanks to the team’s experience and expertise, we are ideally positioned to drive the development of new technology to accelerate and improve all levels of product development, from design to verification, test development, and IP security.

By combining advanced simulation technology, cloud computing, and neural network technology, we make it possible to predict analog circuit design performance at the click of a button.

How do customers normally engage with your company?
Via our website Mach42 or email info@mach42.ai
