A Modeling, Simulation, Exploration and Collaborative Platform to Develop Electronics and SoCs
by Daniel Payne on 03-26-2024 at 10:00 am

During the GOMACTech conference held in South Carolina last week, I had a Zoom call with Deepak Shankar, Founder and VP Technology at Mirabilis Design Inc., to ask questions and view a live demo of VisualSim – a modeling, simulation, exploration and collaborative platform for developing electronics and SoCs. What makes VisualSim so distinctive is that it comes bundled with about 500 high-level IP blocks ready to use, including some 100 processors (35 of them Arm) and over 30 different interconnects. Users of VisualSim quickly connect these IP blocks together visually to create their systems, complete with networks. An automotive designer can model the entire network, including 5G communications, Ethernet, SDA and OTA updates, with VisualSim.

A high-level model allows for the quickest architectural exploration and trade-offs, well before implementation even begins with RTL code. You can model complex components like buses, memories and even caches, measuring things like end-to-end delays and latency. Engineers can measure their cache hit/miss ratios and see what happens with requests to L2 caches. All the popular interconnect protocols are modeled: AXI, CHI, CMN600, Arteris NoC, UCIe, etc.
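To make this concrete, here is a minimal sketch – assuming purely illustrative hit rates and latencies, since real VisualSim models are assembled graphically rather than coded – of the kind of high-level cache/latency experiment such a platform runs:

```python
import random

# Hypothetical high-level model: a stream of memory requests flowing through
# an L1/L2 hierarchy. All rates and latencies are illustrative placeholders.
L1_HIT_RATE, L2_HIT_RATE = 0.90, 0.70
L1_CYCLES, L2_CYCLES, DRAM_CYCLES = 2, 12, 90

def request_latency(rng):
    """End-to-end latency (in cycles) of one memory request."""
    if rng.random() < L1_HIT_RATE:
        return L1_CYCLES
    if rng.random() < L2_HIT_RATE:                  # L1 miss, L2 hit
        return L1_CYCLES + L2_CYCLES
    return L1_CYCLES + L2_CYCLES + DRAM_CYCLES      # miss all the way to DRAM

rng = random.Random(42)
lat = [request_latency(rng) for _ in range(100_000)]
print(f"mean latency {sum(lat) / len(lat):.1f} cycles, worst {max(lat)} cycles")
```

Sweeping the hit rates or latencies and re-running is exactly the kind of what-if exploration a high-level model makes cheap, long before RTL exists.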

With this modeling approach an architect can model an SoC, a complete aircraft or an automotive system, and then measure its performance to see if it meets the requirements. VisualSim is a multi-domain simulator that can integrate analog, software, power systems, digital and networking into a single model.

For the live demo, Deepak showed me a chiplet-based design with separate chiplets for the DSP, GPU, AI processor and CPU, all connected using UCIe; each IP block was parameterized to allow for customization and exploration.

Demo Chiplet System with CPU, DSP, GPU, IO, AI

Pushing into the UCIe block revealed an IP called a UCIe switch, which a user can customize with five parameters, all at a high level.

UCIe Switch parameters

A router IP block had 10 parameters for customization.

Router parameters

To find an IP block there is a scrollable list on the left-hand side of the GUI, organized by IP family. In a matter of seconds you can browse, select and start customizing an IP.

IP block list

In VisualSim you connect each IP in the dataflow, staying at a high level. The next live demo was a multimedia system design; simulating 20 ms took about 15 seconds of wall time, running on a laptop. While the simulation is running you can view system performance as instantaneous power, measure pipeline utilization, cache utilization and memory usage, and even view a timing diagram. This simulation triggered 7.5 million events, and the customer built the model – covering the entire SoC – in under two weeks.

Multimedia system, timing diagram

Another customer example that Deepak mentioned included 45 masters and was completed, fully tested, in about 4 weeks.

You can look inside any of the IP blocks and analyze metrics like pass/fail, then understand why a test failed. There’s even an AI engine to help analyze data more efficiently, like finding a buffer overflow that caused a failure. While your model is running, analytics are captured to help measure system performance and identify architectural bottlenecks.

VisualSim is updated twice per year, with patch updates whenever new IP blocks are added. An architect defines requirements in an Excel file, with metrics like latency limits and buffer occupancy.

Requirements file

Users of VisualSim can define the range of payload sizes in bytes, speed ranges and preferred values. Your system model can be swept across these combinations to find the best set of parameters. The simulator even understands how to explore the min, max and preferred values, and you define which system parameters get explored.
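As a rough illustration of such a sweep – the parameter ranges and latency model below are hypothetical stand-ins, not VisualSim syntax – consider:

```python
from itertools import product

# Hypothetical (min, preferred, max) exploration ranges, in the style of the
# requirements file described above.
PAYLOAD_BYTES = (64, 256, 1024)
LINK_GBPS = (8, 16, 32)
LATENCY_LIMIT_US = 1.5  # example requirement: end-to-end latency limit

def simulate(payload, gbps):
    """Stand-in for a real simulation run; returns latency in microseconds."""
    serialization_us = payload * 8 / (gbps * 1e3)   # bits pushed over the link
    return serialization_us + 0.8                   # plus fixed fabric overhead

for payload, gbps in sorted(product(PAYLOAD_BYTES, LINK_GBPS),
                            key=lambda p: simulate(*p)):
    latency = simulate(payload, gbps)
    verdict = "PASS" if latency <= LATENCY_LIMIT_US else "FAIL"
    print(f"{verdict} {latency:5.2f} us  payload={payload:5d} B  link={gbps:2d} Gbps")
```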

Multimedia System

A multimedia system demo was shown next and then simulated live. For an FPGA block you choose the vendor and part number; after a simulation has run you can see the latency for each task and the channel statistics of the NoC. A power plot was shown for 1 second of operation using Xilinx Versal parts.

Power Plot

All of the live demos ran on a Windows laptop; Unix and Mac are also supported. Running VisualSim requires minimal hardware infrastructure, because the models are high level.

VisualSim users receive over 500 pre-built examples to get started quickly, like a complete communication system with an antenna, transceiver, FPGA with baseband, and Ethernet interface. System architects using VisualSim can collaborate with all the low-level specialists, like RTL designers.

System-level trade-offs can be modeled and evaluated, like:

  • Changing from 64-QAM to QPSK modulation
  • Moving from a faster to a slower processor
  • Changing Ethernet specs

If you start with VisualSim to model, implement, then measure, expect to see 95% accuracy compared to RTL implementation results. The promise of using high level models is to eliminate performance issues prior to implementation or integration. There really is no coding required for an entire system model.

Mirabilis has 65 customers worldwide so far and some 250 completed projects. Well-known clients include NASA, Samsung, Qualcomm, Broadcom, GM, Boeing, HP, Imagination, Raytheon, AMD and Northrop Grumman.

Summary

In the old days a systems designer may have sketched ideas on a napkin at a restaurant, then gone back to work and cobbled together Excel spreadsheets with arcane equations to create a model. Today there’s a new choice: giving VisualSim from Mirabilis a try. You can now model an entire system in just a few weeks, making architectural trade-offs while running actual simulations, all before getting into detailed implementation.

Weebit Nano Brings ReRAM Benefits to the Automotive Market
by Mike Gianfagna on 03-26-2024 at 6:00 am

Non-volatile memory (NVM) is a critical building block for most electronic systems. The most popular NVM technology has traditionally been flash. As a discrete part, the technology can be delivered in various form factors; for embedded applications, however, flash presents scaling challenges. A new NVM technology developed by Weebit Nano is called ReRAM. Sometimes called RRAM, this approach stores bits as resistance rather than as the charge prevalent in other memory technologies. NVM is used in many parts of automotive systems, as shown in the diagram at the top of this post. The problem is that automotive systems present many challenges around operating temperature, safety and reliability. These hurdles hampered the use of ReRAM in embedded applications, until recently. Read on to see how Weebit Nano brings ReRAM benefits to the automotive market.

Weebit Nano Opens Access to Automotive Applications

Back in November of last year, Weebit Nano announced that its ReRAM IP achieved high-temperature qualification in SkyWater Technology’s 130nm CMOS (S130) process. The announcement detailed qualification up to 125 degrees Celsius – the temperature specified for Grade-1 automotive applications. This temperature range also opens up industrial, aerospace and other high-temperature applications. You can read the details of the announcement here.

Last month, the company raised the bar on automotive access by detailing high reliability and endurance at extreme temperatures and after extensive cycling. Specifically, high endurance was demonstrated at 100K flash-equivalent cycles, and high-temperature stability was demonstrated over lifetime operation at 150 degrees Celsius, including cycling and retention. The details are shown in the image below. This clearly moves ReRAM much closer to mainstream use in automotive applications.

Image: Resistance distribution after 100K cycles at 150C. The Weebit performance demonstrates good BER throughout the entire 100K cycles at hot temperatures.

Coby Hanoch, Weebit Nano’s CEO commented, “The performance levels we’re achieving align with requirements specified by automotive companies. Demonstrating the resilience of Weebit ReRAM under these conditions will continue to enhance our position in this domain. Our latest results reaffirm the viability of Weebit ReRAM for use in microcontrollers and other automotive components, as well as numerous other applications requiring high-temperature reliability and extended endurance. Weebit ReRAM is ideal for these applications, offering advantages including ease of integration, cost effectiveness, power efficiency and tolerance to radiation and electromagnetic fields.”

You can read the full text of the announcement here.

A Closer Look at the Technology and the Challenges

According to the International Roadmap for Devices and Systems, 2022 Edition:

One challenge is the need of a new memory technology that combines the best features of current memories in a fabrication technology compatible with CMOS process flow and that can be scaled beyond the present limits of SRAM and FLASH.

Weebit Nano’s ReRAM technology offers a very cost-effective solution to this NVM need. Some specifics of the technology include:

  • Two-mask adder
    • Very few added steps compared to other NVM technologies
    • Lower wafer cost than competing NVM technologies
  • Fab-friendly materials
    • No contamination risk, no special handling, etc.
  • Using existing deposition techniques and tools
    • Easy to integrate into any CMOS fab
  • BEOL technology
    • Stack between any two metal layers
    • No interference with FEOL – Easier to embed with existing analog and RF circuits
    • Easy to scale from one process variation to another

Some of the growing needs for emerging automotive NVM applications include code storage, trimming and data logging. Weebit ReRAM delivers high-temperature reliability, immunity to EMI, endurance, fast switching speed, longevity, and secure operation. And the technology can scale to the most advanced process nodes.

Automotive chips have unique requirements, such as design for safety, security and longevity. Devices must be reliable against extreme temperatures, EMI, vibration, and humidity. Fast boot, instant response, and frequent over-the-air updates must also be supported. All these requirements mean advanced process nodes are adopted quickly, and this is where Weebit Nano’s technology shows great promise.

General ICs are qualified according to JEDEC standards – this is the baseline for consumer application markets. The automotive industry follows AEC-Q100 standards (Stress Test Qualification for Integrated Circuits). For automotive qualified ICs, tests are much stricter than those of an industrial or commercial IC. These stringent qualification tests assure reliable operation and long lifetimes in harsh automotive environments.

This is why Weebit Nano’s advanced testing work is so significant for automotive applications. The technology is also relevant for a wider range of applications, as shown in the figure below.

ReRAM Addresses a Broad Range of Application Requirements

To Learn More

You can learn more about the benefits of ReRAM technology here. You can also learn about the application of Weebit Nano’s ReRAM to power management here. Weebit Nano presented at the recent IEEE Electron Devices Technology and Manufacturing (IEEE EDTM) Conference; you can view the presentation here. And that’s how Weebit Nano brings ReRAM benefits to the automotive market.


2024 DVCon US Panel: Overcoming the challenges of multi-die systems verification
by Daniel Nenni on 03-25-2024 at 10:00 am

DVCon was very busy this year. Bernard Murphy and I were in attendance for SemiWiki, and he has already written about it. Multi-die and chiplets were again a popular topic. Lauro Rizzatti, a consultant specializing in hardware-assisted verification, moderated an engaging panel, sponsored by Synopsys, focused on the intricacies of verifying multi-die systems. The panel, which attracted a significant audience, included esteemed experts: Alex Starr, a Corporate Fellow at AMD; Bharat Vinta, Director of Hardware Engineering at Nvidia; Divyang Agrawal, Senior Director of RISC-V Cores at Tenstorrent; and Dr. Arturo Salz, a distinguished Fellow at Synopsys.

Presented below is a condensed transcript of the panel discussion edited for clarity and coherence.

Rizzatti: How is multi-die evolving and growing? More specifically, what advantages have you experienced? Any drawback you can share?

Starr: By now, I think everybody has probably realized that AMD’s strategy is multi-die. Many years ago, ahead of the industry, we made a big bet on multi-die solutions, including scalability and practicality. Today, I would reflect and say our bet paid off to the extent that we’re living that dream right now. And that dream looks something like many dies per package, using different process geometries for each of those dies to exploit the best of each technology in terms of I/O versus compute and power/performance trade-offs.

Vinta: Pushed by the demand for increasing performance from generation to generation, modern chip sizes are growing so huge that a single die can no longer accommodate the capacity we need. Multi-die, as Alex put it, is here right now; it’s becoming a necessity not only today but into the future. A multitude of upcoming products are going to reuse chiplets.

Agrawal: Coming from a startup, I have a slightly different view. Multi-die gets you the flexibility of mixing and matching different technologies. This is a significant help for small companies since we can focus on what our core competency is rather than worrying about the entire ecosystem.

Salz: I agree with all three of you because that’s largely what it is. Monolithic SoCs are hitting a reticle limit; we cannot grow them any bigger, and they are low-yield, high-cost designs. We had to switch to multi-die, and the benefits include the ability to mix and match different technologies. Now that you can mount and stack chiplets, the interposer has no reticle limit; hence, there is no foreseeable limit for each of these SoCs. Size and capacity become the big challenge.

Rizzatti: Let’s talk about adoption of multi-die design. What are the challenges to adopt the technology and what changes have you experienced?

Starr: We have different teams for different chiplets. All of them work independently but have to deliver on a common schedule to go into the package. While the teams are inherently tied, they are slightly decoupled in their schedules. Making sure that the different die work together as you’re co-developing them is a challenge.

You can verify each individual die, but, unfortunately, the real functionality of the device requires all of those dies to be there, forcing you to do what we used to call SoC simulation – I don’t even know what a SoC is anymore – you now have all of those components assembled together in such multitude that RTL simulators are not fast enough to perform any real testing at this system level. That’s why there has been a large growth in emulation/prototyping deployment because they’re the only engines that can perform this task.

Vinta: Multi-die introduces a major challenge when all the dies share the same delivery schedule. To meet the tapeout schedule, you not only have to perform die-level verification but also full-chip verification. You need to verify the full SoC under all use-case scenarios.

Agrawal: I tend to think of everything backwards from a silicon standpoint. If your compute die is coming in a little early, you may have a platform on which to do silicon bring-up rather than waiting for everything else to come in. What if my DDR is busted? What if my HBM is busted? How do you compare, combine, and mix and match those things?

Salz: When you get into system level, you’re not dealing with just a system but a collection of systems communicating through interconnect fabrics. That’s a big difference that RTL designers are not used to thinking about. You have jitter or coherency issues, errors, guaranteed delivery – all things engineers commonly deal with in networking. It really is a bunch of networks on the chip, but we’re not thinking about it that way. You need to plan this out all the way at the architectural level. You need to think about floor planning before you write any RTL code. You need to think about how you are going to test these chiplets. Are you going to test them each time you integrate one? What happens to different DPM or yields for different dies? Semiconductor makers are opportunistic: if you build a 16-core engine and two of the cores don’t work, you label it as an eight-core piece and sell it. When you have 10 chiplets, you can get a combinatorial number of product variants running into the millions. It can’t work that way.

Rizzatti: What are the specific challenges in verification and validation? Obviously, you need emulation and prototyping, can you possibly quantify these issues?

Starr: In terms of emulation capacity, we’ve grown 225X over the last 10 years and a large part of that is because of the increased complexity of chiplet-based designs. That’s a reference point for quantification.

I would like to add that, as Arturo mentioned, the focus on making sure you’re performing correct-by-construction design is more important now than ever before. In a monolithic single-die environment you could get away with SoC-level verification and just catch bugs that you may have missed in your IP. That is really hard to do in a multi-die design.

Vinta: With the chiplet approach, there is no end in sight to how big a chip can grow. System-level verification of the full chip calls for huge emulation capacity, particularly for the use cases that require full system emulation. It’s a challenge not only for emulation but also for prototyping. The capacity could easily increase an order of magnitude from chip to chip. That is one of my primary concerns, in the sense of “how do we configure emulation and prototyping systems that can handle these full system-level sizes?”

Agrawal: With so many interfaces connected together, how do you even guarantee system-level performance? This was a much cleaner problem to address when you had a monolithic die, but when you have chiplets the performance is the least common denominator of all the interfaces – the hoops that a transaction has to go through.

Salz: That’s a very good point. By the way, the whole industry hinges on having standard interfaces. The future when you can buy a chiplet from a supplier and integrate it into your chip is only going to be possible if you have standard interfaces. We need more and better interfaces, such as UCIe.

By the way you don’t need to go to emulation right away. You do need emulation when you’re going to run software cycles, at the application-level, but for basic configuration testing you can use a mix of hybrid models and simulation. If you throw the entire system at it, you’ve got a big issue because emulation capacity is not growing as fast as these systems are growing, so that’s going to be a big challenge too.

Rizzatti: Are the tools available today adequate for the job? Do you need different tools? Have you developed anything inhouse that you couldn’t find on the market?

Starr: PSS portable stimulus is an important tool for chiplet verification. It’s because a lot of functionality of these designs is not just in RTL anymore, you’ve got tons of firmware components, and you need to be able to test out the systemic nature of these chiplet-based designs. Portable stimulus is going to give us a path to have a highly efficient, close to the metal stimulus that can exercise things at the system-level.

Vinta: From the tools and methodologies point of view, given that there is a need to do verification at the chiplet level as well as at the system level, you would want to simulate the chiplets individually and then, if possible, simulate at full system level. The same goes for emulation and prototyping: emulate and prototype at the chiplet level as well as at the system level if you can afford to do it. From the tools perspective, chiplet-level simulation is pretty much like monolithic chip simulation; verification engineers are knowledgeable about and experienced in that methodology.

Agrawal: No good debug tools are out there where you could combine multiple chiplets and debug something.

From a user standpoint, if you have a CPU-based chiplet and you’re running a SPEC benchmark or 100 million instructions per workload on your multi-die package and then something fails – maybe it’s functional, maybe it’s performance – where do you start? What do you look at? If I bring that design up in Verdi it would take forever.

When you verify a large language model, run a dataflow graph, and place different pieces or snippets of the model across different cores – whether Tenstorrent cores or CPU cores – you have to know at that point whether your placement is correct. How can you answer that question? There’s an absolute lack of good visibility tools to help verification engineers moving to multi-die design right now.

Salz: I do agree with Alex that portable stimulus is a good place to start because you want to do scenario testing, and it’s well suited to producer-consumer schemes that pick the snippets of code needed for the test.

There are things to do for debug. Divyang, I think you’re thinking of old style waveform dumping for the whole SoC, and that is never going to work. You need to think about transaction level debug. There are features in Verdi to enable transaction level debug, but you need to create the transactions. I’ve seen people grab like a CPU transaction which typically is just the instructions and look at it and say, there’s a bug right there, or no, the problem is not in the CPU. Most of the time, north of 90%, the problem sits in the firmware or in the software, so that’s a good place to start as well.

Rizzatti: If there is such a thing as a wish-list for multi-die system verification, what would that wish-list include?

Starr: We probably need something like a thousand times faster verification, but typically we see perhaps a 2X improvement per generation in these technologies today. The EDA solutions are not keeping up with the demands of this scaling.

Some of that’s just inherent in the nature of things in that you can’t create technologies that are going to outpace the new technology you’re actually building. But we still need to come up with some novel ways of doing things and we can do all the things we discussed such as divide and conquer, hybrid modeling, and surrogate models.

Vinta: I 100% agree. Capacity and throughput need to be addressed. Current platforms are not going to scale, at least not in the near future. We need to figure out how to divide and conquer, as Alex noted, making sure that within a given footprint you get more testing done and more verification up front. And on top of that, address the debug questions that Divyang and Arturo have brought up.

Agrawal: Not exactly tool specific, but it would be nice to have a standard for some of these methodologies to talk to each other. Right now, it’s vendor specific. It would be nice to have some way of plugging different standards together so things just work, and people can focus on their core competencies rather than having to deal with what they don’t know.

Salz: It’s interesting that nobody’s brought up the question of “when do you know you’re done?”

It’s an infinite process. You can keep simulating and verifying, which brings to mind the question of coverage. We understand some coverage at the block level, but at the system level it is scenario driven. You can dream up more and more scenarios; each application brings something else. That’s an interesting problem that we have not yet addressed.

Rizzatti: This concludes our panel discussion for today. I want to thank all the panelists for offering their time and for sharing their insights into the multi-die verification challenges and solutions.

Also Read:

Complete 1.6T Ethernet IP Solution to Drive AI and Hyperscale Data Center Chips

2024 Signal & Power Integrity SIG Event Summary

Navigating the 1.6Tbps Era: Electro-Optical Interconnects and 224G Links


Andes Technology: Pioneering the Future of RISC-V CPU IP
by Frankwell Lin on 03-25-2024 at 6:00 am

On September 13, 2021, Andes Technology Corporation successfully issued its GDR (Global Depositary Receipt) public offering on the Luxembourg Stock Exchange. At the time this made Andes the only publicly traded international RISC-V instruction set architecture (ISA) CPU IP supplier, allowing investors around the world to participate in the growth Andes envisioned for RISC-V. This capital infusion would fuel Andes’ ambition to become a leader in the rapidly evolving, high-growth, open-standard RISC-V market. In 2015, recognizing the vast potential of the RISC-V ISA, Andes had become a Founding and Premier Member of RISC-V International.

Table 1. Composition of Andes Technology Corp. Shareholders (as of April 2, 2023; unit: shares %)

The investment has paid off significantly, particularly because it coincided with ratification of the RISC-V Vector extension in November 2021. This event marked a turning point in the evolution of the RISC-V instruction set architecture. The vector extension came at a time when data center computing was shifting from general-purpose processing to AI processing, which handles extremely large data sets. Vector processing excels at efficient processing of large arrays or structured data, and it has the potential to make RISC-V the next major worldwide ISA.

A vector processor’s highly parallel architecture reduces latency and overhead. It achieves better energy efficiency by maximizing CPU resource utilization and minimizing idle cycles, thus realizing higher performance per watt. Moreover, the hardware to implement RISC-V Vector processing units (VPUs) and vector registers is simpler than highly parallel architectures used for graphics processing. And VPUs provide a far less complex programming model.
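As an aside, the simplicity of the vector programming model can be illustrated with a toy rendering of its “strip-mining” loop, in which software asks the hardware on every iteration how many elements it can process. Everything below is illustrative Python, not real RVV code, and VLEN is a hypothetical stand-in for the hardware vector register length:

```python
VLEN = 8  # hypothetical vector register length, in elements

def vector_add(a, b):
    """Add two arrays the way an RVV strip-mined loop would: each pass asks
    how many elements fit (like vsetvli), then issues one vector operation
    over that whole slice instead of one instruction per element."""
    out = []
    i = 0
    while i < len(a):
        vl = min(VLEN, len(a) - i)          # elements granted this iteration
        out.extend(x + y for x, y in zip(a[i:i + vl], b[i:i + vl]))
        i += vl                             # advance by what was consumed
    return out

print(vector_add(list(range(20)), list(range(20))))  # works for any length
```

Because the loop requests the vector length on every pass, the same binary runs unchanged on implementations with different vector register widths – one reason the model is simpler than GPU-style programming.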

The Andes R&D teams in both the North American operation and the expanded Taiwan staff have been focused on developing cutting-edge architectures for high-end RISC-V processors. Notably, the two achieved a significant milestone by developing the first RISC-V vector (RVV) engine, the AndesCore™ NX27V, based on the RISC-V International RVV specification. Showcasing the agility and innovation of the Andes engineering team, the design was completed within a year based on version 0.8 of the RISC-V vector extension specification, and later updated to version 1.0 when RVV was ratified. This accomplishment led to a few major OEM design wins.

Last year at the International Symposium on Computer Architecture (ISCA) 2023 conference in Orlando, Florida, Meta presented its paper, “MTIA: First Generation Silicon Targeting Meta’s Recommendation Systems,” describing the company’s data center AI server project. The server design contains 64 processing elements (PEs) supporting Meta’s custom-built proprietary accelerator. Each PE contains two processors: one scalar and one vector. Both are Andes IP that Meta engineers heavily customized using Andes Custom Extensions (ACE) to produce a completely unique solution targeted at Meta’s specific AI computing requirements.

The design validated the efficacy of RISC-V with vector extensions as a powerful solution to AI data center computing needs at a time when demand for data center processing hardware is exploding. According to Future Market Insights‘ report “Data Center CPU Market Outlook (2023 to 2033),” the data center CPU market is expected to grow significantly over the next few years, driven by the increasing demand for cloud computing, big data analytics, and artificial intelligence (AI). Key drivers of this growth include the need for faster data processing, increased efficiency, and reduced costs.

In 2021, in addition to the vector extensions, RISC-V International ratified 11 more extensions. Figure 1 illustrates the Andes product roadmap supporting these extensions. By the end of 2022 the N25F-SE, 27-series and 45-series had been delivered; in 2023, Andes delivered six new RISC-V cores to the market: the D25F-SE, D23, N225, NX45V, AX45MPV and AX65. The roadmap spans from the low-power, highly secure entry-level RISC-V processor AndesCore™ D23 to the AX65, the first in the 60-series, which was released in Q4 2023 and is now shipping in customer designs.

Figure 1. Andes Technology Corp. Product Roadmap

The AX65 is a 13-stage, 4-way decode, 64-bit out-of-order processor supporting the RVA22 profile (RVA22U64 specifies the ISA features available to user-mode execution environments on 64-bit application processors). With its 13-stage pipeline, 4-wide decode and 8-wide out-of-order execution, the series targets the Linux application-processor sockets of computing, networking, and high-end controllers.

The AX65 allows multicore clusters scaling from one to eight cores. The performance is world class: operating at a 2.4 GHz clock frequency in a 7 nm TSMC process, the core delivers 8.25 SPECint2006/GHz and 10.2 SPECfp2006/GHz – roughly 19.8 and 24.5 at 2.4 GHz – the best-known SPEC CPU® 2006 performance for a two-level-cache design. The AX66, AX63 and AX67 will be delivered thereafter.

One other area where Andes has made significant investment is high-performance automotive-grade RISC-V CPU IP. The penetration of RISC-V SoCs in automotive designs is projected to reach 21.4% by 2030, according to The SHD Group’s “RISC-V Market Report: Application Forecasts in a Heterogeneous World.” Andes’ functional-safety-compliant products include the N25F-SE, the world’s first fully ISO 26262 compliant RISC-V CPU IP; the D25F-SE, which supports DSP extension instructions; and the 45-SE series processors, which meet the highest ASIL level, ASIL D. ACE functionality will be enhanced to add support for the 45-series processors.

On the strength of the demand its RISC-V products have experienced, Andes remains profitable and continues to enjoy rapid growth. From 2021 to 2023, Andes revenue grew nearly 30%, fueled by over 300 commercial licensees and over 600 signed license agreements with customers distributed across Taiwan, China, Korea, Japan, Europe, and the USA. The company’s worldwide headcount grew nearly 70% over the same period.

Conclusion

In an era defined by rapid technological evolution, Andes Technology Corp. stands at the forefront of innovation in the RISC-V CPU IP market. From its pioneering issuance of overseas depositary receipts (GDR) to its groundbreaking advancements in RISC-V architecture, Andes Technology continues to redefine industry standards and shape the future of computing. As the demand for efficient, high-performance computing solutions continues to rise, Andes Technology remains committed to delivering unparalleled RISC-V solutions to drive transformative change across the global technology landscape.

Also Read:

LIVE WEBINAR: RISC-V Instruction Set Architecture: Enhancing Computing Power

WEBINAR: Leverage Certified RISC-V IP to Craft ASIL ISO 26262 Grade Automotive Chips

LIVE WEBINAR: Accelerating Compute-Bound Algorithms with Andes Custom Extensions (ACE) and Flex Logix Embedded FPGA Array


Podcast EP213: The Impact of Arteris on Automotive and Beyond with Frank Schirrmeister
by Daniel Nenni on 03-22-2024 at 10:00 am

Dan is joined by Frank Schirrmeister, vice president of solutions and business development at Arteris. He leads activities in industry verticals, including automotive, and in technology horizontals like artificial intelligence, machine learning, and safety. Before Arteris, Frank held senior leadership positions at Cadence Design Systems, Synopsys, and Imperas, focusing on product marketing and management, solutions, strategic ecosystem partner initiatives, and customer engagement.

In this far-reaching discussion, Frank explains the impact Arteris NoC technology has on system design. He dives into its impact on automotive design, discussing many aspects of the market including how Arteris simplifies safety. Arteris support for cache coherent design is also discussed. Frank goes beyond automotive and explains the impact of this technology across many markets.

Looking to the future, Frank discusses recent Arteris acquisitions that expand the company’s footprint beyond its traditional markets. Engagements with tier-1 customers are discussed, along with an explanation of the engagement process Arteris uses with new customers.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Ganesh Verma, Founder and Director of MoogleLabs
by Daniel Nenni on 03-22-2024 at 6:00 am

A thought leader with a demonstrated history across multiple roles – managing project life cycles from ideation through implementation and close-out, delivering business value and delighting stakeholders, crafting the technical aspects of company strategy to align with business goals, and discovering and implementing technologies that yield competitive advantage in the digital landscape.

As a Six Sigma Green Belt holder, I have wide exposure to and experience of working in an agile environment, managing and executing scrums and enabling teams to set new milestones within minimal timeframes while delivering superior outputs for the customer. Besides that, I am results-oriented and business-focused, with hands-on experience across a broad range of next-gen technologies.

Tell us about your company.
MoogleLabs is an organization that offers services in revolutionary technologies including AI, ML, DevOps, data analytics, metaverse, blockchain, Web3, and several more. When I was working in the IT industry, I saw a major gap between what was being offered as solutions to organizations and where the latest technology was at that point. Few organizations were working with revolutionary technologies, as the adoption rate was slow.

With MoogleLabs, we change that. With a primary focus on the latest technology, we offer state-of-the-art services to our clients, creating the most advanced products for them every time. I ideated the startup back in 2020 and have not looked back since. Now we have over 100 employees working on a range of products, and I couldn’t be happier.

What problems are you solving? 
At the core, MoogleLabs is an IT company, so we are focused on assisting organizations in creating innovative solutions to improve their operations. What we do differently is offer services in tech stacks that are not readily available in the market. It also means we are better equipped to create solutions that will remain up to market standards for years to come. Think of it this way: when you buy a mobile phone, if you choose an older model because it is still relevant, you save money, but you compromise on the longevity of the product. The updates will stop coming soon, and in only a year or two, you will not have a relevant model. When you opt for the latest model, it will keep getting updates and stay relevant longer. We do the same, but for software: we create solutions that will not turn obsolete in a matter of a few years, if not months.

What application areas are your strongest?
The three technologies that we have been working on constantly are AI, ML, and Blockchain. Artificial Intelligence and machine learning are two technologies that will revolutionize world operations as we know them. They can automate a range of tasks for businesses, making them more efficient. Also, blockchain can make the internet more secure and safer for users.

We are currently working on creating AI-enabled IoT that will assist with business operations. Additionally, we are working on blockchains, crypto bots, metaverse, and much more. However, I think I am biased toward our capabilities to offer AI services.

What keeps your customers up at night? 
Our customers are innovators. Always looking for ways to make their business better, ensuring that it stands out among the competitors and offers something different. So, I guess ways to bring innovation to their business are what keeps them awake at night.

Once they start working with us to create the appropriate solution, the worries go away.

What does the competitive landscape look like, and how do you differentiate?
The number of IT organizations in the world is increasing as we speak, so it is a seriously competitive market. We differentiate because we are dedicated to working with the latest technologies. For most IT organizations, the biggest pitfall is not adopting the latest technologies. For older companies, it is easy to fall back on what they know and not pay attention to what new has entered the market, ultimately losing the race. As a startup, we stay current in our tech stack, and with our focus on revolutionary technologies at all times, that will continue. This is what sets us apart from others and gets us so many results.

What new features/technology are you working on? 
Well, there are a few already completed projects that I am proud of. For one, we have created an application called Alluvium that extracts information from uploaded documents to generate insights for better decision making. Other projects we have worked on include screen damage detection, fully automated assessment solutions, and more. We have also worked on agile DevOps projects, NFTs, blockchain wallets, and many others. We are working on several other concepts in this space and aim to provide even more innovative solutions in the future.

How do customers normally engage with your company?
Companies that want to connect for work can get in touch through the MoogleLabs Contact Us page or can email us at info@mooglelabs.com. Our team then gets in touch with them to discuss the work details.

Also Read:

CEO Interview: Larry Zu of Sarcina Technology

CEO Interview: Michael Sanie of Endura Technologies

Outlook 2024 with Dr. Laura Matz CEO of Athinia


Semiconductor CapEx Down in 2024
by Bill Jewell on 03-21-2024 at 2:00 pm

Delayed Wafer Projects

U.S. President Biden announced on Wednesday an agreement to provide Intel with $8.5 billion in direct funding and $11 billion in loans under the CHIPS and Science Act. Intel will use the funding for wafer fabs in Arizona, Ohio, New Mexico, and Oregon. As reported in our December 2023 newsletter, the CHIPS Act provides a total of $52.7 billion for the U.S. semiconductor industry, including $39 billion in manufacturing incentives. Prior to the Intel grant, CHIPS Act grants totaling $1.7 billion had been announced for GlobalFoundries, Microchip Technology, and BAE Systems, according to the Semiconductor Industry Association (SIA).

Grants under the CHIPS Act have been slow in coming, with the first grants announced over a year after passage. Some major fab projects in the U.S. have been delayed due to the slow disbursement. TSMC also cited difficulties in finding qualified construction personnel, and Intel said its delay was also due to slowing sales.

Other nations have also allocated funds to promote semiconductor production. The European Union in September 2023 passed the European Chips Act which provides for 43 billion euro (US$47 billion) of public and private investment in the semiconductor industry. In November 2023, Japan allocated 2 trillion yen (US$13 billion) for semiconductor manufacturing. Taiwan in January 2024 enacted a law to provide tax breaks for semiconductor companies. South Korea in March 2023 passed a bill to provide tax breaks to strategic technologies including semiconductors. China is expected to create a $40 billion fund backed by the government to subsidize its semiconductor industry.

What is the outlook for capital expenditures (CapEx) in the semiconductor industry this year? The CHIPS Act was designed to spur CapEx, but much of the effect will not occur until after 2024. After a disappointing 8.2% decline in the semiconductor market last year, many companies are cautious about CapEx in 2024. We at Semiconductor Intelligence estimate total semiconductor CapEx in 2023 was $169 billion, down 7% from 2022. Our forecast is a 2% decline in CapEx in 2024.

The major memory companies are generally increasing CapEx in 2024 as the memory market recovers and new applications such as AI are expected to increase demand. Samsung plans relatively flat spending in 2024 at $37 billion but did not cut CapEx in 2023. Micron Technology and SK Hynix cut back CapEx significantly in 2023 and are planning double-digit increases in 2024.

The largest foundry, TSMC, plans to spend about $28 billion to $32 billion in 2024, with the midrange of $30 billion down 6% from 2023. SMIC is planning flat CapEx while UMC plans a 10% increase. GlobalFoundries expects a 61% cut in 2024 CapEx but will ramp up spending in the next few years as it builds a new fab in Malta, New York.

Among integrated device manufacturers (IDMs), Intel plans to increase CapEx in 2024 by 2% to $26.2 billion. Intel will increase capacity for foundry customers as well as for internal products. Texas Instruments’ CapEx is roughly flat. TI plans to spend about $5 billion a year through 2026, primarily for its new fabs in Sherman, Texas. STMicroelectronics will cut CapEx 39% while Infineon Technologies will cut by 3%.

The three largest spenders – Samsung, TSMC and Intel – will account for 57% of semiconductor industry CapEx in 2024.

What is the appropriate level of CapEx relative to the semiconductor market? The semiconductor market is notoriously volatile. Over the last 40 years, annual change has ranged from 46% growth in 1984 to a 32% decline in 2001. Although the industry has become somewhat less volatile as it has matured, in the last five years it has shown a 26% increase in 2021 and a 12% decrease in 2019. Semiconductor companies need to plan their capacity several years out. It takes about two years to build a new wafer fab and additional time for planning and financing. As a result, the ratio of semiconductor CapEx to the semiconductor market varies greatly, as shown below.

The semiconductor CapEx to market size ratio has varied from a high of 34% to a low of 12%. The five-year average ratio ranges between 18% and 28%. Over the total period of 1980 to 2023, total CapEx was 23% of the semiconductor market. Despite the volatility, the long-term trend of the ratio has been fairly consistent. Based on expected strong market growth and a drop in CapEx, we expect the ratio to fall from 32% in 2023 to 27% in 2024.

Most forecasts for semiconductor market growth in 2024 are in the range of 13% to 20%. Our Semiconductor Intelligence forecast is 18%. If 2024 turns out to be as strong as expected, companies will likely increase their CapEx plans as the year progresses. We could then see positive change in semiconductor CapEx in 2024.
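As a sanity check, the forecast ratio follows directly from the figures above; a quick back-of-the-envelope calculation (numbers rounded):

```python
capex_2023 = 169                        # $B, Semiconductor Intelligence estimate
market_2023 = capex_2023 / 0.32         # implied 2023 market, roughly $528B

capex_2024 = capex_2023 * (1 - 0.02)    # forecast: CapEx down 2%
market_2024 = market_2023 * (1 + 0.18)  # forecast: market up 18%

print(f"2024 CapEx/market ratio ~ {capex_2024 / market_2024:.0%}")  # ~27%
```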

Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry – manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Bill Jewell
Semiconductor Intelligence, LLC
billjewell@sc-iq.com

Also Read:

Strong End to 2023 Drives Healthy 2024

CHIPS Act and U.S. Fabs

Semiconductors Headed Toward Strong 2024


2024 Outlook with John Lee, VP and GM Electronics, Semiconductor and Optics Business Unit at Ansys
by Daniel Nenni on 03-21-2024 at 10:00 am

We have been working with Ansys since SemiWiki was founded in 2011. It has been a richly rewarding relationship in all regards. I always say the semiconductor industry is filled with the most intelligent people in the world and Ansys is an excellent proof point. I have known John Lee for 30+ years and he is one of my trusted few, absolutely.

Tell us a little bit about yourself and your company.
I have been an EDA person for most of my career. I was really blessed to be a graduate student of Professor Ron Rohrer at Carnegie Mellon University, in a research group that launched the careers of many highly successful students, including three CEOs and two Kaufman Award winners.

I’m also blessed to be part of Ansys. We have over 6,000 employees worldwide, and our products power innovation that drives human advancement across a very broad array of industries and technical challenges. For example, we have an ongoing partnership with Oracle Red Bull Racing to help them remain the most competitive Formula 1 (F1) team in the world. Oracle Red Bull Racing used Ansys Fluent for streamlining simulations, Ansys Granta for optimizing material choices, and Ansys LS-DYNA to verify safety parameters. They were able to design and optimize a race car that driver Max Verstappen used to make history in 2023, winning 21 out of 23 races and securing the Drivers’ and Constructors’ Championships with a massive 413-point margin.

Another example is how NASA engineers used capabilities from Ansys AGI to steer the James Webb Telescope by simulating the complex gravitational perturbations acting on its orbit and estimating station-keeping requirements. The project also relies on Ansys Zemax optical simulation to align the segmented mirror optics and Ansys Mechanical to model the natural vibration modes of the mirror as it is being pointed.

I hope these examples give you an idea of the amazingly broad range of physics simulation solutions supported by Ansys, which include Thermal, Mechanical, Semiconductors, Electromagnetics, Optics, Photonics, Fluid Dynamics, Acoustics, 3D Design, Materials, Safety, Digital Twins, Autonomous Vehicles, Embedded Software, and Mission Engineering.

My responsibility at Ansys is for the Electronics, Semiconductors and Optics products. Our focus has been very rewarding, as the need for open and extensible Multiphysics platforms and solutions has become very clear over the last five years. We’ve been fortunate to establish ourselves as both thought leaders and trusted partners with market-shaping customers.

What was the most exciting high point of 2023 for your company?
The rapid adoption of 2.5D and 3D heterogeneous designs has been both very challenging and very rewarding. Many of these systems are being driven by the AI revolution. These systems consume massive amounts of power, which naturally generates tremendous thermal effects, which induce mechanical effects such as stress and strain that affect both performance and reliability.

The thermal-centric challenges of 3D-IC are even more profound, and if you look at the new systems being designed by the leaders in AI, you can see why 2023 was very exciting and rewarding for Ansys.

A side benefit of this work is that the AI methods we’ve developed at Ansys are further accelerated. For example, we combined AI with our revolutionary sigmaDVD technology in RedHawk-SC to enable significantly better PPA from place-and-route tools. Our leading customers deployed this in 2023, and we see accelerated adoption in 2024. As power and thermals strongly affect advanced-process-node design, using AI + sigmaDVD has become a very, very exciting high point!

What was the biggest challenge your company faced in 2023?
Ansys has grown significantly during the pandemic, both organically and inorganically.  Since 2020, for example, we’ve announced 10 acquisitions, and we’ve grown rapidly in new offices, including Athens, Vancouver, and Rwanda!

So, figuring out how to onboard many new hires across many regions and also integrate great high-performing teams from our acquisitions has been a challenge and focus of mine in 2023.

How is your company’s work addressing this biggest challenge?
Culture and engagement have been a real focus of the leadership team at Ansys since before the pandemic, and especially during it. It has been very rewarding to see the focus on this challenge pay off. For example, we are proud that the Wall Street Journal named Ansys one of the 250 best-managed companies of 2023, and Newsweek recognized Ansys as one of the top 100 Most Loved Workplaces.

What do you think the biggest growth area for 2024 will be, and why?
The insatiable and pervasive demand for compute is driving the convergence between silicon and systems, which is creating great opportunities for companies like Ansys.

First, there is an accelerated need for open and extensible Multiphysics platforms – such as Ansys AEDT and SeaScape (for system and chip designers) – to partner with leading design platforms, and our investments in the areas of physics, platform and partnerships are driving deep customer value.

Second, both semiconductor and system companies are focused on the benefits of optimized software + silicon systems. This is occurring in automotive, with software defined vehicles; in communications, with 5G and 6G systems; and in the data center, with power and thermal limited compute. Concepts which are missing from the EDA lexicon – such as MBSE, SPDM, SIL and digital twins – are a big opportunity for companies like Ansys.

How is your company’s work addressing this growth?
The Ansys team and portfolio are a great combination of core physics, advanced computational sciences (including AI, cloud and platform), and system-level products that deliver on the need for model-based systems engineering, functional safety and cybersecurity, and powerful digital twin models shared between component and system designers. You’ll see many exciting announcements in this area this year!

What conferences did you attend in 2023 and how was the traffic?
We’ve been excited by the interest in co-packaged optics (CPO), which is a key trend for anyone designing products for the data center and AI. So, Photonics West and Optical Fiber Conference (OFC) were great.

DesignCon is another event that has really rebounded since the pandemic, and the convergence between silicon and system is driving lots of interest in Ansys Multiphysics.

Will you attend conferences in 2024? Same or more?
Mobile World Congress is one that we just came back from, and it’s been exciting to see how Ansys technologies are helping 5G communication systems deploy using our Ansys RF Channel Modeler and AGI Systems Toolkit (STK). For classic EDA folks like me, it’s very exciting to see how computational physics and missions planning are playing a vital role in connecting the world.

Also Read:

Ansys and Intel Foundry Direct 2024: A Quantum Leap in Innovation

Why Did Synopsys Really Acquire Ansys?

Will the Package Kill my High-Frequency Chip Design?

Keynote Speakers Announced for IDEAS 2023 Digital Forum


Simulating the Whole Car with Multi-Domain Simulation
by Bernard Murphy on 03-21-2024 at 6:00 am

This is the next significant automotive blog in a string I will be posting (see here for the previous blog).

In the semiconductor world, mixed simulation means mixing logic simulation, circuit simulation, and virtual simulation (for software running on the hardware we are designing), along with emulation and FPGA prototyping. While that span may seem all-encompassing, it is in fact still a provincial view. OEMs like auto companies develop complete products in which software plays an outsize role, governing what is in effect a highly distributed compute system across the car. Developing and testing this software on (software-based) digital twins allows for faster experimentation and higher levels of parallelism than are possible with hardware prototypes, but requires collaboration between many kinds of domain-specific simulators. A very diverse group of companies is planning to launch a working group under Accellera to define an enabling standard to serve this “Federated Simulation” need.

Who Wants Federated Simulation and Why?

At DVCon I met with an impressive group: Mark Burton (Vice-Chair of the proposed working group, also Qualcomm), Yury Bayda (Principal Software Engineer at Ford Motor Company, previously Intel), Trevor Wieman (System Level Simulation Technologist at Ford Motor Company, also previously Intel), Lu Dai (Accellera Chair and Qualcomm) and Dennis Brophy (needs no introduction). What follows is primarily a synopsis of inputs from Mark, Yury and Trevor.

A car is a network of interconnected computers developed by multiple suppliers; the auto OEM software team must develop/refine and debug software to make this whole system run correctly. Perhaps the infotainment system is based on a Qualcomm chip, communicating with zonal controllers, in turn talking to edge sensors, drivetrain MCUs and other devices around the car, all communicating through CAN or automotive Ethernet. To meet acceptable software simulation times this total system model must run on abstracted virtual models for each component. Suppliers provide such models in a variety of formats: proprietary instruction set simulators or representations based on different virtual modeling tools.

Which raises the perennial problem of blending all these different models into a unified virtual runtime. Maybe someday all suppliers will provide models with TLM-compliant interfaces, but until then, can we build better bridges/wrappers to couple all these models? That’s what the Federated Simulation initiative aims to address. Yury provided a compelling example of why we need to make this work. Over-the-air (OTA) software updates are a must-have for software-defined vehicles, but what happens if something goes wrong in an update – if the update bricks your car or some part of your car? System-level scenarios like this must be considered during design to mitigate such problems and must be tested exhaustively.

Bottom line: system software development must start early, before hardware is available, and is completely dependent on reliable high-level simulation abstractions to underpin total system simulations.

Not just for electronic systems

A car is not just electronic circuits; neither is a plane or a spacecraft or an industrial robot. Still, electronics plays an increasing role, now interacting with mechanical systems and with the surrounding environment. Antilock braking must behave appropriately under different levels of traction on a dry road, in rain or in snow. From ADAS to autonomy, driving systems must be tested against a vast array of scenarios. The CARLA simulator is an important component of such testing, modeling urban and other layouts across many environmental conditions and providing streaming video, LIDAR, and other sensor data as input to full system simulations.

A federated simulation solution must couple to simulators like CARLA. Ultimately it must also couple with standards in other verticals, such as OpenCRG for describing road surfaces, VISTAS/VHTNG for avionics, SMP2 for space applications, and FMUs for mechatronics. Each is well established in its own domain and unlikely to be displaced. A federated simulation standard must respect and smoothly interoperate with these standards – I’m guessing in incremental steps. That said, there is already enthusiastic support from many quarters to be involved in this effort.

Accellera
Agnisys, Inc.
Airbus
AMD
Aptiv
Cadence Design Systems, Inc.
Collins
Doulos Ltd.
Ford
Huawei Technologies Sweden AB
IEEE
Intel Corporation
IRT-Saint Exupery
Marvell International Ltd
Microsoft Corporation
MachineWare GmbH
NXP Semiconductors
Qualcomm Technologies, Inc.
Robert Bosch GmbH
Renesas Electronics Corp.
S2C
Shokubai
Siemens EDA
Spacebel
STMicroelectronics
Synopsys
Shanghai UniVista Industrial Software Group
Texas Instruments
Vayavya Labs
Zettascale

Core team membership for the initial definition

What will it take?

This is an ambitious goal, but it's worth noting that the US DoD launched a similar effort called HLA (High Level Architecture) in the 1990s, which has continued to grow. Airbus has built its own architecture with similar intent, including a physical prototype of an aircraft for hardware-in-the-loop testing. At the electronic systems level, Mark, Yury and Trevor have all previously been involved in multi-simulator projects at Intel and Qualcomm, and more recently with Ford (Yury and Trevor). They do not see this as an impossible goal, though I'm guessing it will likely evolve from modest expectations through multiple releases.

The core concept as described to me is based on cloud deployment, with a container instance for each simulation and Kubernetes for resource allocation (CPUs, GPUs, hardware accelerators, etc.) and orchestration. The Accellera team doesn't plan to reinvent any standards (or emerging standards) that already work well. Instead they intend to leverage existing transport layers, adding only application layers above that level so that a simulator instance can publish streams of activity to other subscribing simulators, and subscribers can be selective about what data they want to see.
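As a rough illustration of that publish/subscribe idea, here is a minimal pure-Python sketch of selective subscription between simulator instances. Topic names and payloads are invented for this example; the actual application layer is exactly what the working group still has to define.

```python
# Toy publish/subscribe bus: simulator instances publish streams of
# activity by topic, and subscribers pick only the topics they want.
# Topic names and payloads are invented for illustration.
from collections import defaultdict
from typing import Callable

class FederationBus:
    def __init__(self):
        self.subscribers: dict = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        """Register interest in one stream of activity."""
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, payload: dict) -> None:
        """Deliver an event only to simulators subscribed to its topic."""
        for handler in self.subscribers[topic]:
            handler(payload)

bus = FederationBus()
# An infotainment model only wants zonal-controller traffic, not LIDAR:
bus.subscribe("zonal/can", lambda msg: print("infotainment saw", msg))
bus.publish("zonal/can", {"id": 0x1A0, "data": b"\x01\x02"})
bus.publish("sensor/lidar", {"frame": 42})   # no subscribers; dropped
```

In a real deployment each handler would sit in its own container, with the bus mapped onto whatever existing transport layer the standard adopts.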

Very interesting. You can learn more HERE and HERE.

Also Read:

An Accellera Functional Safety Update

DVCon Europe is Coming Soon. Sign Up Now

Accellera and Clock Domain Crossing at #60DAC


QuantumPro unifies superconducting qubit design workflow

QuantumPro unifies superconducting qubit design workflow
by Don Dingee on 03-20-2024 at 10:00 am

Superconducting qubit design workflow in QuantumPro

To create quantum computing chips today, a typical designer must cobble various tools together, switching back and forth between them for different tasks. By contrast, EDA solutions such as Keysight Advanced Design System (ADS) unify a design workflow in a single interface with automated data exchange between features. In an industry first, Keysight QuantumPro brings five different functions for quantum design together in a superconducting qubit design workflow, cutting design cycle time and reducing prototyping and yield risk for optimized quantum chips. Keysight's Quantum EDA solution focuses on accurate electromagnetic modeling, ensuring that simulation and measurement outcomes align effectively.

Adding superconducting qubits presents a yield challenge

Superconducting quantum computers rely on qubits constructed from Josephson junctions held at cryogenic temperatures, which exhibit the non-linear inductance needed to construct two-level systems. Qubits interconnect via meander-line coplanar waveguide resonators with frequencies often in the 4 to 10 GHz range. The resonators serve two primary functions: indirectly reading out the state of the qubits and entangling them with each other. Quantum amplifiers with gain and unique ultra-low noise characteristics amplify the output signal from qubits to improve readout fidelity. Quantum processing units (QPUs), comprising arrays of qubits and resonators, are positioned to showcase quantum advantage, surpassing the reach of classical CPUs.

The advantage of quantum computing arises from its distinct exponential increase in computational capacity as the number of entangled qubits grows, with n qubits taking on 2^n stable states (50 entangled qubits already span roughly 10^15 basis states). Scaling the number of qubits presents a formidable challenge, with the stability and coherence of qubits becoming more complex with each addition. By tying four design themes – structural layout, electromagnetic analysis, complex quantum circuit optimization, and system-level exploration and troubleshooting – into a single quantum design workflow, QuantumPro lets designers thoroughly exercise every part of a design, make data-based adjustments, and re-simulate to verify improvements.

Unlike digital circuit design, the challenge of designing quantum chips is more than step-and-repeat replication. For entanglement to work correctly, resonance frequencies must be unique within and between all nearby qubits. If two (or more) resonance frequencies overlap, qubits entangle improperly due to unpredictable cross-coupling, and the quantum chip becomes a yield failure. Factors such as minimizing environmental interference, maintaining qubit entanglement, and managing errors due to decoherence pose significant obstacles. Debugging problems discovered after chip fabrication and cryogenic testing becomes expensive and time-consuming.
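As a back-of-the-envelope illustration of that yield check (not a QuantumPro feature, just a sketch with invented frequencies and an assumed separation rule), a designer essentially needs every pairwise frequency spacing to clear some minimum:

```python
# Hypothetical frequency-collision check. The resonator frequencies and
# the minimum-separation design rule are invented for illustration.
from itertools import combinations

MIN_SEPARATION_GHZ = 0.10   # assumed design rule, not a Keysight figure

resonators = {"R1": 4.80, "R2": 5.10, "R3": 5.12, "R4": 6.40}   # GHz

def frequency_collisions(freqs: dict, min_sep: float) -> list:
    """Return pairs of resonators spaced closer than min_sep (GHz)."""
    return [(a, b)
            for (a, fa), (b, fb) in combinations(freqs.items(), 2)
            if abs(fa - fb) < min_sep]

print(frequency_collisions(resonators, MIN_SEPARATION_GHZ))
# [('R2', 'R3')] -- R2 and R3 risk unpredictable cross-coupling
```

Real designs also account for qubit transition frequencies and higher-order couplings; the point is simply that collisions must be caught in virtual space, before fabrication.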

Enabling a five-point superconducting qubit design workflow

A better solution is a shift left for quantum chip design – integrating layout and simulation tools to predict and optimize resonance frequencies accurately in virtual space. RF designers are familiar with these workflows, but quantum designers are just beginning their adoption. “QuantumPro bridges the gap from ad-hoc quantum chip design with inherent yield risks to confidence in layout and simulation for predictable parts,” says Mohamed Hassan, Quantum Solutions Planning Lead at Keysight.

QuantumPro integrates five functions built on the ADS platform to streamline superconducting qubit designs: schematic design, layout creation, electromagnetic (EM) analysis, non-linear circuit simulation, and quantum parameter extraction. Beginning with the schematic interface, users can effortlessly drag and drop components from the built-in quantum artwork. Subsequently, a layout can be generated automatically from the schematic.

Within QuantumPro, two distinct analyses are available. First, the Full EM Analysis facilitates a frequency sweep of circuits, producing s-parameters at input and output ports; the platform supports both the finite element method (FEM) and the method of moments (MoM) for this analysis. Instead of solving for the electric field over the entire volume, the MoM solves only for the currents on the metal surfaces, significantly cutting computational costs. Second, the Energy Participation Analysis allows for finding the eigenmodes of the system with the FEM solver.
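A toy calculation gives a feel for why surface-only unknowns matter (the mesh counts below are illustrative, not tied to either solver's actual discretization): volume unknowns grow with the cube of mesh refinement, while surface unknowns grow only with the square.

```python
# Illustrative scaling only: FEM discretizes the volume, MoM only the
# metal surfaces, so unknown counts grow very differently with mesh
# refinement. Neither formula reflects a real solver's bookkeeping.
def fem_unknowns(cells_per_edge: int) -> int:
    return cells_per_edge ** 3        # fields sampled throughout a cube

def mom_unknowns(cells_per_edge: int) -> int:
    return 6 * cells_per_edge ** 2    # currents on the cube's six faces

for n in (10, 50, 100):
    print(f"refinement {n}: FEM ~{fem_unknowns(n):,} vs MoM ~{mom_unknowns(n):,}")
# refinement 100: FEM ~1,000,000 vs MoM ~60,000
```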

Quantum parameter extraction is automatic in QuantumPro, with quasi-static, black box quantization, and energy participation ratio (EPR) methods. A simplified layout of a four-qubit design shows the transmon qubits (Q1 through Q4) and meander line resonators (R1 through R4). Note the unique resonance frequency values extracted by each of the three different methods in QuantumPro.

EM simulation of superconducting qubits requires one extra step. Superconductors exhibit kinetic inductance, an additional inductance large enough to sway results compared with perfect electric conductors. “Designers can’t ignore kinetic inductance – it can cause a miss in resonance frequency by as much as 40% in some cases of thin film superconductors,” says Hassan. Superconductor material editors in ADS and EMPro allow designers to describe materials for accurate kinetic inductance capture.
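To see how a frequency miss of that size can arise, here is a hedged sketch based only on the standard resonator relation f proportional to 1/sqrt(LC); the kinetic-inductance fractions below are illustrative assumptions, not Keysight data.

```python
# Sketch of why kinetic inductance matters: a resonator's frequency
# scales as 1/sqrt(L*C), so extra kinetic inductance L_k on top of the
# geometric inductance L_g pulls the frequency down. The alpha values
# are illustrative assumptions, not measured figures.
import math

def shifted_frequency(f_design_ghz: float, alpha: float) -> float:
    """alpha = L_k / L_g, the kinetic-inductance fraction."""
    return f_design_ghz / math.sqrt(1.0 + alpha)

f0 = 6.0                    # GHz, frequency assuming a perfect conductor
for alpha in (0.1, 0.5, 1.8):
    f = shifted_frequency(f0, alpha)
    print(f"alpha={alpha}: {f:.2f} GHz ({100 * (f0 - f) / f0:.0f}% low)")
# alpha=1.8 gives roughly the 40% miss cited for thin-film cases
```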

A core feature inherited from ADS is a Python console that gives users more control over workspaces and user interfaces, along with the ability to script and automate repetitive tasks.

Scaling quantum chips to hundreds or thousands of qubits

QuantumPro helps assure that superconducting quantum chip designers get the results they expect at prototyping, after simulation and optimization of their designs. It arrives at a pivotal point in quantum computing development, as designers seek to move from designs with tens of qubits into the hundreds and perhaps thousands. As in conventional semiconductors, reducing prototype re-spins and improving yields can help move ideas from research to commercialization more quickly.

Designers will also see a productivity boost in their superconducting qubit design workflow with QuantumPro, which may also lower the learning curve for designers from other disciplines. Resources on the Keysight website explain more about the science behind superconducting qubits and the EM analysis and parameter extraction methods in QuantumPro.

QuantumPro webpages, with links to an application note, technical overview, and videos:

Quantum EDA: Faster design cycles of superconducting qubits

W3037E PathWave QuantumPro