
Can We Auto-Generate Complete RTL, SVA, UVM Testbench, C/C++ Driver Code, and Documentation for Entire IP Blocks?
by Kalar Rajendiran on 07-11-2022 at 6:00 am


Whether it is fully autonomous driving, wrinkle-free fabric, or ambient energy harvesting for powering electronic devices, every industry is chasing its own ultimate goal. For the semiconductor design industry, that goal is the ability to generate a complete chip or IP in executable form from a high-level behavioral description. It is interesting to note that decades ago, when schematic capture was the predominant way of specifying designs, many companies ran special projects to develop behavioral language compilers. Of course, even a very complex chip in those days was much, much simpler than the simplest chip of today.

Those were the days when EDA tools were developed in-house at IDMs and ASIC companies; the third-party EDA industry as we know it today was in its nascent stage. The primary motivation for the IDMs and ASIC companies was to get chips to production as quickly as possible, which meant the special projects never got full-fledged investment and attention. Just as chip complexity started growing rapidly, HDLs such as VHDL and Verilog gained traction fast, and of course the third-party EDA industry started burgeoning as well. All the wonderful RTL-level tools from the EDA industry have come in handy to implement even the most complex chips of today.

Of course, this progress has put a strain on a couple of areas. One is the manual conversion of a design’s high-level specification to VHDL or Verilog; the other is the effort and time consumed by verification. Is there a way to kill two birds with one stone?

Has the time arrived? Can a tool be developed that auto-generates RTL, SystemVerilog Assertions (SVA), UVM testbench/tests, C/C++ driver code, and documentation for an entire IP block or chip? If this tool deploys a correct-by-construction methodology, wouldn’t that reduce the time and effort needed for verification? Or would it? Agnisys claims it would. Can we make that leap of faith? After all, even in the traditional flow using time-tested layout tools, layout is still verified against the netlist with an LVS verification tool. Bring your questions to their booth at DAC 2022, where Agnisys will be showcasing a demo of a tool they have been building using crowdsourced inputs and trials.

The company says this tool is the next step in its ever-expanding specification automation solution. With register automation well established some years ago, Agnisys next turned its attention to sequence automation for both SystemVerilog/UVM and C/C++. It released a technology called iSpec.ai, available at https://www.ispec.ai, that deploys machine learning (ML) techniques to auto-convert English assertions into proper SVA; it can also convert SVA back into English and convert English into a programming sequence. Agnisys created a library of IP for standard functions that generates the design, UVM testbench and tests, C/C++ code, and documentation, and even a tool to connect the IP blocks together automatically at the top level of an SoC.
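
To make the idea concrete, here is a hypothetical example of the kind of translation involved; it is illustrative only, not actual iSpec.ai output. The English assertion "after reset is released, every req must be followed by ack within 1 to 3 clock cycles" maps onto a concurrent SVA property along these lines:

  // Hypothetical English-to-SVA translation (illustrative, not iSpec.ai output)
  module handshake_checker (input logic clk, rst_n, req, ack);
    property req_ack_handshake;
      @(posedge clk) disable iff (!rst_n)
        req |-> ##[1:3] ack;   // req now implies ack 1 to 3 cycles later
    endproperty

    assert property (req_ack_handshake)
      else $error("ack did not follow req within 1 to 3 cycles");
  endmodule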

Agnisys’ vision is to fully automate specification-to-implementation across design and verification, software, and device drivers. With register automation, verification automation, and interconnect automation under its belt, the company is now seeking to expand specification automation to cover complete IP cores. The idea is for a system architect to create a specification and then press a button to generate the entire IP in executable form. The spec could cover state machines, datapaths, and more, in addition to registers. The tool’s output is to include the RTL code, the UVM verification environment and testbench/tests, C/C++ driver code, and documentation. Anyone developing an IP, FPGA, ASIC, or SoC will find this capability of interest.

Sounds too good to be true? The only way to find out is to visit them at DAC, see their demo, ask questions, poke holes, and see if their story holds water. Here are a couple of screenshots from the demo.

The tool capability may have to mature over time just as all of today’s greatest tools had to go through their own maturation process based on customers’ feedback. You can learn more about Agnisys here.

Given Agnisys’ track record, you can expect to see something interesting at their booth. So go check them out at booth number 2512 at DAC 2022 in San Francisco.

Also Read:

ISO 26262: Feeling Safe in Your Self-Driving Car

DAC 2021 – What’s Up with Agnisys and Spec-driven IC Development

AI for EDA for AI


Interface IP in 2021: $1.3B, 22% growth and $3B in 2026
by Eric Esteve on 07-10-2022 at 10:00 am


If you want to remember the key points for interface IP in 2021, just consider $1.3B, 22%, $3B. The interface IP category generated $1.3 billion in 2021, 22.7% year-over-year growth, thanks to the high-runner protocols PCIe, DDR memory controller, and Ethernet/SerDes. Even more impressive is the forecast, as IPnest predicts the category will weigh $3 billion in 2026. Also interesting in this category is the battle between two strategy models: “One-Stop-Shop” for Synopsys and “Stop-For-Top” for Alphawave IP.


IPnest Forecast Interface IP Category Growth to $3B in 2026

The beginning of the 2010s was dominated by wireless mobile, and a large part of interface IP revenues was generated in this market segment. High-end mobile was still dynamic at the end of the 2010s, and in the 2020s includes many interface IP protocols, like LPDDR5X, MIPI camera/display interfaces, PCIe 3/4, UFS 3.1, eUSB, and USB 3.1/DP. But the data-centric segments like HPC, datacenter, AI, and storage are booming and sustaining growth in protocols like DDR memory controllers (DDR5, LPDDR5, HBM), PCIe and CXL (PCIe 5 adoption in the datacenter, while automotive and mobile still use PCIe 3/4), and Ethernet/SerDes (112G SerDes design starts in 2021 were significant).

Ten years ago, the “One-Stop-Shop” model was the mantra for IP vendors, and the strategy has been extremely beneficial for Synopsys, which enjoys a 55.6% market share in the interface IP category. If we look at the other vendors following the “One-Stop-Shop” model, the picture is more questionable: Cadence with 14% market share or Rambus with 3.4% have not been as successful as Synopsys. To benefit from this model you need, by definition, to support almost all protocols, but you also need to be the leader, with #1 revenues in every supported protocol. What we have seen since 2005 is that a small vendor can survive and grow only if it puts a strong focus on its supported IP protocols. If the same vendor tries to support five, six, or more protocols, as the “One-Stop-Shop” model requires, the risk of failure is very high.

Alphawave IP, created in 2017 by a team of SerDes experts, has developed DSP-based PAM4 112G SerDes and generated IP revenues of $89 million in 2021, +102% YoY growth after +75% in 2020. It’s an excellent example of the new strategy model, “Stop-For-Top”: the IP vendor concentrates on very demanding products, targeting bleeding-edge protocols and technology nodes, thanks to a strong engineering team. Alphawave IP has been “lucky,” as it provides interconnect IP to an industry that is moving fast to become ever more data-centric and needs to compute ever more data, store it, and interconnect it at system level (PCIe and CXL) or over long range via Ethernet. Lucky to be at the right place at the right time, perhaps, but the engineering team’s excellence is not luck; it’s the result of long experience in SerDes design.

It can be interesting to compare the ROI generated by the two models. We can see that both strategies can lead to success, as illustrated by Synopsys adopting the “One-Stop-Shop” model, with interface IP revenues of $727 million in 2021 and a dominant 55.6% market share, while Alphawave IP has been impressively fast to reach almost $100 million in IP revenues.

As usual, IPnest has made a five-year forecast (2022-2026) by protocol and computed the CAGR for each (picture below). As you can see, most of the growth is expected to come from three categories, PCIe, memory controller (DDR), and Ethernet & D2D, exhibiting five-year CAGRs of 22%, 21%, and 19% respectively. This should not be surprising, as all these protocols are linked to data-centric applications! The forecast predicts the top five interface IP protocols will pass the $2.5B mark in 2026, a 2.5x multiplication factor in five years, or an 18.6% CAGR in high-end interface IP revenues from 2021 to 2026.


This is the 14th version of the survey, started in 2009 when the interface IP category weighed $250 million (compared with $1,306 million in 2021), and we can affirm that the five-year forecasts have stayed within a +/- 5% error margin! In 2022, IPnest predicts that the interface IP category will be in the $3,000 million range (+/- $200 million) in 2026, and this forecast is realistic.
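
As a quick sanity check on those endpoints, the implied compound annual growth rate for the whole category is

$\mathrm{CAGR} = \left(\frac{3000}{1306}\right)^{1/5} - 1 \approx 18.1\%$

which sits in the same high-teens range as the 18.6% quoted above for the top high-end protocols.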

If you’re interested in this “Interface IP Survey,” released in June 2022, just contact me:

eric.esteve@ip-nest.com .

Eric Esteve from IPnest

Also Read:

5G for IoT Gets Closer

Verifying Inter-Chiplet Communication

Using an IDE to Accelerate Hardware Language Learning


ASML- US Seeks to Halt DUV China Sales
by Robert Maire on 07-10-2022 at 6:00 am


-If you can’t beat them, embargo them
-It has been reported US wants ASML to halt China DUV tools
-US obviously wants to kill, not just wound China chip biz
-Is this embargo the alternative to failed CHIPS act?
-Hard to say “do as I say, not as I do”- but US does anyway

First EUV ban now DUV ban? Are process & yield tools next?

News reports suggest that the US government wants the Netherlands government to prevent ASML from even shipping DUV tools to China.

Reuters – ASML shares fall on report US wants to restrict sales to China

It seems as if the US just wants an outright ban on all lithography tool sales to China. We would imagine that the US has likely contacted both Nikon and Canon, which also make DUV tools but not the more advanced EUV tools, as banning ASML alone would be pointless.

A ban on both DUV and EUV tools would put China back in the stone age of chip manufacturing, somewhere in the 1990s at 0.25 micron and worse.
This is not an attempt to hobble China’s chip industry, it’s an attempt to kill it outright. They would be all but out of business… China 2025 would become China never-never land.

It suggests that a ban on US-produced semiconductor equipment is not far behind

If you are going to ban litho equipment, you might as well ban process tools, such as those made by Applied Materials and Lam, and yield management tools like those sold by KLA.

Maybe China could get its act together and develop a half-baked DUV tool or buy tools on the secondary market, but stopping the associated yield management and process tools would be the coup de grace for China’s chip efforts.

“Do as I say, not as I do” is hard to support when it’s an outright ban

We think it’s obviously quite hard for the US to make the case to the Netherlands to halt all sales to China when the US continues to sell billions of dollars of tools to China, which is the number one and fastest-growing market for US semiconductor tool makers. It’s beyond hypocritical. It’s almost laughable.

Obviously, stopping the sales of litho tools will be the most effective, but you should lead by example and halt your own tool sales as well… no reason not to.
Obviously it’s a bit more palatable to hurt a foreign company than US companies that spend millions lobbying in Washington.

You could make the case that the drive laser in EUV tools, which comes from the former Cymer in San Diego, borders on military technology for high-power lasers, but you can’t make the same case for DUV as “military grade.” The US obviously has some leftover leverage from approving the Cymer sale, in the form of some non-public veto power.

Collateral impact of CHIPS Act failing?

In a note written not too long ago, we suggested that if the CHIPS Act fails, the US would resort to sanctions and embargoes on China to get similar results at little to no cost. We can basically kill off the US’s primary competition in the chip space for near zero dollars, without having to spend to bolster the US industry… all it takes is some sanctions and embargoes. Kinda like how organized crime deals with the competition: it sleeps with the fishes.

Semiconductor supply and demand is a zero sum game

Investors and many others seem to forget that the semiconductor industry is a global zero-sum game, meaning that if China can’t produce the chips needed by the world, they will be produced elsewhere: in Taiwan, Japan, the US, Europe, etc.

The world will still turn and we will still get the next generation of chips for our iPhones and AI driven electric cars.

In addition, ASML will still get to sell all the litho tools it can make. If ASML can’t sell its litho tools to China, there will be other non-China chip makers waiting in line to take up the slack, buy the litho tools, and churn out chips at a slightly higher cost than China would have.

Little to no impact

The bottom line is this has little to no impact on ASML’s long term strength or success. The only thing that will happen is that the shipping address of ASML’s tools will change. Any competition ASML was going to have from locally made litho tools in China was going to happen anyway.

Process tools are easier for China to duplicate, but still very difficult, so we don’t see a lot of fallout for AMAT or LRCX any time soon. KLA is next after ASML in terms of complexity and difficulty to copy.

This is likely very good for the US, as we won’t become entirely dependent on China the way we are for pharmaceuticals, solar panels, and LEDs… industries they took over.

The Stocks

ASML’s stock was off 7% on the news, a gross knee-jerk overreaction by investors who don’t understand the zero-sum nature of the chip industry and the fact that ASML will simply sell the same tools to other countries with hardly a missed beat.

They still have a monopoly. That’s not changing. People still need to move down the Moore’s Law curve. That’s not changing either. There is no other game in town…… end of story.

Other stocks were off in sympathy with ASML, which was also a bit of an overreaction. Obviously investors continue to look for bad news in any story. There is not much bad news here… probably more positive for the US (and other countries’) chip industry in the long run.

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also read:

Micron kicks off the down cycle – Chops 2023 capex – Holding inventory off street

Semiconductor Hard or Soft Landing? CHIPS Act?

CHIPS for America DOA?


Podcast EP93: The Unique Role of EMD Electronics to Enable Technology Advances Across Our Industry
by Daniel Nenni on 07-08-2022 at 10:00 am

Dan is joined by Dr. Jacob Woodruff, Head of Technology Scouting and Partnerships with EMD Electronics, where he works to find and advance external early stage and disruptive technologies in the semiconductor and display materials space. Dr. Woodruff is an experienced technologist, having managed global R&D groups developing semiconductor deposition materials at EMD Electronics. Previously at ASM, Jacob led ALD process technology teams, and at SunPower and Nanosolar he managed R&D labs and developed processes for solar cell manufacturing. He holds a Masters in Materials Science and Engineering and a PhD in Physical Chemistry from Stanford University.

EMD Electronics has recently joined as the newest Silicon Catalyst Strategic Partner, details can be found at https://siliconcatalyst.com/silicon-catalyst-welcomes-emd-electronics-as-newest-strategic-partner, with a focused search for startups developing innovative electronic materials required for next-generation semiconductor devices.

Dan discusses the structure and focus of EMD Electronics with Jacob. The company’s primary areas of research and innovation and the far-reaching impact their work has across a large part of the semiconductor value chain are explored.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Barry Paterson of Agile Analog
by Daniel Nenni on 07-08-2022 at 6:00 am


Barry Paterson is the CEO of UK-based analog IP pioneer Agile Analog. He has held senior leadership, engineering, and product management roles at Dialog Semiconductor and Wolfson Microelectronics. He has been involved in the development of custom, mixed-signal silicon solutions for many of the leading mobile and consumer electronics companies across the world. He has a technical background in Ethernet, audio, haptics, and power management and is passionate about working with customers to deliver high-quality products. He has an honours bachelor’s degree in electrical and electronic engineering from the University of Paisley in Scotland.

Agile Analog had a funding round of $19 million last year. What was the key USP that investors liked?

We have a disruptive technology called Composa™. This is an internal tool that enables us to automatically generate analog IP solutions to exactly meet customer specifications on their specific process technology.

What makes Agile Analog’s approach so different to existing analog solutions?

Until now, analog IP was typically custom designed or ported from a different process node. We re-use the tried and tested analog IP circuits within our Composa library. This enables us to build a customer solution using our IP for a specific process node, taking into account the customer’s exact requirements for power, performance, and area. Effectively, the design-once-and-re-use-many-times model of digital IP can be applied to analog IP for the first time.

What are the benefits of your new approach to the customer?

First, the analog IP circuits in the Composa library have been extensively verified and used in previous designs, and the verification is automated and applied to every build of the IP core. Second, our Composa automated approach creates bespoke analog IP solutions in a fraction of the time it would normally take to develop them using traditional analog design methodologies. Third, we enable customers to license and integrate analog IP into their products, allowing them to focus on faster time to market and differentiation within their market.

You have used the phrase process-agnostic in a recent press release. What does that mean for customers?

Composa can simply regenerate an analog IP solution using the PDK selected by a customer. This may be on a different process technology, for example when switching to a different foundry or taking advantage of advances in process technology to improve their products. This allows customers to select the optimum process technology and capacity options they want to use without being constrained by the analog IP. Historically the availability of analog IP has been a constraint in process selection. Our aim at Agile Analog is to remove that constraint.

Why is analog IP becoming so important?

There is huge demand for analog ICs, a market estimated to be worth 83.2 billion USD in 2022 according to IC Insights’ 2022 McClean Report. We aim to address this space by making it very easy to integrate the functions of these discrete analog ICs onto the main SoC to reduce overall BOM cost, complexity, size, and power consumption.

The drive to integrate analog functionality into System on Chip solutions is increasing as consumers expect a better user experience and more functionality. Some Agile Analog customers have been exclusively focused on digital products and now need to integrate some external analog functionality; this is where our foundation IP can be used. Other customers are developing devices with increasing numbers of inputs and outputs to sensors and the real world, which invariably requires analog signals and conversion to and from the digital domain; in this case, our data conversion IP is an ideal solution. In addition to data conversion, there is also a need to optimise power conversion to address the supply requirements of internal and external features, which is where Agile Analog’s power conversion IP can be deployed. Finally, some customers need a system solution where several analog IP cores are integrated and interface directly to their digital cores; for this, we offer a number of bespoke subsystems.

 What IP do you offer?

We are building up our portfolio of foundation IP building blocks, which provide some of the basic analog housekeeping that customers require. We also offer data conversion IP that includes a number of ADCs and DACs. In the power conversion space we have LDOs and voltage references, with future plans for buck and boost converters. Running alongside those IP cores, we want to build IP subsystems for IoT and wearables that will look like digital blocks to an EDA system, making it easy to drop them into the digital design flow. Customers can review our current IP portfolio on our website at agileanalog.com.

 I hear that you are moving to new larger offices, why is that?

As the Agile Analog team has grown, we have needed a larger office with more space for collaboration and innovation. We are looking forward to moving into a significantly larger office at the iconic Radio House building in the heart of Cambridge, UK. This will enable us to scale to over a hundred staff as we grow. Our new office will be our global headquarters and has been designed with collaboration and team networking in mind. We will continue to have a hybrid working model, with a large number of staff working both from home and at the office, so that geography is not a barrier to recruiting the top analog engineers, as well as the software and digital engineers, that we need.

So are you actively recruiting now?

Yes, we have just started a major recruiting drive with the aim of increasing our engineering headcount by over 50%. We are looking for engineers across multiple domains. Analog design engineers who want to work on advanced process nodes, develop different IP cores, and use our latest Composa technology should look at the latest vacancies on our website.

Why do you think analog engineers would beat a path to your door?

Engineers always like a challenge and the chance to do something different. Our approach to analog IP is completely different and is revolutionising the way vital analog interfaces are incorporated into next-generation semiconductors. We are working on multiple IP cores and multiple processes, so we offer the opportunity to build experience in many exciting areas.

https://www.agileanalog.com/contact

Also read:

CEO Interview: Vaysh Kewada of Salience Labs

CEO Interview: Chuck Gershman of Owl AI

CEO Interviews: Dr Ali El Kaafarani of PQShield


Altair at #59DAC with the Concept Engineering Acquisition
by Daniel Nenni on 07-07-2022 at 10:00 am


The Design Automation Conference has been the pinnacle for semiconductor design for almost 60 years. This year will be my 38th DAC and I can’t wait to see everyone again. One of the companies I will be spending time with this year is Altair.

Last month Altair acquired our friends at Concept Engineering, the leading provider of electronic system visualization software. Prior to that, Altair acquired our friends at Runtime Design Automation. The Runtime people are still at Altair, which is a very good sign. Prior to the Runtime acquisition I had little contact with Altair, but over the last two years I have developed a great amount of respect for what they have accomplished, absolutely.

Altair will be at DAC this year in a very big way, which I greatly appreciate. Here is a quick preview from their DAC landing page:

Compute Intelligence for Breakthrough Results

Visit us at #59DAC!

Join us to learn more about Altair’s world-class, high-throughput solutions for every step of the semiconductor design process (and more!).

Altair solutions are used by leading companies all over the globe to keep EDA, HPC, and cloud compute resources running smoothly and efficiently. We care about the same critical components you do — including cores, licenses, and emulation — and know that even the most capable hardware can’t do its job without the right tools to enable top performance and high throughput.

Schedule Meeting

Rapidly Advancing Electronics: Altair Solutions Enable Rapid Growth for Wired Connectivity Leader Kandou

Faced with rapid growth, the team at Kandou needed to manage workloads and licensing for their expensive EDA tools. The team chose Altair® Accelerator™ for job scheduling and Altair® Monitor™ for real-time license monitoring and management, resulting in improved product development, getting to market faster, and saving money on expensive EDA tools.

Read the Customer Story

Inphi Corporation Speeds Up Semiconductor Design with Altair Accelerator

The team at Inphi understands the importance of HPC and EDA software performance optimization better than most. They evaluated several competing solutions before selecting Altair Accelerator, which stood out among the competition for superior performance and Altair’s reputation for excellent customer service.

Read the Customer Story

CEA Speeds Up EDA for Research: Powering R&D at the French Alternative Energies and Atomic Energy Commission

CEA Tech, the Grenoble-based technology research unit for the French Alternative Energies and Atomic Energy Commission (CEA), is a global leader in miniaturization technologies that enable smart digital systems and secure, energy-efficient solutions for industry.

Read the Customer Story

Using I/O Profiling to Migrate and Right-size EDA Workloads in Microsoft Azure

Semiconductor companies are taking advantage of Microsoft Azure HPC infrastructures for their complex electronic design automation (EDA) software. When one of the largest semiconductor companies asked for help using Azure to run its EDA workloads, Microsoft teamed up with Altair. This presentation outlines how Microsoft used Altair Breeze™ to diagnose I/O patterns, choose the workflow segments best suited for the cloud, and right-size the Azure infrastructure. The result was better performance and lower costs for our semiconductor customer.

Watch Now

Measuring Success in Semiconductor Design Optimization: What Metrics Matter?

There are few fields in the world as competitive as semiconductor design exploration and verification. Teams might run tens of millions of compute jobs in a single day on their quest to bring new chips to market first, requiring vast quantities of compute and, increasingly, cloud and emulator resources, as well as expensive EDA licenses, and the all-important resource, time. In this roundtable, experts will discuss the license-, job-, compute-, and host-based metrics, highlighting the optimization strategies that edge out the competition and drive up profitability.

Learn More

I hope to see you there!

Also Read:

Future.HPC is Coming!

Six Essential Steps For Optimizing EDA Productivity

Latest Updates to Altair Accelerator, the Industry’s Fastest Enterprise Job Scheduler


CXL Verification. A Siemens EDA Perspective
by Bernard Murphy on 07-07-2022 at 6:00 am


Amid the alphabet soup of inter-die/chip coherent access protocols, CXL is gaining a lot of traction. Originally proposed by Intel for cross-board and cross-backplane connectivity to accelerators of various types (GPU, AI, warm storage, etc.), a who’s who of systems and chip companies now sits on the board, joined by an equally impressive list of contributing members. The standard enables coherent memory sharing between a central processor/CPU cluster, with its own cache-coherent memory subsystem, and memory/caching on each of multiple accelerator systems. This greatly simplifies life for software developers, since memory consistency is managed in hardware. No need to worry about this in software; it’s all just one unified memory model, whether software is running on the processor or on an accelerator.

CXL and PCIe

As an Intel-initiated standard, CXL layers on top of PCIe (as does NVMe, but that’s another story). PCIe already provides the physical interface standard, plus the protocols and traffic management for IO communication. CXL builds on top of this for memory and cache communication between devices. This makes it a complex protocol to verify out of the gate, requiring PCIe compliance just as a starting point.

CXL layers three protocols on top of PCIe:

  • CXL.io for configuration and a variety of administrative functions
  • CXL.cache, providing peripherals with low-latency access to host memory
  • CXL.mem, allowing the host to coherently access memory attached to CXL devices

The coherency requirement adds more complications, such as compliance with the associated coherency protocol (e.g. MESI). Add in Integrity and Data Encryption (IDE) to ensure secure connection and computing. Put all of this together and it is clear that CXL protocol checking is a very complex beast, for which a well-defined VIP would be enormously helpful.

Questa VIP for CXL

Siemens EDA has built a Questa VIP to address this need. QVIP can model any or all of the CXL-compliant components in a system, including IDE, generating fully compliant stimulus in host, device, or passive-device roles. The VIP comes with a comprehensive verification plan covering simple and complex scenarios, and with predefined sequences to support generating these scenarios. Checkers are provided to validate compliance with the coherency protocol of choice, and to validate data integrity through cache reads, writes, and updates.
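
For a flavor of how this looks from the testbench side, here is a minimal UVM sketch of a directed CXL.mem read sequence. To be clear, the cxl_* class names and the CXL_MEM_RD command are hypothetical placeholders for illustration; a real Questa VIP supplies its own, much richer transaction and sequence classes:

  import uvm_pkg::*;
  `include "uvm_macros.svh"

  // Hypothetical CXL.mem command and transaction item, for illustration only.
  typedef enum {CXL_MEM_RD, CXL_MEM_WR} cxl_mem_cmd_e;

  class cxl_txn extends uvm_sequence_item;
    rand cxl_mem_cmd_e cmd;
    rand bit [63:0]    addr;
    rand bit [511:0]   data;   // one 64-byte cache line
    `uvm_object_utils(cxl_txn)
    function new(string name = "cxl_txn");
      super.new(name);
    endfunction
  endclass

  // Directed sequence: issue a coherent read to a chosen address.
  class cxl_mem_read_seq extends uvm_sequence #(cxl_txn);
    rand bit [63:0] target_addr;
    `uvm_object_utils(cxl_mem_read_seq)
    function new(string name = "cxl_mem_read_seq");
      super.new(name);
    endfunction
    task body();
      `uvm_do_with(req, { cmd == CXL_MEM_RD; addr == target_addr; })
    endtask
  endclass

A test would create this sequence, constrain target_addr, and start it on the VIP agent’s sequencer; the VIP’s checkers and coverpoints then do the heavy lifting described above.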

When a problem is found, possibly elsewhere in the system, the VIP provides detailed logging in both directions, device to host and host to device. It logs all information on the CXL interconnect by timestamp, which simplifies tracking problems back to transactions. It is also possible to enable detailed debug messages: once you know roughly where you want to look, you can trigger detailed transaction information in both directions.

Finally, for coverage, the testplan supplied with the VIP is designed to guide you to high coverage over your CXL compliance testing. Table entries define the main test objective, and each objective comes with predefined coverpoints. You can tweak weights for these as appropriate to your verification goals. So, it’s an all-in-one package: VIP, testplan, debug support, and coverage. You just have to dial in your menu choices.

CXL looks likely to be the multi-chip/chiplet solution of choice for coherent memory sharing. This means that you should expect to see this play a larger role in verification planning. If you want to learn more about the Questa Verification IP solution, click HERE.


What Quantum Means for Electronic Design Automation
by Kelly Damalou and Kostas Nikellis on 07-06-2022 at 10:00 am


In 1982, Richard Feynman, a theoretical physicist and Nobel Prize winner, proposed the first quantum computer. Feynman’s quantum computer would have the capacity to facilitate traditional algorithms and quantum circuits, with the goal of simulating quantum behavior as it occurs in nature. The systems Feynman wanted to simulate could not be modeled by even a massively parallel classical computer. To use Feynman’s words, “Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem, because it doesn’t look so easy.”

Today, companies like Google, Amazon, Microsoft, IBM, and D-Wave are working to bring Feynman’s ambitious theories to life by designing quantum hardware processing units to address some of the world’s most complicated problems—problems it would take a traditional computer months or even years to solve (if ever). They’re tackling cryptography, blockchain, chemistry, biology, financial modeling, and beyond.

The scalability of their solutions relies on a growing number of qubits. Qubits are the building blocks of quantum processing; they’re similar to bits, the building blocks of traditional processing units. IBM’s roadmap for scaling quantum technology shows the 27-qubit IBM Q System One release in 2019, and less than 5 years later, they expect to release the next family of IBM Quantum systems at 1,121 qubits.

Achieving a sufficient level of qubit quality is the main challenge in making large-scale quantum computers possible. Today, error correction is a critical operation in quantum systems, and it preoccupies the vast majority of qubits in each quantum processor. Improving fault tolerance in quantum computing requires error correction that’s faster than error occurrence. Beyond error correction, there are plenty of challenges on the road to designing a truly fault-tolerant quantum computer with exact, mathematically accurate results. Qubit fidelity, qubit connectivity, granularity of phase, probability of amplitude, and circuit depth are all important considerations in this pursuit.

While quantum computing poses a major technological leap forward, there are similarities between quantum designs and traditional IC designs. Those similarities allow the electronic design automation (EDA) industry to build on existing knowledge and experience from IC workflows to tackle quantum processing unit design.

Logic Synthesis in Quantum and RFIC Designs

In quantum designs on superconductive silicon, the basic building block is the Josephson Junction. In radio-frequency integrated circuit (RFIC) chips, that role is played by transistors. In both situations, these fundamental building blocks are used to build gates that ultimately form qubits in quantum and bits in RFIC.

Image source: “An Introduction to the Transmon Qubit for Electromagnetic Engineers”, T. E. Roth, R. Ma, W. C. Chew, 2021, arXiv:2106.11352 [quant-ph]

Caption: From the Josephson junction to the quantum processor

In RFICs, the state of a bit can be read with certainty—it’s either 0 or 1. Determining the state of a qubit is much more complicated. Yet, it’s a critical step for accurate calculations. Due to the peculiar laws of quantum mechanics, qubits can exist in more than one state at the same time—a phenomenon called superposition. Superposition allows a qubit to assume a value of 0, 1, or a linear combination of 0 and 1. It’s instrumental to the operations of a quantum computer because it provides exponential speedups in memory and processing. The quantum state is represented inside the quantum hardware, but when qubits are measured, the quantum computer reports out a 0 or a 1 for each.
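
For reference, the textbook notation for this: a qubit state is written $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$, where $\alpha$ and $\beta$ are complex amplitudes satisfying $|\alpha|^2 + |\beta|^2 = 1$; a measurement returns 0 with probability $|\alpha|^2$ and 1 with probability $|\beta|^2$.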

Entanglement is another key quantum mechanical property that describes how the state of one qubit can depend on the state of another. In other words, observing one qubit can reveal the state of its unobserved pair. Unfortunately, observation (i.e., measurement) of the state of a qubit comes at a cost. When measuring, the quantum system is no longer isolated, and its coherence—a definite phase relation between different states—collapses. This phenomenon, quantum decoherence, is roughly described as information loss. The decoherence mechanism is heavily influenced by self and mutual inductance among qubits, which must be modeled with very high accuracy to avoid chip malfunctions.

Quantum processors are frequently implemented using superconductive silicon because it’s lower in cost and easy to scale. Further, it offers longer coherence times compared to other quantum hardware designs. In this implementation, integrated circuits (ICs) are designed using traditional silicon processes and cooled down to temperatures very close to zero Kelvin. Traditional electromagnetic solvers struggle with the complexity and size of quantum systems, so simulation providers need to step up their capacity to meet the moment.

Image credits: IBM

Caption: An IBM quantum computer

Modeling Inductance in Quantum and RFIC Designs

It’s worth noting that superconductors are not new, exotic materials. Common metals like niobium or aluminum are found in superconducting applications. Once these metals are cooled down to a few millikelvin using a dilution refrigerator, a portion of their electrons no longer flow as they normally would; instead, they form Cooper pairs. This superconductive current flow gives rise to new electromagnetic effects that need to be accurately modeled. For example, inductance is no longer simply the sum of self and mutual inductance. It includes an additional term, called kinetic inductance:
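
$L_{total} = L_{self} + L_{mutual} + L_{kinetic}$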

This summation is not as straightforward as it looks. Kinetic inductance has drastically different properties than self and mutual inductance: where those geometric terms are essentially frequency and temperature independent, kinetic inductance is strongly temperature dependent. In a similar fashion, the minimal residual resistance in a superconductor behaves differently than the ohmic resistance of normal conductors; it is proportional to the square of frequency. Electromagnetic modeling tools must account for these physical phenomena both accurately and efficiently.

Scale also poses challenges for electromagnetic solvers. Josephson Junctions, the basic building block of the physical qubit, combine with superconductive loops to form qubit circuits. The metal paths form junctions and loops with dimensions of just a few nanometers. While qubits only need a tiny piece of layout area, they must be combined with much larger circuits for various operations (e.g., control, coupling, measurement). The ideal electromagnetic modeling tool for superconductive hardware design will need to maintain both accuracy and efficiency for layouts ranging from several millimeters down to a few nanometers to be beneficial in all stages of superconductive quantum hardware design.

Image source: “Tunable Topological Beam Splitter in Superconducting Circuit Lattice”, L. Qi, et al., Quantum Rep. 2021, 3(1), 1-12

Caption: An indicative quantum circuit


Looking Forward (or backward – It’s hard to tell with Quantum)

Designers in the quantum computing space need highly accurate electromagnetic models for prototyping and innovation. Simulation providers need to rise to the challenge of scaling to accommodate large, complex designs that push the boundaries of electromagnetic solvers with more and more qubits.

Ansys, the leader in multiphysics simulation, recently launched a new high-capacity, high-speed electromagnetic solver for superconductive silicon. The new solver, RaptorQu, is designed to interface seamlessly with existing silicon design flows and processes. Thus far, our partners are particularly pleased with their ability to accurately predict the performance of their quantum computing circuits.

Caption: Correlation of RaptorQu with HFSS on inductance (left) and resistance (right) for a superconductive circuit

Interested? For updates, keep an eye on our blog.

Dr. Kostas Nikellis, R&D Director at Ansys, Inc., is responsible for the evolution of the electromagnetic modeling engine for high speed and RF SoC silicon designs. He has a broad background in electromagnetic modeling, RF and high-speed silicon design, with several patents and publications in these areas. He joined Helic, Inc. in 2002, and served as R&D Director from 2016 to 2019, when the company was acquired by Ansys, Inc. Dr. Nikellis received his diploma and PhD in Electrical and Computer Engineering in 2000 and 2006 respectively, both from the National Technical University of Athens and his M.B.A. from University of Piraeus in 2014.

Kelly Damalou is Product Manager for the Ansys on-chip electromagnetic simulation portfolio. For the past 20 years she has worked closely with leading semiconductor companies, helping them address their electromagnetic challenges. She joined Ansys in 2019 through the acquisition of Helic, where, since 2004 she held several positions both in Product Development and Field Operations. Kelly holds a diploma in Electrical Engineering from the University of Patras, Greece, and an MBA from the University of Piraeus, Greece.

Also Read:

The Lines Are Blurring Between System and Silicon. You’re Not Ready.

Multiphysics, Multivariate Analysis: An Imperative for Today’s 3D-IC Designs

A Different Perspective: Ansys’ View on the Central Issues Driving EDA Today


Multi-FPGA Prototyping Software – Never Enough of a Good Thing
by Daniel Nenni on 07-06-2022 at 8:00 am


Building a multi-FPGA prototype for SoC verification is complex, with many interdependent parts – and is “always on a clock”.  The best multi-FPGA prototype implementation is worthless if it’s not up and running early in the SoC design cycle, where it offers the highest verification ROI in terms of minimizing the cost of bug fixes and accelerating SoC time-to-market.  So, any automation software that enables a more accurate, higher-performing prototype implementation in less time should be warmly welcomed by the verification teams prototyping large SoCs.

There are at least three pertinent challenges in the implementation of multi-FPGA prototypes:

  1. Cutting large SoC designs into blocks that will “fit” into each FPGA of a multi-FPGA prototyping platform,
  2. Assuring the overall timing integrity of the multi-FPGA prototype when all the FPGAs are connected together, and
  3. Managing the trade-off between prototype performance and the scarcity of FPGA I/O pins, which limits the amount of logic in each partition “cut” when the design is spread across several FPGAs.

Adding to these prototype implementation challenges are second-order challenges, like connecting thousands of debug probes, which consumes FPGA connectivity and impacts utilization, and connecting to real-world target systems, which consumes FPGA connectivity and I/O. Both affect how easy, or difficult, it is to compile all the FPGAs into a multi-FPGA prototype in an acceptable amount of time with manageable effort.  The tighter you pack the FPGAs (higher utilization), the harder it is for the FPGA compiler tools to find a place-and-route solution that meets timing targets, and the longer they will take to complete.  But we’ll defer discussion of these challenges to a future blog.

Automation tools for partitioning large SoCs for multi-FPGA prototyping should offer a spectrum of automation levels, from heavily assisted partitioning, where the user “guides” the partitioning process with specific design knowledge to reach a specific partitioning result, to fully automatic partitioning, where the user kicks off a partition run and goes for coffee while the partitioner does its thing.  The basis for choosing the level of automation may be as simple as project schedule, where the designer wants to get to a working multi-FPGA prototype in a hurry and is willing to sacrifice prototype performance for fast compile times.  Some SoC designs lend themselves to intuitive partitioning across multiple FPGAs, where the partition “cut lines” are easily imagined by the designer, while other designs call for higher automation due to the complexity of the critical timing paths, the prototype’s target performance, or an aggressive project schedule.  Partitioning at the RTL level is great for early estimates of performance and of prototype fit into a multi-FPGA hardware platform, while heavy designer involvement in partitioning may go straight to the gate level and render RTL partitioning unnecessary.

As unimaginable as it may seem today, early commercial multi-FPGA prototyping products did not include integrated timing analysis.  Correct prototype timing in the early days was achieved by applying input stimulus to the prototype, observing the prototype output waveforms with debug probes, and then manually adjusting the relative edge timing of failing paths by inserting additional FPGA logic gates to fix hold-time violations.  That approach quickly drew the wrath of early users and led to the integration of timing analysis into the FPGA prototyping flow.  Today’s complex multi-FPGA prototypes would be unmanageably difficult without system-level timing analysis that accounts for the timing of multiplexed FPGA I/O pins and the interconnect cables between FPGAs.

The scarcity of FPGA I/O pins continues to be the bane of multi-FPGA prototyping, even with the new massively large prototyping FPGAs from Intel and Xilinx (up to 80M usable gates per FPGA), because the number of “natural partition cut” interconnections between SoC design partitions often far exceeds the available I/O pins on the FPGAs.  The number of partition interconnections can run to the tens of thousands, whereas the number of available I/O pins on the latest prototyping FPGAs is only a few thousand (1,976 maximum single-ended HP I/Os for the Xilinx VU19P, and 2,304 maximum user I/O pins for the Intel Stratix GX 10M).  Consequently, multi-FPGA prototyping must often resort to pin-multiplexing the FPGA I/O pins.  The pin-multiplexing is usually accomplished with TDM soft-IP implemented in FPGA logic gates, with the embedded multiplexors run at the upper limit of the FPGA’s switching speeds.  Higher levels of pin-multiplexing (2:1, 4:1, etc.) expand the effective FPGA I/O but sacrifice prototype performance.
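
As a rough illustration of the trade-off (the net and pin counts here are hypothetical, chosen only for easy arithmetic): if a partition cut produces 8,000 inter-FPGA nets and roughly 2,000 user I/O pins are usable at each FPGA boundary, the minimum multiplexing ratio is $\lceil 8000 / 2000 \rceil = 4$, i.e. at least 4:1 TDM.  Each logical signal then gets only one slot per four-slot TDM frame, so the achievable prototype clock drops roughly in proportion to the ratio, plus serialization overhead.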

So it goes without saying that more automation for multi-FPGA prototype implementation is a good thing, and it comes as no surprise that S2C would offer more of a good thing to its customers by continuing to advance its multi-FPGA prototyping software.  Hence, S2C has recently announced a new release of its Prodigy Player Pro-7™ prototyping software, for use with its Logic System and Logic Matrix families of multi-FPGA prototyping hardware platforms.  S2C has been in production for a while now with these multi-FPGA hardware platforms, which incorporate the largest available prototyping FPGAs, like the Xilinx VU19P and the Intel Stratix GX 10M.

According to S2C, the salient features of the new Player Pro-7 software include:

  • RTL Partitioning and Module Replication to support Parallel Design Compilation and reduce Time-to-Implementation
  • Pre/Post-Partition System-Level Timing Analysis for Increased Productivity
  • SerDes TDM Mode for Optimal Multi-FPGA Partition Interconnect and Higher Prototype Performance

The new Player Pro-7 software suite is organized into three separate tools: Player Pro-CompileTime™, Player Pro-DebugTime™, and Player Pro-RunTime™.  While the new releases of the DebugTime and RunTime software include upgrades for multi-FPGA debug probing and trace viewing, and for strengthening prototype hardware platform control and test, respectively, the most significant multi-FPGA prototyping feature improvements are in the new CompileTime software.

Previous releases of the Player Pro software supported design partitioning at the gate level, so RTL partitioning is a big step forward for S2C, simplifying the management of multi-core design implementations and enabling an early assessment of the number of prototype FPGAs required.

For more information about S2C’s multi-FPGA prototyping hardware and software, please visit S2C’s website at www.s2cinc.com.  Or stop by S2C’s booth at the 2022 Design Automation Conference, July 11th to 13th at the Moscone Center in San Francisco.

Also read:

Flexible prototyping for validation and firmware workflows

White Paper: Advanced SoC Debug with Multi-FPGA Prototyping

Prototype enables new synergy – how Artosyn helps their customers succeed


Accellera Update: CDC, Safety and AMS
by Bernard Murphy on 07-06-2022 at 6:00 am


I recently had an update from Lu Dai, Chairman of Accellera, also Sr. Director of Engineering at Qualcomm. He’s always a pleasure to talk to, in this instance giving me a capsule summary of status in three areas that interested me: CDC, functional safety, and AMS. I will start with CDC, a proposed new working group in Accellera. To manage hierarchical CDC analysis back in my Atrenta days, you would first analyze a block, then use that analysis to define pseudo-constraints on the ports of the block, and so on up through the hierarchy. These pseudo-constraints might capture things like internal input or output synchronization, with related clock info. A CDC-centric abstraction of the block, in effect.

We should have guessed that other tool providers would do something similar, with their own constraint extensions. Which creates a problem when using IP from multiple vendors, each of whom uses their own tool for CDC. Maybe you would have to re-do the analysis from scratch for a block? That may not be possible for encrypted RTL. This is an obvious candidate for standardization: defining abstractions in a common language. SDC-based, no doubt, since these constraints must intermingle with the usual input, output, and clock constraints. A worthy effort in support of CDC verification teams.

Functional Safety

It might seem that ISO 26262 is the final word in defining functional safety (FuSa) requirements for vehicle electronic design. In fact, like most ISO standards, ISO 26262 is more about process than detailed guidelines. As tool, IP, and system development has advanced to comply with FuSa needs, it has become obvious that we need more rigor around those expectations. Take a simple example: what columns should appear in an FMEDA table, in what order, and with what headings? Or could this information be scripted instead? None of this is nailed down by ISO 26262. Formats and scripting approaches are completely unconstrained, creating a potential nightmare for integrators.

More generally, there is a need to ensure standardized interoperability in creating and exchanging FuSa information between suppliers and integrators. Which should in turn encourage more automation. So when I claim my IP meets some safety goal, you don’t just have to take my word for it. You can run your own independent checks. On a related note, the methodology should support traceability (a favorite topic of mine). Allowing for validation across the development lifecycle, from IPs to cars. Incidentally there is a nice intro to Accellera work in this area from DAC 2021.

Lu mentioned a related effort in IEEE. I believe this is IEEE P2851, looking at some fairly closely related topics. Lu tells me the Accellera and IEEE groups have had a number of discussions to ensure they won’t trip over each other. His quick and dirty summary is that Accellera is handling the low-level tool and format details while IEEE is aiming somewhat higher. I’m sure that eventually the two efforts will be merged in some manner.

UVM-AMS

The stated objective of this working group is to standardize a method to drive and monitor analog/mixed-signal nets within UVM. Also to define a framework for the creation of analog/mixed-signal verification components by introducing extensions to digital-centric verification IP.

In talking with Lu, the initial objective is to align with existing AMS efforts in Verilog, SystemVerilog, and SystemC. There’s a nice background to the complexities of AMS modeling in simulation HERE, for those of us who might have thought this should be easy to solve. Even the basics of real number modeling are still not frozen. Analog signals are not just continuous variants of digital signals; think of the complex number representations common in RF. So there’s history and learning which the standard should leverage yet not disrupt unnecessarily.
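
To give a flavor of what real number modeling looks like today, here is a generic SystemVerilog (IEEE 1800) sketch, not tied to any particular UVM-AMS proposal: a user-defined nettype carries a real-valued quantity, with a resolution function that sums contributions from multiple drivers, as currents into a node would:

  // Generic real-number-modeling sketch (IEEE 1800 SystemVerilog),
  // not tied to any specific UVM-AMS proposal.
  typedef real current_t;

  // Resolution function: multiple drivers on the net resolve by summation.
  function automatic current_t res_sum(input current_t drivers[]);
    res_sum = 0.0;
    foreach (drivers[i]) res_sum += drivers[i];
  endfunction

  nettype current_t current_net with res_sum;

  // Ideal 8-bit current-steering DAC model: 1 uA per LSB.
  module dac_rnm (input logic [7:0] code, output current_net iout);
    assign iout = code * 1.0e-6;
  endmodule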

AMS teams want the benefits of UVM methodologies, but they don’t want to start from scratch. Aligning those benefits with existing AMS requirements is the current focus. Lu says that many of these requirements aren’t language specific. The working group is figuring out the semantics of the methodology first, then will look more closely at syntax issues.

Accellera will be presenting more on this topic at DAC 2022 so you’ll have an opportunity to learn more there.