
An EDA AI Master Class by Synopsys CEO Aart de Geus
by Daniel Nenni on 08-19-2022 at 8:00 am


I consider Dr. Aart de Geus one of the founding fathers of EDA and one of the most interesting people in the semiconductor industry. So it is not a surprise that Aart was chosen to attend the CHIPS Act signing at the White House.

Here is his current corporate bio:

Since co-founding Synopsys in 1986, Dr. Aart de Geus has expanded Synopsys from a start-up synthesis company to a global high-tech leader. He has long been considered one of the world’s leading experts on logic synthesis and simulation, and frequently keynotes major conferences in electronics and design automation. Dr. de Geus has been widely recognized for his technical, business, and community achievements with multiple awards including Electronic Business Magazine’s “CEO of the Year,” the IEEE Robert N. Noyce Medal, the GSA Morris Chang Exemplary Leadership Award, the Silicon Valley Engineering Council Hall of Fame Award, and the SVLG Lifetime Achievement Award. He serves on the Boards of the Silicon Valley Leadership Group, Applied Materials, the Global Semiconductor Alliance, and the Electronic System Design Alliance.

You should know that Aart is also an accomplished musician and one of the driving forces behind the band Legally Blue, which my beautiful wife and I follow.

As an avid reader of earnings call transcripts from leading companies in the semiconductor ecosystem, I found this quarter’s SNPS call to be one of the best I have read in some time.

Synopsys, Inc. (SNPS) CEO Aart de Geus on Q3 2022 Results – Earnings Call Transcript

You should read the whole thing but here are some intriguing AI snippets:

Aart de Geus
Leading the way is our award-winning DSO.ai artificial intelligence design solution, which is revolutionizing chip design. First to market over two years ago with technology that is still unmatched today, it delivers outstanding productivity improvements that are already driving substantial increases in customer commitments.

DSO.ai is also driving very significant low-power improvements exemplified by a large automotive chip maker, achieving a 30% power reduction. These compelling outcomes are driving a high pace of adoption for production tape-outs across verticals and a broad set of process nodes.

Daniel Nenni
As I have mentioned before, AI is the #1 driver for the semiconductor industry, and that now includes EDA. Not only will almost every chip be touched by AI, but the entire semiconductor ecosystem will be fueled by it. The AI talk continued in the Q&A:

Jason Celino
So Aart, the references for DSO.ai, the customer wins, they’re quite impressive. How are customers using DSO.ai today, is it more proof-of-concept type work? Is it leading edge type work? And then are these customers evaluating Cadence Cerebrus simultaneously?

Aart de Geus
Well, the reason I mentioned that it has impact on our business is because they’re using this in production. And yes, of course, the most advanced people have always been the people that first pick up on the most capable new tools. And so, these are very advanced, often large companies that are doing now many designs with this capability because the value is high, and they are definitely seeing the issue of insufficient talent. And so that’s sort of the main space.

I don’t know actually that we see much of our competition, not to put them down or anything like that. I’m sure they’re doing good stuff. But the advances that we’ve made in the last year, even in my own book, are quite remarkable and are broadening, by the way, to more and more capabilities going forward.

So I think we’re into a whole next phase of what EDA will mean to our customers. And very often, advanced users try very quickly and then they’re very careful. They tried very quickly, and they’re absolutely adopting.

Jay Vleeschhouwer
Aart, a technology question for you. So, on the subject of AI, two things. First, could you talk about how you do your own internal development for AI? That is, for DSO.ai. The reason I ask is, as I’m sure you’re well aware, there’s an arms race across multiple software companies, each claiming to have some AI.

And obviously, you do. It’s in production. But I’m curious as to how you distinguish or carve out your own internal AI, specifically for EDA purposes, as compared to the developments you do for the tools themselves. And then more broadly, how do you think about the implications of AI for the IP business? The reason I ask that is Synopsys, in a recent technology presentation at an Ansys conference, spoke about, for example, AI in the context of design reuse and design remastering, all of which would seem to have some implication for IP, where, of course, you’re number one, at least in EDA.

Aart de Geus
Okay. Let me start with AI. The first thing to understand with AI is AI is a very advanced, different way of programming the solution to a variety of problems. And of course, we use the traditional approach, but we also use what’s called pattern matching where you find situations — where the recognition of the situation allows you to improve something for the better. Now that statement applies to the domain that you apply it to.

And so if we took our DSO.ai and say, “Hey, tomorrow morning, we’re going to do, I don’t know, blood diagnostics and learn something about patients,” we would have initially 0 to offer because the AI needs to be matched in its intent to the area of the problem.

And by the way, I — in fact you alluded to that a minute ago on the question of why the AI chips, all these people are essentially optimizing for their domain, right? Well, we have optimized for our domain, and our domain is unbelievably complex because we have, arguably, some of the most complex search spaces, meaning those are all the potential solutions finding the right one in any field. And so it’s really the combination of the understanding of what we do and then the exploration with AI that fits together.

Secondly, AI for IP, of course, we use it ourselves. And a very simple reason would be one could consider Synopsys as one of the most advanced design companies in the world for what we do. And so, we don’t use our designs to put chips on the markets. We don’t design chips. We design IP blocks. But the concept is actually similar.

Third, you mentioned something interesting that I’m well familiar with, which is the need and the desire to sometimes take an existing design and migrate it to a different technology node. Sometimes it’s called remastering. Sometimes it’s called retargeting. That’s the word you used, I think. And initially, we did some experiments already a year ago for knowing — going from one node to another node that was pretty similar.

And we’ve got excellent results, and we’ve got them fast. And we could learn from the existing design and apply it to the new one. Meanwhile, we’ve vastly improved on that because we’ve been able to move many clicks forward in terms of nodal technology and still get much better results. And so I’m the first one to say we’re at the beginning of a big journey. But so far, it’s a pretty cool journey.

Daniel Nenni
The other notable item in the call is that Synopsys will break the $5B revenue mark this year, which in itself is historic for EDA. If you ever get a chance to see or hear Aart speak, I would strongly suggest you do. In fact, he will speak at the GSA Executive Forum on September 27th. I hope to see you there.

Also read:

WEBINAR: Design and Verify State-of-the-Art RFICs using Synopsys / Ansys Custom Design Flow

DSP IP for High Performance Sensor Fusion on an Embedded Budget

Intelligently Optimizing Constrained Random



CEO Interview: Jay Dawani of Lemurian Labs
by Daniel Nenni on 08-19-2022 at 6:00 am


Jay Dawani is the co-founder & CEO at Lemurian Labs, a startup developing a novel processor to enable autonomous robots to fully leverage the capabilities of modern day AI within their current energy, space, and latency constraints.

Prior to founding Lemurian, Jay had founded two other companies in the AI space. He is also the author of the top-rated “Mathematics for Deep Learning” book.

Jay has also served as the CTO of BlocPlay, a public company building a blockchain-based gaming platform, and as Director of AI at GEC, where he led the development of several client projects covering retail, algorithmic trading, protein folding, robots for space exploration, recommendation systems, and more. In his spare time, he has been an advisor at NASA FDL.

Can you give us the backstory on Lemurian?

We started Lemurian because of the observation that the robotics field is moving toward software-defined robots and away from the large, stationary, fixed-function robots that have been the norm for the last few decades. The main advantage here is the ability to give robots new capabilities over time through training with more simulated data and over-the-air updates. Three of the biggest drivers for this shift are deep learning and reinforcement learning; more powerful compute; and synthetic data. Most robotics companies are unable to fully leverage the advancements in deep learning and reinforcement learning because they lack sufficient compute performance within their power consumption and latency budgets. Our roadmap is aligned to these customer needs, and we are focused on building the processor that addresses these concerns. In some ways, we are building the processor we would need if we were to launch a robotics company.

There have been over 100 companies created in the last 10 or so years focusing on AI hardware. What makes Lemurian different?

We are developing a processor that enables AI in robots with far less power and lower latency by leveraging custom arithmetic to do matrix multiplication differently, so that it is reliable, efficient, and deterministic. Our approach is well suited to the needs of the growing autonomous robotics industry, which can include anything from a home vacuum cleaner to a materials-handling robot in a warehouse or a vehicle outdoors performing last-mile delivery. What many of these applications have in common is the need to respond rapidly to changes in their local environment using very low power; they cannot wait for a signal from a data center in the cloud. These applications need to be programmed for their particular context, with high precision and deterministic actions. Determinism in our case means generating the same answer every time given the same inputs, which is essential for safety. General-purpose AI processors, as others are building them, do not address these essential requirements.
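To illustrate the determinism point in a general way (this is a generic numerical sketch, not Lemurian’s arithmetic), the snippet below shows that floating-point accumulation can depend on the order of operations, whereas integer or fixed-point accumulation gives bit-identical results regardless of order:

```python
# Illustration only: why floating-point accumulation can vary with execution
# order (as it might on parallel hardware), while integer/fixed-point
# accumulation does not. This is a generic example, not Lemurian's format.
import random

random.seed(0)
values = [random.uniform(-1.0, 1.0) for _ in range(100_000)]

# Two different reduction orders over the same data.
forward = sum(values)
reverse = sum(reversed(values))
print(f"float forward sum: {forward!r}")
print(f"float reverse sum: {reverse!r}")
print(f"orders agree?      {forward == reverse}")   # typically False

# Fixed-point: scale to integers (Q1.15-style) and accumulate exactly.
SCALE = 1 << 15
fixed = [round(v * SCALE) for v in values]
print(f"fixed-point sums agree? {sum(fixed) == sum(reversed(fixed))}")  # True
```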

Are you saying that the robotics industry needs a dedicated processor that is different from what most AI hardware companies are building?

Absolutely! Most companies focusing on edge AI inference are over-optimized for computer vision, but robotics is more than computer vision, where the objective is conventionally to detect whether something is present in an image or to classify it. A robot, on the other hand, interacts with the real world. It has to perceive, decide, plan, and act based on often incomplete and uncertain information.

For example, a bin-picking and sorting robot needs to be able to perceive the difference between objects and interact with them appropriately with high speed and accuracy. With the availability of a domain-specific compute platform, robots will be able to process more sensor data in less time, which will allow many mobile robots to complete longer missions or tasks and react to changes in the environment more quickly. In some applications, it is hard to collect enough good data to train a robot, so companies are using behavior cloning, in which a robot learns by observing demonstrations from a human in a supervised setting.

These autonomous robotic applications require an entirely new approach such as the one we are taking with our processor, which has been designed from first principles. Our solution is software-defined, high precision, deterministic, and energy efficient. That is why we are generating so much interest in this market segment from some of the leading companies. Fundamentally, we are doing for deep reinforcement learning inference at the edge what NVIDIA did for deep learning training in the data center.

Very cool. So what is unique about the technology that you are building?

Fundamentally, we are building a software-managed, distributed dataflow machine that leverages custom arithmetic, which overall reduces power consumption and increases silicon efficiency. The demands of AI are now so severe that they are breaking the old way of doing things, and that is creating a renaissance in computer architectures, reviving ideas like dataflow and non-von Neumann designs. A lot of these ideas are commonplace in digital signal processing and high-performance data acquisition because those systems are constrained by silicon area or power.

For our target workloads, we were able to develop an arithmetic that is several orders of magnitude more efficient for matrix multiplications. It is ideally suited to modern-day AI, which depends heavily on linear algebra algorithms, and it allows us to make better use of the available transistors. Other linear algebra-dominated application verticals, such as computer-aided engineering or computer graphics, require floating-point. But floating-point arithmetic, as we know, is notoriously energy-inefficient and expensive.

What is the benefit of this approach over those being taken by other companies?

The arithmetic we designed has roughly the same precision as a 16-bit float but consumes a fraction of the area. In a nutshell, we’re able to get the efficiency of analog while retaining all the nice properties of digital. And once you change the arithmetic as we have, you can back off the memory wall and increase your performance and efficiency levels quite significantly.

Single-precision floats have been very effective for training deep neural networks, as we have seen, but for inference most AI hardware companies are building chips for networks that have been quantized to 8-bit integer weights and activations. Unfortunately, many neural network architectures are not quantizable to anything below 16-bit floats. So if we are to squeeze more performance out of the same amount of silicon as everyone else, we need new arithmetic.

Taking some of the newer neural network topologies as an example, the weights and activations in different layers have different levels of sensitivity to quantization. As a result, most chips are forced to accommodate multi-precision quantization and carry multiple arithmetic types in their hardware, which in turn reduces overall silicon efficiency. We took this into account when designing our custom arithmetic. It has high precision, is adaptive, and addresses the needs of deep learning to enable training, inference, and continual learning at the edge.
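As a rough illustration of that sensitivity (a simple sketch using generic uniform int8 quantization, not Lemurian’s adaptive format, with made-up weight distributions), the Python snippet below quantizes two hypothetical weight tensors to 8 bits; the one containing a few large outliers loses far more precision:

```python
# Illustration only: uniform int8 quantization error depends heavily on the
# weight distribution, which is why some layers tolerate 8 bits and others
# need 16-bit precision. Not Lemurian's number format.
import math
import random

random.seed(1)

def quantize_int8(weights):
    """Symmetric uniform quantization to signed 8-bit integers, then dequantize."""
    scale = max(abs(w) for w in weights) / 127.0
    return [max(-128, min(127, round(w / scale))) * scale for w in weights]

def rms_error(weights):
    deq = quantize_int8(weights)
    return math.sqrt(sum((w - q) ** 2 for w, q in zip(weights, deq)) / len(weights))

# Layer A: well-behaved weights, roughly N(0, 0.05).
layer_a = [random.gauss(0.0, 0.05) for _ in range(10_000)]
# Layer B: same distribution plus a handful of large outliers.
layer_b = layer_a[:-4] + [2.0, -2.5, 3.0, -1.8]

print(f"RMS quantization error, layer A: {rms_error(layer_a):.6f}")
print(f"RMS quantization error, layer B: {rms_error(layer_b):.6f}")  # much larger
```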

Why do you think other companies haven’t innovated in arithmetic?

High-performance systems always specialize their arithmetic and computational pipeline organization. However, general-purpose processors need to pick a common type and stick with it, and ever since the IEEE standardized floating-point arithmetic in 1985 to improve application interoperability among processor vendors, those common types have been floating-point and integer arithmetic. They work for the general case, but they are suboptimal for deep learning.

Over the decades, companies developing GPUs have used many different number types and arithmetic optimizations in the lighting equations, geometry stages, and rasterization stages, all optimized for area because these units have to be replicated millions of times. The nature of the number system is the true innovation; recognizing that a particular computation offers an opportunity to sample more efficiently is a nontrivial exercise. But when vertex and pixel shaders made the GPU more general purpose, it converged on the same common arithmetic as CPUs.

So there has been innovation in arithmetic, but we haven’t made the progress in it that we should have. And now we are in an era where we need to innovate not just on microarchitecture and compilers, but on arithmetic as well, to continue to extract and deliver more performance and efficiency.

You just closed your seed round. What can we expect to see from Lemurian in the next 12-18 months?

We did indeed close an oversubscribed seed round. This was a pleasant surprise given the market situation this spring, but we are starting to hear more use cases and more enthusiasm for our solution from our target customers. And investors are increasingly open to novel approaches that might not have gotten attention years ago, before the difficulties of the current approaches were commonly known.

We have built out our core engineering team and are forging ahead to tape out our test chip at the end of the year, which will demonstrate our hypothesis that our hardware, software, and arithmetic built for robotics can deliver superior processing at lower energy usage and in a smaller form factor than competitors. We will be taping out our prototype chip at the end of 2023, which we will get into our early customers’ hands for sampling.

Also read:

CEO Interview: Kai Beckmann, Member of the Executive Board at Merck KGaA

CEO Interview: Jaushin Lee of Zentera Systems, Inc.

CEO Interview: Shai Cohen of proteanTecs



The Metaverse: Myths and Facts
by Ahmed Banafa on 08-18-2022 at 10:00 am


Any new technology involves a certain amount of ambiguity and myths. In the case of the Metaverse, however, many of the myths have been exaggerated and the facts misrepresented. While the Metaverse vision will take years to mature fully, the building blocks to begin this process are already in place. Key hardware and software are either available today or under development, and stakeholders definitely need to address Safety, Security and Privacy (SSP) concerns and collaborate on open standards that will make the Metaverse safe, secure, reliable and interoperable, allowing secure and safe services to be delivered as seamlessly as possible.

Despite the buzz about the Metaverse, many still don’t completely understand it. For some, it is the future, while others think it is gimmicky. For now, the Metaverse is an interface or a platform that allows digital representations of people to come together to work, play and collaborate. The Metaverse hopes to transcend geographical boundaries and become the next ‘thing’. That said, there are plenty of misconceptions about the Metaverse; here are a few [6]:

Myth #1: No One Knows What the Metaverse Is

In recent months, it has become clear that there is no single definition of the Metaverse. Well-known experts refer to it as “the internet of the future” or point to immersive devices to demonstrate various platforms and user experiences. [2]

In simple terms, the Metaverse is the future of the internet: a massively scaled, interactive and interoperable real-time platform comprising interconnected virtual worlds where people can socialize, collaborate, transact, play and create. There are 5 billion internet users in the world, and crypto has emerged as both the infrastructure layer and the zeitgeist that will fill in the blanks: digital currency, fully functioning digital economies, ownership of digital goods and true interoperability across countless interconnected systems. All of this defines the Metaverse. [3]

Myth #2 The Metaverse is Only Gaming

The Metaverse is not gaming. Gaming is an activity you can do within the Metaverse; there are 3 billion gamers in the world. Today, when people talk about the Metaverse, they often describe gaming platforms like Roblox and Minecraft as examples. While gaming remains one of the leading experiences, consumers are increasingly looking for entertainment and shopping in the virtual world. One in five Metaverse users has attended virtual live events such as concerts and film festivals. [4]

Myth #3 The Metaverse is Only Virtual Reality

Saying the Metaverse is virtual reality (VR) is like saying the internet is only your smartphone: the smartphone is just one way of interfacing with the internet. In the same way, you can imagine experiencing the Metaverse through VR, but you can also imagine experiencing it through your laptop or desktop. [4]

Myth #4 The Metaverse Will Replace the Real World

No, this is not “The Matrix”; the Metaverse won’t replace the real world. It will be additive to the real world, an expansive virtual environment where you can do any number of different things: work, socialize, play, create, explore and more. [4]

Myth #5 The Metaverse is a Fad

The Metaverse is a fad only in the same way the internet was considered a fad at some point in time. We’re still years away from a fully realized Metaverse, and the technology we’ll need is far from complete. But even today, we’re already living in a very primitive version: we work remotely, we socialize and learn virtually, and we find entertainment without leaving our homes. However, as always, how we meet those needs will continue to evolve as our technology advances. [4]

Myth #6 The Metaverse Will Be a Monopoly

Companies like Meta and Microsoft are two of the world’s most valuable companies because they’re perceptive. They have a skill for skating to where the puck will be, and they’re able to scale fast. But jumping on the bandwagon early doesn’t mean they’ll control the Metaverse; the field is too big to be controlled by a handful of companies. [4]

Myth #7 The Speed of Technology Will Set the Pace for Adoption

Many people believe that the broad adoption of the Metaverse is hindered because technology is not keeping pace. There remains low penetration of immersive devices among consumers, and there are infrastructure barriers in the way of a truly scaled, immersive Metaverse future. Close to one-third of Metaverse users see technology as severely limiting their dream experience.

VR is the most accessible technology at just 20 percent penetration, yet the adoption curve to date follows the trajectory of other technologies that became widely available over time. Penetration for recent breakthroughs such as smartphones, tablets, and social media grew from 20 percent to 50 percent in only a handful of years. Lower cost, increasing content, and improved usability are driving adoption. [4]

Myth #8 The Metaverse is Already Here

The Metaverse is an infinitely large (future) virtual world that connects all other virtual sub-worlds. You can see the Metaverse as the next phase of the internet as we know it: the currently two-dimensional, flat internet will change into a three-dimensional, spatial form. We are moving from the web of pages to the web of coordinates, and from the web of information to the web of activities. In the future, people will be able to meet as avatars in the Metaverse to get to know each other, network, provide services, collaborate, relax, game, shop and consume. The Metaverse also offers the opportunity to build, create and participate in a virtual economy. In the future, we won’t be going “on” the Internet, but “in” the spatial Internet. The Metaverse can be seen as the world that connects all (existing) virtual worlds. That world, however, isn’t here yet; the Metaverse is still a thing of the future. [5]

Myth #9 The Metaverse is Inevitable

It is clear that the Metaverse is actively being developed. The key players in the world of technology have their eyes on it. But they are facing a number of challenges; interoperability for example – where users must be able to move easily between different worlds – being one of them. This means that companies must work intensively on open standards. In the Metaverse, you have to be able to work, attend concerts and play games with the greatest of ease. Not such an easy feat, particularly because many companies will be reluctant to collaborate on open standards and give up their intellectual property. In addition, the growth of the Metaverse will also require substantial hardware innovations. [5]

Myth #10 The Metaverse is Suitable for Everything

This is another aspect that remains to be seen. In the future, the different variants of the internet will simply coexist – just as you sometimes read a book on paper, and sometimes on your screen. The internet as we know it will continue to exist. It will be accessible on your smartphone, computer or tablet. For some things, such as shopping, playing games, and social interaction, the Metaverse will be extremely suitable. [5]

What is The Future of the Metaverse?

The Metaverse “is bringing together people, processes, data, and things (real and virtual) to make networked connections more relevant and valuable than ever before-turning information into actions that create new capabilities, richer experiences, and unprecedented economic opportunity for businesses, individuals, and countries”. In simple terms, the Metaverse is the intelligent connection of people, processes, data, and things. It describes a world where billions of objects have sensors to detect, measure, and assess their status, all connected over public or private networks using standard and proprietary protocols.[1]

Data is embedded in everything we do; every business needs its own flavor of data strategy, which requires comprehensive data leadership. The Metaverse will create tens of millions of new objects and sensors, all generating real-time data that adds value to the products and services of every company that uses the Metaverse as another avenue of business. As a result, enterprises will make extensive use of Metaverse technology, and there will be a wide range of products and services sold into various markets, both vertical and horizontal.

For example, in e-commerce, the Metaverse provides a whole new revenue stream for digital goods in a synchronous way, instead of the current traditional 2D way of clicking and buying. In human resources (HR), a significant amount of training will be done with virtual reality (VR) and augmented reality (AR), overlaying instructions on a real-world environment and giving somebody a step-by-step playbook on how to put a complex machine together, run a device, or try a new product, all with virtual objects at the heart of the Metaverse. In sales and marketing, connecting with customers virtually and sharing a virtual experience of the product or service will become common, similar to our virtual meetings during the past two years in the middle of Covid, but the Metaverse will make it more real and more productive.

Finally, similarly to cloud computing, we will have Private-Metaverse, Hybrid-Metaverse, and Public-Metaverse, with all possible applications and services in each type. Companies will benefit from all options based on their capabilities and needs. The main goal here is to reach Metaverse as a Service (MaaS) and add a “Metaverse Certified” label to products and services. [1]

Ahmed Banafa, author of the books:

Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Quantum Computing

References

[1] https://www.bbntimes.com/science/the-Metaverse-a-different-perspective

[2] https://www.mckinsey.com/industries/retail/our-insights/probing-reality-and-myth-in-the-Metaverse

[3] https://venturebeat.com/2022/03/24/5-common-Metaverse-misconceptions/

[4] https://www.mckinsey.com/industries/retail/our-insights/probing-reality-and-myth-in-the-Metaverse

[5] https://jarnoduursma.com/blog/7-misconceptions-about-the-Metaverse/

[6] https://analyticsindiamag.com/misconceptions-about-Metaverse-mark-zuckerberg-virtual-reality-augmented-real-world-gaming/

Also read:

Quantum Computing Trends

Facebook or Meta: Change the Head Coach

The Metaverse: A Different Perspective



SoC Verification Flow and Methodologies
by Sivakumar PR on 08-18-2022 at 6:00 am


We need more and more complex chips and SoCs for all the new applications that use the latest technologies like AI. For example, Apple’s 5nm A14 SoC features a 6-core CPU, a 4-core GPU and a 16-core neural engine capable of 11 trillion operations per second, and it incorporates 11.8 billion transistors, while the AWS 7nm 64-bit Graviton2 custom processor contains 30 billion transistors. Designing such complex chips demands a standard, proven verification flow that involves extensive verification at every level, from block to IP to sub-system to SoC, using various verification methodologies and technologies.

In this article, let me walk you through various verification methodologies we use for verifying IPs, Sub-systems, and SoCs and explain why we need new methodologies/standards like PSS.

Understanding how we build electronic systems using SoCs is essential for verification engineers who deal with the SoC verification flow, whether doing white-box verification at the IP level, gray-box verification at the sub-system level, or black-box verification at the SoC level.

How do we build electronic systems using SoC?

Any chip, whether a simple embedded microcontroller or a complex system-on-chip [SoC], will have one or more processors. Figure 1 shows a complex electronic system composed of both the hardware and the software needed for electronic devices like smartphones.


Figure 1: Electronic System and System-on-Chip

The hardware is made up of a complex SoC that incorporates almost all the components needed for the device. In the case of the smartphone, we integrate all the hardware components, called IPs [Intellectual Property blocks], such as CPUs, GPUs, DSPs and application processors, interface IPs like USB, UART, SPI, I2C and GPIO, and subsystems like system controllers, memories with controllers, Bluetooth, and WiFi, to create the SoC. Using an SoC helps us reduce the size and power consumption of the device while improving its performance.

The software is composed of application software and system software. The application software provides the user interface, and the system software provides the interface that lets the application software deal with the hardware. In the smartphone case, the application software could be mobile apps like YouTube, Netflix and Google Maps, and the system software could be an operating system [OS] like iOS or Android. The system software provides everything, including the firmware and protocol stacks along with the OS, needed for the application software to interface with the hardware. The OS, as the central component of the system software, manages multiple application threads in parallel, memory allocation, and I/O operations.

Let me explain how an entire system like a smartphone works. For example, when you invoke an application like a calculator on a smartphone, the operating system loads the executable binary from the storage memory into RAM. Then it immediately loads the binary’s starting address into the program counter [PC] of the processor. The processor [ARM/x86/RISC-V] executes the binary loaded in the RAM/cache, pointed to by the PC [an address in RAM]. This precompiled binary is nothing but the machine language of the processor, and therefore the processor executes the application in terms of its instructions [ADD/SUB/MULT/LOAD] and calculates the results.
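To make that fetch-and-execute flow concrete, here is a toy sketch in Python; the four-instruction “ISA” and its encoding are invented purely for illustration and are far simpler than a real ARM/x86/RISC-V pipeline:

```python
# Toy fetch-decode-execute loop illustrating the flow described above.
# The instruction format here is invented for illustration only.

# "RAM": a list of decoded instructions standing in for the loaded binary.
# Each entry is (opcode, destination, operand1, operand2).
ram = [
    ("LOAD", "r1", 6, None),    # r1 <- 6
    ("LOAD", "r2", 7, None),    # r2 <- 7
    ("MULT", "r0", "r1", "r2"), # r0 <- r1 * r2
    ("HALT", None, None, None),
]

regs = {"r0": 0, "r1": 0, "r2": 0}
pc = 0  # program counter: index of the next instruction in "RAM"

while True:
    opcode, dst, a, b = ram[pc]        # fetch
    pc += 1
    if opcode == "HALT":               # decode + execute
        break
    elif opcode == "LOAD":
        regs[dst] = a                  # load an immediate value
    elif opcode == "MULT":
        regs[dst] = regs[a] * regs[b]

print(regs)  # {'r0': 42, 'r1': 6, 'r2': 7}
```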

Understanding the SoC design process around processors can help verification engineers deal with any complex sub-system or chip verification at the system level. As part of the SoC verification process, verification engineers may need to deal with many tasks over their careers: virtual prototyping for system modeling; IP, sub-system and SoC functional verification; hardware-software co-verification; emulation; ASIC prototyping; post-silicon validation; and more. This demands cohesive and complete knowledge of both hardware and software, so they can work independently as verification experts and, at times, work closely with software teams on the RTOS, firmware and protocol stacks needed for chip/system-level verification.

Now let us explore various verification methodologies.

IP Verification

IPs are the fundamental building blocks of any SoC. IP verification therefore demands exhaustive white-box verification using methodologies like formal verification and constrained-random simulation, especially for processor IPs, since everything in an SoC is initiated and driven by them as the central component. Figure 2 shows how we verify a processor IP using exhaustive random simulation with a SystemVerilog-based UVM testbench [TB]. All the processor instructions can be simulated with various random values, generating functional, assertion, and code coverage. We use coverage to measure the progress and quality of the verification and then for the final verification sign-off. IP-level verification demands good expertise in HVL programming, formal and dynamic ABV, simulation debugging, and the use of VIPs and EDA tools.

Figure 2: RISC-V UVM Verification Environment

ABV: Assertion-Based Verification; VIP: Verification IP; UVM: Universal Verification Methodology; UVC: UVM Verification Component; BFM: Bus Functional Model; RAL: Register Abstraction Layer
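The real flow above is written in SystemVerilog/UVM; purely as a language-agnostic illustration of the constrained-random, self-checking, coverage-driven loop it describes, here is a small Python sketch built around a hypothetical 32-bit ADD instruction and a stand-in DUT function:

```python
# Conceptual sketch (Python, not SystemVerilog/UVM) of constrained-random
# stimulus, a self-checking comparison against a golden model, and simple
# functional coverage bins, for a hypothetical 32-bit ADD instruction.
import random

MASK32 = 0xFFFF_FFFF

def dut_add(a, b):
    # Stand-in for the RTL result; in a real flow this would come from the
    # simulator through a driver/monitor, not from a Python function.
    return (a + b) & MASK32

def ref_add(a, b):
    # Golden reference model used for checking.
    return (a + b) & MASK32

coverage = {"zero_operand": False, "carry_out": False, "max_values": False}

random.seed(2)
for _ in range(1000):
    # "Constrained random": bias operands toward interesting corner values.
    a = random.choice([0, 1, MASK32, random.getrandbits(32)])
    b = random.choice([0, 1, MASK32, random.getrandbits(32)])

    assert dut_add(a, b) == ref_add(a, b), f"mismatch for {a:#x} + {b:#x}"

    # Functional coverage bins.
    coverage["zero_operand"] |= (a == 0 or b == 0)
    coverage["carry_out"]    |= (a + b > MASK32)
    coverage["max_values"]   |= (a == MASK32 and b == MASK32)

hit = sum(coverage.values())
print(f"functional coverage: {hit}/{len(coverage)} bins hit -> {coverage}")
```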

Sub-System Verification

Sub-systems are composed mostly of pre-verified IPs plus some newly built IPs, like bridges and system controllers, that are specific to the chip. Figure 3 shows how we build an SoC from a sub-system that integrates all the necessary interface IPs, bridges, and system controllers using an on-chip bus like AMBA. In this case, we prefer simulation-based gray-box verification, especially random simulation using verification IPs. All the VIPs, such as the AXI, AHB, APB, GPIO, UART, SPI, and I2C UVCs [UVM Verification Components], are configured and connected to their respective interfaces. As shown in Figure 3, we create other TB components like reference models, scoreboards, and the UVM RAL to make the verification environment self-checking. We execute various VIP UVM sequences at the top level, verify the data flow, and measure the performance of the bus.

Figure 3: Sub-System UVM Verification Environment
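As a conceptual sketch of the self-checking scoreboard mentioned above (plain Python, not an actual UVM scoreboard; the transaction format is hypothetical), the reference model pushes expected transactions, the bus monitor pushes observed ones, and the scoreboard compares them:

```python
# Minimal scoreboard sketch: expected transactions come from a reference
# model, observed transactions come from a bus monitor, and the scoreboard
# matches them in order of arrival. Illustration only, not UVM code.
from collections import deque

class Scoreboard:
    def __init__(self):
        self.expected = deque()
        self.matches = 0
        self.mismatches = 0

    def push_expected(self, txn):
        """Called by the reference model."""
        self.expected.append(txn)

    def push_actual(self, txn):
        """Called by the bus monitor; compare against the oldest expected item."""
        exp = self.expected.popleft()
        if exp == txn:
            self.matches += 1
        else:
            self.mismatches += 1
            print(f"MISMATCH: expected {exp}, observed {txn}")

    def report(self):
        print(f"scoreboard: {self.matches} matches, {self.mismatches} mismatches, "
              f"{len(self.expected)} expected transactions never observed")

# Usage: each transaction is (address, data).
sb = Scoreboard()
sb.push_expected((0x1000, 0xAB))
sb.push_expected((0x1004, 0xCD))
sb.push_actual((0x1000, 0xAB))
sb.push_actual((0x1004, 0xCD))
sb.report()   # 2 matches, 0 mismatches, 0 outstanding
```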

SoC Verification

SoCs are composed primarily of pre-verified third-party IPs and some in-house IPs. Usually, we prefer black-box verification using hardware emulation or simulation technologies for SoC-level verification. For example, you may come across a complex SoC verification environment as shown in Figure 4. The SoC testbench [TB] will have all kinds of testbench components: standard UVM verification IPs [USB/Bluetooth/WiFi and other standard interfaces], legacy HDL TB components [JTAG agent] with UVM wrappers, custom UVM agents [firmware agents], and some monitors, in addition to the scoreboard and SystemC/C/C++ functional models. In this case, you will have to deal with both firmware and UVM sequences at the chip level. As a verification engineer, you need to know how to implement this kind of hybrid verification environment using standard VIPs, legacy HDL BFMs and firmware code, and, more importantly, how to automate the simulation/emulation using EDA tools.

Figure 4: SoC Verification Environment

UVM: Universal Verification Methodology; UVC: UVM Verification Component; BFM: Bus Functional Model; VIP: Verification IP; RAL: Register Abstraction Layer

Let me explain how it works. For example, if the SoC uses an ARM processor, we usually replace the ARM RTL [encrypted netlist/RTL] with its functional model, called a DSM [Design Simulation Model], which can use the firmware [written in C] as a stimulus to initiate any operation and drive all the other peripherals [RTL IPs]. The SoC verification team writes UVM sequences to generate various directed scenarios through firmware testcases and verify the SoC functionality. During simulation, the firmware C source code is compiled into object code [an ARM machine-language binary], which is loaded into on-chip RAM. The ARM processor model [DSM] reads the object code from memory and initiates the operation by configuring and driving all the RTL peripheral blocks [Verilog/VHDL]. This works for both simulation and emulation. If the SoC is very complex, hardware emulation is preferred to accelerate the verification process and achieve faster verification sign-off.

Why PSS?

Figure 5: IP, Sub-System, and SoC Verification Methodologies

PSS Definition: The Portable Test and Stimulus Standard defines a specification for creating a single representation of stimulus and test scenarios, usable by a variety of users across different levels of integration under different configurations, enabling the generation of different implementations of a scenario that run on a variety of execution platforms, including, but not necessarily limited to, simulation, emulation, FPGA prototyping, and post-Silicon. With this standard, users can specify a set of behaviors once, from which multiple implementations may be derived.

Figure 6: PSS flow

As shown in Figure 6, using PSS we can define test scenarios once and execute them at any level, IP/sub-system/SoC, using any verification technology. For example, we can define an IP’s test scenarios in PSS. At the IP level, we can use EDA tools to generate assertions from the PSS specification for formal verification and, if needed, generate UVM testcases from the same PSS specification for simulation or emulation at the SoC level. We don’t need to manually rewrite the IP/sub-system level testcases to migrate and reuse them at the SoC level. The PSS specification remains the same for all kinds of technologies. Based on our choice of formal, simulation, or emulation, the EDA tool can generate the testcases from the PSS specification in languages or methodologies like C/C++/Verilog/SystemVerilog/UVM.
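To illustrate only the “specify once, derive many” idea (this is not PSS syntax; PSS is its own declarative language, and the scenario format, register names, and both emitters below are hypothetical), a single abstract scenario description can be rendered either as a C firmware testcase or as a UVM-style register sequence stub:

```python
# Conceptual illustration of "specify once, derive many". NOT PSS syntax:
# the scenario encoding and generated code below are invented for illustration.

# One abstract scenario: configure a DMA-style block, start it, wait for done.
scenario = [
    ("write_reg", "DMA_SRC",  0x1000_0000),
    ("write_reg", "DMA_DST",  0x2000_0000),
    ("write_reg", "DMA_CTRL", 0x1),        # kick off the transfer
    ("wait_reg",  "DMA_STAT", 0x1),        # poll until done
]

def emit_c_testcase(steps):
    """Render the scenario as a C firmware testcase for SoC-level runs."""
    lines = ["void dma_test(void) {"]
    for op, reg, val in steps:
        if op == "write_reg":
            lines.append(f"    write32({reg}, 0x{val:08X});")
        elif op == "wait_reg":
            lines.append(f"    while (read32({reg}) != 0x{val:08X}) ;")
    lines.append("}")
    return "\n".join(lines)

def emit_uvm_sequence(steps):
    """Render the same scenario as a UVM-style register sequence stub."""
    lines = ["task body();"]
    for op, reg, val in steps:
        if op == "write_reg":
            lines.append(f"    regmodel.{reg}.write(status, 32'h{val:08X});")
        elif op == "wait_reg":
            lines.append(f"    regmodel.{reg}.poll(32'h{val:08X});  // hypothetical helper")
    lines.append("endtask")
    return "\n".join(lines)

print(emit_c_testcase(scenario))
print()
print(emit_uvm_sequence(scenario))
```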

Methodologies like formal verification and PSS are evolving, and at the same time EDA vendors are automating test generation and verification sign-off using technologies like ML. So in the near future, the industry will need brilliant, skilled verification engineers who can collaborate with chip architects to drive the verification process toward first-time silicon success through a ‘correct by construction’ approach, beyond the traditional verification role of predominantly writing testcases and managing regression testing. Are you interested in chip verification and ready for this big job?

Also Read:

Experts Talk: RISC-V CEO Calista Redmond and Maven Silicon CEO Sivakumar P R on RISC-V Open Era of Computing

Verification IP vs Testbench

CEO Interview: Sivakumar P R of Maven Silicon



Podcast EP101: Unlocking the True Potential of Wireless with Peraso Technologies mmWave Silicon
by Daniel Nenni on 08-17-2022 at 10:00 am

Dan is joined by Ron Glibbery, who co-founded Peraso Technologies in 2009 and serves as its chief executive officer. Prior to co-founding Peraso Technologies, Ron held executive positions at Kleer Semiconductor, Intellon, Cogency Semiconductor, and LSI Logic.

Dan and Ron explore the impact of Peraso’s unique mmWave silicon technology in delivering high-bandwidth communications for many applications, including 5G.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.



UVM Polymorphism is Your Friend
by Bernard Murphy on 08-17-2022 at 6:00 am


Rich Edelman of Siemens EDA recently released a paper on this topic. I’ve known Rich since our days together back at National Semi, and I’ve always been impressed by his ability to make a complex topic more understandable to us lesser mortals. He tackles a tough one in this paper – a complex concept (polymorphism) in a complex domain (UVM). As best I can tell, he pulls the trick off again, though this is a view from someone who is already wading out of his depth in UVM. Which got me thinking about a talk by Avidan Efody that I blogged on recently, “Why designers hate us.”

More capable technology, smaller audience

Avidan, a verification guy at Apple, is probably as expert as they come in UVM and all its capabilities. But he also can see some of the downsides of the standard, especially in narrowing the audience, limited value in what he calls “stupid checks” and some other areas. See HERE for more on his talk. His point being that as UVM has become more and more capable to meet the needs of its core audience (professional hardware verifiers), it has become less and less accessible to everyone else. RTL designers must either wait on testbenches for debug (weeks to months, not exactly shift left) or cook their own tests in SystemVerilog. They still need automation, so they start hooking Python to their SV, or better yet cocotb. Then they can do their unit level testing without any need for the verification team or UVM.

Maybe this divergence between designer testing and mainstream verification is just the way it has to be. I don’t see a convergence being possible unless UVM crafts a simpler entry point for designers, or some cocotb look-alike or link to cocotb. Without all the classes and factories and other complications.

But I digress.

Classes and Polymorphism

The production verification world needs and welcomes UVM with all its capabilities. This is Rich’s audience and here he wants to help those not already at an expert level to uplevel. Even for these relative experts, UVM is still a complex world, full of strange magic. Some of that magic is in reusability of an unfamiliar type, through polymorphism.

A significant aspect of UVM is its class-based structure. Classes allow you to define object types which not only encapsulate the parameters of the object (e.g., center, width, length for a geometric object) but also the methods that can operate on those objects. For a user of the object, this abstracts away all that internal complexity. They just need methods to draw, print, move, etc. the object.

Reuse enters through allowing a class to be defined as an extension of an existing class. All the same parameters and methods, with a few new ones added. And/or maybe a few of the existing parameters/methods overridden. And you can extend extended classes and so on. This is polymorphism – variants on a common core. So far, so obvious. The standard examples, like the graphic in this article, don’t look very compelling.

Rich however uses polymorphism judiciously (his word) and selectively to define a few key capabilities, such as an interrupt sequence class. Reusing what is already defined in UVM to better meet a specific objective.
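The paper’s examples are, of course, in SystemVerilog/UVM; purely to show the underlying idea of extension and overriding, here is a minimal sketch in Python (the class names are invented, not taken from Rich’s paper):

```python
# Class extension and method overriding (polymorphism), shown in plain Python
# rather than SystemVerilog/UVM. Names are invented for illustration.

class BaseSequence:
    """A generic stimulus sequence: common run() flow, overridable body()."""
    def __init__(self, name):
        self.name = name

    def body(self):
        print(f"[{self.name}] sending normal traffic")

    def run(self):
        print(f"[{self.name}] start")
        self.body()          # calls whichever body() the actual subclass defines
        print(f"[{self.name}] done")

class InterruptSequence(BaseSequence):
    """Extension: same run() flow, but the body injects an interrupt."""
    def body(self):
        super().body()                                       # reuse base behavior...
        print(f"[{self.name}] raising interrupt mid-stream") # ...and add to it

# Any code written against BaseSequence works unchanged with the extension.
for seq in (BaseSequence("smoke"), InterruptSequence("irq_storm")):
    seq.run()
```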

As I said, I’m way out of my depth on this stuff, but I do trust that Rich knows what he is talking about. You can read the white paper HERE.



Delivering 3D IC Innovations Faster
by Kalar Rajendiran on 08-16-2022 at 6:00 am


3D IC technology development started many years ago, well before the slowing of Moore’s law benefits became a topic of discussion. The technology was originally leveraged for stacking functional blocks with high-bandwidth buses between them. Memory manufacturers and other IDMs were typically the ones to leverage this technology in its early days. Since the technology itself does not limit its use to such purposes, it has always had broader appeal and potential.

Over the years, 3D IC technology has progressed from its novelty stage to become an established, mainstream manufacturing technology, and the EDA industry has introduced many tools and technologies to help design products that take the 3D IC path. In the recent past, complex SoC implementations have started leveraging 3D IC technology to balance performance and cost goals.

The slowing of Moore’s law has become a major driver of the chiplet approach to implementing SoCs. Chiplets are small ICs specifically designed and optimized for operation within a package in conjunction with other chiplets and full-sized ICs. More companies are turning to 3D stacking of ICs and chiplets implemented in different process nodes, each optimal for the respective chiplet’s function. Designers can also combine 3D memory stacks, such as high bandwidth memory, on a silicon interposer within the same package. 3D IC implementation will be a major beneficiary of the chiplet adoption wave.

When a new capability is ready for mainstream, its mass adoption success depends on how easily, quickly, effectively and efficiently a solution can be delivered. While the 3D IC manufacturing technology may have become mainstream, there are some foundational enablers for a successful heterogeneous 3D IC implementation. Siemens EDA recently published an eBook on this topic, authored by Keith Felton.

This post will highlight some salient points from the eBook. A follow up post will cover methodology and workflows recommendations for achieving optimal results when implementing 3D IC designs.

Foundational Enablers For Successful Heterogeneous 3D IC Implementation

Any good design methodology always includes look-aheads for downstream effects in order to consider and address them early in the design process. While this is important for monolithic designs, it becomes paramount when designing 3D ICs.

System Technology Co-Optimization (STCO) approach

This approach involves starting at the architectural level to partition the system into various chiplets and packaged die based on functional requirements and form factor constraints. After this step, RTL or functional models are generated. This is followed by physical floor planning and validation all the way to detailed layout supported with in-process performance modeling.

STCO elements already exist in a number of Siemens EDA tools, allowing engineers to evaluate design decisions in the context of predictive downstream effects of routability, power, thermal and manufacturability. Predictive modeling is a fundamental component of the STCO methodology that leverages Siemens EDA modeling tools during physical planning to gain early insight into downstream performance.

Transition from design-based to systems-based optimization

A 3D IC design requires consistent system representation throughout the design and integration process with visibility and interoperability of all cross-domain content. This calls for tools and methodology capable of a full system perspective from early planning through implementation to design signoff and manufacturing handoff.

Expanding the supply chain and tool ecosystem

3D IC design efforts demand a higher level of tool interoperability and openness than the industry is used to. Sharing and updating design content in a multi-vendor and/or multi-tool environment must be supported. This places a greater demand on assembly level verification throughout the design process to ensure the different pieces of the system work together as expected.

Balancing design resources across multiple domains

STCO facilitates exploration of the 3D IC solution space for striking the ideal balance of resources across all domains and deriving the optimal product configuration. An early perspective enables better engineering decisions on resource allocation, resulting in higher performing, more cost effective products.

Tighter integration of the various teams

A new design flow is required to support the design, validation, and integration of multiple ASICs, chiplets, memory, and interposers within a 3D IC design. The silicon, packaging and PCB teams are more likely to be global, requiring even tighter integration with the system, RTL and ASIC design processes.

For more details on Siemens EDA 3D IC innovations, you can download the eBook published by Siemens EDA.

While the Siemens heterogeneous 3D IC solution is packed with powerful capabilities, fully benefitting from these capabilities depends on the implementation methodology put to use. Designing 3D IC products that deliver differentiation, profitability and time to market advantages will be the subject of a follow-on blog.

Also Read:

Coverage Analysis in Questa Visualizer

EDA in the Cloud with Siemens EDA at #59DAC

Calibre, Google and AMD Talk about Surge Compute at #59DAC



ARC Processor Summit 2022: Your embedded edge starts here!
by Synopsys on 08-15-2022 at 10:00 am


As embedded systems continue to become more complex and integrate greater functionality, SoC developers are faced with the challenge of developing more powerful, yet more energy-efficient devices. The processors used in these embedded applications must be efficient to deliver high levels of performance within limited power and silicon area budgets.

Why Attend?

Join us for the ARC® Processor Summit to hear our experts, users and ecosystem partners discuss the most recent trends and solutions that impact the development of SoCs for embedded applications. This event will provide you with in-depth information from industry leaders on the latest ARC processor IP and related hardware/software technologies that enable you to achieve differentiation in your chip or system design. Sessions will be followed by a networking reception where you can see live demos and chat with fellow attendees, our partners, and Synopsys experts.

Who Should Attend?

Whether you are a developer of chips, systems or software, the ARC Processor Summit will give you practical information to help you meet your unique performance, power and area requirements in the shortest amount of time.

Automotive

Comprehensive solutions that help drive security, safety & reliability into automotive systems

AI

Power-efficient hardware/software solutions to implement artificial intelligence technologies in next-gen SoCs

Enabling Technologies

Solutions to accelerate SoC and software development to meet target performance, power and area requirements

We look forward to seeing you in person at ARC Processor Summit!

Make the Safe Choice

  • Over 20 years of innovation delivering silicon-proven processor IP for embedded applications – billions of chips shipped annually
  • Industry’s second-leading processor by unit shipment
  • The safe choice with significant investment in the development of safety and security processors

Industry’s Best Performance Efficiency for Embedded

  • Broad portfolio of proven 32-/64-bit CPU and DSP cores, subsystems and software development tools
  • Processor IP for a range of applications including ultra-low power AIoT, safety-critical automotive, and embedded vision with neural networks
  • Supported by a broad ecosystem of commercial and open-source tools, operating systems, and middleware

PPA Efficient, Configurable, Extensible

  • Optimized to deliver the best PPA efficiency in the industry for embedded SoCs
  • Highly configurable, allowing designers to optimize the performance, power, and area of each processor instance on their SoC
  • ARC Processor eXtension (APEX) technology to customize processor implementation

Rich ARC Ecosystem

  • Complete suite of development tools to efficiently build, debug, profile and optimize embedded software applications for ARC based designs
  • Broad 3rd party support provides access to ARC-optimized software and hardware solutions from leading commercial providers
  • Online access to a wide range of popular, proven free and open-source software and documentation

Register Today!

About Synopsys
Synopsys, Inc. (Nasdaq: SNPS) is the Silicon to Software™ partner for innovative companies developing the electronic products and software applications we rely on every day. As an S&P 500 company, Synopsys has a long history of being a global leader in electronic design automation (EDA) and semiconductor IP and offers the industry’s broadest portfolio of application security testing tools and services. Whether you’re a system-on-chip (SoC) designer creating advanced semiconductors, or a software developer writing more secure, high-quality code, Synopsys has the solutions needed to deliver innovative products. Learn more at www.synopsys.com.

Also Read:

WEBINAR: Design and Verify State-of-the-Art RFICs using Synopsys / Ansys Custom Design Flow

DSP IP for High Performance Sensor Fusion on an Embedded Budget

Intelligently Optimizing Constrained Random



Digital Twins Simplify System Analysis
by Dave Bursky on 08-15-2022 at 6:00 am


The ability to digitally replicate physical systems has been used to model hardware operations for many years, and more recently, digital twinning technology has been applied to electronic systems to better simulate and troubleshoot them. As explained by Bryan Ramirez, Director of Industries, Solutions & Ecosystems, Siemens EDA, one early example of twin technology was the Apollo 13 mission back in 1970. With a spacecraft 200,000 miles away, hands-on troubleshooting of a failing subsystem or system was not possible. Designers tackled the challenge by using a ground-based duplicate system (a physical twin) to replicate and then troubleshoot the problems that arose.

However, such physical twins were both expensive and very large, and often had to be disassembled to reach the system that failed. By employing a digital twin of the system, designers can manipulate the software to do the analysis and develop a solution or workaround to the problem, saving time and money. A conceptual model of the digital twin was first proposed by Michael Grieves of the University of Michigan in 2002, explained Ramirez, and the first practical definition of the digital twin stemmed from work at NASA in 2010 to improve physical model simulations for spacecraft.

Digital twins allow designers to virtually test products before they build the systems and complete the final verification. They also allow engineers to explore their design space and even define their system of systems. For example, continued Ramirez, using digital twin technology to model autonomous driving can help shape the electronic control systems. Using twin technology, designers can develop models that simulate the sensing, computation and actuation of the autonomous driving system, as well as support “shift-left” software development. This gives the designer the ability to go from chip to car to city validation without hardware. That reduces costs and design spins while allowing designers to optimize system performance.

Additional benefits of digital twin technology include the ability to include predictive maintenance, remote diagnostics, and even real-time threat monitoring. In industrial applications, real-time monitoring, feedback for continuous improvement, and feed-forward predictive insights are key benefits of leveraging the digital twin approach (see the figure). Factory automation can also benefit by using the digital twin capability for simulating autonomous guided vehicles, interconnected systems of systems, as well as examining security, safety, and reliability aspects.

Extrapolating future scenarios, Ramirez suggests that the digital twin capability can simulate the impossible. One such example is an underwater greenhouse dubbed Nemo’s Garden. In the simulation, the software can accelerate innovation by removing the limitations of weather conditions, seasonality, growing seasons, and diver availability.

All these simulation capabilities are the result of improved compute capabilities, which, in turn, are the result of higher-performance integrated circuits. Additionally, as the IC content in systems continues to increase, it becomes easier to simulate/emulate the systems as digital twins. However, as chip complexity and cost continue to increase, especially the cost of respins, the need for digital twins grows so that these complex chips can be simulated more thoroughly and costly respins avoided. The challenges that digital twin technology faces include creating models for complex systems, developing multi-domain and mixed-fidelity simulations, setting standards for data consistency and sharing, and optimizing performance. These are issues that the industry is working hard to address.

For more information go to Siemens Digital Industries Software

Also read:

Coverage Analysis in Questa Visualizer

EDA in the Cloud with Siemens EDA at #59DAC

Calibre, Google and AMD Talk about Surge Compute at #59DAC



Time for NHTSA to Get Serious
by Roger C. Lanctot on 08-14-2022 at 10:00 am


In the final season of “The Sopranos,” Christopher Moltisanti (played by Michael Imperioli) and Anthony Soprano (James Gandolfini) lose control of their black Cadillac Escalade and go tumbling off a two-lane rural highway and down a hill. Christopher dies (spoiler alert) with an assist from Tony, before Tony calls “911” for help.

Connected car junkies will immediately cry foul given that the episode – which first aired in 2007 – falls well within the deployment window of General Motors’ OnStar system. But more vigilant devotees will recall that in an earlier season Tony says he “had all of that tracking shit removed” from his car. (Tony favored GM vehicles.)

I was reminded of this as I rewatched the series and pondered the National Highway Traffic Safety Administration’s crash reporting General Order issued last year. The reporting requirement raises a critical issue regarding privacy obligations or the relevance of privacy in the event of a crash. The shortcomings of the initial tranche of data reported out by NHTSA last month suggest a revision of the reporting requirement is in order.

When the NHTSA issued its Standing General Order in June of 2021 requiring “identified manufacturers and operators to report to the agency certain crashes involving vehicles equipped with automated driving systems (ADS) or SAE Level 2 advanced driver assistance systems (i.e. systems that simultaneously control speed and steering),” the expectation was that the agency would soon be awash in an ocean of data. The agency was seeking deeper insights into the causes of crashes, the mitigating effects of ADS and some ADAS systems, and some hint as to the future direction of regulatory actions.

Instead, the agency received reports of 419 crashes of ADAS-equipped vehicles and 145 crashes involving vehicles equipped with automated driving systems. What has emerged from the exercise is a batch of heterogeneous data with obvious results (human-driven vehicles with front-end damage and robot-driven vehicles with rear-end damage) and gaping holes.

The volume and type of data were insufficient to draw any significant conclusions and the varying ability of the individual car companies to collect and report the data produced inconsistent information. In fact, allowable redactions further impeded the potential for achieving useful insights.

To this, add NHTSA’s own caveats, described in great detail in NHTSA documents:

  • Access to Crash Data May Affect Crash Reporting
  • Incident Report Data May Be Incomplete or Unverified
  • Redacted Confidential Business Information and Personally Identifiable Information
  • The Same Crash May Have Multiple Reports
  • Summary Incident Report Data Are Not Normalized

The only car company that appears to be adequately prepared and equipped to report the sort of data that NHTSA is seeking, Tesla, stands out for having reported the most relevant crashes. In a report titled “Do Teslas Really Account for 70% of U.S. Crashes Involving ADAS? Of Course Not,” CleanTechnica.com notes that Tesla is more or less “punished” for its superior data reporting capability. Competing auto makers are allowed to hide behind the limitations of their own ability to collect and report the required data.

It’s obvious from the report that there is a vast under-reporting of crashes. This is the most salient conclusion from the reporting and it calls for a radical remedy.

The U.S. does not have a mandate for vehicle connectivity, but nearly every single new car sold in the U.S. comes with a wireless cellular connection.  The U.S. does have a requirement that an event data recorder (EDR) be built into every car.

If NHTSA is serious about collecting crash data, the agency ought to mandate a connection between the EDR and the telematics system and require that in the event of a crash the data related to that crash be automatically transmitted to a government data collection point – and simultaneously reported to first responders connected to public service access points.

There are several crucial issues that will be remedied by this approach:

  • First responders will receive the fastest possible notification of potentially fatal crashes. Most automatic notifications are triggered by airbag deployments and too many of those notifications go to call centers that introduce delays and impede the transmission of relevant data.
  • A standard set of data will be transmitted to both the regulatory authority and first responders – removing inconsistencies and redactions. All such systems ought to be collecting and reporting the same set of data. European authorities recognized the importance of consistent data collection when they introduced the eCall mandate which took effect in April of 2018.
  • Manufacturers will finally lose plausible deniability – such as the ignorance that GM claimed during Congressional hearings in an attempt to avoid responsibility for fatal ignition switch failures.
  • Such a policy will recognize that streets and highways are public spaces where the drivers of cars that collide with inanimate objects, pedestrians, or other motorists have forfeited a right to privacy. The public interest is served by automated data reporting from crash scenes.

NHTSA administrators are political appointees with precious little time to influence policy in the interest of saving lives. It is time for NHTSA to act quickly to establish a timeline for automated crash reporting to cut through the redactions and data inconsistencies and excuses and pave a realistic path toward reliable, real-time data reporting suitable for realigning regulatory policy. At the same time, the agency will greatly enhance the timeliness and efficacy of local crash responses – Anthony Soprano notwithstanding.

Also Read:

Wireless Carrier Moment of Truth

DSPs in Radar Imaging. The Other Compute Platform

Accellera Update: CDC, Safety and AMS