The RISC-V and Open-Source Functional Verification Challenge
by Daniel Nenni on 10-24-2024 at 10:00 am

Most of the RISC-V action at the end of June was at the RISC-V Summit Europe, but not all. In fact, a group of well-informed and opinionated experts took over the Pavilion stage at the Design Automation Conference to discuss functional verification challenges for RISC-V and open-source IP.

Ron Wilson, technology journalist and contributing editor to the Ojo-Yoshida Report, moderated the panel with Jean-Marie Brunet, Vice President and General Manager of Hardware-Assisted Verification at Siemens; Ty Garibay, President of Condor Computing; Darren Jones, Distinguished Engineer and Solution Architect with Andes Technology; and Josh Scheid, Head of Design Verification at Ventana Microsystems.

Their discussion is presented here as a three-part blog series, starting with how to select a RISC-V IP block from a third-party vendor and evaluate its functional verification process.

Wilson: Assuming a designer is going to use a CPU core in an SoC and not modify the RTL or add custom instructions, is there any difference in the functional verification process for licensing a core from Arm or licensing a RISC-V core from an established vendor? What about downloading an open-source core that seems to work? Do they have the same verification flow or are there differences?

Scheid: A designer will use the same selection criteria and have the same integration experience with RISC-V as with Arm and other instruction set architectures. RISC-V software support is the challenge, because much of it is open source and comes from upstream community projects rather than from the vendors' proprietary toolchains.

Ventana uses standard, openly specified protocols for integrating with other products, and verification IP support is available from multiple vendors, which makes the experience similar to other ISAs.

Garibay: The big difference is using a core that is truly open source. The expectation for that core would be different. In general, a designer paying for IP can assume some amount of energy and effort has gone into verifying the CPU core itself, which enables a less painful delivery and integration of the IP at the SoC level.

My expectations for true open-source IP would differ from my expectations for something licensed from an established design house. An ISA is an ISA. For the user experience, what matters is the design team and the company that stands behind the core more than the ISA itself.

Jones: I agree. It's less about RISC-V versus Arm and more about the quality of the licensed IP. In fact, I would remind anyone building an SoC that includes a number of IP blocks, CPU, PCIe, USB: they are paying for verification. A designer can find free RTL for almost anything by looking on Google.

If a company's product (and potentially the company's future) is based on a key IP block, that block must come from a company that can stand behind it. That means verification as well as support.

Acquiring IP comes down to three options: Arm, a RISC-V vendor, or something found on Google. Something found on Google is risky. Between Arm and the various RISC-V vendors, it's more about the company's reputation and a good design flow. It's less about RISC-V versus Arm.

Wilson: It's almost a matter of buying a partner who has been through this, versus being on your own?

Garibay: Absolutely. You certainly want to have a vendor that has a partnership attitude.

Brunet: Yes. It's all about verification. For a provider of hardware-assisted verification, RISC-V is a dream come true. Arm offers large compute subsystems: complex, fully verified environments that are broad and sophisticated, paid for by users. RISC-V-based designers are going to have to verify much more of the interaction between the netlist, the hardware and the software stack themselves.

Software-stack verification is the big challenge, along with scaling the RTL of the device. That is common for designers as soon as they build big chips. Verification is the biggest bottleneck, and the RISC-V software stack and ecosystem are still not at the same level as Arm's. That puts even more pressure on the ability to verify not only the processor and its compute capacity but also its integration with IP such as PCIe, CXL and so on. That is a far greater verification challenge.

Wilson: RISC-V has respected vendors and so many extensions, some of which differ in not-so-subtle ways. Does that complicate the verification problem, or does it simplify it by narrowing the scope?

Scheid: The number of extensions is the wrong thing to focus on. Arm uses the term features, and there are many dozens of those across its architecture versions. The number of ratified RISC-V extensions is around fifty-something, and a big list can look scary. Going forward, designers are going to focus more on what RISC-V is doing with profiles: advanced application processors at one level, microcontrollers at another, and then a time-ordered series of improvements in which extensions each profile supports. That will be easier to focus on when selecting IP and the support for that profile, while still allowing optionality between different implementations. It won't be as confusing as a list of extensions.
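
To make the profile idea concrete, here is a minimal Python sketch (an editorial illustration, not something from the panel; the extension sets are abbreviated placeholders rather than the ratified profile contents) of how a verification plan might check a core's extension list against a named profile instead of an arbitrary combination:

```python
# Hypothetical sketch: a profile names a mandatory extension set, so tooling
# can target one profile name instead of an arbitrary list of extensions.
# The extension sets below are abbreviated placeholders, not the ratified
# RVA20/RVA22 contents.
PROFILES = {
    "RVA20U64": {"I", "M", "A", "F", "D", "C", "Zicsr"},
    "RVA22U64": {"I", "M", "A", "F", "D", "C", "Zicsr", "Zba", "Zbb", "Zbs"},
}

def missing_extensions(profile: str, implemented: set) -> set:
    """Return the mandatory extensions of `profile` the core does not implement."""
    return PROFILES[profile] - implemented

core = {"I", "M", "A", "F", "D", "C", "Zicsr", "Zba", "Zbs"}
print(missing_extensions("RVA22U64", core))  # -> {'Zbb'}
```

The point is that IP selection and verification planning can reference one profile name rather than auditing dozens of individual extensions.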

Wilson: Do you see open-source verification IP converging around those?

Scheid: The value of supporting fewer combinations is going to reinforce itself. Everyone involved, from the implementers to the verification IP providers to the software ecosystem, is going to look unfavorably on that combinatorial explosion. Some designers with tight constraints will still choose arbitrary combinations, but the vast majority are going to work with the rest of the ecosystem and converge on those profiles.

Garibay: The specification of the profiles is a big leap forward for the RISC-V community. It allows software compatibility baselines to be established, so there is at least the promise of vendor interoperability for full-stack software. Designers get a baseline and can then treat anything beyond it as customization or optional sub-architectures.

The fun part about RISC-V right now is not the number of different features being added to the specification; it is the pace. Over the last two years, and probably for the next year, RISC-V has been adding needed features at a rapid pace, the right set of features to expand the viability of the architecture for high-performance computing. We need them, and they have dramatically inflated the design space and the verification space. Really, we're just getting to the point where Arm and x86 are now, and probably have been for years. It's just a dramatic rate of change for RISC-V, coming from a much simpler base.

Jones: I'm not sure that's the right question. If I'm the SoC designer and have an option, I can take 32-bit floating point or 64-bit floating point. With RISC-V, if the core has 64-bit floating point it must also have 32-bit floating point. By defining extensions this way, RISC-V benefits from the history of x86, MIPS, SPARC and Arm.
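
As a small aside on that point (an editorial sketch, not part of Jones's remarks): the RISC-V specification encodes such rules as extension dependencies, for example D (double-precision floating point) requires F (single-precision). A toy check might look like this; the dependency table is deliberately incomplete and only lists rules stated above.

```python
# Minimal sketch: RISC-V extension dependency rules, e.g. the D extension
# (64-bit floating point) requires F (32-bit floating point).
# The table is illustrative and far from exhaustive.
REQUIRES = {
    "D": {"F"},    # double precision requires single precision
    "Zfh": {"F"},  # half precision also requires F
}

def unsatisfied_dependencies(extensions: set) -> set:
    """Return extensions whose prerequisites are missing from `extensions`."""
    return {ext for ext in extensions
            if not REQUIRES.get(ext, set()) <= extensions}

print(unsatisfied_dependencies({"I", "M", "D"}))       # -> {'D'} (F missing)
print(unsatisfied_dependencies({"I", "M", "F", "D"}))  # -> set()
```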

What I mean to say is, if I'm the SoC designer, I don't have to verify the CPU; my CPU vendor has to verify it. That's fair, and I will choose a high-quality vendor. When I talk about verification of my SoC design, I'm talking about connecting the bus properly, assigning address spaces correctly and routing properly throughout the SoC. The SoC designer also has to verify that the software running on the CPU is correct, and there again I benefit from the standard that is RISC-V.
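
To give one concrete flavor of that SoC-level work (an editorial sketch, not a flow Jones described; the memory map is invented), a simple integration check might verify that the address spaces assigned to different blocks never overlap:

```python
# Minimal sketch: check that SoC address regions (base, size) do not overlap.
# The regions are invented for illustration only.
REGIONS = {
    "boot_rom": (0x0000_0000, 0x0002_0000),
    "sram":     (0x1000_0000, 0x0010_0000),
    "pcie_cfg": (0x2000_0000, 0x1000_0000),
    "uart0":    (0x4000_0000, 0x0000_1000),
}

def overlaps(a, b):
    """True if two (base, size) regions intersect."""
    (a_base, a_size), (b_base, b_size) = a, b
    return a_base < b_base + b_size and b_base < a_base + a_size

names = sorted(REGIONS)
for i, x in enumerate(names):
    for y in names[i + 1:]:
        assert not overlaps(REGIONS[x], REGIONS[y]), f"{x} overlaps {y}"
print("address map is overlap-free")
```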

When I started out, MIPS had a problem because each MIPS vendor had a different multiply-accumulate (MAC) instruction, since the MIPS ISA didn't define one. Software vendors were charging each MIPS vendor $100,000 to port their compiler. The first vendor got his money's worth; everybody else paid $100,000 for essentially the same compiler. MIPS figured this out, standardized, and everyone was happy. RISC-V avoids those kinds of problems.

Wilson: Do you see extensions as an advantage?

Jones: Yes, because RISC-V can be implemented as a small microcontroller without all the features a small core does not require. Arm does this too, though it doesn't call it an extension; an Arm core is available without a floating-point unit. I would go so far as to say that the number of pages in the Arm ISA and the number of pages in the RISC-V ISA with its various extensions are probably similar, and RISC-V's may be shorter.

Garibay: Aggressive standardization gets ahead of the problem we saw in the past with other architectures, where designers implemented the same function five different ways and then came back to define a standard, leaving four of them upset. It's great to see the RISC-V organization leading in this way and paving the road. The challenge is making sure we fill all the holes.

Brunet: I don't yet see software-stack interoperability happening for RISC-V. It's a complex challenge and probably the main reason adoption is not taking off faster. A few companies with chips that are entirely RISC-V are using it. Most designs are large compute subsystems that are mainly Arm, with some RISC-V cores handling well-defined functions. Few are completely RISC-V. Is the architecture the main reason, or is it the software stack?

Jones: I've done a totally RISC-V chip myself, and I know a number of AI chips that are completely RISC-V. The difference in those successes is that, in general, the software stacks are not intended to be exposed to the end user; they're almost 100% proprietary. Given that, a designer can manage the novelty of the RISC-V software stack. Exposing that stack as an ecosystem to users is the challenge RISC-V has, and it is what the profiles are intended to enable going forward. We're at the beginning.

End of Part I

Also Read:

Prioritize Short Isolation for Faster SoC Verification

Navigating Resistance Extraction for the Unconventional Shapes of Modern IC Designs

SystemVerilog Functional Coverage for Real Datatypes
