Changing RISC-V Verification Requirements, Standardization, Infrastructure
by Daniel Nenni on 11-07-2024 at 10:00 am

RISC-V Verification

A lively panel discussion about RISC-V and open-source functional verification highlighted this year’s Design Automation Conference. Part One looked at selecting a RISC-V IP block from a third-party vendor and investigating its functional verification process.

In Part Two, moderator Ron Wilson, Contributing Editor to the Ojo-Yoshida Report, took the panel of verification and open-source IP experts on a journey through RISC-V’s changing verification requirements, standardization, and infrastructure. Panelists included Jean-Marie Brunet, Vice President and General Manager of Hardware-Assisted Verification at Siemens; Ty Garibay, President of Condor Computing; Darren Jones, Distinguished Engineer and Solution Architect with Andes Technology; and Josh Scheid, Head of Design Verification at Ventana Microsystems.

Wilson: When SoC designers think RISC-V, they think microcontroller or at least a tightly controlled software stack. Presumably, evolution will bring us to the point where those stacks are exposed to app developers and maybe even end users. Does that change the verification requirements? If so, how?

Scheid: On standardized stacks? Market forces will see this happen. We see Linux-capable single-board computers with a set of different CPU implementations. Designers see the reaction to non-standardization. The RISC-V community has survived a couple of opportunities for fragmentation. The problem was with the vector extension and some debate about compressed instructions. We survived both by staying together with everything. I think that will continue.

Back to these single-board computers. The ability for the software development community to run on those lets developers say, “Yes, I work on this, and I work on these five things, therefore I consider that.” It’s a way for software developers to know their work is portable, that they’re building on the common standard together.

Garibay: The difference isn’t in so much the verification of a CPU. We’re spending much more time at the full-chip SoC deployment level and that’s typically ISA-agnostic. A designer is running at an industry-standard bus level with industry-standard IO and IP. That layer of the design should standardize in a relatively stable manner.

Wilson: Does it help your verification?

Jones: Sure. The more software there is to run on RISC-V CPUs, the better. Once designers step outside of the CPU, it’s plug-and-play. For example, companies take products to plugfests for various interconnect technologies, including PCIe. When PCIe was brand new, they couldn’t have done that. If a customer is building a system, they want to have more software. The more software there is to run, the better it is for everyone, including the CPU vendor, who can run it to validate their design.

Brunet: Another aspect is the need to run much more software, and the need for speed. Designers are not going to run everything at full RTL. Using a virtual model and virtual platform has been helpful for conventional architectures. With RISC-V, we are starting to see some technology that is helping the virtual platform, and it’s necessary. We don’t see standardization, and it’s a bit of the Wild West with respect to RISC-V virtual platform simulation. It will benefit the industry to have better standardization, as proven by the traditional Arm platform: a reliable virtual environment that can accelerate and run more software validation. I don’t see that happening yet with RISC-V.

Wilson: I’m hearing that the infrastructure around RISC-V has not caught up to where Arm is, but the ramp is faster than for any previous architecture. It’s getting there and RISC-V has advantages, but it’s brand new.

Scheid: I can feel the speed of the ramp.

Jones: Yes. Having seen the 64-bit Arm ramp for many years, the rate RISC-V is moving has been much more rapid. Arm broke the seal on extra architectures. RISC-V can learn all those lessons, and we’re doing well.

Wilson: Are we going to sacrifice quality or accuracy for that speed of ramp, or is RISC-V doing what other architectures have done faster and just as well?

Garibay: RISC-V is not a monolith. It is individual providers and many players in the market. The competition could cause corner cutting, and that’s the great capitalist way forward. Designers must get what they pay for, and there’s nothing inherently different about RISC-V in that respect. The good news is that they have a number of potential vendors. At least one of them has something that is good and useful. It may be up to the designer to figure that out, but the good work will prove viable over time.

Jones: I agree with Ty. We’re probably supposed to disagree to make the panel more interesting. But I agree that with RISC-V, the competition is actually a good thing, and that’s enabling a faster ramp. How many CPU design teams are designing Arm processors? A couple that are captive. Otherwise, it’s Arm. How many CPU design teams are designing RISC-V? Many. Andes has more than one team working on it.

Competition produces quality. Products that aren’t good won’t last. If designers have a software infrastructure with Arm and want to make a functional safety feature but don’t like Arm’s offering, they’re stuck. With RISC-V, they have a multitude of vendors that offer whatever’s needed.

Wilson: I’d like to dig into that. Historically, one RISC-V advantage is that it is open source. Designers can add their own instructions or pipeline. Is that true anymore with full support vendors beginning to organize the market? If so, what kind of verification challenge does a designer take on as an SoC developer?

Scheid: There are two sides. Certainly, there’s a verification aspect because a designer is taking on some of that design and making decisions about what’s appropriate as an architectural extension, which is powerful but there is risk and reward.

Another consideration is software ecosystem support. For any given instruction set architecture, the resources the ecosystem spends on software support are far greater than what is spent on hardware.

Designers must consider the choice of going with standard customization or not. RISC-V offers a third path: proposing extensions to the RISC-V community for standardization and ratification as actual extensions. That can matter to a design team, depending on what it wants to keep proprietary. As a middle ground, it lets a design team customize now and gain externalized ecosystem support over time by convincing the RISC-V community that this is a value-added, viable extension.

Garibay: The ability to extend the instruction set is one of the great value-added propositions driving some of the momentum toward the architecture.

How does that affect verification? Obviously, if the licensor is the one making the changes, it takes on some unique accountability and responsibility. As a company that licenses its IP, it has the responsibility to make it safe to create an environment around the deliverable so that the IP cannot be broken.

It’s an added burden for a CPU designer to create the environment to validate that statement is true to the extent that’s possible. It’s a critical value-add to the offering and worth spending engineering cycles to make it happen.

Licensors must make sure they create safe sandboxes for an end user or an SoC builder that are proprietary to the licensee. If the licensee wants a change and is willing to pay for it, great. If the licensor wants to put something in their hands and let them roll, it’s a collaborative process that must be part of the design from the beginning to make sure that it is done right.

Wilson: Is it possible to create a secure sandbox?

Garibay: Yes. I think Andes has done that.

Wilson: Do you end up having to rerun your verification suite after they’ve stabilized their new RTL?

Jones: A CPU IP vendor must put a sandbox around this unless it’s a company willing to do a custom design. If a designer wants to add a couple of special instructions, the company needs to make sure that the designer’s special instructions won’t break everything else. For instance, Andes has a capability through which designers can add their own computation instructions.

Wilson: You’re only allowed to break what you put into your custom?

Garibay: Yes, absolutely.

Jones: Designers have to verify their special add, sum, subtract, multiply on their own. That’s another question for the CPU vendor: How do you enable me to verify this? How do you enable me to write software to it? Buyer beware. You have to check.
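Jones’s “buyer beware” point can be made concrete with a sketch. The example below is purely illustrative and not an Andes flow: it imagines a licensee-defined saturating-add custom instruction, models it as a plain Python function standing in for the RTL, and checks it against a golden reference with random and corner-case stimulus. All names and semantics here are invented for the example.

```python
import random

MASK32 = 0xFFFFFFFF

def ref_addclip(a, b):
    """Golden reference for a hypothetical custom instruction:
    unsigned 32-bit add that saturates instead of wrapping."""
    return min(a + b, MASK32)

def dut_addclip(a, b):
    """Stand-in for the licensee's implementation (in a real flow,
    this would drive an RTL simulator or FPGA model)."""
    total = a + b
    return MASK32 if total > MASK32 else total & MASK32

def check(n_trials=10_000, seed=0):
    rng = random.Random(seed)
    for _ in range(n_trials):
        a = rng.randrange(0, 1 << 32)
        b = rng.randrange(0, 1 << 32)
        assert dut_addclip(a, b) == ref_addclip(a, b), (a, b)
    # Directed corner cases that random draws are unlikely to hit.
    for a, b in [(0, 0), (MASK32, 1), (MASK32, MASK32)]:
        assert dut_addclip(a, b) == ref_addclip(a, b)
    return True
```

The point of the structure, not the arithmetic, is what matters: the custom instruction gets its own reference model and its own stimulus, separate from the vendor’s base-ISA verification suite, which is exactly the burden Jones says falls on the designer.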

Wilson: When we start talking about security, is there a role for formal tools, either for vendors or users?

Garibay: We use formal tools in our verification. One shortcoming of RISC-V is that its spec is not in a state that is easily usable by formal tools. I’d love to see the RISC-V spec go that way.

Scheid: We use a full spectrum of different formal methods within our implementation process. In terms of customization, the method that makes the most sense for a special add would be the type of commercial tool that allows C-to-RTL equivalency checking. With the right type of sandbox approach, it could be directly applicable to solving that problem for a designer’s customization.

Jones: I’ll take a little bit of the high road. You always have to ask the question about formal, and formal means different things to different people. Formal has a role; a CPU vendor may be asked whether they used formal, for what, and what the formal tools found. Formal is good where traditional debug is difficult, such as ensuring a combination of states can never be hit. Proving a negative is difficult in debug but is a strength of formal.
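Jones’s “proving a negative” can be illustrated with a toy model checker. None of this is any panelist’s tooling: the two-signal handshake below and its transition rules are invented for the example. An exhaustive search of the reachable state space proves that a bad state combination (completion without a grant) can never be hit, which is the kind of property simulation alone cannot establish.

```python
from collections import deque

# Hypothetical handshake: a requester FSM (IDLE, WAIT, DONE) and a
# grant signal (OFF, ON). Property to prove: ("DONE", "OFF") is unreachable.

def step(state):
    """Enumerate every successor of (req, gnt) under the handshake rules."""
    req, gnt = state
    succs = {state}                      # the system may also hold state
    if req == "IDLE":
        succs.add(("WAIT", gnt))         # requester may raise a request
    elif req == "WAIT":
        succs.add(("WAIT", "ON"))        # arbiter may grant
        if gnt == "ON":
            succs.add(("DONE", "ON"))    # completion only while granted
    else:  # DONE
        succs = {("IDLE", "OFF")}        # request and grant drop together
    return succs

def reachable(init=("IDLE", "OFF")):
    """Breadth-first exploration of the full reachable state space."""
    seen, frontier = {init}, deque([init])
    while frontier:
        for nxt in step(frontier.popleft()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# Exhaustive search proves the negative: DONE never occurs without a grant.
assert ("DONE", "OFF") not in reachable()
```

Because the search visits every reachable state, the final assertion is a proof over this model rather than a spot check; industrial formal tools apply the same idea, with symbolic techniques, to state spaces far too large to enumerate.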

Brunet: For formal’s place in this space, I come back to the ability to do customization of instructions, an attractive feature of RISC-V. Clearly, it can be done with Arm, but a different size of checkbook is needed there. It’s attractive, but it brings verification challenges beyond what comes with Arm.

RISC-V has a higher stack of verification. A custom instruction set goes completely through the roof on what needs to be verified. It’s doable. The verification bar is high, complex and focused on certain verticals. It’s also not for everybody. It’s a good attribute, but there’s a price in verification. Another interesting aspect of competition is the EDA space. Siemens is the only EDA vendor that does not provide any processor IP. The other two are vocal about RISC-V, creating an interesting situation with the market dynamic.

Also Read:

The RISC-V and Open-Source Functional Verification Challenge

Andes Technology is Expanding RISC-V’s Horizons in High-Performance Computing Applications

TetraMem Integrates Energy-Efficient In-Memory Computing with Andes RISC-V Vector Processor
