Agile methods in hardware design are becoming topical again. What does this mean for verification? Paul Cunningham (GM, Verification at Cadence) and I continue our series on research ideas. We’re also honored this month to welcome Raúl Camposano to our blog as a very distinguished replacement for Jim Hogan. As always, feedback welcome.
The Innovation
This month’s pick is “An Agile Approach to Building RISC-V Microprocessors,” published in IEEE Micro in 2016 by a team from UC Berkeley.
Agile development is already well-established in software. As projects have become larger, more complex and more distributed, traditional waterfall development struggles to converge or to deliver to expectations. Can agile methods apply to hardware design? So far, hardware teams have mostly resisted this change, arguing that hardware development has very different constraints, complexities and costs. Yet the same trends continue in size, complexity and distributed development, with ever more challenging schedules. The authors of this paper (including David Patterson and Krste Asanović) have adapted a set of agile development principles to hardware over five years and multiple design projects. They argue that agile is not only possible but desirable for hardware design.
Breaking from previous Innovation reviews, this is not a deeply technical paper. It covers something much more fundamental – a change in process for development and verification/validation. Our focus here is on the latter. Traditional practice separates pre-silicon verification and post-silicon validation, whereas with agile hardware development, product features are continuously validated in the target application – in their case using FPGA prototypes – alongside verification against a spec.
Paul’s view
There’s no question that agile has transformed the software community; we use agile methods heavily at Cadence for EDA software development. I find this RISC-V study intriguing for two reasons. On one hand, continuous incremental feature development resembles traditional hierarchical design, in which units (IPs) are developed and verified in parallel, with continuous integration and validation at the top.
On the other hand, RTL for the RISC-V core here is auto-generated from code generators written in a high-level language called Chisel. The authors combine this raised abstraction with a highly automated physical implementation flow to enable rapid iteration on many different generated RTLs. This iteration is more agile than would be possible in a traditional hierarchical design methodology. The authors also mention that Chisel source code can be compiled to both RTL and a fast cycle-accurate C++ model. They don’t elaborate on how C++ models were used for verification and validation. I would welcome some follow-on publications here.
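As a rough sketch of what this looks like (my illustration, not code from the paper), the fragment below defines a small parameterized Chisel module and elaborates it to Verilog. The ChiselStage emitter call is a Chisel 3.x-era assumption – the entry point has moved across releases – and the same elaborated design can also drive a fast software simulation (for example via Verilator), which is presumably where a cycle-accurate C++ model would fit in.

import chisel3._
import chisel3.stage.ChiselStage

// A parameterized free-running counter: one Scala class describes a whole
// family of RTL designs, selected by the width argument at elaboration time.
class FreeCounter(width: Int) extends Module {
  val io = IO(new Bundle {
    val enable = Input(Bool())
    val count  = Output(UInt(width.W))
  })
  val value = RegInit(0.U(width.W))
  when(io.enable) { value := value + 1.U }
  io.count := value
}

object Elaborate extends App {
  // Emit Verilog for a 16-bit variant (Chisel 3.x-style emitter; newer
  // releases use circt.stage.ChiselStage instead).
  ChiselStage.emitVerilog(new FreeCounter(16))
}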
The idea of bringing agile to hardware by raising the abstraction resonates with me. If we can make hardware source code sufficiently software-like, and we can sufficiently automate physical implementation, then agile hardware design could become mainstream.
The challenge in achieving this next leap is to abstract at scale in our industry. Chisel extends the functional programming language Scala. SystemC extends C++ for hardware design, and both Cadence and Siemens offer SystemC synthesis and verification solutions. As Hennessy and Patterson explain in their Turing Award paper on domain-specific architectures, there are multiple languages with a domain-specific emphasis, including Halide, MATLAB, and TensorFlow, all with active communities and tool development. How many abstractions will we need, and how easily can these proliferate? Please let us know what you think!
Raúl’s view
The authors make an important point – this is not high-level synthesis, it’s generation. If something is wrong, they don’t so much fix the design as improve the tools and generators. I see this as a little different from a traditional agile flow and not quite the same as raising the level of abstraction. It’s abstracting differently, in terms of generators. I think that’s interesting.
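To make that distinction concrete, here is a minimal sketch of what generation means in practice (again my illustration, not code from the paper; PipeConfig and DelayPipe are hypothetical names). The hardware is built by running ordinary Scala code over a configuration object, so a fix or improvement to the generator propagates to every design it emits.

import chisel3._

// Hypothetical configuration: design parameters are plain Scala values.
case class PipeConfig(width: Int, stages: Int)

// A generator rather than a fixed design: elaboration executes the fold
// below to build however many pipeline registers the config asks for.
class DelayPipe(cfg: PipeConfig) extends Module {
  val io = IO(new Bundle {
    val in  = Input(UInt(cfg.width.W))
    val out = Output(UInt(cfg.width.W))
  })
  // Chain cfg.stages register stages between input and output.
  io.out := (0 until cfg.stages).foldLeft(io.in)((sig, _) => RegNext(sig))
}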
I teach a class at Stanford, where Rick Bahr and others have a related paper based on the domain-specific language Halide; they also describe an agile process. Both papers introduce intriguing ideas, some of which have already found their way into commercial IP development, e.g., memory generators. In logic, I think flows like this would be great for architectural exploration – for figuring out what we want the system to do. Both can generate down to the microarchitectural level.
After that, you still must deal with all the nitty-gritty details of Verilog, external IP and synthesis, where agile doesn’t obviously have a role. I think there’s merit in verifying chip-level prototypes early on, a point both papers make. If you have a prototype, you have a functioning system useful for functional verification, which is also key in agile software design. Microarchitecture and implementation would be a different topic. Still, the prototype could also be a useful validation reference during microarchitecture design.
My view
The authors are clear that their motivation for using an agile flow is to validate early and often, but the general emphasis in the paper is on design creation rather than verification – which I understand, since agile has a big impact on that phase. It would be very helpful to dig deeper into the impact of agile on verification/validation processes and where there might be potential for agile-related improvement.