As we established last year, we use the January issue of this blog to look back at the papers we reviewed over the previous twelve months. We lost Jim Hogan and the benefit of his insight early last year, but we gained a new and also well-known expert in Raúl Camposano (another friend of Jim's). Paul (GM, Verification at Cadence), Raúl (Silicon Catalyst, entrepreneur, former Synopsys CTO) and I are ready to continue this series through 2022 and beyond. As always, feedback is welcome.
The 2021 Picks
These are the blogs in order, January to December. All got good hits. The hottest of all was the retrospective, suggesting to me that you too wanted to know what others found most interesting 😀. This year, “Finding Large Coverage Holes” and “Agile and Verification” stood out, followed by “Side Channel Analysis” and “Instrumenting Post-Silicon Validation”. Pretty good indicators of where you are looking for new ideas.
Reducing Compile Time in Emulation
Agile and Verification, Validation
Fuzzing to Validate SoC Security
Instrumenting Post-Silicon Validation
An ISA-like Accelerator Abstraction
Memory Consistency Checks at RTL
I am really enjoying this blog; I can’t believe it’s been two years already. It is amazing to me how Bernard seems to find something new and interesting every month. Our intention when we launched this blog was simply to share and appreciate interesting research, but in practice the papers have directly influenced Cadence’s verification roadmap, which I think is the ultimate show of appreciation.
The biggest theme I saw in our 2021 blogs was raising abstraction. As has been the case for the last 30 years, this continues to be the biggest lever for improving productivity, though I should qualify that as domain-specific abstraction. Historically, abstractions have been independent of application – polygon to gate to netlist to RTL. Now the abstractions are often fragmenting: ISA to ILA for accelerator verification in the September blog. Mapping high-level behavioral axioms to SystemVerilog for memory consistency verification in the October blog. Verilog to Chisel for agile CPU verification in the April blog. Assertions generalizing over sets of simulations for security verification in the May blog. And then of course some abstractions continued to be domain-agnostic: gate-level to C++ for system-level power modeling in the November blog. Coverage to text tagging in the February blog.
The other theme which continued to shine through is how innovation emerges at the intersections of different skills and perspectives. The February blog on leveraging document classification algorithms to find coverage holes is one great example from this year. Early ML methods from the 1980s were rediscovered and reapplied to CPU verification in the June blog. Game theory was used to optimize FPGA compile times in emulation in the March blog. It’s been great to see Bernard take this principle into our own paper selection this year, in a few months diverting from “functional verification” into topics like power, security, and electrical bugs. It’s helping us do our own connecting of dots between different domains.
Looking forward to continuing our random walk through verification again this year!
Without focusing on any particular area, from June to December we touched on many interesting topics in verification. The two most popular were Embedded Logic to Detect Flipped Flops (hardware errors) and Assessing Power Side-Channel Leakage at the RTL Level. Another RTL-level paper dealt with memory consistency. At an even higher level, we looked at instruction-level abstractions for verification. We also had the obligatory papers on ML/NN, one to generate better pseudo-random tests, the other to build accurate power models of IP. Finally, our December pick on concolic testing to reach hard-to-activate branches also deals with increasing test coverage.
One of the criteria we apply in this blog is marketability; methodology papers, foundational papers, extensions of existing approaches, and overly narrow niches all fail to qualify, each for different reasons. This of course has little to do with technical merit. Some of the presented research is ripe for adoption, e.g., the use of ML/NN to improve various tasks in EDA. A few are around methodology, e.g., an emulation infrastructure; some are more foundational, such as higher-level abstractions. Others are interesting niches, for example side-channel leakage. But they are all worthy research, and reading the papers was time well spent!
We three had a lively discussion on what principle (if any) I am following in choosing papers. Publication in a major forum, certainly. As Paul says, it has been something of a random walk through topics. I’d like to get suggestions from readers to guide our picks. Based on hits there are a lot of you, but you are evidently shy about sharing your ideas. Maybe a private email to me would be easier – email@example.com.
- I’m especially interested in hard technical problems you face regularly
- If you can (not required), provide a reference to a paper on the topic. This could be published in any forum.
- I’m not as interested in solved problems – how you used some vendor tool to make something work in your verification flow – unless you think your example exhibits some fundamentally useful capability that can be generalized beyond your application.
Meantime we will continue our random walk, augmented by themes we hear remain very topical: coherency checking, security, and abstraction.