As usual in January we start with a look back at the papers we reviewed last year. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome. And don’t forget to come see us at DVCon; we’re on the first panel of the morning on March 1, 2023, in San Jose. Bring your own favorite ideas to share with us!
The 2022 Picks
These are the blogs sorted by popularity. We averaged a little over 10K hits per blog, an encouraging sign that you continue to enjoy this series. As usual, the hottest blog was the retrospective, very closely followed by hazard detection using Petri nets. The latter scored significantly above average, suggesting we should invest more time in this topic in the coming year. Memory consistency checking, advances in coverage, and security also ranked high. We’ll follow your interests by reviewing more papers on these topics through 2023.
2021 Retrospective. Innovation in Verification
Hazard Detection Using Petri Nets. Innovation in Verification
Post-Silicon Consistency Checking. Innovation in Verification
Dynamic Coherence Verification. Innovation in Verification
ML-Based Coverage Refinement. Innovation in Verification
Symbolic Trojan Detection. Innovation in Verification
Refined Fault Localization through Learning. Innovation in Verification
Validating NoC Security. Innovation in Verification
Ant Colony Optimization. Innovation in Verification
Formal at the System Level. Innovation in Verification
Test Ordering for Agile. Innovation in Verification
Stalling to Uncover Timing Bugs. Innovation in Verification
Paul’s view
A third year of blogging on innovation in verification, and I’m still really enjoying it. I truly believe verification is essentially the infinite problem: if you double the gate count you square the state space, and verification is a problem of covering that state space. There is no such thing as being “done”; there are always more devious bugs to be found if time and budget permit. I am grateful to all those of you in academia and industry who continue to research, innovate, and publish your work. The need and the opportunity are definitely there!
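A quick back-of-the-envelope sketch of that squaring claim, treating gate count as a rough proxy for the number of state bits (my simplification, not a precise model): a design with $n$ state bits has at most $2^n$ reachable states, so doubling to $2n$ bits gives

$$2^{2n} = \left(2^{n}\right)^{2},$$

i.e. the state space is, at worst, squared.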
Looking back on 2022 it was a year of blogging about cool algorithms. I learned about using Ant Colony Optimization in model checking (link). I read two papers that used genetic programming to improve test quality – one on increasing coverage of cross-domain FIFO stalls in big GPUs (link) and another on biasing random instruction generators to produce more memory race conditions (link).
Then there were three papers leveraging neural networks to improve productivity – to predict test failures and test coverage (link), to automatically root-cause bugs (link), and to prioritize which tests to run and in what order (link).
We also blogged about a neat formal method for abstracting a design to the system level using temporal logic assertions (link), and I had a wonderful walk down memory lane back to my PhD days with a paper on using Petri nets to detect hazards in asynchronous interconnect (link).
Lastly, we zoomed in on two papers applying formal methods to security verification – one on Trojan logic detection (link) and the other on NoC security (link).
Stepping back from our blog and looking at our industry at large, AI-driven verification and system-level verification continue to stand out as big trends. Eight of the eleven papers we covered last year fall under these two big buckets. Here’s looking forward to another fun year of blogging on some more innovations in verification!
Raúl’s view
This is my second year with the Verification Blog and, if nothing else, I have learned a lot (and I hope you did too!). In 2021 the main themes we encountered in our random walk (which of course overlap) included higher levels of abstraction (power side-channel leakage, memory consistency at the RTL, instruction-level abstraction) as well as specific verification aspects such as detecting flipped flops and concolic testing. We also covered papers on ML/NN (obligatory).
These themes continued to be prevalent in 2022. In the realm of higher levels of abstraction, we explored hazard detection using Petri nets (August, the most popular of the paper reviews; I wonder whether the interest was sparked by Petri nets or by hazards/asynchronous systems?), the equivalence of higher-level abstractions and RTL (July), and formal properties for NoC security (December). We also looked at how machine learning is being applied to a wide range of verification challenges, including covering hard-to-reach branches (April), fault localization in software (May), and test case selection and prioritization for agile software development (September). Other notable topics we covered included IBM’s Threadmill for post-silicon testing (October), memory consistency verification using genetic algorithms (February), Trojan detection (March), model checking with ant colony optimization (November), and FIFO stall verification (June).
In addition to technical merit and marketability, we also began considering citations and follow-up work. While our sample of two dozen papers is by no means exhaustive, our goal is to give readers a sense of the diverse range of approaches used in verification. And, let’s be honest, where else can you learn about stalling FIFOs, delve into Petri nets, and discover that ants are more effective when they carry food pheromones?