
2025 Retrospective. Innovation in Verification
by Bernard Murphy on 01-29-2026 at 6:00 am

As usual in January we start with a look back at the papers we reviewed last year. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and lecturer at Stanford, EE292A) and I continue our series on research ideas. As always, feedback welcome.

Looking back at 2025

We decided on a new way to present our findings this year. I’ll start with a list of blogs sorted by major topic (e.g. AI) and then by popularity, follow with a quick summary of Paul and Raúl’s takeaways, and close with some insights into who was reading our posts through 2025.

Our motivation for topic selections this year was obviously influenced by AI. It is amazing to realize how quickly our selections in AI have evolved over the several years we have been posting: from CNNs to RNNs to LLMs and reinforcement learning. You can be confident that we will continue to track this topic. Quantum simulation was surprisingly popular, maybe suggesting a follow-on. Hardware acceleration is always hot; we’ll find more. Analog continues to be important, given growing complexity and its embedded role in digital systems, as does inspiration from software engineering innovations. And, from what we hear, interest in new methods to tackle verification problems in multicore systems remains high.

Category       Topic
AI             Agentic Bug Localization. November. Stanford, Yale, USC
AI             Neurosymbolic Code Generation. September. Google, MIT, UT Austin, Cornell
AI             Prompt Engineering for Security. July. U Florida
AI             LLMs Raise Game in Assertion Gen. April. Princeton
Quantum        Simulating Quantum Computers. December. ETH, U Toronto, U Mass
HW Accel       Emulator-Like Simulation Acceleration on GPUs. October. NVIDIA, U Beijing
Analog         Reachability in Analog and AMS. June. Texas A&M
Analog         Metamorphic Test in AMS. March. U Bremen, JK U Austria
SW inspired    Cocotb for Verification. August. U Tamilnadu
SW inspired    Optimizing an IR for Hardware Design. May. ETH and Cambridge U
Multi-core     Verif Bug Hunting in Multi Core Processors. February. IBM
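For readers unfamiliar with one of the software-inspired techniques above: metamorphic testing checks relations between outputs for related inputs rather than comparing against golden values. The following is a toy Python sketch of our own (an idealized gain stage with a hypothetical scaling relation, not code from the March paper):

```python
# Toy metamorphic test for a simple amplifier model (illustrative only).
# Metamorphic relation: scaling the input by k should scale the output
# by k, as long as the stage stays in its linear region.

def amplifier(v_in, gain=10.0, v_sat=5.0):
    """Idealized gain stage with hard saturation at +/- v_sat."""
    v_out = gain * v_in
    return max(-v_sat, min(v_sat, v_out))

def scaling_relation_holds(v_in, k, tol=1e-9):
    """Check amplifier(k*v) == k*amplifier(v) within tolerance."""
    return abs(amplifier(k * v_in) - k * amplifier(v_in)) <= tol

# In the linear region the relation holds...
assert scaling_relation_holds(0.1, 2.0)
# ...but near saturation it is violated, exposing the nonlinearity
# without needing a reference waveform.
assert not scaling_relation_holds(0.4, 2.0)
```

The appeal for AMS verification is that such relations sidestep the need for exact analog golden references, which are expensive to produce and maintain.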


Paul and Raúl’s combined takeaways

An AI focus is natural, yet we also aim to emphasize grounded research. Our April paper from Princeton on assertion generation is an example, highlighting challenges in this area. Building an agentic verification engineer is going to take more than an off-the-shelf LLM and a bit of prompting. Still, research in AI for verification continues to push forward: November’s agentic bug localization paper echoes the top focus area in industry and academia, while February’s paper explores intermediate representations and knowledge graphs. Expect to see more this year on fine-tuned models, transfer learning, LLM long-term memory, etc.

We’re intentionally optimizing for broad and topical coverage: digital, analog, GPU acceleration, quantum, LLVM. That said, our selections have leaned heavily toward academic papers. We will try to find more industry-sponsored papers this year. Academic research topics are also influenced by industry needs, but it would be useful to see more insights into early deployment experiences, either directly authored or co-authored by semiconductor or systems enterprises.

It is also useful to explore overlaps and differences with software verification, especially for RTL/SystemVerilog/SystemC. Two papers touched on this, and it will continue to be a topic of interest in our selections.

Paul would like to add that we’re grateful and pleased to see research in verification continue to be so vibrant. Stepping back from it all (see below), there really are many quality advances being published, offering a wealth of interesting ideas from around the world!

Who is reading our blogs?

LinkedIn (LI) provides some revealing demographic insights. We now see 20k-30k views per blog. These include not only engineers in ASIC communities but also FPGA communities and software developers.

Top readership groups registered under LI are from Intel, Synopsys, Cadence, Qualcomm, AMD and Apple. Readers are based in the San Francisco Bay Area, the Bengaluru area, Austin (TX) and Portland (OR), with additional interest from Munich (Germany), Paris (France) and Ankara (Turkey).

We greatly appreciate your support and the contributions made by the authors of papers we review. We would still like to see more feedback: suggestions for topics to cover or support, or comments on topics we have reviewed. If you want to provide private feedback, contact me through my LinkedIn page.
