2024 Retrospective. Innovation in Verification
by Bernard Murphy on 01-30-2025 at 6:00 am

Key Takeaways

  • The top blog post of 2024 was on using automated theorem proving to validate multipliers, attracting over 17k views.
  • Over half of the papers covered in 2024 focused on AI-related topics in verification, indicating a strong reader interest in AI.
  • The second most-viewed paper was on a promising architecture called Mamba, based on state space models.

As usual in January we start with a look back at the papers we reviewed last year. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and lecturer at Stanford, EE292A) and I continue our series on research ideas. As always, feedback welcome.

The 2024 Picks

These are the blogs we posted through the year, sorted by popularity. We averaged 12.6k engagements per blog, a little up from last year; thank you for your continued interest! The leader at over 17k views was a surprise: using automated theorem proving to validate multipliers. While this is still an exotic technology, more commonly at home in proving math theorems (the four-color theorem) and the security of specialized OSes, our readership is evidently more than intrigued, perhaps looking for future methods to extend formal verification.

Theorem Proving for Multipliers. Innovation in Verification
The Next LLM Architecture? Innovation in Verification
Accelerating Simulation. Innovation in Verification
2023 Retrospective. Innovation in Verification
Fault Sim on Multi-Core Arm Platform in China. Innovation in Verification
Bug Hunting in NoCs. Innovation in Verification
Fault Simulation for AI Safety. Innovation in Verification
Compiler Tuning for Simulator Speedup. Innovation in Verification
Using LLMs for Fault Localization. Innovation in Verification
BDD-Based Formal for Floating Point. Innovation in Verification
Novelty-Based Methods for Random Test Selection. Innovation in Verification
Safety Grading in DNNs. Innovation in Verification

Paul’s view

Wow! 5 years of blogging our appreciation for innovation in verification. And our readership continues to increase every year. I never expected this. Thank you, readers!

Our most recent paper, picking up on research by MIT on accelerating logic simulation (see here), had the most hits out of the gate in its first month. This was a great paper, and I hope MIT continues active research in this area. Our blog on state space models (see here) was second out of the gate. I didn't expect this but am delighted to see it, as I also found the work very intriguing, especially the structural parallels between an SSM and control vs. datapath in a digital circuit.
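The control-vs-datapath analogy is easiest to see in the SSM recurrence itself: a hidden state is updated each step (like control state) while a readout produces the output (like a datapath result). Here is a minimal toy sketch of that recurrence; the matrices and sizes are invented for illustration and are not from the Mamba paper:

```python
# Minimal discrete state space model (SSM) recurrence, the core idea behind
# architectures like S4/Mamba. Toy illustration only; A, B, C are made up.
# State update: x[k+1] = A @ x[k] + B * u[k];  output: y[k] = C @ x[k].
import numpy as np

def ssm_scan(A, B, C, u):
    """Run the SSM recurrence over an input sequence u, returning outputs."""
    x = np.zeros(A.shape[0])
    ys = []
    for u_k in u:
        x = A @ x + B * u_k        # hidden state carries long-range context
        ys.append(float(C @ x))    # readout of the current state
    return ys

# 2-state toy example on a short input sequence
A = np.array([[0.9, 0.0], [0.1, 0.8]])
B = np.array([1.0, 0.0])
C = np.array([0.0, 1.0])
print(ssm_scan(A, B, C, [1.0, 0.0, 0.0]))
```

Because the recurrence is linear in the state, it can also be unrolled into a convolution for parallel training, which is part of what makes SSM architectures attractive for long sequences.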

The paper on using theorem proving techniques rather than SAT/BDD for multiplier correctness proofs (see here) was also somewhat of a surprise hit for me, but another happy surprise. While the content was heavy, the paper was very well written, and I suspect popular because it is relevant to the verification of AI accelerator chips.

After these top 3 winners we have a mix of hits in the 9-14k range, which still means a lot of interest in every paper we blogged on. I was glad to have read the compiler tuning paper (see here) as it confirmed the same sorts of findings we have been seeing internally at Cadence.
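For a flavor of what automated flag tuning looks like, here is a toy search loop over compiler flag combinations. The paper used Bayesian optimization; the simple random search and the made-up cost model below are stand-ins for illustration only, and the flags and timings are invented:

```python
# Toy flag-tuning loop: random search over compiler flag combinations.
# A stand-in for the Bayesian optimization loop in the paper; the flags
# and the runtime model are invented for illustration.
import random

FLAGS = ["-O3", "-funroll-loops", "-flto", "-fno-exceptions"]

def simulated_runtime(flags):
    # Fake cost model: pretend each flag shaves a fixed amount of runtime.
    gains = {"-O3": 3.0, "-funroll-loops": 1.0, "-flto": 1.5, "-fno-exceptions": 0.5}
    return 10.0 - sum(gains[f] for f in flags)

def tune(trials=20, seed=0):
    rng = random.Random(seed)
    best = (float("inf"), ())
    for _ in range(trials):
        # Sample a random subset of flags and measure (simulated) runtime
        combo = tuple(f for f in FLAGS if rng.random() < 0.5)
        t = simulated_runtime(combo)
        if t < best[0]:
            best = (t, combo)
    return best

print(tune())
```

In a real flow, `simulated_runtime` would compile the simulator with the candidate flags and time a representative regression, and a Bayesian optimizer would pick the next combination to try based on a surrogate model rather than sampling at random.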

Looking forward to this year, we would welcome your feedback and suggestions on what to cover. AI, accelerated simulation, and formal will all continue to be important topics – I see the same high interest in these topics from our Cadence customers. Another area that is getting increasing attention in the commercial world is “synthetic content”. This is verification done by generating interesting bare metal software programs that run on the chip being verified and attempt to stress test correctness and performance in clever ways. Synthetic content sits somewhere between signal level testbenches in, say, SystemVerilog UVM, and true system-level testing running an actual operating system and application software. We’ll try to cover this topic some more in 2025. Looking back on 2024, we didn’t look at mixed-signal verification. We aim to pay more attention to this area this year.
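To make the synthetic-content idea concrete, a generator might look roughly like the toy sketch below, which emits a random bare-metal instruction stream to exercise a design under test. The mnemonics, registers, and operand formats are invented for illustration and don't correspond to any real tool or ISA:

```python
# Toy sketch of "synthetic content" generation: emit a random bare-metal
# instruction sequence to stress a design under test. The ISA subset,
# register names, and operand formats below are illustrative assumptions.
import random

OPS = ["add", "sub", "xor", "ld", "st", "beq"]     # hypothetical ISA subset
REGS = [f"x{i}" for i in range(8)]                  # hypothetical registers

def gen_program(n, seed=0):
    """Generate n random instructions; a seed makes failures reproducible."""
    rng = random.Random(seed)
    prog = []
    for _ in range(n):
        op = rng.choice(OPS)
        if op in ("ld", "st"):
            # Memory ops: random offset off a random base register
            prog.append(f"{op} {rng.choice(REGS)}, {rng.randrange(256)}({rng.choice(REGS)})")
        elif op == "beq":
            # Branches stress control logic and pipeline flushes
            prog.append(f"beq {rng.choice(REGS)}, {rng.choice(REGS)}, label_{rng.randrange(n)}")
        else:
            # Register-to-register ALU ops
            prog.append(f"{op} {rng.choice(REGS)}, {rng.choice(REGS)}, {rng.choice(REGS)}")
    return prog

for line in gen_program(5):
    print(line)
```

Real generators are far more sophisticated, biasing sequences toward interesting microarchitectural corner cases and checking results against a reference model, but the seed-driven, constraint-weighted structure is the same.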

Raúl’s view

Surprisingly, Wine Spectator named a Chilean wine as their #1 pick for their 2024 list. Similarly, our readers surprisingly selected a paper on theorem proving for verifying multipliers as the most-viewed blog post of 2024. However, the broader trend tells a different story. While the wine list remains dominated by familiar regions, our reviewed papers, much like last year, were largely focused on AI in verification. Over half of the papers we covered dealt with AI-related topics.

The second most-viewed paper, Mamba, diverged from verification entirely. It explores an architecture based on state space models (SSMs), which shows promise as an alternative to Transformers, particularly for applications requiring efficient processing of long sequences. Meanwhile, AI applications in verification occupied spots 6, 7, 8, 10, and 11. This might suggest a degree of reader fatigue with these themes, or perhaps a sense of saturation in the field. To list them:

  • Detecting transient faults in DNN accelerators using AI and RTL simulation.
  • Using Bayesian optimization to optimally configure compiler flags.
  • Leveraging large language models (LLMs) for fault localization in Java programs.
  • Applying neural networks to identify coverage “holes.”
  • Employing deep neural networks (DNNs) to enhance functional safety in image classification.

The remaining papers were verification- or EDA-specific and captured considerable interest. Ranking third was a paper on accelerating RTL simulation using a dedicated HW/SW architecture (SASH). Fourth place went to a practical approach for speeding up fault simulation, while fifth focused on leveraging fuzzing for NoC verification. Ninth place featured a study on managing BDD size for floating-point adder verification—a second formal verification paper, though surprisingly, it was among the least viewed.

As I look ahead to a fifth year of blogging about verification papers, I’m curious to see which topics we’ll explore based on your feedback. The continued growth in readership is especially encouraging—our 2023 retrospective saw a 40% increase in readers compared to the previous year. Thank you for your interest!

Also Read:

Accelerating Simulation. Innovation in Verification

Compiler Tuning for Simulator Speedup. Innovation in Verification

Cadence Paints a Broad Canvas in Automotive
