2023 Retrospective. Innovation in Verification
by Bernard Murphy on 01-25-2024 at 6:00 am

As usual in January we start with a look back at the papers we reviewed last year. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome. We’re planning to start a live series this year to debate ideas and broader topics, and to get your feedback. Details to follow!

The 2023 Picks

These are the blogs we posted through the year, sorted by popularity. We averaged 12.7k engagements per blog, a meaningful increase over last year, which we take as an indication that you continue to enjoy our reviews of current research in verification. The leader was no surprise: applying LLMs to automated code review, at almost 17k engagements. A close second uses ML to develop model abstractions. In fact the top four blogs in 2023 were all on applications of AI/ML. Petri nets made an appearance again this year, here for validating rapidly evolving DRAM protocols. Dedicated hardware for speculation in simulation and a method for finding anomalies rounded out the list. The retrospective for 2022 did about as well as usual but was overshadowed by interest in other papers through the year. It is a safe bet we will be looking at more applications of AI/ML in 2024!

Automated Code Review. Innovation in Verification
ML-Guided Model Abstraction. Innovation in Verification
Deep Learning for Fault Localization. Innovation in Verification
Assertion Synthesis Through LLM. Innovation in Verification
Better Randomizing Constrained Random. Innovation in Verification
Petri Nets Validating DRAM Protocols. Innovation in Verification
Developing Effective Mixed Signal Models. Innovation in Verification
ML-Based Coverage Acceleration. Innovation in Verification
Speculation for Simulation. Innovation in Verification
2022 Retrospective. Innovation in Verification
Anomaly Detection Through ML. Innovation in Verification
Information Flow Tracking at RTL. Innovation in Verification

Paul’s view

Another year flies by, and 49 papers read since we started the blog in November 2019! Back then we were thinking it would be a great way to bring together our verification community and show our appreciation for continued investment in verification research at academic institutions around the world.

What I didn’t predict was how reading all these papers would inspire new investments and innovations at Cadence. Writing this blog has taught me that even at an executive level in engineering, staying connected to ground-level research and reading papers regularly is good for business. So thank you, readers, and thank you Bernard!

No surprise that our top 3 hits last year were all papers on using AI in verification: one on AI to automate code review (link), one on AI to help find bugs more quickly in high-level Simulink models of mixed-signal devices (link), and one on using AI to automatically identify which line of source code is the root cause of a test failure (link). We absolutely need to continue to invest in research here, both in academia and in the commercial world. Somehow, over the next decade, we need to find our next 10x in verification productivity, and it’s most likely to come from AI.

That said, my personal shout-out from 2023 is not AI related. It’s for two papers in logic simulation: one on parallelizing simulation using speculative execution of the event queue (link), and the other on improving the distribution quality of randomized inputs in constrained random tests using clever hashing functions (link). I call these “engine-level” innovations: making the building blocks inside EDA tools fundamentally better. These two papers were very innovative yet had nothing to do with AI; let’s not forget to keep investing in non-AI innovation as well.
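
To make the speculation idea concrete, here is a toy Python sketch, my own illustration rather than the paper’s hardware design: a batch of pending events is evaluated as if the events were independent, and any event that reads a signal written by an earlier event in the same batch is treated as mis-speculated and replayed in timestamp order. The event format (time, read set, write dict) is hypothetical.

```python
import heapq
from itertools import count

_seq = count()  # tie-breaker so the heap never compares read/write fields

def push_event(queue, time, reads, writes):
    """reads: set of signal names; writes: {signal: value} dict."""
    heapq.heappush(queue, (time, next(_seq), reads, writes))

def speculative_batch_step(queue, state, batch=4):
    # Pop a batch and evaluate all events as if they were independent.
    popped = [heapq.heappop(queue) for _ in range(min(batch, len(queue)))]
    written, conflicts = set(), []
    for t, _, reads, writes in popped:
        if reads & written:
            # Mis-speculation: this event reads a signal a batch-mate wrote.
            conflicts.append((t, reads, writes))
            continue
        state.update(writes)
        written |= set(writes)
    # Squash and replay: conflicting events apply serially, in time order.
    for t, reads, writes in sorted(conflicts, key=lambda e: e[0]):
        state.update(writes)
```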

Raúl’s view

Writing this retrospective during the holidays inevitably collides with one of humankind’s necessities, one that can be elevated to an art: eating. Reviewing restaurants perhaps shares enough with reviewing papers to justify ratings such as ★★★ (exceptional, worth a special journey), ★★ (excellent, worth a detour), ★ (high quality, worth a stop), and 😋 (exceptionally good at moderate prices). Paul already stated that our September review was a “Michelin star topic”. I will continue in this vein, using your preferences (number of views), dear readers, as the yardstick.

While last year’s blog was largely about cool algorithms, this year’s was about AI/ML and software (SW). The top three ★★★ papers were all about verification of SW using AI/ML. The top-rated blog (July) was about code review with generative AI, the second (November) dealt with testing and verifying SW for Cyber-Physical Systems using surrogate AI models, and the third (May) was about detecting and fixing bugs in Java, augmented with AI classifiers. Two of these three papers use large datasets from GitHub for training. Such data is not publicly available for hardware (HW) design, which is arguably different enough from SW to at least raise the question of whether these results can or will be replicated for HW. Nevertheless, looking at what the SW community is doing about verification is certainly a source of inspiration.

The next three papers, ranked with ★★, are an eclectic collection of AI/ML, a very cool algorithm, and Petri nets. All deal with verification in EDA. September’s paper was a preview of using an LLM (GPT-4) and a model checker (JasperGold) to translate English into SystemVerilog Assertions (SVA). The next one (June) addressed how to sample the solution space for constrained random verification uniformly while meeting the constraints, a cool algorithm for a hard problem, dating back to 2014. The last contribution in this group (April) extended Petri nets for the verification of JEDEC DDR specifications; it is educational both on JEDEC specs and Petri nets, and uncovers one timing violation.
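
For the June paper, here is a toy sketch of the underlying idea, my own illustration rather than the authors’ algorithm; real tools hand the intersected constraints to a SAT solver instead of enumerating. Random XOR parity constraints are added to the original constraint, each roughly halving the solution set, so the survivors form a small, randomly placed cell; picking uniformly within that cell approximates uniform sampling over all solutions. The constraint in `satisfies` is hypothetical.

```python
import random

def satisfies(x):
    # Hypothetical constraint standing in for a real solver's constraint
    # set: an 8-bit value whose low nibble is smaller than its high nibble.
    return (x & 0xF) < (x >> 4)

def random_xor(nbits):
    # Random parity (XOR) constraint over a random subset of bits.
    mask = random.getrandbits(nbits)
    parity = random.getrandbits(1)
    return lambda x: (bin(x & mask).count("1") & 1) == parity

def hashed_sample(nbits=8, n_xors=4):
    # Each XOR constraint roughly halves the solution set; the survivors
    # form a small, randomly placed "cell" to sample from.
    xors = [random_xor(nbits) for _ in range(n_xors)]
    cell = [x for x in range(2 ** nbits)
            if satisfies(x) and all(c(x) for c in xors)]
    return random.choice(cell) if cell else None  # caller retries if empty
```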

Papers 7-9, ranked with ★, deal with analog design verification, CPU verification, and parallel SW execution. In October we reviewed an invited paper in the IEEE Open Journal of the Solid-State Circuits Society; besides being a good tutorial on analog design and validation, its main contribution is replacing analog circuit models with functional models to accelerate SPICE simulation by four orders of magnitude. February’s paper was about using DNNs to improve random instruction generators in CPU verification, showing a reduction of “the number of simulations by a factor of 2 or so” in a simple example (IBM Northstar, 5 instructions). March brought us the complete design of a HW accelerator implementing the Spatially Located Ordered Tasks (SLOT) execution model, exploiting parallelism and speculation for applications that generate tasks dynamically at runtime.
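
To see why swapping circuit-level models for functional models buys orders of magnitude, consider this toy Python comparison, my own illustration and not the paper’s models: a “SPICE-like” view integrates an RC low-pass filter at tiny timesteps, while the functional view evaluates its closed-form step response once.

```python
import math

R, C = 1e3, 1e-9   # 1 kOhm, 1 nF -> tau = 1 microsecond
TAU = R * C

def circuit_level_step(v_in, t_end, dt=1e-10):
    # Circuit-level view: numerically integrate dv/dt = (v_in - v) / tau
    # at tiny timesteps, tens of thousands of evaluations per microsecond.
    v, t = 0.0, 0.0
    while t < t_end:
        v += dt * (v_in - v) / TAU
        t += dt
    return v

def functional_model_step(v_in, t_end):
    # Functional view: one closed-form evaluation of the step response.
    return v_in * (1.0 - math.exp(-t_end / TAU))
```

For a 5 µs interval at a 0.1 ns step, the integration loop runs 50,000 times where the functional model runs once; block-level functional models win in roughly the same spirit.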

Which leaves us with two 😋 recipients. In August we reviewed a paper from 2013 which pioneered k-means clustering for post-silicon bug detection. And in December we looked at a very important topic, security verification using IFT (Information Flow Tracking) and its extension from gate level to RTL. Not surprisingly, December’s contribution got the fewest hits, as our readers were probably facing the dilemma described at the outset.
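
As a rough sketch of the August paper’s flavor, not its exact flow: cluster per-run signatures from on-chip monitors, then flag runs far from every cluster of normal behavior as candidate bug escapes. The data here is synthetic and the monitor format hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical data: one row of on-chip monitor counters per test run.
rng = np.random.default_rng(0)
signatures = rng.normal(size=(500, 16))

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(signatures)
# Distance from each run to its nearest cluster of "normal" behavior;
# runs far from every cluster are flagged as candidate bug runs.
dist = km.transform(signatures).min(axis=1)
suspects = np.where(dist > np.percentile(dist, 99))[0]
```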

Ratings can be arbitrary at times; all these contributions are star-worthy and advance the state of the art. We can be grateful for an active, international research community in academia and industry tackling really hard problems. As for my personal preferences, you can guess…
