2020 Retrospective. Innovation in Verification
by Bernard Murphy on 01-20-2021 at 6:00 am

Paul Cunningham (GM, Verification at Cadence), Jim Hogan and I launched our series on Innovation in Verification at the beginning of last year. We wanted to explore basic innovations and new directions researchers are taking for hardware and system verification. Even we were surprised to find how rich a seam we had tapped. We plan to continue the series, starting with a retrospective on what we found last year and how that might direct our discovery this year.

2020 Retrospective

The 2020 Picks

These are the blogs in order, January to December. All did well in views, but the first one and the last two really blew the roof off. We’d be curious to know which were your favorites.

Optimizing Random Test Constraints Using ML

Learning to Produce Direct Tests for Security Verification using Constrained Process Discovery

End-to-End Concolic Testing for Hardware/Software Co-Validation

Metamorphic Relations for Detection of Performance Anomalies

Is Mutation Testing Worth the Effort?

Predicting Bugs. ML and Static Team Up

Using AI to Locate a Fault

Quick Error Detection

Bug Trace Minimization

Covering Configurable Systems

ML Plus Formal for Analog

More on Bug Localization

Paul’s view

It has been such fun reading all these papers and discussing them with Jim and Bernard. I have been so impressed by the quality of work from the various authors and it is wonderful to see that innovation in verification is truly thriving. A very big thank you to all the universities and governments that are sponsoring and funding this research!

Probably the biggest theme that shone through from our cogitations last year is fault localization – helping engineers quickly and efficiently work out why tests fail and where the bugs are in their designs. There are a lot of ideas gaining traction in the software verification world that have not yet fully permeated hardware verification. It’s also clear that ML is a key enabler behind this wave of innovation in fault localization.

Another theme which stands out is how great results nearly always come from combining multiple techniques – simulation with formal, mutation with static, ML with deductive. As a computer scientist and lover of algorithms, this has made for wonderfully enjoyable reading throughout the year.

A very happy new year to all our readers.

Jim’s view

First, I have to agree that there are a lot of creative people out there, imagining new ways to improve verification. Some are immediately interesting. Factors that always attract me here are:

  • Innovations directed at big market transitions. In semiconductor we think of new process nodes, but it could equally be in OpenRAN for 5G, car electrification, improvements to public health infrastructure, big return AI applications – you get the idea.
  • I’m not looking for incremental advances. I want disruptive ideas: unique, with enough IP to be patentable, preserving an advantage for ~5 years until the product is established in its market.
  • It’s important to realize that most investors these days are pretty seasoned, even a little cynical. They know what big ideas look like. Anything else will be a really tough sell.

I see metamorphic testing in this class, along with bug prediction using ML and using AI or other methods to localize faults. I see ML/AI as an extension of statistics: a way of improving and speeding up our guessing. Potential applications here have barely been touched. I’m always a fan of anything to do with analog, a market underserved by automation. I’d want to get my experts to do more due diligence on that paper, but it is immediately intriguing.

I’m not suggesting the other topics are unworthy. Among the other papers are several advances which could be very valuable incremental enhancements to existing verification flows. Perhaps these could be self-funded startups to prove a prototype then slip straight into acquisition?

My view

As the screener of candidate papers for our little group, you might be interested in my methods for selection. I bias toward fundamental research, which tends to be published across a great variety of national and international conferences and is best consolidated through platforms like the ACM and IEEE digital libraries. The ACM library provided more help initially because the IEEE didn’t yet support personal accounts for its library; now it does.

I still like to look in both libraries, because they provide a lot of complementary coverage. Also, we have a lot to learn from our software brothers and sisters. Beyond that, I’m looking for anything topical and relevant to verification. I like to look at fairly recent papers, though Paul (rightly) prods me now and again to look towards the start of the millennium. Sometimes I find hidden gems! We’re all eager to get feedback. If you think we should look harder at some problem or research area, please let us know!
