Scalable Concolic Testing. Innovation in Verification
by Bernard Murphy on 12-23-2021 at 10:00 am

Combining simulation and symbolic methods is an attractive way to excite rare branches in block-level verification, but is this method really scalable? Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO) and I continue our series on research ideas. As always, feedback welcome.

Scalable Concolic Testing

The Innovation

This month’s pick is Scalable Concolic Testing of RTL Models. The paper was published in the IEEE Transactions on Computers, July 2021. The authors are from the University of Florida, Gainesville.

Reminder: Concolic testing combines concrete simulation analysis with symbolic methods, aiming to balance the scalability of concrete execution against the exhaustive coverage of symbolic analysis. The authors use this technique to improve coverage on hard-to-reach branches in block-level RTL. Their main contribution is improved heuristics that reduce path explosion in the symbolic analysis without sacrificing coverage.
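
To make the concrete/symbolic split tangible, here is a minimal Python sketch of the general concolic idea. It is not the authors’ flow (they flatten Verilog with the Design Player Toolchain and solve with Yices); z3 stands in as the solver, and a toy function with one rare branch stands in for an RTL block.

```python
import random
from z3 import BitVec, Solver, sat   # pip install z3-solver

def dut(a, b):
    # "Rare" branch: random 8-bit inputs satisfy this roughly 1 time in 16,000.
    if ((a * 3 + b) & 0xFF) == 0x7E and (a & 0xFC) == 0xA8:
        return "rare"
    return "common"

# Concrete phase: cheap random simulation covers the common branch easily
# but almost never reaches the rare one.
hits = sum(dut(random.randrange(256), random.randrange(256)) == "rare"
           for _ in range(10000))
print("random hits of rare branch:", hits)

# Symbolic phase: hand the branch condition to a solver and ask for stimulus.
a, b = BitVec("a", 8), BitVec("b", 8)
s = Solver()
s.add(((a * 3 + b) & 0xFF) == 0x7E, (a & 0xFC) == 0xA8)
if s.check() == sat:
    m = s.model()
    print("solver-directed stimulus:", m[a], m[b],
          "->", dut(m[a].as_long(), m[b].as_long()))
```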

The authors offer a couple of innovations. The first is a method they call “contribution-aware edge-realignment”, whose goal is to find an efficient way to force a single path to an uncovered branch, avoiding state explosion. They look for assignments to variables used in the branch condition and grade these by their likely contribution to meeting that condition.
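
To give a flavor of that grading step, here is a toy sketch (not code from the paper; the Assign structure, scoring weights and variable names are invented) of scoring candidate assignments that drive the variables in an uncovered branch condition:

```python
from dataclasses import dataclass

@dataclass
class Assign:
    target: str       # state variable written by this assignment
    guard_vars: set   # variables the enclosing branch condition depends on

def score_assignment(asg: Assign, cond_vars: set) -> int:
    score = 0
    if asg.target in cond_vars:    # directly writes a condition variable
        score += 2
    score -= len(asg.guard_vars)   # deeply guarded writes are harder to activate
    return score

cond_vars = {"state", "count"}     # variables in the uncovered branch condition
candidates = [
    Assign("state", set()),             # unconditional write
    Assign("count", {"en", "rst"}),     # guarded write
    Assign("ready", {"en"}),            # irrelevant to the condition
]
best = max(candidates, key=lambda a: score_assignment(a, cond_vars))
print("force the path through the assignment to:", best.target)
```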

The second innovation aims to overcome the inefficiency of covering only one rare branch at a time. They strive to cover multiple targets in part by pruning: if uncovered branch x is on the path to uncovered branch y, x can be dropped since it will be covered in activating y (unless y is unreachable). They show considerable improvement over other reported work in run times and memory to activate rare branches.
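
A small sketch of that pruning rule may help; the reachability check and the made-up activation graph below are illustrative only, not the authors’ implementation.

```python
def reachable(graph, src, dst):
    """Depth-first search over an activation graph (dict: node -> successors)."""
    stack, seen = [src], set()
    while stack:
        n = stack.pop()
        if n == dst:
            return True
        if n not in seen:
            seen.add(n)
            stack.extend(graph.get(n, []))
    return False

activation = {"x": ["y"], "y": ["z"], "w": []}   # x fires on the way to y
targets = {"x", "y", "w"}

# Drop any target from which another target is reachable: activating the
# downstream target covers it for free.
kept = {t for t in targets
        if not any(reachable(activation, t, u) for u in targets if u != t)}
print("targets kept after pruning:", kept)       # {'y', 'w'}
```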

Paul’s view

This is a great paper and a very easy read given the depth of the content. I am a big fan of concolic, here used to improve branch coverage. This work is very relevant for mainstream commercial verification.

The core contribution in the paper is the concept of a branch activation graph. This is a graph with nodes representing branches in the RTL (i.e. a branch condition and its begin-end body), with an edge from branch A to branch B if there exists a design state where branch A is triggered and B is not, and where executing the body of branch A takes the design to a state in which branch B is then triggered.
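
A toy example may help picture this. The sketch below builds such a graph by brute-force enumeration over a two-branch counter; the representation is purely illustrative, and a real tool would reason symbolically rather than enumerate states.

```python
# Each branch is (condition over state, body that produces the next state).
branches = {
    "inc":  (lambda s: s["en"] and s["cnt"] < 3,  lambda s: {**s, "cnt": s["cnt"] + 1}),
    "wrap": (lambda s: s["en"] and s["cnt"] == 3, lambda s: {**s, "cnt": 0}),
}

def activation_edges(branches, states):
    edges = set()
    for a, (cond_a, body_a) in branches.items():
        for b, (cond_b, _) in branches.items():
            if a == b:
                continue
            # Edge A -> B: some state triggers A but not B, and executing
            # A's body lands in a state that triggers B.
            if any(cond_a(s) and not cond_b(s) and cond_b(body_a(s)) for s in states):
                edges.add((a, b))
    return edges

states = [{"en": en, "cnt": c} for en in (False, True) for c in range(4)]
print(activation_edges(branches, states))   # edges inc -> wrap and wrap -> inc
```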

This activation graph serves as a guide for a symbolic simulator to prioritize its search to reach an as-yet uncovered branch in verification. If there are no input values that can trigger an uncovered branch from the current state, try input values that trigger an adjacent branch in the activation graph. If this is not possible, pick a branch that is two hops away from the uncovered branch in the activation graph. And so on. After applying this heuristic for several clock cycles there is a good chance the symbolic simulator will hit the uncovered branch. Certainly a much better chance than were it to just randomly toggle inputs and hope to get lucky.
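
Here is a small sketch of how that distance guidance might look; the BFS over a reversed activation graph and the branch names are an illustration of the heuristic as described above, not the authors’ code.

```python
from collections import deque

def hops_to_target(graph, target):
    """BFS on reversed edges: hop distance from each branch to the target."""
    rev = {}
    for a, succs in graph.items():
        for b in succs:
            rev.setdefault(b, []).append(a)
    dist, queue = {target: 0}, deque([target])
    while queue:
        n = queue.popleft()
        for p in rev.get(n, []):
            if p not in dist:
                dist[p] = dist[n] + 1
                queue.append(p)
    return dist

activation = {"reset": ["idle"], "idle": ["busy"], "busy": ["overflow"]}
dist = hops_to_target(activation, "overflow")

# Of the branches the solver can trigger this cycle, steer toward the one
# closest to the uncovered target.
triggerable_now = ["reset", "idle"]
step = min(triggerable_now, key=lambda b: dist.get(b, float("inf")))
print("steer toward:", step, "at distance", dist[step])   # idle, 2
```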

The results presented are compelling. Use of the activation graph along with a few other innovations to prune searches and pick good starting traces results in a solution that is 40x faster and scales 10x more efficiently in memory consumption with design size compared to prior work using alternate heuristics in the symbolic simulator. There is just one outlier case, a usb_phy, where their approach does not work as well. I am really curious why this testcase was an exception; unfortunately, this wasn’t explained in the paper.

We are working on a formal-based concolic engine at Cadence that we call “trace swarm”. The activation graph concept in this paper could be a great fit for this too.

Raúl’s view

The system uses the Design Player Toolchain to flatten Verilog and the Yices SMT solver for constraint solving. The authors compare experimental results to the Enhanced Bounded Model Checker (EBMC) and to QUalifying Event Based Search (QUEBS), another concolic approach. They selected benchmarks from ITC99, TrustHub and OpenCores, with 182 to 456,000 lines of code in the flattened designs, and 20 hard-to-reach targets per benchmark, picked after running a million random tests.

Coverage for this approach is 100%, execution times range from sub-second to 134s, and memory from 9.5MB to 1.2GB. The other approaches don’t reach 100% coverage, and they run between a few times and a few hundred times slower while using 1-10x the memory. There are outliers: USB_PHY runs 16-50x slower with the presented approach (134s) and uses 5 times more memory (still only 138MB). As Paul commented, an explanation would have been nice. QUEBS runs ICACHE 30,000 times slower.

The paper also shows that memory scalability, as unrolled cycles and lines of code increase, is much better than EBMC’s (EBMC being the more scalable of the competing approaches). Finally, they compare target pruning to EBMC, again showing better results; for example, the number of targets pruned by this approach is 15 versus 10 for EBMC.

I like the pragmatism of the approach. It uses an intuitive concept of distance based on assigned variables and path lengths, avoids repeated computation (pruning, clustering), and of course mixes simulation with constraint solving. The results are promising, with modest execution times and memory usage, and they scale well. This would be a worthwhile addition to constrained random test pattern generation (RTPG) to increase coverage, or to existing concolic tools.

My view

As a path to activate rare branches, this looks like a more targeted approach than starting traditional BMC checking from various points along a simulation path. Very interesting.

Also Read

More Than Moore and Charting the Path Beyond 3nm

Topics for Innovation in Verification

Learning-Based Power Modeling. Innovation in Verification
