Formal: Going Deep and Going Early
by Bernard Murphy on 03-20-2018 at 7:00 am

This year I got a chance to talk with Cadence at DVCon on a whole bunch of topics, so expect a steady stream of blogs over the next couple of months. First up was an update from Pete Hardee (Director of Product Management) on, surprise, surprise, formal verification. I’m always trying to learn more about this space, so I picked a couple of topics from our discussion to highlight here.

First, going deep. Formal methods, particularly bounded model-checking, typically operate breadth-first. The engine steps forward one cycle from the current state and checks active assertions, steps out another cycle and so on until assertions are proved, or counter-examples are found, or proving exceeds a specified depth (all modulo constraints of course). This method of proving is exhaustive but limited in the number of cycles it can analyze, since analysis size expands more or less exponentially with depth.
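
To make that concrete, here is a minimal sketch of the kind of property a bounded engine grinds through cycle by cycle; the module, signal names and the 4-cycle bound are my own invention for illustration, not anything specific to JasperGold.

```systemverilog
// Hypothetical handshake property for a bounded model checker to prove.
// The engine unrolls the design from the reset state: it checks the
// assertion at depth 1, then depth 2, and so on, until it completes a
// proof, finds a counter-example, or exceeds its depth limit.
module handshake_props (
  input logic clk,
  input logic rst_n,
  input logic req,   // request into the block (illustrative signal)
  input logic gnt    // grant back from the block (illustrative signal)
);
  // Every request must be granted within 4 cycles.
  property p_req_gets_gnt;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] gnt;
  endproperty

  a_req_gets_gnt: assert property (p_req_gets_gnt);
endmodule
```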

Does this mean you are stuck with checking properties inside that depth? Apparently not. While property-proving in the classical sense is restricted to breadth-first methods, bug-hunting to significantly deeper levels is also possible. One technique the Cadence JasperGold toolset supports is called cycle-swarm, in which the prover works exhaustively for a number of cycles, then advances forward some number of cycles while either not testing or only lightly testing, then restarts full proving from that point, and so on.

This trick, in which the engine ignores big chunks of the state space in order to reach further out, can be directed in other ways. State-swarm follows a trail of cover properties you plant on the design. It doesn’t guarantee to follow them in any particular order, only that it will hit each at some point. Guide-pointing follows a similar approach but guarantees to hit your cover properties in a specified order. In both cases you want to define cover properties close to where you think something might fail.

[Figure: classic formal versus deep searching]
I have struggled to understand these methods before, but I believe I have it now. The engine starts a new search from each cover property once it is hit, effectively resetting the span of the search at that point. From each property you are starting a new (forward) cone of analysis, which is what allows you to reach out so far; you’re advancing step-wise, in bounded cones of analysis. This probably also means you need to scatter cover properties at increments of provable cycle depth on the way to your goal property, so that none of the intermediate cones blows up before the search reaches the next waypoint.
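
To illustrate the waypoint idea, here is a sketch of cover properties planted along the path you expect a failure to take; the FSM encoding and FIFO levels are invented for the example, the point being simply that each waypoint should sit within a provable depth of the previous one.

```systemverilog
// Illustrative waypoints for swarm-style bug hunting. The engine treats
// each cover, once hit, as the starting point for a fresh bounded search.
module swarm_waypoints (
  input logic       clk,
  input logic       rst_n,
  input logic [2:0] fsm_state,   // hypothetical control FSM state
  input logic [7:0] fifo_level   // hypothetical FIFO occupancy
);
  // Waypoint 1: the FSM has reached its streaming state.
  wp1: cover property (@(posedge clk) disable iff (!rst_n)
    fsm_state == 3'd4);

  // Waypoint 2: still streaming and the FIFO is more than half full.
  wp2: cover property (@(posedge clk) disable iff (!rst_n)
    fsm_state == 3'd4 && fifo_level > 8'd128);

  // Waypoint 3: within two entries of full -- the neighborhood where
  // the goal property is expected to fail.
  wp3: cover property (@(posedge clk) disable iff (!rst_n)
    fifo_level >= 8'd254);
endmodule
```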

Pete and others freely acknowledge that this is bug-hunting rather than proving; however, several customers say it can still be a very productive exercise, whether or not you also deploy “classic” formal. At JUG 2015, an HP user presented a case in which he found a potential bug (a FIFO underflow) using state-swarm at nearly 3000 cycles, far beyond the normal reach of formal proofs.
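
For flavor, the goal property in a hunt like that might look like the sketch below; the signal names are my assumptions, not details from the HP design.

```systemverilog
// Hypothetical underflow check: the design must never pop the FIFO while
// it is empty. A violation nearly 3000 cycles from reset is far beyond a
// typical exhaustive proof depth, which is why a swarm search is needed
// to reach it.
module fifo_underflow_check (
  input logic clk,
  input logic rst_n,
  input logic rd_en,   // FIFO pop strobe (illustrative)
  input logic empty    // FIFO empty flag (illustrative)
);
  a_no_underflow: assert property (@(posedge clk) disable iff (!rst_n)
    !(rd_en && empty));
endmodule
```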

The second topic that always interests me is how to put more verification in the hands of RTL designers. Naturally there is a self-serving aspect to this for any tool provider, but there are also broader benefits. One is in supporting the continuing shift-left. To the extent that RTL designers can hand off cleaner code, verification engineers spend less time tracking down bugs and iterating on RTL updates. A second benefit is in supporting reusability. For the current application, you know bugs will be shaken out in block or system verification. But if this has to happen again and again on each new application of that block, reuse loses a lot of its appeal.

Formal apps can be a valuable contributor to ensuring high-quality handoff (against certain objectives) in both cases. Formal lint (Cadence calls it Superlint) should be a familiar starting point for any RTL designer, since in most cases it requires little more effort than running a regular linter.
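
A typical example of the question formal lint can settle, where a purely structural linter can only warn, is whether an out-of-range index is actually reachable. A sketch of the pattern, with names and sizes invented:

```systemverilog
// Structural lint flags a possible out-of-bounds read because sel can
// count to 15 while the table has only 10 entries. Formal lint either
// proves the offending values are unreachable under the design's
// constraints or produces a trace that reaches one of them.
module lint_example (
  input  logic       clk,
  input  logic       rst_n,
  input  logic [3:0] sel,        // 0..15 (illustrative)
  output logic [7:0] data_out
);
  logic [7:0] table_mem [0:9];   // only 10 entries

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n)
      data_out <= '0;
    else
      data_out <= table_mem[sel];   // potential out-of-range access
  end
endmodule
```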

The other app in the designer’s desktop is CDC (clock-domain crossing) checking. This app runs structural checks (e.g. requiring approved synchronizers at domain crossings), functional checks (e.g. checking grey-coding on FIFO read/write pointers) and reconvergence checks (e.g. requiring one-hot coding). Handing this kind of analysis to RTL designers ought to be a no-brainer for handoff, though I’m not sure how far this has progressed across the industry and how much it’s still farmed out to the verification team. Perhaps the inevitability of the shift-left squeeze will make the transition inescapable.
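
For reference, the structures those structural and functional checks look for reduce to something like the following sketch (widths and names are illustrative): a grey-coded pointer crossing into a two-flop synchronizer.

```systemverilog
// Minimal CDC sketch: grey-code the pointer before the crossing so only
// one bit changes per clock (functional check), then pass it through an
// approved two-flop synchronizer in the destination domain with no logic
// between the crossing flops (structural check).
module cdc_sketch (
  input  logic       src_clk,
  input  logic       dst_clk,
  input  logic       rst_n,
  input  logic [3:0] wr_ptr_bin,   // binary write pointer, source domain
  output logic [3:0] wr_ptr_sync   // synchronized grey pointer, destination domain
);
  logic [3:0] wr_ptr_grey, sync_ff1;

  // Binary-to-grey conversion in the source domain.
  always_ff @(posedge src_clk or negedge rst_n)
    if (!rst_n) wr_ptr_grey <= '0;
    else        wr_ptr_grey <= wr_ptr_bin ^ (wr_ptr_bin >> 1);

  // Two-flop synchronizer in the destination domain.
  always_ff @(posedge dst_clk or negedge rst_n)
    if (!rst_n) begin
      sync_ff1    <= '0;
      wr_ptr_sync <= '0;
    end else begin
      sync_ff1    <= wr_ptr_grey;
      wr_ptr_sync <= sync_ff1;
    end
endmodule
```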

Go HERE to learn more about JasperGold verification apps and watch a video in which Pete explains the advantages of the RTL designer’s desktop.
