Simulation and Formal – Finding the Right Balance
by Bernard Murphy on 01-23-2018 at 7:00 am

Simulation dominates hardware functional verification today and will likely continue to dominate for the foreseeable future. Meanwhile formal verification, once thought to be a possible challenger for the title, has instead converged on a more effective role as a complement to simulation. Formal excels at finding problems in shallow but very broad state graphs and avoids much of the overhead of testbench setup, while simulation has the advantage in deep sequential problems, mixed-level modeling (AMS, SystemC) and asynchronous behavior. Unsurprisingly, many verification teams already use both to maximize signoff quality and throughput.
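
To make the contrast concrete, here is a toy Python model (my own sketch, not any tool's actual algorithm, with a made-up transition function): a bounded-depth exhaustive search, in the spirit of formal, is complete for every behavior within its depth limit, while a single long random walk, in the spirit of a constrained-random test, can reach deep states but offers no completeness guarantee at any depth.

```python
# Toy model of the trade-off: exhaustive-but-shallow vs deep-but-sampled exploration.
import random
from collections import deque

def next_states(state):
    # Hypothetical 8-bit state machine with three possible inputs per state.
    return [(state * 3 + k) % 256 for k in (1, 5, 7)]

def bounded_exhaustive(start, depth):
    """All states reachable within 'depth' steps (breadth-first, complete to that depth)."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        s, d = frontier.popleft()
        if d == depth:
            continue
        for n in next_states(s):
            if n not in seen:
                seen.add(n)
                frontier.append((n, d + 1))
    return seen

def random_walk(start, steps):
    """States touched by one long random trajectory (one deep 'test', no completeness claim)."""
    seen, s = {start}, start
    for _ in range(steps):
        s = random.choice(next_states(s))
        seen.add(s)
    return seen

shallow_complete = bounded_exhaustive(0, depth=4)   # every behavior up to 4 steps, guaranteed
deep_sample = random_walk(0, steps=1000)            # deep reach, but only what one trace touched
print(len(shallow_complete), "states covered exhaustively to depth 4")
print(len(deep_sample), "states touched by one 1000-step random walk")
```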

The trick is finding the right balance. Sticking with simulation alone may be easy, but that doesn’t help if it takes you longer to get to signoff than your competitors, or if that signoff still misses important problems. Equally, an over-enthusiastic view of how widely you can use formal may get you bogged down in tangled abstractions and constraints, with an unclear view of what you really have or have not signed off at the end of it all. Fortunately, a lot of experience has grown up around this topic; Infineon and one of the Cadence IP teams shared their experiences at CDNLive and Jasper User Group events, and Marvell added some important input on optimizing simulation turn-times.

Based on their CDNLive presentation, I would characterize this Infineon team as cautious mid-range users in the world of formal; neither pushing the envelope in complex system verification or heavy usage, nor at the very early stages of adoption. They are using formal to look for corner cases in ~10-20% of test-cases, then re-testing those in simulation, and using formal standalone (no simulation backup) in ~5-10% of cases. They start with a common split between block types:

  • Formal-friendly: low sequential depth, control, data transfer, simpler data transform, and concurrent blocks of these types. It is interesting to note, per Pete Hardee (product management director at Cadence), that simpler datapath blocks (think CSA, the carry-save adder) are starting to appear in the formal-friendly set.
  • Formal-unfriendly: a high number of state elements, high sequential depth, complex datapath elements (think Wallace-tree multipliers).

Infineon made use of a number of pre-packaged apps in JasperGold, including the connectivity app and the control and status register app. They also created some of their own properties for specialized checks, especially at IP boundaries. Infineon put a lot of work into determining coverage from these proofs, which they then merged with simulation coverage, both to confirm the correctness of their split between formal and simulation and to discover where they could reduce the simulation effort needed for coverage. Quite interesting – take a look at how they scaled back effort in their UVM development after examining what they felt was already well proven around register-file testing.
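
As a rough illustration of that merge step, the sketch below uses made-up coverage items (hypothetical names, not Infineon’s data or flow) and simple set operations: the union gives the combined signoff picture, while the intersection flags items already closed by formal proofs as candidates for scaling back UVM effort.

```python
# Hypothetical coverage items; real flows merge tool-generated coverage databases.
formal_covered = {"regfile.read", "regfile.write", "regfile.reset", "connectivity.mux_sel"}
sim_covered = {"regfile.read", "regfile.write", "fifo.overflow", "arbiter.grant_fairness"}

merged = formal_covered | sim_covered               # combined signoff picture
redundant_in_sim = sim_covered & formal_covered     # already proven formally
formal_only = formal_covered - sim_covered          # formal is carrying these alone
sim_only = sim_covered - formal_covered             # still need simulation (or new properties)

print("merged coverage items:", len(merged))
print("candidates to drop from UVM regressions:", sorted(redundant_in_sim))
print("covered by formal only:", sorted(formal_only))
print("covered by simulation only:", sorted(sim_only))
```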

The Cadence IP group also uses formal verification, and their input is in some ways even more revealing. They have to optimize to their own business goals, independently of the tools teams, and they have to satisfy a very wide range of potential use models. Interestingly, the team that presented acknowledged an initially even more cautious view of formal, seeing it as a specialized capability playing no major role in serious signoff (sound familiar?). They chose to step up to more active usage on a CCIX block (a new standard for chip-to-chip cache coherence messaging), where they ran simulation and formal methods in parallel.

A big goal, again, was to get meaningful coverage from formal, and to this end the IP group used several approaches:

  • Classic formal coverage based on full and bounded proofs, with coverage analyzed using proof-core methods. They view this as valuable for finding bugs but not as a primary signoff metric.
  • Something Cadence calls state-swarm, where the search for bugs is driven by end-to-end properties but guided by multiple cover-points along the way, in what amounts to a random walk (they view this as somewhat like constrained-random analysis). This was the IP group’s main signoff criterion; for me it is a variant on the proof-depth cover-point approach.
  • Guide-pointing, which is similar to state-swarm except that the proof must hit the cover-points in order (this is more like a formal variant of directed testing).

The second and third approaches are ways to reach beyond conventional formal cycle-depth limits, in what is generally known as bug-hunting. Bug-hunting is not usually considered a contributor to formal coverage, but it seems the IP group is confident these methods are sufficiently structured to be an effective contributor.
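
The toy model below (my own sketch in Python, not Cadence’s implementation, with an invented transition function) contrasts the two guided styles: a state-swarm-like walk may hit its cover-points in any order, while a guide-pointing-like walk must reach them in sequence, which is why it behaves more like a directed test.

```python
import random

def step(state):
    # Hypothetical transition relation: a few successors per state, over 100 states.
    return random.choice([(state + k) % 100 for k in (1, 3, 7)])

def swarm_walk(start, cover_points, max_steps=100_000):
    """Steps needed to sweep all cover-points in any order (None if budget exhausted)."""
    remaining, s = set(cover_points), start
    for n in range(max_steps):
        s = step(s)
        remaining.discard(s)
        if not remaining:
            return n + 1
    return None

def guided_walk(start, cover_points, max_steps=100_000):
    """Steps needed to hit the cover-points strictly in the given order."""
    idx, s = 0, start
    for n in range(max_steps):
        s = step(s)
        if s == cover_points[idx]:
            idx += 1
            if idx == len(cover_points):
                return n + 1
    return None

cover_points = [17, 42, 63, 88]
print("any-order sweep finished in", swarm_walk(0, cover_points), "steps")
print("in-order sweep finished in", guided_walk(0, cover_points), "steps")
```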

Meanwhile, as verification teams look for ways to offload work from simulation, what remains for simulation gets harder. Marvell talked at the same event about the turnaround-time challenge in simply compiling and elaborating designs now running to a billion gates (you read that right – a billion gates in simulation). Even for simulation, compile and elaboration at that scale can run to many hours. When you’re tweaking a design, you really want to make this step as incremental as possible to minimize major down-times between runs. Making compile incremental is easy – you can do that with Makefile-like methods. But incremental elaboration isn’t so easy; that’s where the netlist is flattened and optimized for simulation performance, which tends to be a chip-wide operation.

Cadence now supports, in its Xcelium Parallel Logic Simulation, a technique it calls Multi-Snapshot Incremental Elaboration (MSIE), which allows user control of partitions in elaboration so that results from earlier elaborations can be reused (only partitions that have changed are replaced). Bottom line: Marvell reported improvements of 30-100X in turnaround time over monolithic compile/elaboration run-times. Adam Sherer (product management group director at Cadence) also noted that Xcelium is now supported on ARM servers, which means that massive simulation workloads can be pushed into datacenters hosting tens of thousands of ARM cores in dense new server architectures.
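
Conceptually, the reuse works along these lines; the sketch below is my own illustration (hypothetical partition names and sources, not the actual MSIE implementation): each user-defined partition is keyed by a hash of its source text, and only partitions whose key has changed since the previous run are re-elaborated, while the rest reuse their earlier snapshots.

```python
import hashlib

def partition_key(source_text):
    # Fingerprint a partition by its source content.
    return hashlib.sha256(source_text.encode()).hexdigest()

def elaborate_incrementally(partitions, snapshot_cache):
    """partitions: name -> source text; snapshot_cache: name -> key from the last run."""
    for name, text in partitions.items():
        key = partition_key(text)
        if snapshot_cache.get(name) == key:
            print("reuse snapshot :", name)
        else:
            print("re-elaborate   :", name)   # the expensive step we want to avoid repeating
            snapshot_cache[name] = key

cache = {}
design_v1 = {"cpu_cluster": "module cpu ... rev1", "noc": "module router ... rev1"}
design_v2 = {"cpu_cluster": "module cpu ... rev1", "noc": "module router ... rev2"}

elaborate_incrementally(design_v1, cache)   # first run: everything elaborates
elaborate_incrementally(design_v2, cache)   # the edit touched only the NoC partition
```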

Back to the balance. Simulation is still the main workhorse, especially for huge designs and giant workloads. Formal adds a first-class complement to that effort, contributing not just in proving and bug hunting but also in coverage, relieving simulation of some important components of signoff. Speakers at the conference also noted that where simulation and formal were run side-by-side, formal found some problems not found by simulation, and vice-versa. Both add to a higher quality result, as long as you get the balance right.
