
Tutorial on Advanced Formal: NVIDIA and Qualcomm
by Bernard Murphy on 04-03-2018 at 7:00 am

I recently posted a blog on the first half of a tutorial Synopsys hosted at DVCon 2018. This blog covers the second half of that 3½-hour event (so you can see why I didn’t jam it all into one blog :D). The general theme was advanced use models; the first half covered the use of invariants and induction, along with views from a Samsung expert on efficient modeling for formal. In the second half we had customer presentations on two perennially popular topics: architectural formal verification for cache coherence and formal signoff.

Syed Suhaib is a formal verification expert at NVIDIA and shared his view on verifying a coherency manager using architectural formal verification. I’ve written about this before, so you get my standard quick problem recap: you have a verification problem at the system level (not contained inside a component), and there are no obvious long-pole candidates to abstract, so instead you abstract everything, replacing components with smaller FSM models to enable verifying the control interactions.

Cache-coherence management is a popular target for this style of verification. You have multiple processors, each with its own cache, but all working with a shared logical memory model, which means that changes to these caches must be kept coherent across the system. So this is a system-level verification problem, too big to fit in conventional formal proving. The usual abstraction tricks won’t work because everything is too big – CPUs, caches and interconnect. Abstracting one or two of these doesn’t reduce the size enough; you have to abstract everything.
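
To make the abstraction concrete, here is a minimal sketch of what an abstracted agent might look like, assuming a collapsed MESI-style protocol; the module name, interface signals and state encoding are mine for illustration, not NVIDIA’s actual model:

```systemverilog
// Hypothetical abstract cache-agent model: the full CPU + cache is replaced by a
// tiny FSM that tracks only the coherence state of a single (symbolic) cache line.
module abstract_cache_agent (
  input  logic       clk,
  input  logic       rst_n,
  input  logic       req_rd,    // abstracted read request for the line
  input  logic       req_wr,    // abstracted write request for the line
  input  logic       snoop_inv, // invalidate from the coherency manager
  output logic [1:0] state      // I=0, S=1, M=2 (MESI collapsed to three states)
);
  localparam logic [1:0] I = 2'd0, S = 2'd1, M = 2'd2;

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n)                     state <= I;
    else if (snoop_inv)             state <= I;  // manager revokes the line
    else if (req_wr)                state <= M;  // write gains exclusive ownership
    else if (req_rd && state == I)  state <= S;  // read fills the line in shared state
  end
endmodule
```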

There are multiple ways to implement coherency management; commonly you’ll have a central coherency manager which monitors reads and writes across all caches, looking for potential problems and stalling/updating as needed. The coherency management system is essentially a big FSM with many possible states and transitions covering all the potential sequences of updates, requests and so on. Syed said that a table representation of the protocol can run to hundreds or thousands of lines.
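
With the agents reduced to small FSMs, the proof targets on the coherency manager become compact invariants. Here is a sketch of the classic single-writer check, again with hypothetical signal names and a two-agent configuration:

```systemverilog
// Hypothetical proof wrapper: the single-writer invariant checked against the
// states of two abstract agents, with the RTL coherency manager in between
// (the manager itself is the design under test and is not shown here).
module coherence_props (
  input logic       clk,
  input logic       rst_n,
  input logic [1:0] agent0_state,  // driven by abstract_cache_agent instances
  input logic [1:0] agent1_state
);
  localparam logic [1:0] M = 2'd2;

  // Core invariant: at most one agent may hold the line in Modified state.
  a_single_writer: assert property (
    @(posedge clk) disable iff (!rst_n)
      !(agent0_state == M && agent1_state == M)
  );
endmodule
```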

Simulation is a non-starter for proving correctness because there are just too many possibilities to cover with confidence. In fact one of the early successes for formal was in validating these systems. The state diagram is wide but not especially deep, as long as you look at the problem in the right way, restricting attention to what is relevant to the proof – the control part of coherency management between CPUs and caches.

I’ve talked before about cross-validation between architectural models and the source RTL so I won’t bore you with it again (see e.g. my blog “System Level Formal”). Syed gave a quite detailed overview of their architectural models for tracker functions (e.g. for snoop and read on their cluster 1 interface), and also of the coherency engine read-request FIFO. As a result of this analysis they found a couple of bugs: one an instance where read requests were re-ordered, the other a potential deadlock.
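
The re-ordering bug is exactly the kind of thing a simple end-to-end ordering check on the read-request FIFO would target. Here is a sketch of such a check using symbolic tags; the interface signals and the tag-tracking scheme are assumptions of mine, not Syed’s actual code:

```systemverilog
// Hypothetical ordering check on a read-request FIFO: a request tagged t1 that
// enters before a request tagged t2 must also leave before it. Assumes a tag is
// not re-used (enqueued and dequeued) in the same cycle.
module rdreq_order_check (
  input logic       clk,
  input logic       rst_n,
  input logic       enq_vld,
  input logic [7:0] enq_tag,
  input logic       deq_vld,
  input logic [7:0] deq_tag
);
  // Two symbolic tags: undriven and assumed stable, so the formal tool treats
  // them as arbitrary but fixed, distinct values.
  logic [7:0] t1, t2;
  assume property (@(posedge clk) $stable(t1) && $stable(t2) && t1 != t2);

  // Track whether each tagged request is in flight and which entered first.
  logic t1_busy, t2_busy, t1_first;
  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      t1_busy <= 1'b0; t2_busy <= 1'b0; t1_first <= 1'b0;
    end else begin
      if (enq_vld && enq_tag == t1) begin t1_busy <= 1'b1; t1_first <= !t2_busy; end
      if (enq_vld && enq_tag == t2) t2_busy <= 1'b1;
      if (deq_vld && deq_tag == t1) t1_busy <= 1'b0;
      if (deq_vld && deq_tag == t2) t2_busy <= 1'b0;
    end
  end

  // If t1 entered first and is still pending, t2 must not leave ahead of it.
  a_no_reorder: assert property (
    @(posedge clk) disable iff (!rst_n)
      (t1_busy && t2_busy && t1_first) |-> !(deq_vld && deq_tag == t2)
  );
endmodule
```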

An interesting question from the audience was whether they had considered yet more advanced approaches like TLA+ or Murphi. Syed said that the approach they used took 6-9 person-months so was not trivial, presumably implying that those other techniques would have taken even more effort. He did acknowledge that, thanks to abstraction, what they got were not full proofs, but they did get bounded proofs on the model and they did do coverage analysis. Note to the skeptics – what you would get with simulation would be markedly worse. Finally, he said they went with this approach rather than the more exotic approaches because it was familiar. They could build abstracted arch models in Verilog, prove on these and cross-verify with the source Verilog. Who could argue with that?

Mandar Munishwar (formal verification lead at Qualcomm) closed with a discussion on formal verification signoff, particularly complete formal signoff on a block – what does this mean and how do you get there? For Mandar this means answering:

  • Have I written all the checks I need?
  • What is the quality (and coverage) of those checks?
  • Were any of those checks over-constrained?

Unsurprisingly a credible signoff starts with disciplined planning, execution and measurement, especially on the first question, which must be based as much on a system view of needs as on any coverage metrics. Since he’s talking about end-to-end (E2E) verification, Mandar likes to capture block behavior in a spreadsheet. For each signal he records: signal type (pulse, level, active type) and relationship with other signals in the same interface, what causes the signal to be asserted / de-asserted, latency between cause and effect, expectation for integrity of the data in transport and finally the assertion name(s) corresponding to the natural language (NL) description implied in those entries. This makes a lot of sense to me; it’s a lot easier to review and debate NL expectations in a group than SVA assertions and constraints. Get the NL right first; then translating to SVA and checking the SVA against the NL definitions is easier and more certain.
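
To make that NL-to-SVA step concrete, here is a sketch of how one hypothetical spreadsheet row (a pulse request answered within 1 to 4 cycles, with data held stable until the acknowledge) might translate; the signal names and latencies are invented purely for illustration:

```systemverilog
// Hypothetical translation of one spreadsheet row into SVA.
// NL row: "req is a single-cycle pulse; ack must follow within 1 to 4 cycles;
//          data must be held stable from req until ack."
module e2e_row_checks (
  input logic        clk,
  input logic        rst_n,
  input logic        req,
  input logic        ack,
  input logic [31:0] data
);
  // Signal type: req is a pulse, never asserted on two consecutive cycles.
  a_req_pulse: assert property (
    @(posedge clk) disable iff (!rst_n) req |=> !req
  );

  // Causality and latency: every req is answered by an ack within 1 to 4 cycles.
  a_req_ack_latency: assert property (
    @(posedge clk) disable iff (!rst_n) req |-> ##[1:4] ack
  );

  // Data integrity in transport: data holds its value from req until ack arrives.
  a_data_stable: assert property (
    @(posedge clk) disable iff (!rst_n) req |=> ($stable(data) until_with ack)
  );
endmodule
```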

Mandar also adds the common component checkers at this point – for FIFOs, counters, arbiters and so on; then execution starts. This is iterative of course – running, checking/fixing counter-examples, working on converging proofs, handling bounded proofs, all the usual steps. In the measurement phase (which overlaps with execution), he checks for over-constrained proofs, relaxing constraints where needed. He also checks coverage, starting from baseline COI coverage – a necessary but far from sufficient requirement. For him, an important part of coverage is the subjective quality check at the planning phase, in review with the team: is the set of checkers sufficiently diverse for each of the block signals across signal type, relationship, causality and the other attributes they captured in the spreadsheet?
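
For a flavor of those component checkers, here is a minimal FIFO sanity set of the kind typically packaged as reusable assertion IP; the interface names and the occupancy model are my assumptions, and a commercial library would of course be more complete:

```systemverilog
// Hypothetical FIFO sanity checks of the kind packaged as reusable component
// checkers; the interface names and the occupancy model are assumptions.
module fifo_checks #(parameter int DEPTH = 8) (
  input logic clk,
  input logic rst_n,
  input logic push,
  input logic pop,
  input logic full,
  input logic empty
);
  // Occupancy model maintained by the checker itself.
  int unsigned count;
  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) count <= 0;
    else        count <= count + (push && !full) - (pop && !empty);
  end

  // No overflow / underflow. Depending on the interface contract these may
  // instead be assumptions on the environment rather than assertions.
  a_no_overflow:  assert property (
    @(posedge clk) disable iff (!rst_n) full  |-> !push
  );
  a_no_underflow: assert property (
    @(posedge clk) disable iff (!rst_n) empty |-> !pop
  );

  // Status flags must agree with the occupancy model.
  a_full_flag:  assert property (
    @(posedge clk) disable iff (!rst_n) (count == DEPTH) |-> full
  );
  a_empty_flag: assert property (
    @(posedge clk) disable iff (!rst_n) (count == 0) |-> empty
  );
endmodule
```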

Back on measurement, Mandar is a believer in formal cores being a much tighter test than COI coverage as a measure of real coverage. I wonder if this realization is more important to the folks aiming for full formal signoffs. I think even those who aren’t so ambitious should still be looking at formal cores to understand how little of the logic they are really testing in some cases.

Finally he went back to his original question – can I trust formal signoff? This is a great question. If you have done all that up-front planning to ensure good coverage of checkers, you have made sure no proofs were over-constrained and you have further ensured that you have good formal core coverage, it feels like you should be pretty close, doesn’t it? Mandar suggested one final step. If you introduce a bug into the logic, bullet-proof verification should catch it. (VC Formal supports mutation for this purpose).

Mandar suggested a number of classes of mutation he thought were important to consider: top-output connectivity, reset bugs, lower-level connectivity errors and others (I would add clock-gating errors to this list based on an earlier talk in this session). If the mutation engine introduces bugs of these types and they are detected in proving, then you have yet higher confidence for formal signoff. Is this good enough? Ultimately signoff is subjective, whether for simulation or formal, though you want it grounded in tight methodology, review and metrics. It seems like what Mandar proposes is a reasonable definition for formal signoff.
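
To illustrate what a top-output connectivity mutation is probing, here is a hand-written stand-in for a tool-injected fault and the end-to-end check that should trip on it; in practice the tool injects the fault, nobody edits the RTL like this, and the signal names are hypothetical:

```systemverilog
// Illustrative only: a hand-written stand-in for a tool-injected top-output
// connectivity mutation. The output is tied off instead of passing through the
// internally computed value; a signoff-quality check suite should fail on this.
module mutated_wrapper (
  input  logic clk,
  input  logic rst_n,
  input  logic irq_internal,  // what the block actually computes
  output logic irq            // top-level output
);
  // Original intent: assign irq = irq_internal;
  assign irq = 1'b0;          // injected bug: output disconnected / stuck at 0

  // End-to-end connectivity check that this mutation should trip.
  a_irq_connected: assert property (
    @(posedge clk) disable iff (!rst_n) irq == irq_internal
  );
endmodule
```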

You can learn more about the Synopsys VC Formal solution HERE.
