Building on an old chestnut: if sufficiently advanced technology looks like magic, then a number of technology users are increasingly starting to look like magicians. Of course when it comes to formal, neither is magical, just very clever. The technology continues to advance, and so do the users in their application of those methods. Synopsys recently hosted an all-day Special Interest Group event on formal in Sunnyvale, including talks from Marvell and others, with a keynote given by my fellow author Manish Pandey together with Pratik Mahajan. A number of points captured my attention.
Regression with ML
Nvidia talked about running formal in regression. This started with an observation that complexity is growing in many directions, one of which is arbiters, FIFOs and state machines all talking to each other. Dynamic verification quickly runs out of gas when trying to cover all possible interactions between these subsystems. Even attacking the problem through bottom-up property checking is risky; who knows what interactions between subsystems you might miss? So they chose to go with end-to-end (E2E) property checks to comprehensively cover all (necessary) subsystems in proving.
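As a flavor of what an E2E check looks like, here is a minimal sketch (the signal names and module boundary are hypothetical, not Nvidia's): the obligation is stated across the whole arbiter/FIFO cluster rather than about any one block inside it.

```systemverilog
// Hypothetical end-to-end check bound to the top of an arbiter/FIFO cluster:
// any request accepted at the input boundary must eventually produce a
// response at the output boundary, regardless of what the internal FIFOs,
// arbiters and state machines do in between.
module e2e_checks (
  input logic clk, rst_n,
  input logic req_valid, req_ready,   // subsystem input handshake
  input logic rsp_valid               // subsystem output
);
  ast_no_req_lost: assert property (@(posedge clk) disable iff (!rst_n)
    (req_valid && req_ready) |-> s_eventually rsp_valid);
endmodule

// Typically attached non-intrusively, e.g.:  bind cluster_top e2e_checks u_e2e (.*);
```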
The problem with that idea is proof convergence. Taken together these are very big state machines. Nvidia turned to the standard next step – break each property down into sub-properties (with assume-guarantee strategies, for example). The sub-properties are easier to prove in reasonable time, but each requires its own setup and proving, and these E2E goals spawn so many sub-proofs that resource competition with other forms of verification becomes a problem.
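For readers less familiar with assume-guarantee decomposition, a minimal sketch (all names hypothetical) looks like this: an interface invariant is proven once on the producer side, then reused as an assumption while proving the downstream end-to-end goal, so each proof run sees a smaller cone of logic.

```systemverilog
module ag_split (
  input logic       clk, rst_n,
  input logic       link_valid,
  input logic [3:0] link_grant
);
  // Invariant at an internal link between the producer and consumer cones.
  property p_link_onehot;
    @(posedge clk) disable iff (!rst_n) link_valid |-> $onehot(link_grant);
  endproperty

`ifdef PROVE_PRODUCER
  // Run 1: prove the invariant on the producer cone (the guarantee).
  ast_link_onehot: assert property (p_link_onehot);
`else
  // Run 2: take the invariant as given while proving the downstream E2E goal.
  asm_link_onehot: assume property (p_link_onehot);
`endif
endmodule
```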
Their next refinement was to apply the ML capabilities (RMA) available in VC Formal, both within the tools and in learning between tool runs, to accelerate runs and to reduce resource requirements. They do this initially in interactive convergence and subsequently in regression runs. In both cases the advantages are significant – order-of-magnitude improvements in net run-times. Clearly worthwhile.
Proof Using Symbolic Variables
Microsoft talked about validating a highly configurable interrupt controller IP more efficiently. Their approach was based on connectivity checks for each configuration; they found that in a previous rev this expanded to tens of thousands of assertions and took 2 days to validate completely. In a newer and naturally more complex rev of the IP this grew to 2.5 million assertions; the complete proof wouldn't converge, and initially they were forced to reduce the scope of proving to a sample set, which was hardly satisfying when the goal had been to demonstrate complete proofs.
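The brute-force structure that explodes looks something like the sketch below (names and latencies are hypothetical): one generated assertion per interrupt source, which in the real IP multiplies again by every configuration knob, and that is how the count climbs into the millions of proof targets.

```systemverilog
module irq_conn_checks #(parameter int N_SRC = 1024) (
  input logic             clk, rst_n,
  input logic [N_SRC-1:0] irq_src, irq_enable, irq_pending
);
  // One connectivity assertion per source: an enabled, asserted source must
  // show up in the pending status within a few cycles.
  for (genvar i = 0; i < N_SRC; i++) begin : g_conn
    ast_conn: assert property (@(posedge clk) disable iff (!rst_n)
      (irq_src[i] && irq_enable[i]) |-> ##[1:3] irq_pending[i]);
  end
endmodule
```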
Then they got clever, looking at the symmetries in the problem and using symbolic variables for key values in the design, each constrained to lie within its allowable range. This isn't entry-level formal (you have to think about what you're doing), but it is very powerful. Proof engines treat such a variable as arbitrary but fixed, so a single proof covers every allowed value in the constrained range; the proof is still complete but effectively operates on a much smaller problem, which allows for much faster run-times. The large example (which wouldn't complete before) now ran in 24 hours. Out of curiosity they re-ran the smaller example that previously took 2 days; this now ran in 15 minutes. As everywhere in verification, clever construction of a testbench can make a huge difference in run-times and in coverage.
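Recast with a symbolic index, the same obligation collapses to one assertion (again, a sketch with hypothetical names): `idx` is left undriven, so the formal tool treats it as an arbitrary but fixed value; two assumptions pin it in range and keep it stable, and the single assertion then covers every source.

```systemverilog
module irq_conn_sym #(parameter int N_SRC = 1024) (
  input logic             clk, rst_n,
  input logic [N_SRC-1:0] irq_src, irq_enable, irq_pending
);
  logic [$clog2(N_SRC)-1:0] idx;   // free (symbolic) variable: no driver

  asm_idx_range : assume property (@(posedge clk) idx < N_SRC);
  asm_idx_stable: assume property (@(posedge clk) $stable(idx));

  // One assertion, proven for every allowed value of idx at once.
  ast_conn_sym: assert property (@(posedge clk) disable iff (!rst_n)
    (irq_src[idx] && irq_enable[idx]) |-> ##[1:3] irq_pending[idx]);
endmodule
```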
Sequential Equivalence Checking
This (SEQ) is the standard way to verify clock gating; however, the Samsung speaker talked about a number of applications beyond that scope: validating RTL de-featuring (where you turn off ifdef-ed functionality), sequential optimizations (e.g. moving logic across flop boundaries to improve timing), shortening pipeline stages, and power optimizations, all verification tasks you can't pull off using conventional equivalence checking. Given the extended nature of such checks they can be a little more involved than conventional checking. He talked about multiple capabilities they are able to use in VC Formal to aid proof convergence: orchestration, CEGAR, memory and operator abstraction, and specialized engines. Overall this enabled them to find 40+ RTL bugs in 5 months. He added that 40+% of the bugs were found by new hires and interns, highlighting that this stuff is not only for advanced users.
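To see why these checks need sequential rather than combinational equivalence, consider a toy retiming example (my sketch, not Samsung's design): the two modules below produce the same output stream every cycle, but the adder has been moved across the flop boundary, so a flop-by-flop combinational comparison cannot match them; SEQ instead proves the output sequences agree.

```systemverilog
// Add first, then register the sum.
module add_spec (input logic clk, input logic [7:0] a, b, output logic [8:0] y);
  always_ff @(posedge clk) y <= a + b;
endmodule

// Register the operands, then add: same behavior at the output, different
// flop placement and contents, so only sequential equivalence can prove it.
module add_impl (input logic clk, input logic [7:0] a, b, output logic [8:0] y);
  logic [7:0] a_q, b_q;
  always_ff @(posedge clk) begin a_q <= a; b_q <= b; end
  assign y = a_q + b_q;
endmodule
```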
Datapath Validation
Lastly, Synopsys has now folded its HECTOR datapath checker into the VC Formal Datapath Validation (DPV) app. This employs transaction-level equivalence checking between a C-level model and the RTL implementation. Datapath elements were for a long time one of those areas where you couldn't use formal, so this is an important advance. Marvell talked about using this capability to verify a number of floating-point arithmetic function blocks. In standard approaches, whether simulation or "conventional" property-based formal, the number of input combinations that must be covered grows exponentially and is effectively unreachable.
The Datapath Validation app works with Berkeley SoftFloat models, widely recognized as a solid reference for floating-point arithmetic. This team used those models in DPV equivalence checking against their RTL implementations and found a number of bugs, some in the RTL and some in the C++ models; they added that they subsequently found no further bugs in simulation or emulation. This suggests to me that this type of verification is going to see a lot of growth.
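To make the shape of the check concrete, here is my own sketch of the obligation, not Marvell's setup: in the DPV app the C/C++ model is bound to the RTL at transaction level, whereas below the reference is shown behind a hypothetical DPI wrapper (`softfloat_f32_add`, assumed to wrap SoftFloat's f32_add) purely to illustrate the reference-vs-implementation relationship that gets proven exhaustively.

```systemverilog
module fadd_vs_ref (
  input logic        clk, rst_n,
  input logic        op_valid,          // operands presented to the RTL adder
  input logic [31:0] op_a, op_b,
  input logic        res_valid,         // RTL result, assumed 2 cycles later
  input logic [31:0] res_data
);
  // Hypothetical thin C wrapper around Berkeley SoftFloat's f32_add.
  import "DPI-C" function int unsigned softfloat_f32_add(input int unsigned a,
                                                         input int unsigned b);

  // One transaction: the result produced for (op_a, op_b) must equal the
  // SoftFloat reference computed from the same operands.
  ast_match_ref: assert property (@(posedge clk) disable iff (!rst_n)
    op_valid |-> ##2 (res_valid &&
                      res_data == softfloat_f32_add($past(op_a, 2), $past(op_b, 2))));
endmodule
```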
Interested users can request the VC Formal SIG material from their Synopsys contacts. You can check out VC Formal HERE.