Formal verification for hardware was long stuck with a reputation for being interesting but difficult to use, and consequently limited to niche applications. Jasper worked hard to change this, particularly with their apps for JasperGold, and I have been seeing more anecdotal evidence that mainstream adoption is growing. So I thought it would be interesting to ask Pete Hardee (marketing and product management for Jasper) what has changed in the industry and why.
Cadence now treats formal as one of the four legs of their verification strategy. They arguably have the market-leading solution in Jasper, but they wouldn’t make it a top-level component if the demand wasn’t there, so what’s different? According to Pete, virtually all of the top semis doing RTL design are now using formal, as are a lot of the fast-growing companies. And formal usage is growing within these companies. Adoption alone would suggest it’s no longer a niche application.
The reason for this change all comes down to coverage. Full dynamic SoC coverage is already well out of reach (because of size, complexity, 3rd-party IP, software, …), but you still have to have high confidence by signoff. So verification engineers look for different ways to build confidence.
One way is through connectivity checks – separate the question of whether the IPs function and communicate correctly from whether you have connected them together correctly. Can I prove that all the IP in the design are hooked up per a specification I am willing to provide (usually a connectivity spreadsheet)? If you can completely prove this aspect of the design is correct, you can sign off a whole class of functional checks more completely than would ever be possible in simulation. This makes formal checks a natural approach when they’re sufficiently simple to use. Apps make them simple, and that is growing adoption in verification teams.
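To make the idea concrete, here is a toy Python sketch of what a connectivity check is doing conceptually: diffing the connections actually present in a design against a spreadsheet-style specification. All the pin names and the netlist format here are invented for illustration; a real connectivity app extracts connections from the RTL itself and proves them formally rather than comparing lists.

```python
# Specification: (source pin, destination pin) pairs, like rows of a
# connectivity spreadsheet. Names are hypothetical.
spec = {
    ("cpu0.irq_out", "intc.irq_in3"),
    ("dma.req_out",  "arbiter.req_in0"),
    ("uart.tx",      "pads.uart_txd"),
}

# Connections as actually extracted from the design.
netlist = {
    ("cpu0.irq_out", "intc.irq_in3"),
    ("dma.req_out",  "arbiter.req_in1"),   # mis-wired: wrong arbiter port
    ("uart.tx",      "pads.uart_txd"),
}

missing    = spec - netlist    # specified but not implemented
unexpected = netlist - spec    # implemented but not specified

for src, dst in sorted(missing):
    print(f"MISSING:    {src} -> {dst}")
for src, dst in sorted(unexpected):
    print(f"UNEXPECTED: {src} -> {dst}")
```

The value of the formal version is that an empty diff is an exhaustive proof over every possible connection, not just the ones a testbench happened to exercise.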
A different class of problem is proving certain things cannot happen – something essentially impossible to prove in simulation for any reasonable-size design. A good example is proving that an encryption key cannot leak out to an insecure IP (or an IP being used in insecure mode), equally that it can’t be overwritten and that it remains secure even in the presence of faults. This isn’t an area where “reasonably” confident is an acceptable signoff, so you have to use formal methods.
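For intuition on why "cannot happen" properties need formal, the sketch below exhaustively explores a tiny invented state machine by breadth-first search to prove a key-exposure state is never reachable – the brute-force analog of what a formal engine does symbolically on real RTL. The states and transition rules are entirely hypothetical.

```python
from collections import deque

# state: (key_loaded, secure_mode, key_on_bus) -- a made-up 3-variable model
def next_states(state):
    key_loaded, secure, on_bus = state
    succs = set()
    succs.add((True, secure, on_bus and secure))   # load the key
    succs.add((key_loaded, not secure, False))     # mode switch clears the bus
    if key_loaded and secure:
        succs.add((key_loaded, secure, True))      # key drives bus only in secure mode
    return succs

def bad(state):
    _, secure, on_bus = state
    return on_bus and not secure   # key visible while insecure: the leak

init = (False, True, False)
seen, queue = {init}, deque([init])
while queue:
    s = queue.popleft()
    assert not bad(s), f"counterexample state: {s}"
    for n in next_states(s):
        if n not in seen:
            seen.add(n)
            queue.append(n)

print(f"proved: no insecure key exposure in {len(seen)} reachable states")
```

Simulation can only sample paths through this space; the exhaustive search (or its symbolic equivalent) is what turns "we never saw a leak" into "a leak is impossible".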
Power management is a nightmare for coverage because you take an already massive mission-mode state space and exponentially expand it in switching between all the possible power variants. You can gain some confidence through dynamic verification, but complete proof that there are no gotchas in switching again requires formal, in this case supported by an app.
Pete also noted that fear of assertions and constraints seems to be on the decline. Solutions to certain properties you feel you must cover can’t always be pre-packaged in apps. This used to be when you’d ship the problem off to your team of verification PhDs. Now not so much apparently. Pete guesses that some verification teams bit the bullet in training (and maybe a little coercion), engineers in hot companies aren’t afraid of formal and real expertise in this area is looking increasingly valuable on a resume. “That stuff’s too hard” doesn’t seem to be something you want to be heard saying anymore.
Getting to high coverage near the end of the verification cycle is another driver. We all know that the last mile in coverage is really hard. Maybe that’s because a lot of the uncovered cases are unreachable. Proving that is a real time-saver – if you formally can’t reach a state, you can safely drop it from coverage. And if the app proves you can reach it, it will provide an example trace that will help you build a test. Reachability analysis is an exploding area in formal because getting to maximum coverage is a must-have.
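Conceptually, reachability analysis answers exactly this two-sided question. The toy Python sketch below (a hypothetical counter model, not a real tool flow) either proves a coverage target unreachable or returns a witness trace you could turn into a directed test.

```python
from collections import deque

def next_states(count):
    # Toy model: a 3-bit counter that skips the value 5 by design
    n = (count + 1) % 8
    return [6 if n == 5 else n]

def find_trace(init, target):
    """BFS reachability: return a state sequence reaching target, or None."""
    parent = {init: None}
    queue = deque([init])
    while queue:
        s = queue.popleft()
        if s == target:
            trace = []
            while s is not None:
                trace.append(s)
                s = parent[s]
            return trace[::-1]     # witness: a path from init to target
        for n in next_states(s):
            if n not in parent:
                parent[n] = s
                queue.append(n)
    return None   # exhaustively proved unreachable: safe to waive the bin

print(find_trace(0, 5))   # None -> waive this coverage bin
print(find_trace(0, 6))   # [0, 1, 2, 3, 4, 6] -> seed for a directed test
```

Either answer is useful: an unreachability proof removes the bin from the goal, and a witness trace removes the guesswork from hitting it.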
Unsurprisingly, safety is driving more interest in formal, since safety is another area where “reasonable” coverage is not an acceptable goal. ISO26262 demands traceability of requirements, fitting well with formal, which has well-documented properties and constraints. In fault analysis, formal helps both in efficiency (why test a fault if it can’t be observed?) and in completeness (maybe a given fault-sim didn’t propagate a fault to an output or a checker, but would that always be the case?). Demonstrating safety to ASIL-D requirements in ISO26262 is again a must-have – expect that automotive safety will drive more growth for formal in multiple areas.
Pete added that he’s also seeing growth in exploring design state-space for bug-hunting. This is an interesting domain where deep state-space bugs can be missed by constrained-random and you can’t conclusively catch the bug with direct formal if the bug is too deep. JasperGold has engines which support a concept they call “elastic bounded model checking”, letting you do a guided search progressively deeper into state space while skipping states you don’t feel are of interest. One user group paper reported multiple critical bugs found at 100-400 cycles deep and one case at nearly 3000 cycles deep, far beyond reasonable bounds for conventional model checking.
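Jasper’s elastic engines and their state-skipping heuristics are proprietary, but the general idea of progressively deepening bounded search can be sketched in a few lines of Python: explore to a bound, and if nothing is found, grow the bound and continue from the frontier instead of restarting. The model and the numbers below are invented for illustration only.

```python
def bounded_search(init, step, is_bug, start_bound=10, factor=2, max_bound=1 << 14):
    """Breadth-first search with a growing depth bound."""
    frontier, seen = {init}, {init}
    depth, bound = 0, start_bound
    while frontier and bound <= max_bound:
        while depth < bound and frontier:
            depth += 1
            nxt = set()
            for s in frontier:
                for n in step(s):
                    if is_bug(n):
                        return depth, n        # counterexample at this depth
                    if n not in seen:
                        seen.add(n)
                        nxt.add(n)
            frontier = nxt
        bound *= factor                        # deepen the bound and keep going
    return None   # empty frontier = proof; bound exhausted = inconclusive

# Toy model: a 12-bit counter whose bug only fires at value 3000.
step = lambda s: [(s + 1) % 4096]
print(bounded_search(0, step, lambda s: s == 3000))   # finds the bug at depth 3000
```

The point of growing the bound rather than fixing it up front is exactly the one in the paper Pete cited: bugs thousands of cycles deep sit far beyond any bound you would reasonably pick in advance.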
If you’ve stayed with me this far, hopefully you’ll agree that formal (especially in JasperGold) is covering a lot of bases. It’s no longer a niche application for specialists; it really has become a primary pillar of verification. A really useful way to understand more about how JasperGold is being used is to check out papers from the Jasper User Group (JUG) conferences. You can get to papers from the last conference HERE (you’ll need an account with Cadence.com).