Last week I sat in on Oski’s latest in a series of “Decoding Formal” sessions. Judging by my first experience, they plan and manage these events very well. Not too long (~3 hours of talks), good food (DishDash), good customer content, a good forward-looking topic and a very entertaining wrap-up talk.
Adoption in an ARM Austin verification team
The event started with a presentation by Vikram Khosa from the ARM Austin design center on their experience deploying formal to signoff on A-class CPU cores, and how that approach evolved from late 2014 to today. Their path probably mirrors that of many other teams entering formal – initially a voyage of discovery, over time becoming more refined and directed as they learned what worked well for them, what had to be improved and what they could concretely promise.
For those of us comfortable with all the rigor of dynamic signoff, this can seem a little haphazard. Planning ranged from detailed plans based on Oski methodologies for some blocks to lighter-weight plans in other cases and minimal planning in others. They found that, in addition to developing methodologies, it was just as important to build comfort with and acceptance of these methods among both the mainstream verification team and the design team. Signoff goals at this stage were no unexplained counter-examples (CEXs) and no unexplained unreachables (UNRs) – within the range of properties and covers they were testing, of course. They evidently did a decent job of gaining acceptance, because formal has now replaced simulation as the primary method to verify the floating-point unit (and complements simulation in some other areas).
There were a number of takeaways from Vikram’s perspective. They found more value in interface and end-to-end (E2E) checkers than in small-bore assertions (no surprise). Building interface checkers took much more time than they anticipated, but the effort could be helped by aggressively over-constraining at the outset. I learned something here that is probably more familiar to formal experts: rather than adding constraints incrementally, it is common to start with massive over-constraints and then relax them progressively.
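The over-constrain-then-relax idea is easy to see on a toy model. The sketch below is my own illustration in Python, not ARM's or Oski's actual flow (which runs in commercial formal tools on RTL): we explore the reachable states of a tiny counter under an input constraint and check an invariant. A tight constraint gives a tiny, quickly-proved state space; we then relax it and re-prove.

```python
# Toy illustration (not a real formal flow): "over-constrain, then relax"
# on a 2-bit saturating counter.

def next_state(count, inc):
    """2-bit saturating counter: increments until it sticks at 3."""
    return min(count + 1, 3) if inc else count

def reachable(constraint):
    """All states reachable from reset (0) when inputs satisfy `constraint`."""
    seen, frontier = {0}, [0]
    while frontier:
        s = frontier.pop()
        for inc in (0, 1):
            if not constraint(inc):
                continue  # over-constraint: disallowed inputs are pruned
            t = next_state(s, inc)
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

invariant = lambda s: s <= 3  # property: the counter never exceeds 3

# Step 1: massively over-constrain (never increment) -- tiny state space.
assert all(invariant(s) for s in reachable(lambda inc: inc == 0))
# Step 2: relax fully -- all inputs allowed, full state space, still proven.
assert all(invariant(s) for s in reachable(lambda inc: True))
```

The point of starting over-constrained is that counter-examples, when they appear, arise in a small and debuggable environment; each relaxation step then widens the behavior being proved.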
The formal group in this ARM team has grown, but Vikram still sees a need for more training, greater formal literacy, simplified flows and even training in formal-friendly design. And integrating formal more closely with conventional verification efforts is still a work in progress. All an indication that formal verification has come a long way but still has a long way to go.
Architecture verification for cache coherent protocols at Arteris
This talk was a joint presentation between Arteris and Oski and introduced a new (for me at least) idea – formal verification at the architecture level. First a word about cache coherence verification. This has become in many ways the exemplar for formal verification (ARM has been using Jasper for this purpose for quite a while). The problem is potentially so functionally and sequentially complex that no amount of simulation, emulation or prototyping can be trusted to fully cover all possibilities. This problem truly needs formal methods.
At the same time, cache coherence is a capability managed over a bus between multiple IPs (CPUs, caches, coherence management units, etc.) at the system level – not typically a comfort zone for formal methods. An obvious way to mitigate this complexity is through abstraction: replace big, complex RTL blocks with simplified FSMs that represent the key functionality of those blocks for the purpose of the proof. Oski and Arteris took this a step further by replacing (as far as I can tell) all the end-point blocks with abstractions, switching in effect from formal verification based on RTL to formal verification based on architectural models.
Their approach to doing this had three steps: creating the architectural models for the blocks; formally verifying system behavior using these models for properties like coherence, data integrity, forward progress and absence of deadlock; and finally validating each block model against the corresponding RTL using property checking.
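The flavor of the second step – checking a system-level coherence property over abstract block models rather than RTL – can be sketched in a few lines. This is a drastic simplification of my own, not the Arteris/Oski models: each cache is a tiny abstract FSM with states I/S/M, and we enumerate the whole system state space to check the single-writer invariant.

```python
# Toy sketch: exhaustively check a coherence invariant over abstract
# cache models (states I = invalid, S = shared, M = modified).

from itertools import product

def step(states, actor, op):
    """Abstract MSI-style transition: `actor` reads or writes the line."""
    s = list(states)
    if op == "read":
        s[actor] = "S"
        # any writer elsewhere is downgraded to shared
        s = ["S" if (i != actor and st == "M") else st for i, st in enumerate(s)]
    else:  # write
        s[actor] = "M"
        # all other copies are invalidated
        s = [st if i == actor else "I" for i, st in enumerate(s)]
    return tuple(s)

def coherent(states):
    """Single-writer invariant: an M copy excludes any other valid copy."""
    if "M" in states:
        return sum(st != "I" for st in states) == 1
    return True

def explore(n_caches=2):
    """BFS over all reachable system states, checking the invariant."""
    init = tuple("I" for _ in range(n_caches))
    seen, frontier = {init}, [init]
    while frontier:
        st = frontier.pop()
        assert coherent(st), f"coherence violated in {st}"
        for actor, op in product(range(n_caches), ("read", "write")):
            nxt = step(st, actor, op)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

print(len(explore()))  # number of reachable system states
```

A real flow does this symbolically in a formal tool over far richer models, and then – the crucial third step – proves each abstract model faithful to its RTL, so the system-level proof transfers.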
Naturally it is easy to throw rocks at this method (and some were thrown in the Q&A). One obvious question is how they know their abstractions didn’t hide problems. They seemed to feel that they had done a thorough job in cross-checking properties between abstracted models and RTL, but that won’t satisfy many of us. Still, I think Oski and Arteris are due a lot of credit here. It’s never easy being a pioneer, and I suspect solutions like this are eventually unavoidable. We resist mostly because we are uncomfortable with so big a step, not because we have better ideas. But eventually we’re going to have to get comfortable, somehow, or stop using advanced system design architectures (and that’s not going to happen).
Origami, with some links to formal
This talk was presented by Dr. Robert Lang, an EE and applied physicist by training but now a full-time practitioner of origami, the art of creating designs by folding paper in complex ways. He is famous for his amazingly complex designs, building on initial work by Akira Yoshizawa, who showed that this art form could progress far beyond the relatively simple designs most of us associate with origami (paper planes, boxes, simple birds, that sort of thing).
Lang is not only a great origamist, he is also one of the leading experts in the mathematical theory of origami, which starts from four relatively simple rules for folding. This produces a lot of interesting geometric mathematics; the reverse direction (can this object be created by folding a flat sheet of paper?) also has obvious connections with reachability, which connects it (if through a slightly tenuous link) to the core topic of the session.
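One well-known piece of that theory is easy to demonstrate. Kawasaki's theorem states that a single-vertex crease pattern can fold flat only if the alternating sum of the angles between consecutive creases is zero (equivalently, the odd- and even-indexed angles each sum to 180 degrees). The function below is my own small sketch of that test; the name and degree convention are mine.

```python
# Sketch of Kawasaki's flat-foldability test for a single vertex.

import math

def kawasaki_flat_foldable(angles):
    """angles: degrees between consecutive creases around one vertex."""
    if not math.isclose(sum(angles), 360.0):
        raise ValueError("angles must sweep the full 360 degrees")
    # alternating sum: +a0 - a1 + a2 - a3 ... must vanish for a flat fold
    alternating = sum(a if i % 2 == 0 else -a for i, a in enumerate(angles))
    return math.isclose(alternating, 0.0, abs_tol=1e-9)

# Four equal 90-degree sectors satisfy the condition.
print(kawasaki_flat_foldable([90, 90, 90, 90]))    # True
# An asymmetric pattern fails the alternating-sum test.
print(kawasaki_flat_foldable([100, 80, 100, 80]))  # False
```

Note this is a necessary condition at each vertex, not a full guarantee for a whole crease pattern – which is exactly where the hard (and reachability-flavored) mathematics begins.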
But who cares if the link is tenuous? This was a great and fun talk. More technical sessions should wrap up with talks like this – they’re good for the soul and the imagination and adding to our ability to converse on topics that everyone can appreciate. You can see some of the amazing designs developed by Robert Lang here (in case you missed the earlier link).
This stuff isn’t just about art, incredible though that is (look for the cuckoo clock and one praying mantis eating another). Origami has been used in a simple sense in space applications to fold solar arrays and solar sails, since these need to be compact during flight and then deploy in orbit. Lang is also working with Lawrence Livermore lab to build a massive primary objective for a space-based refractive telescope, expected to be 100 meters in diameter. Obviously, you aren’t going to fly that into space as-is; it has to fold, and a prototype has already been built.
In the medical field, Paracor Medical has built heart implants which, folded up, are small enough to be injected but then expand in place. A similar technique has been developed at Oxford University for stents. Origami is now being investigated as a way to build artificial livers and other organs. And Paul Rothemund at Caltech and others have found ways to fold DNA and RNA based on origami principles. Carlos Castro at Ohio State has used these ideas to fold DNA boxes around cancer cells; that’s a powerful idea.