I was at every single lunch at DVCon, not because the food was that great (it wasn’t bad) but because the topics were all interesting. The Wednesday lunch, hosted by Cadence, was a panel on software-driven verification and portable stimulus, moderated by Frank Schirrmeister (a different role for Frank – he’s usually a panelist and does most of the talking ;)). What follows is a condensation of many comments from the panelists, the audience and Frank (I also had a chance to talk with him after the panel).
Why do we need these approaches?
Because software and hardware have become completely intertwined in electronic products. You can’t separate verification between these domains any longer, which means that software is the right way, and in some cases the only way, to drive realistic use-cases. But we also need to be able to drive subsets of those use-cases down to lower levels of the design. And we need to be able to randomize to get a reasonable sense of system-level coverage, which is why the portable stimulus (PS) standard is important.
As hardware becomes more complex, you accept out of necessity that not all bugs are created equal – you want to find the most harmful by exercising the hardware in the way it will be used. Consider some obvious cases where you have to model the software to verify the hardware: you increasingly need firmware just to get out of reset, power management is largely driven by software, and cache-coherency correctness can only be determined against realistic software use-cases. Proving the hardware will play nice with the software requires software models and scenarios that can be reused in these lower-level hardware tests.
To approach this systematically, we need to define system-level coverage. Unsurprisingly, this starts from software-driven verification. Begin with basic firmware coverage: function and line coverage. In graph-based/declarative approaches, coverage of the graph is another metric (a more complete version of branch coverage). Then there should be metrics associated with external factors, such as high interrupt rates. And you want to be able to randomize over these factors, something PS aims to enable through vendor tools.
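To make the graph-coverage idea concrete, here is a minimal sketch in Python. The scenario graph, action names and the edge-coverage metric are all invented for illustration – a real PS flow would use a vendor tool and the PSS language, not hand-rolled dictionaries – but the principle is the same: randomize legal walks through a declarative model and measure how much of the model those walks exercised.

```python
import random

# Hypothetical scenario graph (a toy stand-in for a PSS model):
# each node is a verification action, edges are legal follow-on actions.
GRAPH = {
    "reset":       ["boot_fw"],
    "boot_fw":     ["dma_xfer", "enter_sleep"],
    "dma_xfer":    ["irq_burst", "enter_sleep"],
    "irq_burst":   ["dma_xfer", "enter_sleep"],
    "enter_sleep": [],
}

def random_walk(graph, start="reset", rng=random):
    """Randomize one legal path through the scenario graph."""
    path, node = [start], start
    while graph[node]:
        node = rng.choice(graph[node])
        path.append(node)
    return path

def edge_coverage(graph, paths):
    """Fraction of graph edges exercised by a set of paths."""
    all_edges = {(a, b) for a, nbrs in graph.items() for b in nbrs}
    hit = {(p[i], p[i + 1]) for p in paths for i in range(len(p) - 1)}
    return len(hit & all_edges) / len(all_edges)

paths = [random_walk(GRAPH) for _ in range(50)]
print(f"edge coverage over 50 random runs: {edge_coverage(GRAPH, paths):.0%}")
```

The same metric can be collected per platform (simulation, emulation, silicon) and merged, which is exactly the portability the standard is after.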
It’s also worth considering the role played by big data analytics. We’re already at a point where system-level behavior may be sufficiently complex that escapes are possible even in software driven approaches. Data mining and analytics may provide a way to detect rare or unexpected behaviors that would otherwise be missed.
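As a toy illustration of what such mining might look like: the sketch below counts event bigrams across many runs and flags those that occur far less often than average. The event names, log format and threshold are all invented for this example; real analytics over verification data would be far richer, but the underlying idea – surface the behaviors that almost never happen – is the same.

```python
from collections import Counter

def rare_bigrams(runs, threshold=0.1):
    """Flag event pairs seen far less often than the average pair.

    runs: list of event-name sequences, one per verification run.
    Returns the bigrams whose count is below threshold * mean count.
    """
    counts = Counter()
    for run in runs:
        counts.update(zip(run, run[1:]))
    if not counts:
        return []
    mean = sum(counts.values()) / len(counts)
    return [bg for bg, n in counts.items() if n < threshold * mean]

# 99 ordinary runs plus one outlier with a reordered sequence.
runs = [["boot", "dma", "irq", "sleep"]] * 99 + [["boot", "irq", "dma", "sleep"]]
print(rare_bigrams(runs))  # flags the three bigrams unique to the outlier run
```

A human still has to decide whether a flagged behavior is a bug or just rare, but this narrows attention to the runs worth inspecting.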
Are we ready?
Frank and others believe this direction is going to force verification engineers out of their comfort zones (hardware towards software and vice-versa). How do we organize and educate for that shift? At one level, executives and business priorities can force a shift – adapt or die – but we can do things to ease this transition.
Frank feels we’re making great strides on the technology side. We’re finding commonalities between engines and disciplines, we have cockpits to debug hardware and software together and we have unified interfaces to different verification platforms. We also have verification IP portable between emulation and simulation and we can collect coverage from all sources through verification management so we can see and drive progress on overall coverage goals. Adoption is, at least for now, gated more by organizational limitations.
There are some promising signs. A power engineer understands how software sequences affect power behavior, as does a cache-coherency engineer for coherency, so they’re motivated to communicate in structured ways with the software team and then to find ways to automate that communication. But there are still challenges. One idea that ought to be reasonable is to get the virtual platform model ready very early (to drive subsequent verification). But Frank has seen cases where even when that was done, the software team couldn’t take advantage of it because they were tied up finishing the last project – an organizational problem.
However you parse this, it is clear that product designers and tool vendors are jointly searching for the way through to the promised land of system verification. These are interesting times.
You can learn more about the Cadence system-level verification solutions HERE.