Paul, Raúl and I are having fun with our Innovation in Verification series, and judging by the hit rates, it seems you are too. We track these carefully to judge which topics you find most interesting and which fall more under the category of “Meh”. Paul and others also get informal feedback in client meetings, but it would be great to get active feedback from you, the readers, on which topics would interest you most. We’d like to tune our picks to your preferences.
For example, we’re planning an upcoming review of a paper on dynamic coherency testing, because we hear from multiple directions that verification teams want more input here. In that spirit, I have a few questions for you, and I’m looking forward to your feedback. Quick comments or carefully considered, voluminous responses are equally welcome. We’ll use what you tell us as an input to future topics for Innovation in Verification.
Application papers versus academic papers
We have tended to pick academic papers since these are, at least in principle, the most likely to aim at breakthroughs. Application papers are no less worthy, but they tend to target in-house optimizations: apps that simplify or improve a specific verification objective. If they are also tied to specific vendor tools, that may limit broad interest.
We’ve looked at most areas in verification. At the block level there is always opportunity to improve coverage, and to improve how quickly we can get to coverage. System-level verification is wide open: there is plenty of opportunity to debate subsystem testing, coverage, how best to define tests at the system level, and the relative merits of synthetic versus real-life tests. Then there are the non-functional KPIs: performance, power, security, and safety, especially as architectures for managing security and safety continue to evolve.
Post-silicon debug is clearly topical, reflecting limitations in how well (or not so well) we are able to limit escapes in pre-silicon verification. Optimizing the total verification flow, beyond individual run performance, is also picking up, in part by reducing total regression times through learning-based optimization. Even more broadly, many readers are experimenting with Agile methods, integrating verification with design processes in continuous integration and deployment (CI/CD) flows.
We could also cover more in some areas we have neglected: mixed-signal verification, ML hardware verification, and virtual modeling are examples.
Vertical validation is becoming increasingly important. In automotive, aerospace, the IoT, HPC, medical, and many other domains, system objectives are moving much closer to silicon. As a result, completing a test plan needs to comprehend not only verification objectives but also system validation objectives. One indication is the growing importance of requirements traceability, from high-level design down into the software and silicon. While looking for papers on traditional verification topics, I’ve also come across related papers on system-level validation for robotics and other autonomous applications, suggesting a trend toward these cross-domain validation problems.
Sensing and sensor fusion are a good example. The front end here is obviously AMS, though there can be significant digital content to control calibration. Fusion is important, especially in safety-critical systems, and it requires close interaction between hardware and software to ensure real-time reaction to changes.
There are lots of opportunities to explore existing domains more deeply and to add new domains. Please let me know what you think, either as a comment or by emailing me directly (firstname.lastname@example.org).