Keynotes commonly provide a one-sided perspective of a domain, either customer-centric or supplier-centric. Kudos therefore to Cadence’s Paul Cunningham for breaking the mold by offering the first half of his keynote to Anthony Hill, a TI Fellow, to talk about the outstanding challenges he sees in verification for automotive products. Paul followed with his responses: some already in reach with existing technologies, some requiring more of a stretch to imagine possible solutions, and some perhaps out of range of current ideas.
Anthony set the stage with his breakdown of macro trends in automotive systems design. ADAS and autonomy are pushing more automation features like lane assist and driver monitoring, in turn driving more centralized architectures. Connectivity is critical for OTA updates, vehicle-to-vehicle communication, and in-cabin hot spots. And electrification isn’t just about inverters and motor drivers (now charging at higher power levels and frequencies); it’s also about all the electronics around those core functions, for battery monitoring and for squeezing out maximum power efficiency to extend range.
These trends drive higher levels of integration to increase capabilities and to reduce latencies for responsiveness and safety: pulling in more IPs to build more complex systems, increasing distributed mixed-signal operation around PLLs, DDRs, etc., and adding more complex tiered network-on-chip (NoC) bus fabrics. Chiplet requirements, now central at some big auto OEMs, will add yet more verification complexity.
Anthony’s Challenges
Anthony shared challenges he sees growing in importance with this increased complexity. Integrating IPs from many sources remains challenging, not so much in basic functionality and timing as in the non-executable (documentation) guidance on usage limitations or on the scope of testing for functionality, performance, safety, configuration options, etc. Relying on documentation to communicate critical information is a weak link, suggesting more opportunities for standardization.
In functional safety (FuSA), standards for data exchange between suppliers and consumers are essential. FMEDA analysis is only as good as the safety data supplied with each IP. Is that in an executable model or buried somewhere in a document? Can formal play a bigger role in safety than it does today, for example in helping find vulnerabilities in a design during the FMEA stage?
For mixed signal, he’s seeing more cases of digital embedded in analog, with corresponding digital challenges like CDC correctness. These are already solved in digital verification but not for mixed A/D. Monte Carlo simulations are not adequate for this level of testing. Can we extend digital static verification methods to mixed A/D?
As NoCs become tiered across large systems, non-interlocked (by default), and with significant room for user-defined prioritization schemes, it becomes more challenging to prove there is no potential for deadlock and, more generally, to ensure compliance with required service level agreements (SLAs). Simulation alone is not enough to ensure, say, that varying orders of command arrival and data return will work correctly under all circumstances. Is a formal methodology possible?
Another very interesting challenge is multicycle glitch detection in non-traditional process corners. Gate level simulation across PVT corners may claim a multicycle logic cone is glitch free but still miss a glitch in a non-standard corner. Anthony speculates that maybe some kind of “formal” verification of logic cones with overlapping timing constraints could be helpful.
Finally, in chiplet-based designs he today sees mostly internal chiplets plus external memory on HPC-class buses, but over time he expects multi-source chiplets to amplify all the above problems (and more) beyond current IP-level integration, pushing additional requirements onto what we expect from models and demanding more standardization.
Paul’s Responses
Paul opened by acknowledging that Anthony had assembled a pretty good challenge list, since he (Paul) was only able to find slides among his standard decks to address about 50% of it. For the rest he sketched ideas on what could be possible.
For system integration (and by extension multi-die systems), System VIPs and other system verification content are now extending the familiar IP-level VIP concept up to more complex subsystems. For example, the Cadence Arm SBSA (Server Base System Architecture) compliance kit provides all the components necessary for that testing. The concept is naturally extensible to other common subsystems, even into mixed-signal subsystems, adding further value through stress testing across multiple customers and designs. Connectivity checking is another variant on the system-level verification theme, not only for port-to-port connections but also for connection paths through registers and gating.
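To make the connectivity-checking idea concrete, here is a minimal sketch treating the check as graph reachability: a required connection is satisfied if the source can reach the destination through any chain of wires, gating, or registers. The netlist, net names, and spec format are invented for illustration; this is not a Cadence flow.

```python
# Minimal connectivity check as graph reachability over an invented netlist.
from collections import deque

# Directed connectivity graph extracted from a (hypothetical) netlist:
# node -> nodes it drives.
NETLIST = {
    "pad_scan_en":    ["clkgate_u1/en"],
    "clkgate_u1/en":  ["clkgate_u1/out"],   # path may pass through gating
    "clkgate_u1/out": ["sync_reg/d"],
    "sync_reg/d":     ["sync_reg/q"],       # and through registers
    "sync_reg/q":     ["core/scan_en"],
}

# Connectivity spec: (source, required destination) pairs.
SPEC = [("pad_scan_en", "core/scan_en")]

def reaches(graph, src, dst):
    """Breadth-first search: does src drive dst through any chain of nodes?"""
    seen, frontier = {src}, deque([src])
    while frontier:
        node = frontier.popleft()
        if node == dst:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

for src, dst in SPEC:
    status = "OK" if reaches(NETLIST, src, dst) else "VIOLATION"
    print(f"{src} -> {dst}: {status}")
```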
Verification management solutions provide a structured approach to system verification, spanning requirements management, spec annotation, and integration with PLM systems. Additionally, verification campaigns today span multiple engines (virtual, formal, simulation, emulation, prototyping) and regression testing through the full evolution of a design. Taking a holistic, big-data view together with machine learning enables design teams to understand the whole campaign and how best to maximize throughput, coverage, and debug efficiency. In-house systems with a similar purpose exist of course, but we already see enthusiasm to concentrate in-house R&D more on differentiating technologies and to offload these “verification management” functions onto standard platforms.
In safety, Paul sees current verification technologies as a starting point. Fault simulation and support for FMEDA analysis with fault campaign strategies are already available, together with formal reachability analysis to filter out cases where faults can’t be observed/controlled. The EDA and IP industry is tracking the Accellera working group activity on an interoperable FuSA standard and should become actively involved with partners as that work starts to gel. There is also surely opportunity to do more: enabling fault sim in emulation will amplify capacity and performance and will allow us to quantify the impact of safety mechanisms on performance and power. Paul hinted that more can also be done with formal, as Anthony suggests.
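As a rough illustration of what a fault campaign feeds into FMEDA, the toy sketch below injects single stuck-at faults into an invented parity-protected block and counts how many the safety mechanism flags; the detected-to-injected ratio is the kind of raw input a diagnostic coverage figure rolls up from. The block, nets, and stimulus are assumptions for illustration only, not any vendor’s fault simulator.

```python
# Toy single-stuck-at fault campaign on an invented parity-protected block.
# Shows only the detected/injected bookkeeping behind a diagnostic coverage
# number; real fault simulators handle far richer fault models and designs.
import itertools

def protected_block(bits):
    """3-bit data word plus a parity bit; parity mismatch raises an alarm."""
    data, parity = bits[:3], bits[3]
    alarm = (sum(data) + parity) % 2 == 1
    return data, alarm

def run_campaign():
    nets = range(4)                          # 3 data nets + 1 parity net
    detected = injected = 0
    for good in itertools.product([0, 1], repeat=3):
        word = list(good) + [sum(good) % 2]  # golden stimulus with even parity
        for net, stuck in itertools.product(nets, [0, 1]):
            if word[net] == stuck:
                continue                     # fault not activated by this stimulus
            faulty = word[:]
            faulty[net] = stuck
            _, alarm = protected_block(faulty)
            injected += 1
            if alarm:
                detected += 1                # safety mechanism caught the fault
    print(f"diagnostic coverage: {detected}/{injected} = {detected / injected:.0%}")

run_campaign()                               # parity catches every single-bit fault here
```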
Verifying mixed-signal hierarchies is an interesting challenge: analog embedded in digital, digital embedded in analog, even inline bits of digital in analog. DV-like verification in these structures will be a journey, starting with an interoperable database across the whole design. Cadence has put significant investment (20 years) into the OpenAccess standard and can natively read and write both digital and analog circuitry to this database. From this they can extract a full-chip flat digital network structure, even for the analog components. From that it should be possible in principle to run all the standard digital signoff techniques: SDC constraints, clock and reset domain crossing checks, lint checks, connectivity checking, even formal checking. Paul stressed that proving and optimizing these flows is a step yet to be taken 😀
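To suggest what running one such digital signoff check on a flattened view might look like in spirit, here is a simplified clock-domain-crossing sketch. The flop list, domain annotations, and synchronizer tagging are invented, and real CDC tools do far more (convergence, gray coding, protocol checks), so treat this strictly as a sketch of the concept.

```python
# Simplified CDC check over a flattened, invented design view: flag any
# asynchronous crossing whose receiving flop is not a recognized synchronizer.

# Each flop: its clock domain and whether it is part of a synchronizer cell.
FLOPS = {
    "ana_ctrl_reg":   {"domain": "clk_ana", "is_sync": False},
    "dig_cap_reg":    {"domain": "clk_dig", "is_sync": False},
    "dig_sync_ff1":   {"domain": "clk_dig", "is_sync": True},
    "dig_status_reg": {"domain": "clk_dig", "is_sync": False},
}

# Flop-to-flop data connections (driver -> receiver).
CONNECTIONS = [
    ("ana_ctrl_reg", "dig_cap_reg"),     # raw crossing: should be flagged
    ("ana_ctrl_reg", "dig_sync_ff1"),    # crossing into a synchronizer: OK
    ("dig_sync_ff1", "dig_status_reg"),  # same-domain transfer: OK
]

def check_cdc(flops, connections):
    """Return driver/receiver pairs that cross domains without a synchronizer."""
    violations = []
    for drv, rcv in connections:
        crosses = flops[drv]["domain"] != flops[rcv]["domain"]
        if crosses and not flops[rcv]["is_sync"]:
            violations.append((drv, rcv))
    return violations

for drv, rcv in check_cdc(FLOPS, CONNECTIONS):
    print(f"CDC violation: {drv} ({FLOPS[drv]['domain']}) -> "
          f"{rcv} ({FLOPS[rcv]['domain']}) without synchronizer")
```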
He sees the NoC challenge as something that should be perfect for formal methods. The Jasper group has been thinking about architectural formal verification for many years, since these kinds of problems cannot be solved with simulation. Given the refined focus this challenge presents, the problem should be very tractable. (Editor sidenote: this problem class reminds me of formal applied to cache coherence control verification, or to SDN verification, where you abstract away the payload and focus only on control behavior.)
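In that spirit of payload-free, control-only abstraction, here is a minimal sketch of exhaustive state-space exploration on an invented two-master, two-target interconnect model with a circular-wait hazard. It is not Jasper or any Cadence flow, just an illustration of why exhaustive search finds an ordering-dependent deadlock that random simulation can easily miss.

```python
# Toy control-only interconnect model: two masters acquire two single-slot
# targets in opposite orders while holding what they already have -- a classic
# circular-wait hazard. Exhaustive breadth-first exploration of the abstract
# state space finds the deadlock regardless of which arrival order a given
# simulation happened to exercise.
from collections import deque

# Acquisition order per master (an invented prioritization scheme).
NEEDS = {0: (0, 1), 1: (1, 0)}

def successors(state):
    """state = (M0 progress, M1 progress, owner of T0, owner of T1);
    owner is a master id or None."""
    idx = [state[0], state[1]]
    owner = [state[2], state[3]]
    nexts = []
    for m in (0, 1):
        needs = NEEDS[m]
        if idx[m] == len(needs):
            continue                       # this master has finished
        tgt = needs[idx[m]]
        if owner[tgt] is None:             # grab the next target if it is free
            new_idx, new_owner = idx[:], owner[:]
            new_owner[tgt] = m
            new_idx[m] += 1
            if new_idx[m] == len(needs):   # transaction done: release holdings
                new_owner = [None if o == m else o for o in new_owner]
            nexts.append((new_idx[0], new_idx[1], new_owner[0], new_owner[1]))
    return nexts

def find_deadlock():
    """Report any reachable state with pending work but no enabled move."""
    init = (0, 0, None, None)
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        succ = successors(s)
        pending = s[0] < len(NEEDS[0]) or s[1] < len(NEEDS[1])
        if pending and not succ:
            return s
        for n in succ:
            if n not in seen:
                seen.add(n)
                frontier.append(n)
    return None

print("deadlock state:", find_deadlock())  # M0 holding T0, M1 holding T1
```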
Finally, on PVT-related multicycle paths, Paul confessed he was stumped, at least for now. A problem that only occurs at a corner that wasn’t tested is very hard to find. He closed by admitting that while automated verification will never be able to find everything, the industry will continue to push the boundaries wherever it can.
Good discussion and good input for more Innovation in Verification topics!
Also Read:
BDD-Based Formal for Floating Point. Innovation in Verification
Photonic Computing – Now or Science Fiction?
Cadence Debuts Celsius Studio for In-Design Thermal Optimization