Manish Pandey, VP R&D and Fellow at Synopsys, gave the keynote this year. His thesis is that given the relentless growth of system complexity, now amplified by multi-chiplet systems, we must move the verification efficiency needle significantly. In this world we need more than incremental advances in performance. We need to become more aggressively intelligent, through AI/ML, in how we verify, in both tool and flow advances. To this end he sets some impressive goals: speeding up verification by 100X, reducing total cost by 10X and improving quality of results by 10 percentage points (e.g. 80% coverage would improve to 90% coverage). He sees AI/ML as key to getting to faster closure.
Background in AI/ML types
Manish does a nice job of providing a quick overview of the primary types of learning: supervised, unsupervised and reinforcement, along with their relative merits. Synopsys verification tools and flows use all three techniques today. Supervised learning is the most familiar technique, commonly used for object recognition in images, following training on large banks of labeled images (this is a dog, this is a cat, etc.). He points out that supervised learning in verification is a little different: datasets are much larger than bounded images, and there are no standard reference sets of labeled data. Nevertheless, this technique has high value in some contexts.
In unsupervised learning there is no target attribute. Learning explores the data looking for intrinsic structure, typically demonstrated in clustering. This method is well suited to many verification applications where no a priori structure is known. The third method is reinforcement learning, familiar from applications like Google's AlphaGo. Here the technique learns its way to improvement across a succession of run datasets, such as it might see in repeated regressions.
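To make the taxonomy concrete, here is a compact sketch in Python using scikit-learn and entirely synthetic data. This is my own illustration of what each setup looks like in verification terms, not anything tied to a Synopsys tool.

```python
# Illustration only: the three learning setups on synthetic "run" data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))                 # e.g. features mined from runs

# Supervised: we have labels (e.g. "this run failed / passed") to train against.
y = (X[:, 0] + X[:, 1] > 0).astype(int)
failure_model = LogisticRegression().fit(X, y)

# Unsupervised: no labels; just look for intrinsic structure (clusters).
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Reinforcement: no fixed dataset; a policy improves from rewards observed
# over successive trials (think successive regression passes).
def reward(action):                               # stand-in for a run outcome
    return 1.0 if action == 2 else 0.0
value = np.zeros(3)
for t in range(100):
    a = rng.integers(3) if rng.random() < 0.1 else int(np.argmax(value))
    value[a] += 0.1 * (reward(a) - value[a])      # simple value update
```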
Constrained random, CDC/RDC and formal
One important area where these methods can be applied is identifying coverage holes in constrained random (CR) analysis, then using AI/ML to find ways to break through them. Difficult branch conditions, for example, can cause holes; overcoming these barriers allows CR to expand coverage beyond that point. (I wrote about something similar recently.) Manish cited a real example where this technique was able both to reduce time to target coverage by 1-2 orders of magnitude and to increase coverage over the pure CR target.
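To give a feel for the idea, here is a toy sketch in Python, my own illustration rather than Synopsys's technology: a cross-entropy-style learning loop that biases random stimulus toward a branch condition that uniform random would almost never hit. The branch condition, the closeness heuristic and all parameter values are invented for illustration.

```python
# Toy sketch: learn a stimulus distribution that reaches a hard branch.
import random
import statistics

def branch_hit(a, b):
    # A "difficult" branch: uniform random 16-bit stimulus almost never hits it.
    return a == 0xBEEF and b < 8

def closeness(a, b):
    # Heuristic score: how near a stimulus is to satisfying the branch.
    return -(abs(a - 0xBEEF) + max(0, b - 7))

def clamp16(x):
    return max(0, min(0xFFFF, int(x)))

def guided_search(iterations=60, batch=200, elite=20):
    a_mu = b_mu = 0x8000          # start from an uninformed guess
    spread = 0x4000
    for it in range(iterations):
        samples = [(clamp16(random.gauss(a_mu, spread)),
                    clamp16(random.gauss(b_mu, spread))) for _ in range(batch)]
        hits = [s for s in samples if branch_hit(*s)]
        if hits:
            return it, hits[0]    # coverage hole closed
        # Refit the sampling distribution around the best ("elite") samples.
        elites = sorted(samples, key=lambda s: closeness(*s), reverse=True)[:elite]
        a_mu = statistics.mean(e[0] for e in elites)
        b_mu = statistics.mean(e[1] for e in elites)
        spread = max(2, statistics.pstdev([e[0] for e in elites]))
    return None

print(guided_search())
```

Pure random would take on the order of billions of samples to hit this branch; the learned bias typically closes it within a few thousand.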
Another application is common in CDC analysis. Static analyses are infamous for generating, say, ~100k raw violations, generally rooted in a very small number of real errors. Unsupervised learning is an excellent approach to analyzing these cases, looking for clustering among violations. Between clustering and automated root cause analysis they were able to reduce one massive dataset to just ~100 clusters, easily a 100X reduction in time to complete the analysis.
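Here is a minimal sketch of the clustering idea, assuming violations have already been encoded as numeric feature vectors (source/destination clock domain, violation type and so on). The feature encoding and data below are synthetic; the real flow clusters far richer attributes.

```python
# Illustration only: collapse thousands of raw CDC violations into a few clusters.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Pretend 3 real root causes each spray out thousands of similar raw violations.
root_causes = np.array([[0, 3, 1], [2, 1, 4], [5, 5, 0]], dtype=float)
violations = np.vstack([rc + 0.1 * rng.standard_normal((3000, 3))
                        for rc in root_causes])

# Unsupervised clustering groups ~9000 raw violations into a handful of buckets
# a verifier can root-cause one at a time.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(violations)
for k in range(3):
    print(f"cluster {k}: {np.sum(labels == k)} violations")
```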
In formal analysis over a set of properties, orchestration manages the distribution of proof tasks, selecting preferred proof engines over a finite set of servers. Reinforcement learning can greatly enhance this process, learning better orderings of properties and engine assignments from one regression pass to the next. They have seen this deliver a 10-100X speedup over default approaches to scheduling, which in turn allows more time for verifiers to push for higher proof coverage.
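As a rough illustration of how such learning might look, here is a toy epsilon-greedy bandit, not the Synopsys orchestrator, that learns over successive regressions which proof engine tends to converge fastest on each property. The engine names, runtimes and reward model are all invented.

```python
# Toy sketch: learn per-property engine preferences across regression passes.
import random
from collections import defaultdict

ENGINES = ["bmc", "induction", "pdr"]

# Hypothetical "true" typical runtimes (seconds), unknown to the learner.
TRUE_RUNTIME = {
    ("propA", "bmc"): 30, ("propA", "induction"): 300, ("propA", "pdr"): 120,
    ("propB", "bmc"): 400, ("propB", "induction"): 60, ("propB", "pdr"): 90,
}

def observed_runtime(prop, engine):
    # Each run is noisy around the engine's typical runtime for that property.
    return max(1.0, random.gauss(TRUE_RUNTIME[(prop, engine)], 10))

avg = defaultdict(float)   # running average runtime per (property, engine)
count = defaultdict(int)

def pick_engine(prop, epsilon=0.2):
    # Explore occasionally; otherwise exploit the best average seen so far.
    if random.random() < epsilon or not any(count[(prop, e)] for e in ENGINES):
        return random.choice(ENGINES)
    return min(ENGINES,
               key=lambda e: avg[(prop, e)] if count[(prop, e)] else float("inf"))

for regression in range(50):                 # successive regression passes
    for prop in ("propA", "propB"):
        e = pick_engine(prop)
        t = observed_runtime(prop, e)
        count[(prop, e)] += 1
        avg[(prop, e)] += (t - avg[(prop, e)]) / count[(prop, e)]

for prop in ("propA", "propB"):
    best = min(ENGINES,
               key=lambda e: avg[(prop, e)] if count[(prop, e)] else float("inf"))
    print(prop, "->", best)
```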
Debug, regression performance, assertion generation
Debug can benefit from AI/ML through automated root cause analysis, a potentially huge benefit in compressing a very tedious task. By looking at past simulation results and debug action graphs through a combination of supervised and unsupervised learning, it is possible to identify the top potential root causes by probability. Manish doesn't quantify how much this reduces debug time, probably because that is highly dependent on many factors, but he does say the reduction is substantial, which seems entirely believable.
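Here is a minimal sketch of the supervised half of that idea, assuming past failures have already been reduced to numeric signatures and labeled with a confirmed root cause. The features, labels and data are synthetic and only illustrate the ranking-by-probability step.

```python
# Illustration only: rank candidate root causes for a new failure signature.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
ROOT_CAUSES = ["fifo_overflow", "stale_config", "reset_race"]

# Synthetic training set: 300 past failures, 4 signature features each.
X_train = rng.standard_normal((300, 4)) + np.repeat(np.eye(3, 4), 100, axis=0)
y_train = np.repeat(ROOT_CAUSES, 100)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# A new failure signature: list candidate root causes by predicted probability.
new_failure = rng.standard_normal((1, 4)) + np.eye(3, 4)[0]
probs = clf.predict_proba(new_failure)[0]
for cause, p in sorted(zip(clf.classes_, probs), key=lambda cp: -cp[1]):
    print(f"{cause}: {p:.2f}")
```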
A holy grail in optimizing verification throughput is finding a way to slim down regression testing, reducing it to only those tests you need to run given the changes made in the design. This is not an easy problem to solve, which makes it an obvious candidate for AI/ML. Manish talks about mining historic simulation data, bug databases and code feature data to determine test set reductions that should have minimal impact on coverage for a given situation. He doesn't mention which methods he applied in this case, but I could see a combination of all three being valuable. He again suggests a large potential saving in time through this technique.
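To show the flavor of the problem, here is a toy sketch, not Synopsys's method: greedy selection of a reduced regression from mined history, assuming each test's historically exercised design units and past bug-find counts have already been extracted from the coverage and bug databases. All names and numbers are invented.

```python
# Toy sketch: pick the smallest set of tests covering the changed design units.
historical_coverage = {
    "test_smoke":       {"alu", "decoder"},
    "test_dma_burst":   {"dma", "arbiter"},
    "test_cache_evict": {"cache", "arbiter"},
    "test_full_random": {"alu", "decoder", "dma", "cache", "arbiter"},
}
bug_finds = {"test_smoke": 2, "test_dma_burst": 7,
             "test_cache_evict": 5, "test_full_random": 3}

def select_tests(changed_units):
    # Greedy set cover: repeatedly pick the test covering the most still-uncovered
    # changed units, breaking ties toward tests that historically found more bugs.
    remaining, selected = set(changed_units), []
    while remaining:
        best = max(historical_coverage,
                   key=lambda t: (len(historical_coverage[t] & remaining),
                                  bug_finds[t]))
        if not historical_coverage[best] & remaining:
            break                      # nothing covers what's left
        selected.append(best)
        remaining -= historical_coverage[best]
    return selected

print(select_tests({"dma", "cache"}))  # e.g. the change touched DMA and cache RTL
```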
One last interesting example. Mapping from specification requirements to assertions is today a purely manual (and error-prone) task. Synopsys is now able to support this conversion automatically through natural language processing (NLP). Manish is careful to point out that success here depends on some level of user discipline in how specifications are written; they support a workflow to help users learn how to improve the recognition rate. Once both users and the technology are trained 😃, conversion becomes an almost magical translation, again saving significant effort over the old manual approach.
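As a deliberately tiny, rule-based stand-in for the idea (the real capability uses NLP models, not a regex), here is what mapping a disciplined requirement template to an assertion string might look like. The template and signal names are invented.

```python
# Toy sketch: translate a templated requirement sentence into an SVA string.
import re

TEMPLATE = re.compile(
    r"when (?P<trigger>\w+) is asserted, (?P<target>\w+) must be asserted "
    r"within (?P<cycles>\d+) cycles", re.IGNORECASE)

def requirement_to_sva(text):
    m = TEMPLATE.match(text.strip())
    if not m:
        return None   # requirement not written in a recognized style
    return (f"assert property (@(posedge clk) "
            f"{m['trigger']} |-> ##[1:{m['cycles']}] {m['target']});")

print(requirement_to_sva("When req is asserted, grant must be asserted within 4 cycles"))
# -> assert property (@(posedge clk) req |-> ##[1:4] grant);
```

This also illustrates why user discipline matters: a requirement phrased outside the recognized style simply fails to convert, which is what the training workflow helps users avoid.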
This is real today
Manish closes by pointing out that they are already able to demonstrate the goals he set out at the beginning of his talk – substantial speedup in net runtimes with corresponding reduction in human and machine cost and improvement in quality of results over conventional CR coverage targets. He also stressed that these advances were hard won. They have been working on refining these capabilities, jointly with customers, over several years. AI/ML are not quick fixes, but they can deliver substantial gains in verification with enough investment.
You can watch Manish’s keynote HERE.
Also Read:
Upcoming Webinar: 3DIC Design from Concept to Silicon
Heterogeneous Integration – A Cost Analysis
Delivering Systemic Innovation to Power the Era of SysMoore