Verification is a complex task that takes up the majority of the time and effort in chip design. Veriest shares customer views on what this means in practice. We are an ASIC services company, and we have the opportunity to work on multiple projects and methodologies, interfacing with different experts.
In this “Verification Talks” series of articles, we aim to leverage this unique position to analyze various approaches to common verification challenges and to help make the verification process more efficient.
As product life cycles get shorter, you need a very effective development flow to meet deadlines. What does that mean in terms of design maturity? Since we can never say that verification is fully complete, I was interested in who decides, and when, that “enough is enough”: Is there some predefined date? Is it adjusted based on how the development process is going?
Even after a 10-year career as a verification engineer, I was very interested to discuss this topic with other respected professionals and get their perspectives. My interlocutors were Mirella Negro from STMicroelectronics in Italy, Mike Thompson from the OpenHW Group in Canada, and Elihai Maicas from NVIDIA in Israel. We also talked about different sign-off criteria, progress tracking, and measuring metrics against the criteria as development progresses.
How do we set a project timeline?
Everyone agrees that verification completion criteria should be jointly defined by all stakeholders in the project.
Mike believes that it should all start from the question “what are our quality goals?” and that primarily depends on the specific project. “For example, many ASIC projects will budget for one metal re-spin before going to production, so the quality requirement of first silicon could be written as ‘no functional bugs that gate sample testing and initial customer testing’. This is still a very high bar, but it is not ‘zero functional defects’”.
Elihai believes it depends on the product application. Although he has worked only on consumer electronics projects, where the tape-out date is usually fixed at the beginning, he has noticed that in the development of, for instance, medical or automotive devices, all stages are much more strictly defined and reviewed, so these chips are hopefully not taped out until confidence in their quality is very high.
“This is market driven,” says Mirella. She explains that after the marketing team understands customer demands, they come to the R&D team with a description of the product and the technology required. The R&D team then evaluates whether that goal is achievable and in how much time. There are two possible scenarios. In the first, the product is an innovation, in which case you are usually not limited in time. In the second, the product is currently in demand in the market, in which case development makes sense only if your delivery date is aligned with the expected customer demand. If you miss a very short window of opportunity, you may miss the chance to sell at all.
Mirella emphasizes the importance of risk planning and of having a business continuity plan in place. Risk planning should cover both general risks and project-specific risks. Project scope can change because projects are very complex and involve a lot of innovation, but you can also be completely stuck due to unforeseen circumstances such as a pandemic, a bug in a tool, or a technical leader leaving the company.
Sign-Off Criteria
According to Mike, the quality goal should be defined at the start of the project, but the “measurement question” must typically wait until the requirements and/or functional specification is available. The answer to that question should be more quantitative and is usually a variation of “100% coverage and no failing tests”. “There are a lot of re-use opportunities for completion criteria when projects are in the same domain. For instance, in our case, two different 32-bit RISC-V cores implement the RVIMC instruction extensions. So, the RVIMC completion criteria for these cores are (probably) the same,” he adds. “However, the two cores have very different interfaces to instruction and data memory, so the completion criteria for these aspects are very different. This illustrates why the ‘measurement question’ is gated by the requirements and/or functional specification.”
Elihai’s experience indicates that in most cases there are some “magic” numbers (for example, phase 1: 85% code coverage; phase 2: 90% code coverage + 90% functional coverage; tape-out: 95% code coverage + 100% functional coverage) that are passed from project to project, plus some criteria specific to the current project, block, or IP. This should all be defined before development starts. However, he adds: “I have never seen that happen. In real life, the specs are constantly being edited as the development progresses, and so are the flows and the exact numbers (performance, for example) that need to be simulated.” “The criteria are defined by the technical managers and project leaders, based on personal experience and legacy criteria used in the team,” Elihai summarizes.
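To make the idea concrete, here is a minimal sketch, assuming a Python-based reporting flow, of how such per-phase “magic numbers” could be encoded and checked against reported metrics. The thresholds mirror the example above; everything else (field names, the sample reported metrics) is an illustrative assumption, not any team’s actual flow.

# A minimal sketch: encode per-phase sign-off thresholds as data and check
# reported coverage against them. Thresholds follow the example quoted above;
# field names and the sample metrics are hypothetical.
from dataclasses import dataclass

@dataclass
class PhaseCriteria:
    name: str
    min_code_cov: float   # minimum code coverage, in percent
    min_func_cov: float   # minimum functional coverage, in percent

SIGN_OFF_PLAN = [
    PhaseCriteria("phase1",   min_code_cov=85.0, min_func_cov=0.0),
    PhaseCriteria("phase2",   min_code_cov=90.0, min_func_cov=90.0),
    PhaseCriteria("tape_out", min_code_cov=95.0, min_func_cov=100.0),
]

def meets_criteria(phase: PhaseCriteria, code_cov: float, func_cov: float,
                   failing_tests: int) -> bool:
    """True if the reported metrics satisfy this phase's sign-off criteria."""
    return (code_cov >= phase.min_code_cov
            and func_cov >= phase.min_func_cov
            and failing_tests == 0)

# Example usage with metrics that would come from a regression/coverage report.
reported = {"code_cov": 91.2, "func_cov": 88.5, "failing_tests": 0}
for phase in SIGN_OFF_PLAN:
    status = "met" if meets_criteria(phase, **reported) else "not met"
    print(f"{phase.name}: {status}")

Keeping the plan as data rather than hard-coding it in scripts makes it easy to update when, as Elihai notes, the numbers themselves keep changing during development.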
At STMicroelectronics, however, there is a common sign-off criterion for all projects: anything less than 100% functional and code coverage means there is still work to do. Even once full coverage is reached and no tests are failing, it is still possible to miss some functional bugs, but it is not obvious how to define better criteria. They aim to guarantee at least what is measurable today with the available tools.
Takeaways
Since the quality criteria depend on the market, the competition, and the budget, I am not sure that we can or should aim to create common criteria; after all, these criteria help dictate a company’s success. Companies that find themselves less successful than their competitors could therefore revise their approach in this regard. Still, there seems to be a minimum that everyone respects: it is mandatory to reach a very high percentage (100% or close to it) on all success indicators that can be measured with today’s tools. Also, no one compromises on quality when it comes to sensitive applications such as medical and automotive devices; there, verification sign-off happens only once all internal and external auditors approve it. In consumer electronics devices, on the other hand, some features might be abandoned under pressure to meet a deadline.
Now that we have seen what the different criteria are and how they are created, in the follow-up article we will look at how to track progress against them and what can cause problems along the way.
To learn more about Veriest, please visit our website.
Also Read:
Verification Completion: When is Enough Enough? Part II
On Standards and Open-Sourcing. Verification Talks
Agile and DevOps for Hardware. Keynotes at DVCon Europe