Siemens EDA on Managing Verification Complexity
by Bernard Murphy on 04-13-2023 at 6:00 am

Harry Foster is Chief Scientist in Verification at Siemens EDA and has served on the DAC Executive Committee for multiple years. He gave a lunchtime talk at DVCon on verification complexity. He is an accomplished speaker and always has a lot of interesting data to share, especially his takeaways from the Wilson Research Group reports on FPGA and ASIC trends in verification. He segued from that analysis into his W. Edwards Deming-inspired pitch that managing complexity demands eliminating bugs early. Not just shift-left but a mindset shift.

Statistics/analysis

I won’t dive into the full Wilson Report, just some interesting takeaways that Harry shared. One question in the most recent survey asked about use of data mining or AI: 17% of reporting FPGA projects and 21% of ASIC projects said they had used some form of solution around their tool flows (not counting vendor-supplied tool features). Most of these capabilities, he said, are home-grown. He anticipates opportunity to expand such flow-based solutions as we do more to enable interoperability (think log files and other collateral data, which are often the input to such analyses).
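
As a concrete illustration (mine, not Harry’s), here is a minimal Python sketch of the kind of home-grown flow analysis the survey is counting: mining simulator log files for recurring failure signatures. The directory name and the UVM-style message format are assumptions for illustration.

import re
from collections import Counter
from pathlib import Path

# Hypothetical message format: "UVM_ERROR @ 1234ns: [SCOREBOARD] data mismatch ..."
ERROR_RE = re.compile(r"UVM_ERROR\s+@\s+\d+ns:\s+\[(\w+)\]\s+(.*)")

def mine_logs(log_dir: str) -> Counter:
    """Tally error signatures (component, message) across all simulation logs."""
    signatures = Counter()
    for log in Path(log_dir).glob("*.log"):
        for line in log.read_text(errors="ignore").splitlines():
            m = ERROR_RE.search(line)
            if m:
                component, message = m.groups()
                signatures[(component, message.strip())] += 1
    return signatures

if __name__ == "__main__":
    # Print the ten most frequent failure signatures in the regression
    for (component, message), count in mine_logs("regression_logs").most_common(10):
        print(f"{count:5d}  [{component}] {message}")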

Another thought-provoking analysis was of first-silicon success. Over 20 years of reports this is now at its lowest level, 24%, where typical responses over that period have hovered around 30%. Needing respins is of course expensive, especially for designs in 3nm. Digging deeper into the stats, about 50% of failures were attributable to logic/functional flaws. Harry said he found a big spike in analog flaws, almost doubling, in 2020. Semiconductor Engineering held a panel to debate the topic, and panelists agreed with the conclusion. Even more interesting, the same conclusion held across multiple geometries; it wasn’t just a small-feature-size problem. Harry believes this issue is more attributable to increasing integration of analog into digital, for which design processes are not yet fully mature. The current Wilson report does not break down analog failure root causes; Harry said that subsequent reports will dig deeper here, and also into safety and security issues.

Staffing

This is a perennial hot topic, also covered in the Wilson report. From 2014 to 2022, survey respondents report a 50% increase in design engineers but a 144% increase in verification engineers, to the point where projects now show as many verification engineers as design engineers. And that is just for self-identified roles: design engineers report spending about 50% of their time on verification-related tasks. Whatever budget numbers you subscribe to for verification, those numbers are growing much faster than design as a percentage of total staffing budget.

Wilson notes that the verification-to-design ratio is even higher in some market segments, more like 5 to 1 for processor designs. Harry added that they are starting to see similar ratios in automotive design, which he finds surprising. Perhaps this is attributable to complexity added by AI subsystems and safety?

The cost of quality

Wilson updates where time is spent in verification, now attributing 47% to debug; nearly half of the verification budget is going into debug. Our first reaction to so much time being spent in debug is to improve debug tools. I have written about this elsewhere, especially on using AI to improve debug throughput, and it is indeed an important area for focus. However, Harry suggests we should also turn to lessons from W. Edwards Deming, the father of the quality movement. An equally important way to reduce the amount of time spent in debug is to reduce the number of bugs created. Well, duh!

Deming’s central thesis was that quality can’t be inspected into a product; it must be built in by reducing the number of bugs you create in the first place. This is common practice in fabs, OSATs and frankly any large-scale manufacturing operation: design out and weed out bugs before they even get into the mainstream flow. We think of this as shift-left, but it is actually more than that. It means trapping bugs not just early in the design flow but at RTL check-in, through static and formal checks applied as a pre-check-in signoff. The same tests should also run in regression, but for confirmation, not to find bugs that could have been caught before the regression started.
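
To make the pre-check-in idea concrete, here is a hypothetical git pre-commit hook in Python that lint-checks any staged RTL before the commit is accepted. The choice of Verilator’s lint mode is my illustration, not something from the talk; a formal property run could be chained in exactly the same way.

#!/usr/bin/env python3
# Hypothetical git pre-commit hook: lint staged RTL before allowing check-in.
import subprocess
import sys

def staged_rtl_files():
    """List staged .v/.sv files (added, copied, or modified) in this commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.splitlines() if f.endswith((".v", ".sv"))]

def main():
    failed = []
    for rtl in staged_rtl_files():
        # Verilator's lint-only mode flags common RTL issues without simulating;
        # a formal tool invocation could be added alongside it here.
        result = subprocess.run(["verilator", "--lint-only", "-Wall", rtl])
        if result.returncode != 0:
            failed.append(rtl)
    if failed:
        print(f"pre-commit: lint failed for {', '.join(failed)}; fix before check-in.")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())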

A larger point is that a very effective way to reduce bugs is to switch to a higher-level programming language. Industry wisdom generally holds that new code will commonly contain between 15 and 50 bugs per 1,000 lines of code, a rate which seems to hold independent of the language or level of abstraction. Here’s another mindset shift. In 1,000 lines of new RTL you can expect 15 to 50 bugs. Replace that with 100 lines of a higher-level language implementing the same functionality and you can expect ~2 to 5 bugs, a significant reduction in the time you will need to spend in debug.
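
The arithmetic behind that claim is simple enough to make explicit. A back-of-envelope model, assuming the defect density per line really is constant across abstraction levels:

def expected_bugs(lines_of_code, bugs_per_kloc):
    """Expected new bugs, assuming a constant defect density per 1,000 lines."""
    return lines_of_code * bugs_per_kloc / 1000

# Same functionality: ~1,000 lines of RTL vs ~100 lines in a higher-level language
for label, loc in [("RTL", 1000), ("higher-level", 100)]:
    low, high = expected_bugs(loc, 15), expected_bugs(loc, 50)
    print(f"{label}: {loc} lines -> {low:.0f} to {high:.0f} expected new bugs")
# RTL: 1000 lines -> 15 to 50 expected new bugs
# higher-level: 100 lines -> 2 to 5 expected new bugs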

Back to Wilson on this topic: 40% of FPGA projects and 30% of ASIC projects claim they are using high-level languages. Harry attributes this relatively high adoption to signal processing applications, where HLS has seen enthusiastic uptake and which is an increasingly important domain these days (video, audio, sensing). But SystemC isn’t the only option; Halide is popular, at least in academia, for GPU design. As architecture becomes more important for modern SoCs, I can see the trend extending to other domains through other domain-specific languages.

Here is a collection of relevant links: Harry’s analysis of the 2022 verification study, the Wilson 2022 report on IC/ASIC verification trends, the Wilson 2022 report on FPGA verification trends, and a podcast series on the results.
