Climbing the Infinite Verification Mountain
by Bernard Murphy on 06-14-2016 at 7:00 am

Many years ago I read a great little book by Rudy Rucker called “Infinity and the Mind”. This book attempts to explain the many classes of mathematical infinity (cardinals) to non-specialists. As he gets to the more abstract levels of infinity, the author has to resort to an analogy to give a feel for extendible and other cardinal classes.

He asks you to imagine climbing a mountain called Mount On (you’ll have to read the book to understand the name). You climb at infinite speed and, after a day’s climbing, you look up and see more mountain stretching ahead, looking much the same as the stretch you just climbed. Anyone who’s ever climbed a mountain or a hill knows exactly how this feels. The author’s point is that no matter how fast you climb, there’s always more mountain ahead – it seems like you never get close to the top.

The reason I bring this up is that verification feels very similar, largely because the problem space continues to explode faster than we can contain it. Cadence hosted a thought-provoking lunch at DAC this year (Seamlessly Connected Verification Engines? What Does It Take?) which I felt was very relevant to this topic. Jim Hogan opened and underlined the challenge. The amount of knowledge/data we have to deal with is increasing supra-exponentially. Today it’s doubling every 13 months. By 2020, when the IoT is in full swing, it is expected to double every 12 hours. The capabilities we will need to handle that volume with high quality are going to demand a (currently) almost inconceivable level of verification, touching almost everything in the stack. We’re going to have to seriously up our quality game, as Jim put it.

On that daunting note, the panel started with where the tools play best today. Alex Starr (Fellow at AMD) felt that formal is primarily useful at the IP and subsystem level, emulation is good for system-level verification, but (software-based) simulation is struggling everywhere. That said, simulation still has an advantage over emulation in being able to verify mixed-signal designs. A better and longer-term AMS solution may be to combine virtual prototyping with analog models in order to get system-level coverage with mixed-signal. AMD has invested in this approach over several years and it’s now paying off.

Narendra Konda (Head of Verification at NVIDIA) further emphasized the need for tools in the verification flow to play together, especially with virtual prototyping (VP). He pointed to the need for VP to play well with emulation and FPGA prototyping (FP), and for assertion-based verification (ABV) to fit into these flows. They are simulating 1B gates with a rapidly growing software stack across large banks of test-cases, and they have to put a lot of this in VP to get practical run-times.

The perennial question of team interoperability came up of course. You can make the tools completely interoperable, but that doesn’t help as much as it could if design and verification teams stick to their silos of expertise. Alex agreed this could be a problem; a good example is power management verification, where you potentially have to span all the way from application software down to physical design. His view was that this takes education and, of course, improvement in the tools, such as portability of test cases.

Narendra took a harder line: designers don’t get to have comfort zones; they adapt or die. Fusing tools together is the bigger problem, but it is being solved in stages. For NVIDIA it took 1½ years working with Cadence to get VP, emulation and FP working well together. He views this collaboration as a necessary price for staying ahead. They had the same problem 2-3 years ago with ABV and took the same approach to solving it; that is also now starting to work. Dealing with automotive safety and reliability requirements is a new frontier; there is no productive tooling in this area today. Narendra expects it will take another 8-10 months to get to a solution.

The panel wrapped up on current deficiencies in cross-platform methodologies. All emphasized the need for more standards, especially for interoperability between platforms from different vendors. Some of that is happening with the Portable Stimulus standard, but more still needs to be done, for example in normalizing coverage metrics between vendors.

In verification, performance is never where it needs to be. Narendra saw a need for something in between the speed of emulation and prototyping (with emulation setup times and debuggability, of course). He felt doubling the current speed of emulation would help. Alex agreed and added that for serious software regression, emulation and FP aren’t fast enough. There needs to be more emphasis on hybrid modeling between hardware platforms and VP, where it’s more feasible to get within range of real-time performance. This echoed Narendra’s earlier point about the need for VP hybrid modeling.

I found the continued references to VP here and in other meetings particularly interesting. Software-driven verification, with bigger software stacks and more test-cases, really does drive a need to model more of the system in VP-like platforms, dropping into FP and emulation where needed. This need can only grow. Perhaps VP is becoming the new emulation and emulation is becoming the new simulation.

The vendors are doing a great job advancing what they do, and they’re clearly partnering effectively with customers to build those solutions, but the top of the verification mountain keeps receding into the clouds (there’s a pun in there somewhere) and probably always will. Meantime you can read more about NVIDIA’s success with Cadence emulation HERE.
