Synopsys hosts a regular lunch at DVCon each year (at least over the last few years I have checked), a nice meal and a show, opening with a marketing update followed by 2-3 customer presentations on how they use Synopsys verification in their flows. This year’s event was moderated by Piyush Sancheti from Synopsys Verification marketing, a buddy of mine from way back in my Atrenta days.
As promised, Piyush provided a market update on Synopsys growth in verification. He reminded us that their emulation has been growing nicely (50% CAGR) and that they are viewed among their customers as particularly strong in accelerating software bring-up (backed up by their speaker from AMD). The Verification Group is expanding their focus from platforms to solutions, particularly in automotive, networking, 5G, AI and storage (I may have missed a few). Nonetheless they continue to invest heavily in platforms. They’re seeing good traction with their fine-grained parallelism in simulation, now available to all VCS customers with all simulation flows. VC Formal is also getting strong pickup and continues to add apps and assertion IP.
Next up was Deepak Manoharan, SoC verification manager at Qualcomm North Carolina, on the power of focus in QCOM datacenter technologies. Deepak, a good story-teller, talked more about his philosophies than technical details. He split his topic into two main areas: reacting to change and conserving time, illustrated by work he manages on the Qualcomm Centriq 2400 server processors. These beasts host 48 cores along with DDR and PCIe interfaces, and one of the bigger verification challenges they face is, guess what, proving cache coherence under all possible circumstances. He noted that, in general, verification for servers is a lot more challenging than for mobile platforms because servers must very efficiently support many use-cases.
Deepak pointed out that change is a fact of life in real design projects; the ability to react quickly is essential and depends on a very clear and complete verification plan. When you need to adapt quickly to a change, interoperability between (verification) platforms is important, as is bring-up time. Equally, you must always remember why you are doing a certain task in verification. Plodding through the plan is important, but when the ground changes underneath you, sometimes certain line-items in that plan lose value and new critical objectives emerge. Handling this effectively is part of reacting quickly.
Conserving time depends of course on continuing platform performance improvements (he noted a 2-3X speed-up in the latest release of VCS), but also on test-planning by platform and on features that simplify and accelerate adapting to varying needs and changes (backdoor load, save/restore, efficient debug and portability between platforms).
Brian Fisk, principal MTS at AMD, introduced us to IP-level hybrid emulation. This is a very interesting direction in shift-left, an approach in which you can pre-verify a fairly extensive software stack while the rest of the hardware is still in development. Brian opened this discussion with an interesting question: are we approaching practical limits for full SoC simulation and emulation? He pointed out that the amount of software development we have to shift left (BIOS, drivers, etc.) is growing, and we now must also worry about power and performance. He suggested that SW/HW development demands are now accelerating faster than verification platform improvements, which points to the value of developing and proving out stacks at the IP level.
AMD has had for some time now, I think, a very useful capability they call SimNow. This allows them to model a full SoC, say a discrete GPU, with different parts potentially at different levels of abstraction and/or running on different verification platforms – simulation, emulation, prototyping or even early silicon. From a software developer point of view, the details are transparent, except in performance. Brian cited a possible configuration where the GFX engine runs on a ZeBu emulator and all the other stuff (peripherals, etc.) runs in SystemC models. The software stack runs in VirtualBox connected to this SimNow hardware model, and the SW user can (I believe) fairly transparently configure the SimNow model to manage accuracy/performance tradeoffs (through component swaps) as needed.
Now back to the IP+SW topic. Brian said that they used to compile the whole design into ZeBu. This worked fine but obviously tied down a limited resource during that period, limiting sub-component prove-out at various stages. Now they have switched to the hybrid emulation model approach (using SimNow), where prove-out of sub-components of the GFX core can share an emulator. They are now at a point where 4 independent SW teams can work simultaneously on hybrid models, testing their code against different aspects of the GFX.
In Brian’s view, ZeBu has been a game-changer for AMD. Hybrid models are passing first regressions 8-10 weeks ahead of previous milestones, and each is seeing several orders of magnitude more real-world stimulus than they had been able to exercise before. Power and performance testing also starts much earlier. As a result, they now see SW developers finding HW bugs, and (the nirvana of system development) the gap between SW and HW developers is closing. Brian wrapped up by answering the question with which he opened: in his view, yes, we have hit the limit of SoC emulation; system-level SW development and verification must move to the IP level. Food for thought.
You can register HERE to watch a recording of the panel.