In one of my favorite movies, Brad Pitt utters the only question that matters in baseball or in technology management in the face of uncertainty: “Okay, good. What’s the problem?” Not surprisingly, as the question circles the table of experts used to doing things the old way, not a single one can answer it correctly in the new context.
For all the talk about massive SoC designs and teams being eaten alive by verification consuming 50% to 70% of the design cycle, there is a lot of industry disagreement on what exactly the problem is. There is no shortage of solutions, but most focus on trying to run stuff faster. The thing about faster is, it can be really expensive, and one piece running faster by itself often doesn’t solve the problem.
With that in mind, we bring in a new white paper from Bill Jason Tomas at Aldec. He starts at the same place a lot of others do: hardware-assisted verification, where FPGA prototypes are used in co-emulation to speed up critical pieces of the effort. This works very well at reasonable scale, but all too often at the massive scale of today’s SoCs, something else happens: the testbench itself takes up more and more of the emulation time.
Why? Most test generators are written in some high-level representation and are event-driven rather than cycle-driven. At the point where the ball meets the bat, when cycle-accurate RTL enters the picture, timing really is everything, and a bottleneck forms as the hardware-assisted emulator has to field all those untimed, unformatted events one at a time.
The game-changing solution is a set of transactors to connect the untimed testbench to the timed modules living on the emulator. Moving from events to transactions, each with proper timing and structure, allows the emulator to run with its fast timing intact.
But if everyone goes off and writes their own transactors designed for their testbench and emulation system, we are back at the start – still waiting for our design to verify, because different transactors don’t work together. Enter the Standard Co-Emulation Modeling Interface (SCE-MI). When Accellera set out to establish the SCE-MI standard, portability of transactor models was a key consideration.
SCE-MI sets up message channels, analogous to sockets in network programming models, which abstract the interface. As Bill points out, a single event in the testbench can trigger hundreds of clock cycles in the emulator. Transactors convert the message to a sequence of clocked bit-level data forming the timing-accurate inputs to the DUT, and similarly process output patterns back into event formats the testbench expects.
The HES-DVM environment provides the SCE-MI infrastructure for users to connect a SystemC testbench to an Aldec HES-7 FPGA-based prototyping system. The Aldec implementation of SCE-MI uses TCP/IP to provide reliable message delivery between the two sides. This increases the portability of the Aldec SMS library, so it may be used with Aldec and third-party simulators.
For more on SCE-MI and the thinking behind transactions, here’s the full white paper:
When people ask me what the problem in technology today is, whether the context is SoC design, software development, or the Internet of Things, I’m giving one answer lately: creative inclusion, bringing disparate IP together in some kind of framework. You can’t win if you don’t get on base, no matter how good your individual tools are. SCE-MI is a good example of innovation, allowing great tools to integrate more smoothly.