
Emulation from In Circuit to In Virtual

by Bernard Murphy on 11-08-2018 at 7:00 am

At a superficial level, emulation in the hardware design world is just a way to run a simulation faster. The design to be tested runs on the emulator, connected to whatever test mechanisms you desire, and the whole setup can run many orders of magnitude faster than it could if the design were running inside a software simulator. And this is indeed how emulators are often used—to speed up big simulations—whether you put the whole design in the emulator or use the emulator to speed up some part of the design while the rest continues to run in the software simulator (a setup generally known as simulation acceleration).

Beyond accelerating simulation, a very active use model from the earliest days of emulation has been ICE (in-circuit emulation) where the emulator is plugged into a real system, acting as a model of the device which will ultimately be built. This model generally runs slower than the real thing, but speed bridges can take care of handshaking/synchronization with the rest of the hardware.
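The rate-matching job a speed bridge performs can be pictured with a toy sketch: traffic arrives at real-world speed, gets buffered, and is released at the emulator's (much slower) effective clock rate. Everything here—the class name, the ratio, the FIFO discipline—is an illustration of the handshaking idea, not any vendor's actual bridge design:

```python
from collections import deque

class SpeedBridge:
    """Toy model of an ICE speed bridge: buffers packets arriving at
    real-world speed and releases them at the slower emulator clock rate.
    Names and rates are illustrative only."""

    def __init__(self, ratio):
        # ratio = real-world clocks per emulator clock (e.g. 1000x slowdown)
        self.ratio = ratio
        self.fifo = deque()
        self.real_cycles = 0

    def push_from_real_world(self, packet):
        # Real hardware side: traffic arrives whenever it arrives
        self.fifo.append(packet)

    def tick_real_world(self):
        # The emulated DUT consumes one packet only every `ratio` real cycles
        self.real_cycles += 1
        if self.real_cycles % self.ratio == 0 and self.fifo:
            return self.fifo.popleft()
        return None

bridge = SpeedBridge(ratio=4)
for pkt in ["a", "b", "c"]:
    bridge.push_from_real_world(pkt)
delivered = [p for _ in range(12) if (p := bridge.tick_real_world())]
# delivered == ["a", "b", "c"]: nothing is lost, it just arrives later
```

The point of the sketch is simply that buffering plus a clock-ratio gate lets real traffic and a slowed-down DUT interoperate without dropping data.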

ICE continues to be a very active use-model, in part for an obvious reason: there’s often no substitute for testing with real traffic. Testing with idealized (virtual) models is great, but the other hardware with which you will interact may have been designed around unexpected interpretations wherever the standards leave choices open. And, let’s face it, the folks who designed those components aren’t perfect – they may also have made a mistake or two. So real traffic isn’t always going to match idealized traffic perfectly. That’s why system designers run plug-fests—to shake out all those corner cases—but obviously it’s best to do as much of that as you can before you commit to silicon.

One concern about this hardware-centric testing had been that such a configuration seemed intrinsically unshareable: whoever had the emulator would hog that expensive resource until they were done. In modern platforms supporting multi-user emulation, however, speed bridge connections can be relocated between jobs, and runs using those links can be driven remotely, so this is no longer a concern.

Where is this kind of use-model popular? Suppose you’re building (or configuring) an LTE or Bluetooth modem or a 5G device. You need to confirm that what you have built/configured is compliant with those standards, and it’s common to use physical testers for that purpose. Rohde & Schwarz, Anritsu and Keysight are some of the big names in that domain. You can connect your modeled device on the emulator to the physical tester to run the full suite of compliance tests. Could someone possibly build a virtual compliance tester for this purpose? If so, it’s usually best built by the experts who also build the real tester. Even then, physical testers provide the real thing for testing real silicon, so why not use that?


Another common reason for virtualizing the environment is to accelerate the software running on the CPU(/cluster) in the DUT. It’s pretty easy to see why this adds value. A lot of the action in booting a kernel/OS is in the software and the CPU core (plus memory), with only intermittent need to interact with the rest of the SoC until application testing begins. Running this in a virtualized model provides the benefits of fast software execution while still connecting to the accurate hardware model where needed. It also offers flexibility in testing against multiple OSes (again because bring-up is fast) and in driving traffic into the SoC from the virtual host. For example, NVIDIA has talked about using this approach to develop/test PCIe driver software, using the Cadence VirtualBridge technology.

Wonderful as fast virtual simulations are, the underlying hardware is an essential part of verification for both hardware and software regressions/signoff, since system testing must be done with real hardware timing and behavior. That’s where hybrid emulation comes in. (This is a curious name, since I would have thought any instance of an emulator running with something else would be hybrid. For some reason the name has been attached specifically to emulation running together with virtual models. Go figure.)

In the hybrid use-model, you run the processor cluster with associated software in the virtual domain, connecting to the RTL model of the rest of the (evolving) circuit on the emulator through an accelerated VIP or through a virtual JTAG interface connected to a Lauterbach debugger or through whatever other interfaces are supported by the emulator. This use-model has been quite comprehensively proven as effective in some challenging environments, where users have brought up full software stacks (kernel, OS and graphics test suites for example) on top of the hardware model. The virtualized side of the model makes it possible to bring up the OS in very reasonable time (<2 hours)—something that would otherwise take days on an emulator-only model. This in turn enables reasonable turn-times for software regressions through application-level tests (e.g. graphics rendering test suites).
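Why the hybrid partition pays off can be sketched abstractly: software instructions run untimed on the fast virtual host, and only accesses that touch the emulated SoC address space pay the cost of crossing into the cycle-accurate emulator. Everything in this sketch (class names, the MMIO base address, the per-access cycle cost) is made up for illustration; it models the economics of the partition, not any real platform's API:

```python
# Hypothetical constants for illustration only
EMU_CYCLES_PER_ACCESS = 100   # assumed cost of crossing into the emulator
SOC_BASE = 0x4000_0000        # assumed start of emulated MMIO space

class HybridPlatform:
    """Toy model of a hybrid run: virtual time is cheap, emulator time
    is expensive and only accrues on hardware accesses."""

    def __init__(self):
        self.virtual_time = 0     # instruction-counted time on the host
        self.emulator_time = 0    # cycle-accurate time on the emulator
        self.mmio = {}            # stands in for RTL registers on the emulator

    def cpu_step(self, n_instructions):
        # Software executes entirely on the virtual host: fast
        self.virtual_time += n_instructions

    def mmio_write(self, addr, value):
        # Only these accesses pay the emulator synchronization cost
        assert addr >= SOC_BASE, "below SOC_BASE is virtual-only memory"
        self.emulator_time += EMU_CYCLES_PER_ACCESS
        self.mmio[addr] = value

plat = HybridPlatform()
plat.cpu_step(1_000_000)              # e.g. the bulk of an OS boot
plat.mmio_write(SOC_BASE + 0x10, 1)   # occasional peripheral register write
plat.cpu_step(1_000_000)
# Two million instructions executed, but only 100 emulator cycles consumed
```

Because kernel boot is dominated by CPU-and-memory activity with rare peripheral touches, almost all of the run stays on the cheap side of the partition, which is why OS bring-up drops from days on an emulator-only model to under two hours.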

Bottom line, both forms of “hybrid” emulation are valuable, each for different reasons. Cadence has a strong position in ICE and hybrid support as well as virtual emulation through their Palladium platform. You can learn more HERE.
