Virtual Reality
by Bernard Murphy on 04-20-2017 at 7:00 am

In the world of hardware emulators, virtualization is a hot and sometimes contentious topic. It’s hot because emulators are expensive, creating a lot of pressure to maximize return on that investment through multi-user sharing and 24×7 operation. And of course in this cloud-centric world it doesn’t hurt to promote cloud-like access, availability and scalability. The topic is contentious because vendor solutions differ in some respects and, naturally, their champions are eager to promote those differences as a clear indication of the superiority of their solution.

Largely thanks to these contending claims, I was finding it difficult to parse what virtualization really means in emulation, so I asked Frank Schirrmeister (Cadence) for clarification. I should stress that I have previously talked with Jean-Marie Brunet (Mentor) and Lauro Rizzatti (Mentor consultant, previously with EVE), so I think I’m building this blog on reasonably balanced input (though sadly not including input from Synopsys, who generally prefer not to participate in discussions in this area).

There’s little debate about the purpose of virtualization: global/remote access, maximized continuous utilization and 24×7 operation. There also seems to be agreement that hardware emulation is naturally moving towards becoming another datacenter resource, alongside other special-purpose accelerators. Indeed, the newer models are designed to fit datacenter footprints and power expectations (though there is hot debate on the power topic).

Most of the debate is around implementation, particularly purely “software” (RTL plus maybe C/C++) verification versus hybrid setups in which part of the environment connects to real hardware, such as external systems attached through PCIe or HDMI interfaces. Pure software is appealing because it offers easy job relocation, which helps the emulator OS pack jobs for maximum utilization and therefore also helps with scalability (add another server, get more capacity); the sketch below illustrates the idea.
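To make that concrete, here is a minimal, purely illustrative sketch (in Python; the names Job, Emulator and schedule are hypothetical, not any vendor’s actual software) of why relocatability matters: a scheduler can pack relocatable jobs onto whatever capacity is free, much as a datacenter scheduler packs VMs onto hosts.

```python
# Hypothetical sketch of greedy job packing on shared emulator capacity.
# Jobs with no fixed hardware dependency can run on any boards that are free.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    boards_needed: int   # emulation capacity the job consumes
    relocatable: bool    # pure-software jobs can run anywhere

@dataclass
class Emulator:
    name: str
    boards_free: int

def schedule(jobs, emulators):
    """Greedy first-fit packing: relocatable jobs go to any emulator with room."""
    placement = {}
    for job in sorted(jobs, key=lambda j: j.boards_needed, reverse=True):
        for emu in emulators:
            if job.relocatable and emu.boards_free >= job.boards_needed:
                emu.boards_free -= job.boards_needed
                placement[job.name] = emu.name
                break
        else:
            # Non-relocatable jobs (or no free capacity) must wait for a slot.
            placement[job.name] = "queued"
    return placement

if __name__ == "__main__":
    emus = [Emulator("rack-1", 8), Emulator("rack-2", 4)]
    jobs = [Job("soc_regression", 6, True),
            Job("gpu_block", 4, True),
            Job("cpu_smoke", 2, True)]
    print(schedule(jobs, emus))
```

The point of the toy example is the else branch: a job tied to specific hardware cannot be moved, so the scheduler loses packing freedom, which is exactly the concern raised about ICE below.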

In contrast, hybrid (ICE) modeling requires external hardware and cabling connected to the emulator, often specific to a particular verification task, which would seem to undermine the ability to relocate jobs or scale, and therefore to undermine the whole concept of virtualization. In fact, this problem has been largely addressed in some platforms. You still need the external hardware and cabling of course, but internal connectivity between those interfaces and jobs running on the emulator has been virtualized. Since many design teams want to ICE-model with a common set of interfaces (PCIe, USB, HDMI, SAS, Ethernet, JTAG), these resources can be shared and jobs remain relocatable, scalable and fully virtualized; a sketch of that mapping follows.
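As a rough mental model (all names here are hypothetical, and real platforms do this in hardware crossbars, not Python), the shared physical ports sit in a pool and are bound to jobs dynamically, so a relocated job only needs a new mapping, never new cabling:

```python
# Hypothetical sketch of virtualized interface connectivity: physical ports
# are a shared pool; binding a job to a port is a table update, not a cable.

class InterfacePool:
    def __init__(self, ports):
        # ports: e.g. {"PCIe": 4, "USB": 2, "HDMI": 2}
        self.free = {kind: list(range(n)) for kind, n in ports.items()}
        self.bound = {}  # (job, kind) -> physical port number

    def attach(self, job, kind):
        """Bind a free physical port of the requested kind to a job's virtual interface."""
        if not self.free.get(kind):
            raise RuntimeError(f"no free {kind} port for {job}")
        port = self.free[kind].pop()
        self.bound[(job, kind)] = port
        return port

    def detach(self, job, kind):
        """Release the port when the job ends or relocates; the cable never moves."""
        port = self.bound.pop((job, kind))
        self.free[kind].append(port)

pool = InterfacePool({"PCIe": 4, "USB": 2, "HDMI": 2})
port = pool.attach("soc_regression", "PCIe")  # job mapped to a shared PCIe port
pool.detach("soc_regression", "PCIe")         # job relocated; port returns to the pool
```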

Naturally the external ICE components can themselves be virtualized, running on the software host, on the emulator, or on some combination of the two. One appeal here is that no external hardware is needed beyond the emulation servers, which could be attractive for deployment in general-purpose datacenters. A more compelling reason is to connect with expert third-party software-based systems to model high levels and varying styles of traffic which would be difficult to reproduce in a local hardware system. One obvious example is modeling network traffic across many protocols, varying rates and SDN. This is an area where solutions need to connect to testing systems from experts like Ixia; the sketch below gives a feel for why software generation is attractive.
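For flavor only, here is a minimal sketch (hypothetical API, not Ixia’s actual product interface) of what makes a software traffic model appealing: rates, protocol mixes and error injection can all be varied programmatically, which is hard to reproduce with bench hardware:

```python
# Hypothetical synthetic traffic generator: each stream yields packets for one
# protocol at a nominal rate, with a configurable fraction of corrupted packets.

import random

def traffic(protocol, rate_pps, error_ratio, count):
    """Yield synthetic packet descriptors for one protocol stream."""
    for seq in range(count):
        yield {
            "protocol": protocol,
            "seq": seq,
            "delay_s": 1.0 / rate_pps,                 # nominal inter-packet gap
            "corrupt": random.random() < error_ratio,  # stress DUT error handling
        }

# Mix several protocol streams at different rates, as an SDN-style test might.
streams = [traffic("TCP", 10_000, 0.001, 5),
           traffic("UDP", 50_000, 0.01, 5)]
for stream in streams:
    for pkt in stream:
        # In a real flow this would feed an emulator transactor;
        # here we just print the generated transactions.
        print(pkt)
```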


You might wonder then if the logical endpoint of emulator evolution is for all external interfaces to be virtualized. I’m not wholly convinced. Models, no matter how well they are built, are never going to be as accurate as the real thing in real-time and asynchronous behavior, and especially in modeling fully realistic traffic. Yet virtual models unquestionably have value in some contexts. I incline to thinking that the tradeoff between virtualized modeling and ICE modeling is too complex to reduce to a simple ranking. For some applications, software models will be ideal, especially when there is external expertise in the loop. For others, only testing against a real hardware system will give the level of confidence required, especially in the late stages of design. Bottom line: we probably need both, and always will.

So that’s my take on virtual reality. You can learn more about the vendor positions HERE and HERE.
