Software-Driven Verification Drives Tight Links between Emulation and Prototyping

by Bernard Murphy on 04-28-2016 at 12:00 pm

I’ve mentioned many times what has become a very common theme in SoC and system verification: it has to be driven by the software, because any concept of exhaustively verifying “everything” is neither feasible nor meaningful. Emulation has become a critical component of this flow in validating and regressing software “close to the metal”. On an emulator you can even boot Linux and Android and run Android test suites, but not at performance levels acceptable to the fast turnaround and regression needs of software development teams.

Accelerating performance often takes advantage of mixed platforms, for example combining virtual platforms with emulation to accelerate OS boot. An increasingly common way to accelerate, especially as the design starts to converge, is prototyping on an FPGA platform. This can run an order of magnitude (or better) faster than emulation, which makes it more practical for regression flows where each regression pass runs hundreds, thousands or more tests. Prototyping wins on speed, but it doesn’t offer the internal visibility that emulators provide for debugging problems on the hardware side, so you want to be able to jump back to emulation for debug. There you isolate, diagnose and correct problems as they are discovered, while continuing regression testing on the prototype platform.

This means you need to be able to jump back and forth between prototyping and emulation to reach the coverage you need and to shake out problems as they arise. But there’s a problem with this appealing concept: building a manually optimized FPGA prototype typically takes up to three months, which is hardly conducive to quick-turnaround debug between prototyping and emulation.

Cadence has been working on reducing the turnaround time by optimizing the flow between their Palladium™ (emulation) and Protium™ (prototyping) platforms as a natural extension to their continuum of verification solutions. Optimizations start logically with a unified compile to both platforms, enabling reuse of scripts, constraints, clock definitions, memory definitions and more.

This compatibility isn’t just for input formats. Clocking semantics are compatible between the Palladium and Protium environments—a netlist for the Protium tool can be moved back to the Palladium platform and debugged there. And the Protium tool is compatible with the SpeedBridge adapters that work with the Palladium environment.

In addition, Protium bring-up time (for handling memories and clocks, partitioning, FPGA back-end design and functional unit debug) has been reduced from months to weeks. And with the Perspec System Verifier (the Cadence implementation of the emerging Portable Stimulus standard for moving stimulus between platforms), you can easily transfer stimulus between engines. A further optimization comes through support for black-boxing. Black-boxes in the Protium flow are treated as “don’t-touch”: they don’t need to be rebuilt in the prototyper in subsequent revisions, which further accelerates turnaround times when transferring RTL to the Protium platform.

Between these capabilities and the ability of the Protium platform to backdoor download memory contents, you can quickly switch from regression in prototyping to a more detailed debug in emulation. And when you have accumulated enough fixes, you have a shorter path to rebuild a new prototype for late-stage regressions.
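The flow described above, with the bulk of regression running on the fast prototype and failures escalated to the emulator for detailed debug, can be sketched in a few lines. This is purely illustrative: the platform stand-in functions and test names are hypothetical, not Cadence APIs.

```python
# Minimal sketch of the prototype-regression / emulation-debug handoff.
# run_on_prototype is a stand-in for launching a test on the FPGA prototype;
# in a real flow it would invoke the platform's own launch mechanism.

def run_on_prototype(test, known_fails):
    """Stand-in for running one test on the prototype; True means pass."""
    return test not in known_fails

def triage(tests, known_fails):
    """Partition a regression run into passes and an emulator debug queue."""
    debug_queue = []
    for test in tests:
        if not run_on_prototype(test, known_fails):
            # Failing tests are escalated to emulation, where internal
            # signal visibility makes root-causing practical.
            debug_queue.append(test)
    passed = [t for t in tests if t not in debug_queue]
    return passed, debug_queue

# Hypothetical regression: "dma" fails on the prototype and gets escalated.
passed, debug_queue = triage(["boot", "dma", "gfx"], known_fails={"dma"})
```

The design point mirrored here is that the prototype stays busy with the remaining tests while the debug queue is worked through on the emulator.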

So design/verification teams have three options for hardware-enabled verification:

  • They can use pure emulation, with all RTL in hardware and test benches either synthesized into the emulator or connected via acceleration. This option provides great, simulation-like debug on the hardware side, though the speed may not satisfy notoriously impatient software developers.
  • A second, and faster, approach uses virtual platforms for the compute subsystem, intelligently connected to an emulator hosting the items that require full accuracy, such as GPUs (reports show a speed improvement of between 50X and 200X). This approach also lets you execute software-driven tests faster (users report up to a 10X speed increase).
  • Finally, a third approach couples emulation with FPGA-based prototyping, which is ideal when the hardware has matured and you need the speed that will satisfy software developers.

Also, since the Palladium emulation database runs out of the box on the Protium platform, re-using the same front-end compile, users can trade off fast automated bring-up with reasonable prototyping speed against more time-consuming manual optimization for even higher speed. Cadence has seen reports of 5MHz to 10MHz out of the box for FPGA-based prototyping using the fully automated flow, with potential to reach tens of MHz, up to 100MHz, by manually optimizing with partitioning guidance and black-boxing.
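A quick back-of-the-envelope calculation shows what those clock rates mean for wall-clock turnaround. The 10MHz (automated) and 100MHz (hand-optimized) figures come from the reports above; the cycle count is an arbitrary example, not a published benchmark.

```python
# Wall-clock time for a long software run at the reported prototype clock rates.
# CYCLES is a hypothetical workload size chosen only for illustration.

CYCLES = 3_600_000_000  # e.g. a multi-billion-cycle boot/test sequence (hypothetical)

def wall_clock_seconds(cycles, freq_hz):
    """Time to execute a given cycle count at a given effective clock rate."""
    return cycles / freq_hz

automated = wall_clock_seconds(CYCLES, 10_000_000)   # automated flow at 10MHz
optimized = wall_clock_seconds(CYCLES, 100_000_000)  # manually optimized at 100MHz
```

At these rates the example workload takes 360 seconds on the automated flow versus 36 seconds after manual optimization, which is the speed-versus-bring-up-effort tradeoff the article describes.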

You can learn more about Protium capabilities in an excellent webinar HERE.
