These days, verification starts as soon as design begins and continues well past the end of the design phase. Simulation is used extensively at every stage and goes a long way toward validating a design. However, for many types of designs, especially those that process complex data streams, emulation is required to ensure proper operation. In a recent webinar, Aldec not only discusses the limitations of simulation-only verification for ASICs and large FPGAs, but also shows how to create common testbenches that serve emulation as well, improving efficiency and aiding problem diagnosis.
In the webinar titled Common Testbench Development for Simulation and Prototyping, Alexander Gnusin goes into great detail about the reasons for using a common testbench between simulation and emulation, and then he dives into the specifics of how to make it happen.
Given the relatively slow speed of simulation, emulation is the only way to run enough cycles to ensure that large designs operate properly on large data frames. Simulation time also grows as more of the system is included in the verification scope. Real hardware environments differ from simulation models, so this must be factored in as well. Lastly, as reliable as synthesis and STA are, it is still imperative to simulate at the gate level to confirm that the hardware implementation is correct.
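To get a feel for the scale gap, here is a rough back-of-envelope sketch in Python. The throughput figures are illustrative assumptions, not numbers from the webinar: large-SoC RTL simulation often runs in the tens-to-hundreds of effective clock cycles per second, while emulation typically runs around a megahertz.

```python
# Back-of-envelope comparison of simulation vs. emulation runtime.
# Both throughput numbers below are assumptions for illustration only.

SIM_HZ = 100          # assumed simulated DUT clock cycles per second
EMU_HZ = 1_000_000    # assumed emulated DUT clock cycles per second

cycles = 2 * 10**9    # e.g., enough cycles to stream many large data frames

sim_days = cycles / SIM_HZ / 86_400      # seconds per day
emu_minutes = cycles / EMU_HZ / 60

print(f"simulation: {sim_days:.0f} days")      # ~231 days
print(f"emulation:  {emu_minutes:.0f} minutes")  # ~33 minutes
```

Under these assumed rates, a two-billion-cycle run moves from months of simulation to roughly half an hour of emulation, which is why emulation is the only practical option for such workloads.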
Yet a number of issues must be addressed to enable emulation, starting with a fundamental one: cycle-based I/O in hardware designs. Cycle-based stimulus can overload emulator interfaces. Alexander discusses eliminating this bottleneck by adding extra code in the hardware domain that removes the need for cycle-based communication, and by lowering the overall transaction frequency. He also talks about adding synthesizable verification components, such as drivers, monitors, responders, and checkers.
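The effect of moving from cycle-based to transaction-based communication can be modeled with a small Python sketch (this is a conceptual illustration, not Aldec code). Each "crossing" stands for one host-to-emulator message; the packet and cycle counts are arbitrary assumptions:

```python
# Conceptual model of emulator interface load: cycle-based stimulus
# crosses the host<->emulator boundary every clock cycle, while a
# transaction-based scheme sends one message per packet and lets a
# synthesizable driver inside the emulator expand it into cycles.

def cycle_based_crossings(n_packets, cycles_per_packet):
    # One interface crossing for every clock cycle of stimulus.
    return n_packets * cycles_per_packet

def transaction_based_crossings(n_packets):
    # One interface crossing per whole packet.
    return n_packets

packets, cycles_each = 10_000, 512   # assumed workload
print(cycle_based_crossings(packets, cycles_each))   # 5120000
print(transaction_based_crossings(packets))          # 10000
```

The several-hundred-fold reduction in interface traffic is what the extra synthesizable code in the hardware domain buys, at the cost of moving driver and monitor logic into the emulator.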
The webinar takes time to discuss how common testbenches should be set up. Alexander goes through the coding steps using Aldec’s HES-DVM, a hybrid verification platform. Aldec uses SCE-MI for function-based, transaction-level co-emulation. The common testbench has two top levels – an HDL top and an HVL top. The HVL top uses SystemVerilog and, optionally, UVM. The HDL top uses synthesizable SystemVerilog RTL with extra SCE-MI2 compiler features.
SCE-MI2 compiler features provide some helpful coding enhancements for emulation. Among these are implicit state machines, shared registers, clock and reset generation templates, hierarchical read access, reading and writing of memory arrays, and file I/O. The webinar provides examples of each to help clarify what is offered.
Alexander covers some useful techniques that improve common testbench effectiveness. LFSR-based, seed-programmable randomization can reduce connection load while providing useful stimulus. He also suggests using synthesizable FIFO-based scoreboards for datapath checking. To verify the equivalence of packets or data chunks, Alexander suggests compressing them into short signatures using CRC or FCS methods and performing the datapath check on those signatures alone. End-of-test statistical verification can be based on comparing configurable counter values in the design and the verification components.
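These techniques combine naturally. The following Python sketch models the flow conceptually (it is not Aldec's implementation, and in practice these pieces would be synthesizable SystemVerilog): a seed-programmable LFSR generates stimulus, packets are compressed to short signatures (CRC-32 stands in for the CRC/FCS methods mentioned), and a FIFO scoreboard compares only the signatures:

```python
from collections import deque
import zlib

def lfsr_stream(seed, n, taps=(16, 14, 13, 11), width=16):
    # Fibonacci-style LFSR; a different seed yields a different,
    # reproducible stimulus sequence. Taps are 1-indexed bit positions.
    state = seed
    for _ in range(n):
        bit = 0
        for t in taps:
            bit ^= (state >> (t - 1)) & 1
        state = ((state << 1) | bit) & ((1 << width) - 1)
        yield state

def signature(packet_bytes):
    # Compress a packet to a short signature instead of storing the
    # whole packet; CRC-32 is used here for illustration.
    return zlib.crc32(packet_bytes)

class FifoScoreboard:
    # Expected signatures are pushed in order at the driver side and
    # popped for comparison at the monitor side.
    def __init__(self):
        self.expected = deque()
        self.mismatches = 0

    def push_expected(self, sig):
        self.expected.append(sig)

    def check_actual(self, sig):
        if not self.expected or self.expected.popleft() != sig:
            self.mismatches += 1

# Drive LFSR-derived packets through an identity "datapath" and check them.
sb = FifoScoreboard()
for word in lfsr_stream(seed=0xACE1, n=8):
    pkt = word.to_bytes(2, "big")
    sb.push_expected(signature(pkt))   # what the DUT should produce
    sb.check_actual(signature(pkt))    # what the DUT actually produced
print("mismatches:", sb.mismatches)   # 0 for a correct datapath
```

Because only signatures cross the host-emulator boundary and sit in the scoreboard, both the connection load and the scoreboard storage stay small regardless of packet size, which is the point of the compression step.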
Aldec’s HES-DVM allows for optimization to improve speed through several methods that Alexander discusses in the webinar. Once the common testbench has been assembled, Aldec’s Riviera Profiler can provide estimates of the maximum emulation speedup.
There is not space here to go through the whole process, but the remainder of the webinar lays out the common testbench development process and then goes through a design example. This detailed and informative webinar is available for replay on the Aldec website.