Army of Engineers on Site Only Masks Weakness
by Jean-Marie Brunet on 05-17-2016 at 7:00 am

Hardware emulation was conceived in the 1980s to address a design verification crisis looming on the horizon. In those days, the largest digital designs were stressing the limits of the software-based, gate-level simulator that was the mainstream tool for the task.

It was anticipated, and confirmed in short order, that adopting hardware in the form of field-programmable devices to perform functional design verification would bring what was becoming an intractable problem under control. It would do so not only for the largest designs of the time, but also provide a path to keep that advantage as design sizes grew in the future.

Another major benefit inherent in the adoption of hardware to verify a design under test (DUT) was the ability to exercise the DUT with live, real-world traffic, albeit with a caveat. The fastest early emulators ran in the ballpark of 5 MHz, not enough to keep up with real traffic clocking at 100 MHz. The problem was solved by inserting speed adapters – conceptually, first-in-first-out (FIFO) buffers – between the physical world and the I/O of the emulator.
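
To make the idea concrete, here is a toy model of such a speed adapter: a bounded FIFO that absorbs traffic arriving at the fast interface clock and drains it at the slower emulator clock. It is a sketch only; the class and method names are hypothetical and do not reflect any vendor's implementation.

    from collections import deque

    class SpeedAdapter:
        """Toy model of a FIFO speed adapter between fast real-world
        traffic and a slower emulated DUT. Illustrative only."""

        def __init__(self, depth):
            self.depth = depth
            self.fifo = deque()

        def push_from_wire(self, packet):
            # Called at the fast interface clock (e.g. 100 MHz).
            if len(self.fifo) >= self.depth:
                return False  # buffer full: apply back-pressure to the source
            self.fifo.append(packet)
            return True

        def pop_to_emulator(self):
            # Called at the slow emulator clock (e.g. 5 MHz).
            return self.fifo.popleft() if self.fifo else None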

The two advantages, though, came with a steep price. Not a purchase price, since it was well known that accelerating time to market had a profound, positive impact on profits that would offset the expensive acquisition of an emulator. The real price was the rather time-consuming, cumbersome, and frustrating task of mapping the DUT onto the FPGAs.

The problem arose from the FPGA’s limited number of I/O pins relative to the logic it could hold – a relationship captured by Rent’s Rule – which complicated partitioning the DUT across the programmable devices. To cope with the severe limitation, several interconnection schemes were devised over time, from nearest neighbor to full and partial crossbars, and synchronous and asynchronous time-domain pin multiplexing. None eliminated the problem.
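
For intuition on why pins were the choke point, Rent’s Rule relates a block’s external terminal count T to its gate count g roughly as T = t * g^p. The sketch below uses assumed constants (t = 4, p = 0.6), purely for illustration:

    # Illustrative Rent's Rule estimate: T = t * g**p
    t, p = 4.0, 0.6  # assumed Rent constant and exponent, not measured values
    for gates in (100_000, 1_000_000, 10_000_000):
        terminals = t * gates ** p
        print(f"{gates:>12,} gates -> ~{terminals:,.0f} signals to expose")

Even with generous assumptions, a partition of a few million gates wants tens of thousands of cut signals, while a large FPGA package exposes at most a couple of thousand user I/O pins, which is what drove the multiplexing schemes above.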

By the mid-to-late 1990s, two leading suppliers ditched commercial FPGAs and replaced them with custom devices implementing custom emulation architectures. These were expected to alleviate and ultimately eliminate the bottlenecks. And they did.

After a decade of successful adoption of custom-based emulators, rising interest in FPGA prototyping platforms – proposed by some vendors not only for early software validation but also as an alternative to custom-based emulators – seemed to change the landscape.

This is not the case. The problem remains, and is now worse.

An FPGA prototype trades off features and capabilities in favor of attractive cost advantages and fast execution speeds. Both are requirements for software validation by a large team of software developers, where each developer may be assigned one copy of the prototype. The long setup time, however, remains a serious problem. With today’s SoC complexity reaching into the hundreds of millions of gates, if not billions, bring-up may extend to several months and never takes a week or less.

What would a supplier of FPGA emulators then do?


Compensate for the weakness by committing an army of engineers, partly R&D personnel and partly application engineers. They provide on-site support, work side by side with lead design engineers, and ensure that the customer’s designs are ready for emulation, often after a few calendar months. This significant involvement is mandatory not only during an evaluation before purchasing the emulator, but also during initial adoption, and it may extend, with even greater bandwidth, into production use.

It may seem that as long as the commitment is shouldered by the emulator vendor, the customer may enjoy the benefits without penalties. Again, this is the wrong perception.

Being so dependent on the supplier is worrisome for three reasons:

First, requiring the involvement of lead design engineers – scarce resources in any IC design organization – in design bring-up on the emulator is a proposition few can afford.

Second, the sheer number of engineers required to deploy an FPGA-based emulator, assuming they are even available for hire, calls the cost advantage into question.

Third, a company that must rely on an emulator vendor’s army of engineers for a mission-critical task gives the vendor excessive leverage, and that support could be scaled back at any time.

Instead, the company needs to rely on its own engineers to run the emulator effectively. That means setting up and training an internal support organization. With FPGA-based emulators, however, implementing such a proposition would add a significant financial burden.

In fact, with custom-based emulators, long gone are the days when mapping a DUT onto the machine was slow, unwieldy, and aggravating. Today, custom-based emulators are scalable and efficient, and can be deployed with minimal resources, minimal design knowledge, and limited involvement from the supplier. Choosing between the two seems like a straightforward decision.
