
The Art of Flows, Part I
by Paul McLellan on 03-29-2012 at 1:00 am

These days, the flows used to build semiconductor designs are rightly regarded as part of the intellectual property of the company that develops and uses them.

But it wasn’t always that way.

In the 1970s, before EDA really existed (unless you count X-Acto knives for cutting rubylith), nobody even thought about flows. A software program was compiled and linked by hand. It consisted of only a few files, and you knew which ones you had altered and so which ones needed recompilation.

In 1977 Stuart Feldman at Bell Labs created the make program. It was focused on building C-based applications such as the Unix operating system and its utilities. It read in the dependencies of the various programs on the various files, worked out which ones needed to be recompiled, then ran the jobs and rebuilt the target application. It suffered from a number of weaknesses that people have tried to fix over the years, most notably that it required the user to specify the dependencies manually. If you forgot one, say that a particular source file included a header file, then a change to that header would not cause the source to be recompiled. As a result, most programming projects adopted the conservative practice of also recompiling the entire product from scratch every night: the nightly build. Another weakness was that make was created in the days of small projects on a single minicomputer, so it was not good at taking advantage of large farms of processors, nor at handling the fact that dozens of programmers might all be working on the project concurrently.
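
To make this concrete, here is a minimal sketch of the rule at the heart of make, written in Python rather than make syntax and using made-up file names: a target is rebuilt only if it is missing or older than one of its listed dependencies, which is exactly why an unlisted header file can silently break the build.

    import os

    def needs_rebuild(target, dependencies):
        # make's core rule: rebuild if the target is missing or any
        # listed dependency has a newer modification time.
        if not os.path.exists(target):
            return True
        target_time = os.path.getmtime(target)
        return any(os.path.getmtime(dep) > target_time for dep in dependencies)

    # Hypothetical example: parser.h is *not* listed as a dependency, so
    # editing it never triggers a recompile of parser.o -- the weakness
    # described above.
    if needs_rebuild("parser.o", ["parser.c", "lexer.h"]):
        print("recompile parser.c")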

EDA flows are far more complicated, involving tens of thousands of files with complex interdependencies. Relying on the user to specify all the dependencies is simply not going to work: there are too many of them, so errors are bound to occur. This matters especially for rarely changed files that the user might not even think of as inputs. If an error is corrected in a .lib file and that file is not treated as an input to synthesis, then the required re-synthesis will not get done.
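
As an illustration, a flow manager that knows the full dependency graph can propagate a change automatically. The sketch below uses hypothetical file and step names (not from any real flow) to show how fixing an error in a .lib file forces re-synthesis and everything downstream of it:

    from collections import defaultdict

    # Hypothetical slice of a flow graph: each file or step maps to the
    # steps that consume it.
    consumers = defaultdict(list, {
        "stdcells.lib": ["synthesis"],
        "rtl.v":        ["synthesis"],
        "synthesis":    ["place_route"],
        "place_route":  ["static_timing", "drc", "lvs"],
    })

    def invalidate(changed):
        # Mark everything downstream of a changed input as needing a rerun.
        stale, frontier = set(), [changed]
        while frontier:
            node = frontier.pop()
            for downstream in consumers[node]:
                if downstream not in stale:
                    stale.add(downstream)
                    frontier.append(downstream)
        return stale

    print(invalidate("stdcells.lib"))
    # -> {'synthesis', 'place_route', 'static_timing', 'drc', 'lvs'} (order may vary)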

Plus, of course, some parts of EDA flows run for literally days at a time, and so the fall-back to rebuilding everything nightly is simply not going to work. Further, EDA flows create large numbers of outputs. While a complete flow might generate GDSII layout as the ultimate output, it also generates static timing validations, DRC checks, LVS checks and so on.

The hardware infrastructure for EDA is more complex too, perhaps involving hundreds or even thousands of servers with different capabilities in terms of processor performance, memory size, disk capacity and so forth. Jobs cannot simply be randomly assigned to servers. And then there is the complex license infrastructure, whereby a license may not always be available when you want one and you just have to wait.
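
As a rough sketch of why placement matters, the fragment below (with invented server and license names) only starts a job on a machine with enough free memory and cores, and only if a license is actually free; otherwise the job has to wait:

    def place_job(job, servers, free_licenses):
        # Simplified placement: need a free license and a server with
        # enough memory and cores; otherwise the job waits in the queue.
        if free_licenses.get(job["license"], 0) == 0:
            return None  # no license available right now
        for server in servers:
            if (server["free_mem_gb"] >= job["mem_gb"]
                    and server["free_cores"] >= job["cores"]):
                return server["name"]
        return None  # no suitable machine free

    servers = [
        {"name": "farm-017",  "free_mem_gb": 64,  "free_cores": 8},
        {"name": "bigmem-02", "free_mem_gb": 512, "free_cores": 32},
    ]
    job = {"mem_gb": 256, "cores": 16, "license": "signoff_sta"}
    print(place_job(job, servers, {"signoff_sta": 1}))  # -> bigmem-02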

The first attempts to capture flows were mostly based around scripting languages, in particular PERL (and, within tools, often TCL). These encapsulated the flow but did nothing to automate discovering which files needed to be recreated, or which hardware and licenses were most appropriate.

In fact EDA flows are now so complex that they have their own experts: engineers tasked with creating the flows and then deploying them to the other users. So the most important aspects of a modern EDA flow are:

  • developing and debugging the flow
  • deploying the flow to multiple users
  • managing compute resources such as server farms and license keys
  • managing productivity across global teams
  • taming complexity while remaining open and general

RTDA’s FlowTracer addresses these issues and will be looked at in more detail in Part II. If you can’t wait, then download the free e-book The Art of the Flow from here, or download the FlowTracer white paper “Two weeks to tapeout, do you know where your files are?”
