Solutions for Variation Analysis at 16nm and Beyond

by Tom Simon on 09-22-2016 at 7:00 am

Variation remains the tough nut to crack for advanced process nodes. The familiar refrain of lower operating voltages and higher performance requirements makes process variation an extremely important design consideration. As far back as the early 2000s, design teams were looking for a better approach to modeling variation than simply adding margin, which meant trading performance for yield. Back then it was thought that statistical static timing analysis (SSTA) would provide a viable solution. However, that did not pan out right away: the sign-off tools available at the time simply were not up to the task.

Another approach in use is Advanced OCV (AOCV), which attempts to account for path depth by modeling chains of the cell in question with inserted parasitic elements. AOCV suffers from omitting a number of significant design effects: foremost, it does not look at all of the timing arcs, and it ignores side inputs. Compared to SSTA, AOCV tends to be either extremely optimistic or extremely pessimistic.
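A toy Monte Carlo sketch (my own illustration, not from the Cadence paper) shows why flat per-stage margins and statistical treatment diverge: independent per-stage variation accumulates along a path roughly as the square root of the stage count, while a fixed per-stage margin grows linearly, becoming increasingly pessimistic on deep paths. All numbers below are made up for illustration.

```python
import math
import random

random.seed(0)

STAGE_MEAN = 100.0   # nominal per-stage delay, ps (illustrative)
STAGE_SIGMA = 5.0    # per-stage random variation, ps (illustrative)

def mc_path_sigma(n_stages, trials=20000):
    """Monte Carlo estimate of total path-delay sigma for n_stages
    independent stages with Gaussian per-stage variation."""
    totals = [sum(random.gauss(STAGE_MEAN, STAGE_SIGMA) for _ in range(n_stages))
              for _ in range(trials)]
    mean = sum(totals) / trials
    var = sum((t - mean) ** 2 for t in totals) / (trials - 1)
    return math.sqrt(var)

for n in (1, 4, 16):
    mc = mc_path_sigma(n)
    stat = STAGE_SIGMA * math.sqrt(n)   # statistical (root-sum-square) growth
    flat = STAGE_SIGMA * n              # flat per-stage margin
    print(f"{n:2d} stages: MC sigma ~{mc:5.1f} ps, "
          f"sqrt(N) model {stat:5.1f} ps, flat margin {flat:5.1f} ps")
```

For 16 stages the simulated sigma lands near the sqrt(N) prediction of 20 ps, while a flat margin would budget 80 ps, which is the kind of pessimism statistical approaches avoid.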

In a recent white paper, Cadence argues that a statistical OCV approach offers the best solution for modeling variation. Without the compute and data expense of SSTA, statistical OCV takes into account pin to related-pin dependencies, input slew, and output load, and provides the variation information needed for the signoff flow. The paper is authored by Ahmed Elzeftawi, Sr. Principal Product Manager, and Ken Tseng, Software Engineering Group Director, at Cadence.

The paper goes on to say that the Liberty Technical Advisory Board has created a unified Liberty Variation Format (LVF) document, which couples OCV modeling with the timing, noise, and power models. The Liberty Technical Advisory Board represents a broad consortium of design tool providers, foundries, and semiconductor companies. By carrying statistical mean and sigma values, tools using this method can report timing values as probabilities or as discrete representations.
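As a rough illustration of what this looks like in practice (an abbreviated sketch of my own, not an excerpt from the LVF specification, and the group names should be checked against the actual Liberty documentation), an LVF-annotated timing arc carries sigma tables alongside the usual nominal delay tables:

```
/* Illustrative fragment only -- names and values are hypothetical,
   abbreviated to show the general shape of LVF annotation. */
pin (Y) {
  timing () {
    related_pin : "A";
    cell_rise (delay_template) {
      values ("0.021, 0.043, 0.085");        /* nominal mean delays, ns */
    }
    ocv_sigma_cell_rise (delay_template) {
      sigma_type : "early_and_late";
      values ("0.002, 0.004, 0.009");        /* one-sigma variation, ns */
    }
  }
}
```

The key point is that each nominal table gets a companion sigma table indexed by the same input slew and output load axes, which is what lets a signoff tool treat arc delays statistically.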

Producing these models requires examining every transistor in a cell and determining which ones contribute most to variation. Each timing arc within the cell must be analyzed, and the impact of input slew and output load must be included in the resulting models.
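The sweep described above can be sketched in Python (all function names and the closed-form delay model are hypothetical stand-ins; a real characterization flow drives SPICE-level Monte Carlo simulations, not a formula):

```python
import random
import statistics

random.seed(1)

def cell_delay(slew_ps, load_ff, vth_shift):
    """Hypothetical closed-form delay model standing in for a SPICE run.
    vth_shift models one sample of per-instance process variation."""
    return 20.0 + 0.3 * slew_ps + 1.5 * load_ff + 40.0 * vth_shift

def characterize_arc(slews, loads, samples=2000):
    """Build (mean, sigma) delay tables over the slew x load grid,
    running one Monte Carlo population per table point."""
    table = {}
    for slew in slews:
        for load in loads:
            delays = [cell_delay(slew, load, random.gauss(0.0, 0.05))
                      for _ in range(samples)]
            table[(slew, load)] = (statistics.mean(delays),
                                   statistics.stdev(delays))
    return table

tbl = characterize_arc(slews=(10, 50, 200), loads=(2, 8, 32))
for (slew, load), (mu, sigma) in sorted(tbl.items()):
    print(f"slew {slew:3d} ps, load {load:2d} fF -> "
          f"mean {mu:6.1f} ps, sigma {sigma:4.2f} ps")
```

In a real flow this inner loop runs per timing arc per cell, which is why variation-aware characterization is so compute intensive and why tool support matters.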

Cadence has extensive offerings for characterization and modeling that can be applied to standard cells, I/Os, memories, and mixed-signal blocks. Using Monte Carlo simulations as a reference, they report very good correlation with their characterization technology. The Cadence Virtuoso Liberate characterization suite has many elements. The foundation tool, Virtuoso Liberate, provides fast library characterization for standard cells and complex I/Os. The Virtuoso Liberate LV solution handles library validation, providing functional equivalence and data consistency checking.

To handle variation, the suite offers Virtuoso Variety, which models random and systematic process variation and can generate Advanced OCV, Statistical OCV, and LVF models. In addition, Virtuoso Liberate MX targets custom and compiled memories, and Virtuoso Liberate AMS provides mixed-signal characterization.

The Cadence Innovus Implementation System can take advantage of these models to speed up timing verification and improve performance. At the end of the paper, the authors present an example 1 GHz design with setup and hold slack for the top 200 paths, showing an average improvement of 150 picoseconds for setup and 200 picoseconds for hold.

It was inevitable that statistical approaches would be used to deal with variation. I remember having discussions with design managers 10 years ago about the promise of statistical approaches, and it's nice to see that they have now come to fruition. Certainly at nodes beyond 16nm this technology will be more than a "nice to have". If you are interested in reading the entire white paper, it can be found here.