
Designing for Variation

by Paul McLellan on 08-17-2015 at 7:00 am

There is a widespread phenomenon in chip design: new effects creep up on you. At first they are so small that you can ignore them. Then you can add a little pessimism to your timing budget, or to whatever else gets affected. But eventually the effects go from second order to first order. You certainly can’t ignore them any longer, and the guard bands required just to stay pessimistic use up all the margin there is. Finally, you have to be accurate.

Variation is one of these areas. Of course we had variation in 90nm processes too, but it was too small to cause problems. As we get down below 28nm, with FinFETs, double patterning and the ultra-low voltages required by IoT, variation becomes too significant to ignore. Apparently a rule of thumb is that double patterning requires 20X as many SPICE simulations. I like to say “you can’t ignore the physics any more,” mostly because it makes it sound as if I understand the physics myself. Increasingly, design groups are trying to ensure that their design will yield under very large variation, to 6 sigma or even 7 sigma. And they want mixed targets too, strange combinations like 7 sigma for the bit-cell of a memory, 6 sigma for the sense amps and 3 sigma for the digital periphery.
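To see why those sigma targets are so punishing, consider what brute-force Monte Carlo would cost. The one-sided Gaussian tail probability beyond n sigma tells you roughly how many random samples you need before you see even one failure. This is a back-of-the-envelope sketch, not anything specific to Solido's tools:

```python
import math

def one_sided_fail_prob(sigma):
    """One-sided Gaussian tail probability beyond `sigma` standard deviations."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

for s in (3, 6, 7):
    p = one_sided_fail_prob(s)
    # Naive Monte Carlo needs on the order of 1/p samples just to
    # observe a single failure, let alone estimate the rate accurately.
    print(f"{s} sigma: fail prob ~ {p:.2e}, ~{1 / p:.1e} samples per failure")
```

At 3 sigma a few thousand SPICE runs suffice; at 6 sigma you are looking at on the order of a billion, and at 7 sigma close to a trillion, which is why brute force is simply not an option for the memory bit-cell.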

It used to be that we genuinely had “corners” that were actually at the corners: slow-slow, fast-fast and so on. But now the number of PVT (process, voltage, temperature) corners is exploding, and it is not at all obvious which ones matter and which are covered by other simulations. Analog/RF and memory are perhaps the worst affected, but even such safe stuff as digital standard cells cannot ignore variation, and the number of simulations required for characterization has exploded.
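The explosion is simple combinatorics: the corner count is the product of the options along each axis. The axes and values below are hypothetical, just to illustrate the multiplication; a real PDK defines its own process splits and parasitic corners:

```python
import itertools

# Hypothetical corner axes -- real PDKs define their own splits.
process = ["ss", "sf", "tt", "fs", "ff"]
voltage = [0.54, 0.60, 0.66]                      # nominal 0.6 V +/- 10%
temperature = [-40, 25, 125]                      # degrees C
parasitics = ["cworst", "cbest", "rcworst", "rcbest", "typical"]

corners = list(itertools.product(process, voltage, temperature, parasitics))
print(len(corners))  # 5 * 3 * 3 * 5 = 225 simulations per measurement
```

Add a second supply rail or a couple of extra process splits and the product quickly runs into the thousands, and that is before any Monte Carlo sampling on top of each corner.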

The challenge is that doing this requires a lot of simulations: thousands, or in some cases billions. Most of these simulations are wasted, since they are not the ones at the extremes. What you would really like is a tool that ran only the simulations that were necessary and skipped the ones that were not, or that used machine learning to decide which simulations were actually required.
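The general idea can be sketched in a few lines. This is a generic adaptive-screening illustration, not Solido's actual algorithm: run a small batch of real simulations, fit a cheap surrogate model to them, then spend the expensive simulations only on the candidates the surrogate predicts are near the worst case. The `expensive_spice_sim` function here is a hypothetical stand-in for a real SPICE run:

```python
import random

def expensive_spice_sim(x):
    """Stand-in for a SPICE run: delay worsens as threshold-voltage shift x grows."""
    return 1.0 + 0.8 * x + 0.05 * x * x

random.seed(1)
candidates = [random.gauss(0, 1) for _ in range(10000)]

# Seed pass: a handful of real simulations to fit a cheap surrogate.
seed_points = random.sample(candidates, 20)
seed_results = [(x, expensive_spice_sim(x)) for x in seed_points]

# One-variable least-squares slope serves as the surrogate model.
mx = sum(x for x, _ in seed_results) / len(seed_results)
my = sum(y for _, y in seed_results) / len(seed_results)
slope = sum((x - mx) * (y - my) for x, y in seed_results) / \
        sum((x - mx) ** 2 for x, _ in seed_results)

# Screening pass: rank all candidates by predicted badness, then run the
# expensive simulator only on the predicted worst cases.
scored = sorted(candidates, key=lambda x: slope * x, reverse=True)
worst = max(expensive_spice_sim(x) for x in scored[:50])  # 70 sims, not 10000
```

The saving is the point: 70 expensive simulations instead of 10,000, while still landing on the true worst case because the surrogate only has to rank candidates correctly, not predict their values exactly.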


Well, that is basically what Solido does. Their Variation Designer tool takes in:

  • the netlist (or there is a direct interface to Cadence’s Virtuoso ADE)
  • PDKs from the foundry

and then it runs the simulations in your SPICE engine (e.g. Spectre, HSPICE, BDA AFS), interfacing to SGE, LSF or RTDA, whichever is already managing your compute resources.

Solido is one of those EDA companies that has been around for a long time, founded in 2005. If I had a dollar for every EDA company founded to address a problem too early, I’d be rich. But 28nm arrived and suddenly variation was a huge deal, and instead of them knocking on reluctant design groups’ doors, their own doors were getting knocked on (their doors are in Canada). Initially it was the mobile people, who were driving fast to advanced nodes; now the mainstream is coming through. They have thousands of users across a few dozen companies. Nobody these days likes their name being used as a reference, but they have about 8 of the top 12 semiconductor companies as customers. Their website lists MicroSemi, Applied Micro, Sidense, Cypress, Huawei, nVidia, Broadcom, etc. So right at the bleeding edge. As a private company they don’t publish all the numbers, but they are profitable: 40 people today, with plans to be 50 by the end of the year.

The next easy time to see them is at TSMC’s OIP event coming up next month. Register here. The Solido website is here.
