When I studied mathematical analysis, one of the things we had to prove turned out to be surprisingly difficult: if a continuous function is below zero at one point and above zero at another, then there must be a point at which its value is exactly zero. In effect, a continuous function can’t get from one side of a line to the other without crossing it. OK, mathematicians like to spend time proving things that are “obvious,” since sometimes they turn out not to be.
How about this, more relevant to semiconductor design: if you simulate a design at the SS corner and the FF corner for some particular parameter, then any other corner will fall between those two values. I mean, to get from slow to fast you have to go through everything in between, right? Isn’t it obvious? Wrong.
Variation causes weird things to happen. This was not a problem at 90nm, but from 28nm down you can’t just simulate those big FF and SS corners and get away with it. Those simulations (at a given voltage and temperature) will define a range of sorts, but you cannot go from there to the assumption that every other corner will fall inside that range. It is as if the function can get from one side of the line (SS) to the other (FF) without ever going through typical.
For example, above are a few hundred simulations of a PLL’s duty cycle at all sorts of corners, including SS and FF. So all the other values “should” fall in between. But look at the distribution: SS is the dot at the far left, pretty much where you would expect to find it, but FF is in the middle of the distribution. If you assumed that all other process corners would fall between those two points, you would be very wrong.
So it is clear that if you are designing complex analog circuits at 28nm or below, you need to run enough simulations to find out what the real distribution is. In the diagram, on the left is a simple non-variation-aware flow. On the right is a flow starting to take variation into account: pick all the PVT corners that you need and do the simulations. The trouble is that this is prohibitively expensive. In certain cases, such as memories, where these problems are at their worst (there are bit-cells, rows, columns, sense-amps, and more, each replicated enormously), brute force would mean on the order of a billion simulations to be sure. In simpler cases it might be thousands. Words like “geological time scale” and “age of the universe” spring to mind. That is not going to be the way to handle this problem.
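To see where numbers like “a billion simulations” come from, here is a rough back-of-envelope calculation (the figures are illustrative assumptions, not from the book): a bit-cell replicated ten million times on a chip gets only a tiny slice of the chip-level failure budget, and brute-force Monte Carlo needs enough samples to actually observe failures at that probability.

```python
# Back-of-envelope: why brute-force Monte Carlo explodes for memories.
# All numbers here are illustrative assumptions, not from the book.

def required_mc_samples(chip_fail_target, n_replicas, observations=10):
    """Samples needed to observe roughly `observations` failures of one
    cell, given that each of n_replicas cells may only consume an equal
    share of the chip-level failure budget."""
    per_cell_p = chip_fail_target / n_replicas
    return observations / per_cell_p

# 10 million bit-cells sharing a 1% chip-level failure budget:
n = required_mc_samples(chip_fail_target=0.01, n_replicas=10_000_000)
print(f"{n:.0e} simulations")  # on the order of 1e10
```

Even with a generous budget, the per-cell failure probability is around 10⁻⁹, so simply sampling until you see failures requires billions of runs, which is exactly the “age of the universe” problem.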
What is required is a better way to manage this process so that only a subset of the simulations is done. The flow becomes: pick some good corners, do the simulations, and see what has been learned. Pick some more corners. Continue until you are confident that all the important corners have been simulated. The problem is that this cannot be done by hand; it requires a tool to manage the process and do the machine learning. The diagram above shows a little more detail. On the left is the old manual process of simulating a predetermined list of corners. On the right we add intelligence and analysis.
All these diagrams come from the book Variation-Aware Design of Custom Integrated Circuits: A Hands-on Field Guide by Trent McConaghy, Kristopher Breen, Jeffrey Dyck and Amit Gupta of Solido. I should emphasize a couple of things about it. This is not some theoretical analysis of variation for research groups; it is a practical guide for actual design groups. And it is not a user guide to Solido’s tools; it is a guide to what needs to get done, in some sense what needs to get simulated, and while I’m sure Solido are not going to complain if you decide to use their tools, the book is useful even if you do not. It strikes a balance between being deep on theory (and thus of little use to a practical designer) and being an extended application note on Variation Designer (and thus of little use to anyone who is not a hands-on user).
The book is available on Amazon here. There you can also get a free sample and you can even try it free (on any Kindle including phones and tablets) for a week.
For anyone who is interested (get a life!), the proof of the continuous-function result I started with relies on another, more primitive fact: any non-empty set of real numbers that is bounded above has a least upper bound. Once you have that, the continuous-function problem is easy. The set of values of x at which the function is negative is bounded above, so it has a least upper bound. By continuity, the function’s value at that bound can be neither negative nor positive (either would contradict its being the least upper bound), so it must be exactly zero.
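Written out more carefully, the argument looks like this (a sketch in LaTeX; the theorem/proof environments assume amsthm):

```latex
% Intermediate value theorem via the least-upper-bound property.
\begin{theorem}
Let $f$ be continuous on $[a,b]$ with $f(a) < 0 < f(b)$.
Then there exists $c \in (a,b)$ with $f(c) = 0$.
\end{theorem}
\begin{proof}[Proof sketch]
Let $S = \{\, x \in [a,b] : f(x) < 0 \,\}$. Then $S$ is non-empty
($a \in S$) and bounded above by $b$, so $c = \sup S$ exists.
If $f(c) < 0$, continuity gives $\delta > 0$ with $f < 0$ on
$(c, c+\delta)$, contradicting that $c$ is an upper bound of $S$.
If $f(c) > 0$, continuity gives $f > 0$ on $(c-\delta, c]$, so
$c - \delta$ is a smaller upper bound, contradicting minimality.
Hence $f(c) = 0$.
\end{proof}
```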