At IEDM last week Coventor hosted a panel session, as they do each year. The theme this year was surviving variation. The panel was hosted by someone whose name is familiar around here, Dan Nenni. The panel that Coventor had put together had people from all sorts of different slots in the semiconductor design/supply chain. Unfortunately the participant companies would not release the slides, so this is all from my scribbled notes on the evening.
First up was Rich Wise of Lam Research. He emphasized that variation is really about the limits to semiconductor yield: if variation is high, yield is poor, and you can’t ramp a process to volume. Indeed, ramping a process to volume is largely about getting variation under control.
Next was Jan Hoentschel of GlobalFoundries. Of course keeping variation under control is absolutely key for a fab. Firstly, it is the key component for yielding a new process (or an existing one for that matter, fabs do occasionally “lose” the process). But foundries have another way of dealing with variation besides reducing it: they can put large variation parameters into their PDKs. This just transfers the problem elsewhere, though, and if there is too much variation it simply becomes impossible to design for the process. Since foundries only make money when designs go into production, this is economic suicide.
Next was David Fried, Coventor’s CTO. One way to get variation under control is to run lots of wafers, but with a modern process that takes too long and costs too much. So the alternative is to model the process (would that be with Coventor’s SEMulator 3D, by chance?), which allows you to add variation at the process-step level and then measure the impact on yield, timing and so on without building wafers. In old processes the connection was fairly simple (gate length a little long, timing a little slower, and so on). But a modern process has so many steps, and such complex multi-pattern lithography, that this is no longer the case, and a good physical model feeding into good TCAD models is required.
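As a rough illustration of what this kind of statistical process modeling does, here is a minimal Monte Carlo sketch in Python. The gate-length numbers and the simple linear delay model are hypothetical stand-ins for what would really be a physical process model feeding a TCAD model; the point is only how a per-step variation propagates into a spread on a circuit-level parameter without building wafers:

```python
# Minimal sketch (hypothetical numbers): propagate a process-step
# variation (gate-length sigma from litho/etch) into a toy delay model
# via Monte Carlo. A real flow would use a physical process model
# feeding TCAD, not this first-order approximation.
import random
import statistics

random.seed(0)  # make the sketch reproducible

L_NOM = 20.0    # nominal gate length in nm (hypothetical)
L_SIGMA = 0.8   # per-step gate-length sigma in nm (hypothetical)

def delay(gate_length_nm):
    """Toy model: normalized delay roughly proportional to gate length."""
    return gate_length_nm / L_NOM

# Draw many virtual "wafers" and look at the resulting delay spread.
samples = [delay(random.gauss(L_NOM, L_SIGMA)) for _ in range(10_000)]

print("mean delay :", round(statistics.mean(samples), 3))
print("delay sigma:", round(statistics.stdev(samples), 3))
```

With a linear model like this the delay sigma is just L_SIGMA / L_NOM; the value of a real process model is that for multi-patterned, many-step flows the mapping is nothing like this simple.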
Jeff Smith is from TEL (Tokyo Electron) America, which sells semiconductor equipment (aka tools). He pointed out that it is no longer enough for them to sell equipment and let the customer worry about getting the yield up. They need to work out how to use their own equipment to reduce variation, by doing things like building sacrificial trenches to cut off some of the more extreme out-of-spec steps.
Tom Dillinger of Oracle said that he felt more like he was in the left-field bleachers trying to call the strike zone accurately while everyone else was up near the plate: he is at the mercy of variation. If the manufacturing processes can’t get it down, then it shows up in his technology files and makes it much harder to actually do a design. Closing timing/power/electrical signoff just doesn’t converge if variation is too high. And modern processes, with double patterning in particular, are making things worse.
Last up was Tomasz Brozak of PDF Solutions. They sell software for managing process yield and for measuring (and thus improving) lithography, among other things. As I said above, improving yield is largely about reducing variation without letting costs get out of control.
The big message of the evening is that you cannot handle variation by worst-casing everything, because then you find that, having invested billions of dollars in a fab for a new process, and having moved performance up nicely for the typical values in the center of the distribution, the spread (standard deviation) is so large that worst case has hardly moved at all. Everyone, from equipment manufacturers to foundries to EDA suppliers, needs to replace worst-casing with accuracy. Everyone on the panel was making some contribution to that cause, narrowing the spread of the most important parameters so that design and high-yielding manufacturing are possible.
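To see the arithmetic behind that message, here is a toy numeric sketch (all numbers hypothetical): a new process improves the typical delay by 20%, but because the spread more than doubles, the 3-sigma worst case, which is what you sign off against, barely moves:

```python
# Toy illustration (hypothetical numbers): a faster typical value plus a
# wider spread can leave the signoff worst case almost unchanged.
def worst_case(mean_ps, sigma_ps, n_sigma=3):
    """Worst-case path delay in ps, assuming a normal distribution."""
    return mean_ps + n_sigma * sigma_ps

old = worst_case(100.0, 5.0)   # old process: 100 ps typical, tight spread
new = worst_case(80.0, 11.0)   # new process: 20% faster typical, 2x+ spread

print("old worst case:", old)  # 115.0 ps
print("new worst case:", new)  # 113.0 ps -- under 2% better
```

The typical value improved by 20 ps, but the worst case improved by only 2 ps, which is why narrowing the spread matters as much as moving the mean.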