Recently I attended a panel discussion on variability in semiconductor fabrication hosted by Coventor in conjunction with the IEEE IEDM conference in San Francisco. The IEEE bills the conference as “the world’s pre-eminent forum for reporting technological breakthroughs in the areas of semiconductor and electronic device technology, design, manufacturing, physics, and modeling.” It’s easy to see how this discussion was relevant to the conference focus. SemiWiki’s own Dan Nenni was the panel moderator.
The panelists were John Wise of Lam Research, Jan Hoentschel of GLOBALFOUNDRIES, David Fried of Coventor, Jeff Smith of Tokyo Electron America (TEL), Tom Dillinger of Oracle Corporation, and Tomasz Brozek of PDF Solutions. Each made opening comments, but the overriding theme was that variation is becoming a huge bottleneck for IC design and manufacturing. Paul McLellan has already written here about this panel, but I thought certain highlights were worth focusing on.
Tom Dillinger of Oracle spoke from the perspective of the design community, so I found his observations particularly interesting. He started out by saying that the device models are the most important thing. For FinFETs you need to use BSIM-CMG models, which come with new instance parameters such as the number of fins. With additional fins come "lots of parasitics." Tom's view is that statistical models help SRAM designers a great deal in modeling variation, because an SRAM array contains many copies of the exact same physical device. Logic designs, by contrast, introduce a wide variety of physical parameters that make predicting variation effects extremely difficult: FinFETs have both finger counts and fin counts to factor in. With high fin counts it becomes necessary to reduce the parasitics in the models.
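One reason statistical models work well for arrays of identical devices is that random per-fin variation partially averages out as fins are added in parallel. The following is a minimal Monte Carlo sketch of that averaging effect; the nominal threshold voltage, per-fin sigma, and the naive "mean of independent fins" model are all illustrative assumptions, not values or equations from BSIM-CMG or any real PDK.

```python
import random
import statistics

# Illustrative numbers only -- not from any real process design kit.
NOMINAL_VT = 0.35      # volts, assumed nominal threshold voltage
SIGMA_PER_FIN = 0.030  # volts, assumed per-fin random Vt sigma
TRIALS = 20000

def device_vt(num_fins, rng):
    """Effective Vt of a multi-fin device, modeled naively as the
    mean of independent per-fin Vt samples."""
    fins = [rng.gauss(NOMINAL_VT, SIGMA_PER_FIN) for _ in range(num_fins)]
    return sum(fins) / num_fins

def vt_sigma(num_fins, seed=0):
    """Monte Carlo estimate of the device-level Vt sigma."""
    rng = random.Random(seed)
    samples = [device_vt(num_fins, rng) for _ in range(TRIALS)]
    return statistics.stdev(samples)

# Device-level sigma shrinks roughly as SIGMA_PER_FIN / sqrt(num_fins).
for nfin in (1, 2, 4, 8):
    print(f"NFIN={nfin}: sigma(Vt) ~ {vt_sigma(nfin) * 1000:.1f} mV")
```

Under this toy model the device-level sigma falls off as one over the square root of the fin count, which is why a large population of identical multi-fin SRAM cells is far more amenable to statistical treatment than a logic path mixing many distinct device geometries.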
Tom feels that one place the models fall down is that the fin is presumed to be rectangular; in reality it is a trapezoid or even a triangle. Also, at more advanced nodes without EUV, moving to multiple patterning will introduce new sources of variation in devices. In fact, a great deal of the Q&A after the panel discussion centered on whether EUV will become viable, and there was no clear consensus on this point. There was agreement, though, with the statement by David Fried, Coventor's Semiconductor CTO, that without a lightning-bolt change to the current approach to lithography, now seven generations old, the only way to reduce the effects of variation is to work a 'chorus' of smaller improvements.
David Fried sees each discipline in the design and manufacturing process taking a silo-based approach and specifying worst-case parameters for the design. The result is expensive guard-banding that is probably overkill. It used to be that you could run batches of wafers and have a good idea about the variation problems you might encounter. With an exploding number of sources of variation, Fried asserts, it is impossible to run enough wafers. Jeff Smith of TEL America said that they use Coventor tools to identify the highest-risk areas so they know what kinds of test wafers they should run. This makes it possible to find a manageable number of test cases to explore.
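The arithmetic behind the "expensive guard-banding" point is easy to see: when every silo hands off its own worst case, the margins simply stack, whereas independent random sources actually combine in root-sum-square fashion. Here is a small sketch of that comparison; the source names and the 3-sigma delay contributions are invented for illustration, not from any real process.

```python
import math

# Assumed 3-sigma delay contributions (picoseconds) from several
# variation sources -- names and values are hypothetical.
SOURCES_3SIGMA_PS = {
    "lithography CD": 8.0,
    "fin profile": 5.0,
    "gate work function": 6.0,
    "interconnect RC": 7.0,
}

def worst_case_margin(sources):
    """Silo approach: each discipline specifies its own worst case,
    so the individual margins simply add."""
    return sum(sources.values())

def statistical_margin(sources):
    """If the sources are independent, a root-sum-square combination
    gives the true 3-sigma of the total."""
    return math.sqrt(sum(v * v for v in sources.values()))

wc = worst_case_margin(SOURCES_3SIGMA_PS)
rss = statistical_margin(SOURCES_3SIGMA_PS)
print(f"stacked worst case: {wc:.1f} ps")   # 26.0 ps
print(f"statistical (RSS):  {rss:.1f} ps")  # 13.2 ps
```

In this toy example the stacked worst case is nearly double the statistically combined margin, which is the kind of overkill Fried argues a predictive, physics-based approach could claw back.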
Fried also made the point that the causes of variation are now masked, making it harder to connect physical effects to yield results. When the underlying physics is misunderstood, the result is overly conservative design guidelines. Not surprisingly, Fried called for a predictive approach, using simulation tools to better understand the underlying physics.
The panel discussion highlighted that we are at an interesting juncture. There was some talk about SOI versus FinFET, but the consensus was that FinFET on SOI will be the winner. So, going forward there are many questions. Will a cost-effective lithography change come in time? How well will existing FinFET models work in production designs? Can the excessive guard-banding that erodes the cost-effectiveness of new nodes be eliminated?
It promises to be an interesting year ahead.