I was at SNUG earlier today, attending both Aart’s keynote that opened the conference and his “meet the press” Q&A just before lunch. The keynote was entitled Bridges to the Gigascale Decade, and the presentation certainly contained lots of photos of bridges! Anyway, I’m going to focus on just one thing, namely how the dynamics of the industry change depending on cost-per-transistor as we go down to 9nm.
One thing that Aart talked about at both sessions was this trend as we go down through the next few process nodes. It is clear that FinFETs bring great value, especially much lower leakage current. By contrast, 20nm planar doesn’t bring much advantage: all the extra costs and hassles of 20nm without the benefits of FinFETs. No wonder everyone is rushing to 14nm (sometimes called 16nm), which is actually 14nm FinFET transistors on a 20nm interconnect fabric.
Aart had a graph from Intel showing the cost per transistor coming down almost linearly, with an extra kicker if and when we get 450mm wafers. Of course there is another saving with EUV but, as you probably know, I’m a bit of a skeptic about that. I hope Intel’s graph is right, but I’ve also seen other graphs showing the cost staying flat. At the Common Platform forum a few weeks ago, Gary Patton of IBM said that there is a cost saving, but it is much smaller than we have been used to. The old economics was a 50% increase in die per wafer for a 15% increase in cost per wafer, leaving roughly a 35% saving. Who knows what the new rules are?
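As a sanity check on those numbers (my own back-of-envelope arithmetic, not anything from Patton’s talk): subtracting 15% from 50% gives the familiar 35% figure, but compounding the two factors exactly gives a somewhat smaller saving per die.

```python
# Back-of-envelope check of the "old economics" node-shrink rule.
# Inputs are the rule-of-thumb figures quoted above, not slide data.
die_per_wafer_gain = 1.50   # 50% more die per wafer at the new node
wafer_cost_increase = 1.15  # wafer costs 15% more to process

# Cost per die scales as (cost per wafer) / (die per wafer)
relative_cost_per_die = wafer_cost_increase / die_per_wafer_gain
saving = 1 - relative_cost_per_die

print(f"Relative cost per die: {relative_cost_per_die:.3f}")  # ~0.767
print(f"Exact saving per die:  {saving:.1%}")                 # ~23.3%
print(f"Naive 50% - 15% rule:  {1.50 - 1.15:.0%}")            # the 35% figure
```

In other words, the 35% is the simple subtraction; the compounded saving per die is closer to 23%, which only underlines Patton’s point that even the old economics was less generous than the headline numbers suggest.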
But Aart feels it doesn’t really matter. We are leaving the era in which the push of Moore’s law drove the semiconductor industry and entering one in which market pull will drive the business even if transistor costs do not come down. That is certainly true for some markets: the cost of the application processor in your cell-phone isn’t that critical when the phone is a $500 product. But in Africa there is a market for cell-phones with a $50 BOM, and there every dollar matters.
So I’m a little unconvinced that the economics of Moore’s law can be dismissed as irrelevant in the face of exponential demand for greater and greater functionality.
At the Q&A we all discussed this. We talked about how everyone would like IBM’s Watson in our pockets, and semiconductor technology over the next decade will be able to deliver it. If costs per transistor come down, then I believe this. But if they don’t, and if Watson has, say, ten times as many transistors as the current chip in your smartphone, then the chip will cost ten times as much or more. Yes, wonderful functionality and low power, but maybe at a price point that doesn’t work even in the US. And, to make it worse, simply waiting, which has always been the way to get electronics more cheaply than buying the first version of something, won’t help: the chip won’t get any cheaper.
A lot of electronics has been driven over the years by the exponential decrease in cost across many process generations. Not a 15% saving, but a reduction in cost of 1000X over 20-30 years. That is how we have more computer power in our pockets than million-dollar flight simulators had in the 1980s. I suspect that costs will come down as we learn more about yield, but there are genuinely unavoidable extra costs, like double patterning and the complex construction of the 3D FinFET structure.
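To put that 1000X in perspective, here is my own arithmetic (assuming a 25-year span, the midpoint of the 20-30 year range, and an assumed two-year node cadence) on the rate of cost reduction it implies.

```python
# Implied cost-reduction rate if cost per function fell ~1000X.
total_reduction = 1000.0
years = 25.0          # assumption: midpoint of the 20-30 year range
node_cadence = 2.0    # assumption: one process node every two years

annual_factor = total_reduction ** (1 / years)   # ~1.32x cheaper per year
per_node_factor = annual_factor ** node_cadence  # ~1.74x cheaper per node

print(f"Cost falls {annual_factor:.2f}x per year "
      f"({1 - 1 / annual_factor:.0%} annual saving)")
print(f"Cost falls {per_node_factor:.2f}x per node "
      f"({1 - 1 / per_node_factor:.0%} saving per node)")
```

That works out to a saving per node in the same ballpark as the old rule of thumb, which is exactly why a flat cost curve would be such a dramatic break with history.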