
Semi-Chips Mind-Boggling Reverse-Course On 14nm and Beyond

benb

Moore's Law isn't just dead; the industry has reversed course and is now focused on volume products at 28nm and above. Samsung continues to produce its leading-edge 3D NAND flash at 40nm. TSMC continues to innovate new 28nm processes, staying ahead of the price-decline curve on that node and enticing many customers to stay put rather than move into the uncharted space at 20nm and below. Manufacturing cost reduction is elusive at 20nm and below, while 28nm and above offer considerable optimization potential that will be tapped.

With Samsung and Intel going slow (potentially VERY slow) on 10nm introduction, the signs of a problem in the sub-28nm space are already clear. And the end result, however mind-boggling, is this: scaling is no longer the preferred path forward, but rather a grudging way to stay in the game. Advanced nodes are like the Concorde jet: something the few interested parties will have to pay very, very dearly for.
 
We have to prepare for the number of 28nm players to increase eventually. There may be ensuing consolidation or commoditization of foundry services.

On the other hand, Intel is no longer running at 32nm, it is in the process of switching from 22nm to 14nm, and 14nm processors are becoming more available and cheaper. Intel's 14nm node also has a 52nm minimum metal pitch, which is similar to Samsung's 10nm node. So perhaps the cost of all that multiple patterning is starting to come down. It's not easy; it's the first time so many BEOL layers have been converted from single patterning to double or triple patterning. The next node shouldn't be too bad, probably just adding one mask for the critical layers, e.g., SADP/LE2 -> SAQP/LE3.
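
As a rough sense of how the per-layer mask count grows as pitch shrinks, here is a minimal sketch. The ~80nm single-exposure limit for 193i is an approximation, and the cut-mask counts per pitch are assumptions chosen for illustration, not figures from any specific process:

# Rough sketch of per-layer mask counts as metal pitch shrinks.
# Assumptions: ~80nm is the practical single-exposure 193i pitch limit;
# SADP halves the printable pitch, SAQP quarters it; the cut/block mask
# counts below are illustrative, not taken from any published flow.

IMMERSION_PITCH_LIMIT_NM = 80

def layer_masks(metal_pitch_nm, cut_masks):
    if metal_pitch_nm >= IMMERSION_PITCH_LIMIT_NM:
        scheme = "single exposure"
    elif metal_pitch_nm >= IMMERSION_PITCH_LIMIT_NM / 2:   # down to ~40nm pitch
        scheme = "SADP"
    else:
        scheme = "SAQP"
    return scheme, 1 + cut_masks   # one mandrel/line exposure plus the cut masks

for pitch, cuts in [(90, 0), (64, 1), (52, 2), (36, 3)]:   # hypothetical combinations
    scheme, masks = layer_masks(pitch, cuts)
    print(f"{pitch}nm pitch -> {scheme}, ~{masks} masks for that layer")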

So maybe there is still hope to keep on going.
 
I would argue that 3D NAND is nothing more than Moore's Law in the third dimension. Gb/mm² is going up rapidly, and what Moore actually said was "the number of components for minimum cost will double." That is exactly what 3D NAND is doing.
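
A back-of-the-envelope calculation shows why: bit density scales with the layer count even when the lateral pitch is relaxed. The cell pitches, bits per cell, and layer count below are round illustrative assumptions, not measured values for any product:

# Toy bit-density comparison: planar NAND on a fine node vs. 3D NAND on a
# relaxed ~40nm-class node with many stacked layers. All numbers are
# illustrative assumptions chosen only to show the arithmetic.

def gbit_per_mm2(cell_pitch_nm, bits_per_cell, layers):
    cell_area_nm2 = cell_pitch_nm ** 2            # idealized square cell footprint
    cells_per_mm2 = 1e12 / cell_area_nm2          # 1 mm^2 = 1e12 nm^2
    return cells_per_mm2 * bits_per_cell * layers / 1e9

planar = gbit_per_mm2(cell_pitch_nm=30, bits_per_cell=2, layers=1)     # ~15nm-class planar MLC
stacked = gbit_per_mm2(cell_pitch_nm=160, bits_per_cell=3, layers=48)  # ~40nm-class 3D TLC stack
print(f"planar ~{planar:.1f} Gb/mm^2 vs 3D ~{stacked:.1f} Gb/mm^2")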

TSMC is introducing 10nm for risk production this year and 7nm for risk production next year. I would expect Samsung to be on a similar time frame for competitive reasons. I expect GlobalFoundries to skip 10nm, but I think their 7nm should be out around the same time as TSMC's. Has Samsung announced a delay in 10nm?

Yes, Intel has pushed 10nm to late 2017 and gone to a 3-year cadence, but you also need to keep in mind that Intel's 10nm is basically the same as the foundries' 7nm, so everyone will be doing true 10nm technologies in late 2017 or early 2018.

Beyond that it is a lot harder to pin down, but I would think the foundries will do a "5nm" node as fast as they can for cell phone application processors, maybe 2019 or so, and then Intel a true 7nm in 2020.

I agree Moore's law is slowing for logic and for DRAM but it certainly isn't going backwards.
 
I view 3D NAND as brilliant, shattering many taboos. It is not scaling, however; it is an optimization of 40nm. 2.5D chip stacking is another optimization that adds real value and extends the life of "old" nodes cost-effectively.

I would argue that FinFETs were an optimization of the 20nm node, and we haven't yet seen a true 14nm node from the foundries. Optimizations are cost-effective and timely. Shrinks are too risky, take too long, and require too much investment for too little return.

We've entered the optimization era of semiconductors.
 

Well, first off, I disagree that 3D NAND isn't scaling. It isn't 2D scaling, but at the end of the day they are packing more Gb into a mm², and in my view that is scaling.

Secondly, your original comment was about Moore's Law, and while many people think of Moore's Law as scaling, it isn't really about scaling either but about cost per component, and that is what 3D NAND gets you as well.

I agree we haven't seen a true 14nm node from the foundries yet. ASML has done some really good work on how to normalize logic processes to "real nodes": TSMC's "16nm process" is an 18nm process, the same as their "20nm process", and Samsung's "14nm process" is a 17nm process, although better than their "20nm process", which is a true 20nm process. TSMC's "10nm process", due out later this year, will likely be a 12nm process. But the bottom line is they are still scaling, although more slowly.
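
The sketch below shows one way such a normalization could work: compare processes by the geometric mean of contacted poly pitch (CPP) and minimum metal pitch (MMP) against a reference node. This is a stand-in metric to illustrate the idea, not ASML's actual methodology, and the pitch values are approximate figures used only as examples:

import math

# Illustrative "equivalent node" metric: scale a reference node by the square
# root of the CPP*MMP area term. NOT ASML's actual methodology; the pitch
# values below are approximate and used only for illustration.

REF_CPP_NM, REF_MMP_NM, REF_NODE_NM = 117.0, 90.0, 28.0   # planar 28nm as reference

def equivalent_node_nm(cpp_nm, mmp_nm):
    scale = math.sqrt((cpp_nm * mmp_nm) / (REF_CPP_NM * REF_MMP_NM))
    return REF_NODE_NM * scale

examples = {                      # (CPP, MMP) in nm, approximate
    "planar 20nm":     (90, 64),
    "foundry 16/14nm": (90, 64),  # FinFET front end on a 20nm-class BEOL
    "Intel 14nm":      (70, 52),
}
for name, (cpp, mmp) in examples.items():
    print(f"{name}: ~{equivalent_node_nm(cpp, mmp):.0f}nm equivalent")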

I agree scaling is getting harder but it isn't dead.
 

Samsung has not delayed 10nm; it is in lockstep with TSMC. Samsung will not do a quick step to 7nm; instead they will fill out their 10nm process options like they did at 14nm. Samsung 7nm is more in line with GF and Intel 7nm. It would not surprise me if TSMC has 5nm in the same time frame as the others have 7nm HVM (2020). Of course, TSMC 5nm will look a lot like Intel 7nm under the hood...

Remember, TSMC 10nm and 7nm share the same fab, like 20nm and 16nm, so the cost is dramatically reduced and yield learning is accelerated. To me this is a big advantage, especially for the SoC business. It would not surprise me at all if Apple were exclusive to TSMC for the next three iPhones and iPads, which would cover 16FFC, 10nm, and 7nm. In fact, I would place a wager on it given the right odds. I expect to see a 7FFC low-cost version as well.

We should know more during SEMICON West next week. Remember, SEMICON West 2013 is where I first heard about Intel 14nm yield problems and we all know how that ended.

Intel Comes Clean on 14nm Yield!
 
No doubt new nodes will be developed and offered for sale to customers. I'm not arguing technical capability; it's a bit hard to put a finger on the factors in play now, since they are matters of degree, and yet the game has changed so much that the tide is running the other way.
First factor: Historically this wonderful industry has offered customers an unbeatable deal: better performance and lower cost. Today there is a dichotomy: lower cost (28nm) or better performance (FinFETs).
Second factor: Optimization rather than scaling seems promising technically. Previously, scaling seemed like the better use of scarce engineering resources. Now, adding MRAM to the die, or stacking die, or other novelties seem like a bigger wow factor and a bigger differentiator than being first to quad patterning.
Third factor: Cost is decisive in virtually all cases now. With Apple setting the market price there is little profit to be had in quad patterning. So the focus is shifting to existing, mostly depreciated, higher-profitability nodes.
 

There is also power consumption, which has traditionally driven advanced nodes. But you can get that through optimization, or simply through new process features (like high-k).
 

Contrary to what some report, cost per transistor is still coming down at advanced nodes, although more slowly than in the past. Going from 28nm to 20nm reduced transistor cost; initially, cost per transistor was higher at 16nm because there was no shrink and wafer costs increased, but now 16FFC is available with lower cost and better density. 10nm and 7nm are both expected to further lower cost per transistor, and as Dan mentioned, 7nm will likely see something like a 7FFC. 10nm at TSMC is really just a half node to keep Apple happy for a year until 7nm is ready.
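
A toy calculation shows the mechanism: cost per transistor is wafer cost divided by good transistors per wafer, so density gains can outpace wafer-cost increases. The wafer costs, densities, and yields below are hypothetical placeholders, not actual foundry figures:

# Toy cost-per-transistor model: wafer cost / (density * usable area * yield).
# All inputs (wafer cost, transistor density, yield) are hypothetical placeholders.

USABLE_AREA_MM2 = 60000   # rough usable area of a 300mm wafer

def usd_per_billion_transistors(wafer_cost_usd, mtx_per_mm2, yield_frac):
    good_tx = mtx_per_mm2 * 1e6 * USABLE_AREA_MM2 * yield_frac
    return wafer_cost_usd / (good_tx / 1e9)

nodes = {                      # node: (wafer cost $, MTx/mm^2, mature yield) - placeholders
    "28nm":  (3000, 12, 0.90),
    "20nm":  (4500, 20, 0.85),
    "16FFC": (5000, 29, 0.85),
}
for node, (cost, density, y) in nodes.items():
    print(f"{node}: ~${usd_per_billion_transistors(cost, density, y):.2f} per billion transistors")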

There is a big issue with rising design costs, and only high-volume parts can afford the latest technologies, but for applications with the volume there is still value in the latest nodes.

I agree MRAM is very interesting because it can dramatically shrink cache sizes and the process cost to add it isn't that high.
 
I acknowledge Dan's comments about the fast advance to 10nm and 7nm, and Scotten's comments and evidence that cost per transistor is coming down. I expect this will eventually be decisive and the industry will get back on a Moore's Law path. That is my hope, at least.

But that's just one possible outcome. While the 20/14nm nodes increased costs, there were effects and adjustments, and one of those adjustments was optimization. Intel used the term optimization to describe the third year of its cycle. Intel makes capital investments on that 3-year cycle. While the foundries appear to be on a 1-year cycle, they are delivering pieces of nodes every year rather than a finished, complete node, and doing it without capital investment. Major capital investment at the foundries occurs every 3-4 years. Robert Maire could probably confirm that.

I would argue that capital investment, not product announcements, indicates a truer picture of progress/stagnation in this industry.
 
By the way, there is a pretty good wiki page for multiple patterning:

Multiple patterning - Wikipedia, the free encyclopedia

It looks like quad-patterning for 5nm if there is no EUV?

I'm impressed by the many different styles of multiple patterning. Moreover, it looks like TSMC's patent coverage prepared them well for SAQP, i.e., reduced or eliminated cuts. They can probably handle 5nm. In any case, a single EUV exposure is now competing against one or two 193i exposures for cutting purposes.
 

The MRAM process modules are very specific and definitely a substantial investment. In particular, the post-etch sidewall needs special processing.

I agree that MRAM fits somewhere between SRAM and DRAM, but that's about it. It could be a good L4 or e-DRAM replacement. The cell size is 50F² as of this year's VLSI (TDK-Headway/TSMC), so it's a closer fit to addressing SRAM.
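
For a sense of what 50F² means relative to the alternatives, here is a minimal comparison. The 50F² figure is from the post above; the 6T SRAM and eDRAM cell-size factors and the feature size F are ballpark assumptions used only for illustration:

# Rough bit-cell area comparison at an assumed feature size F.
# 50F^2 for MRAM is from the discussion above; the 6T SRAM and eDRAM factors
# are textbook ballparks used only for illustration.

F_NM = 28   # hypothetical feature size for the comparison

cells = {"MRAM (50F^2)": 50, "6T SRAM (~150F^2)": 150, "eDRAM (~30F^2)": 30}
for name, factor in cells.items():
    area_um2 = factor * (F_NM / 1000.0) ** 2   # cell area in um^2
    mbit_per_mm2 = 1.0 / area_um2              # (1e6 um^2 per mm^2) / area / (1e6 bits per Mb)
    print(f"{name}: {area_um2:.3f} um^2/bit, ~{mbit_per_mm2:.0f} Mb/mm^2 raw")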
 

I agree embedded MRAM is for SRAM replacement, but even if it stays at 50F² (and I don't think it will), it would be good for L3 and L2; I don't know why you say L4. The higher up in cache number you go, the more viable eDRAM is. It is at the lower cache numbers, where eDRAM is too slow, that MRAM's speed advantage will come in.
 

I don't believe SAQP for 7nm or 5nm will be one or two cut masks. The experts I talk to are all still saying more like 4. I have some meetings next week at SEMICON where I should be able to get a new read on this.
 

When you look at capital investment, you need to keep in mind that the industry has gotten a lot better at reusing capital, so going to a new technology is now much cheaper than in the past.

The total yearly capital spending for TSMC, Samsung, and Intel has been roughly $30B in each of the last three years, and 2016 will likely be in the same range as well.

If capital investment gives a "truer" picture, then I would say that, based on capital investment, the industry continues to move forward.
 

If you have L1 in SRAM then you can have higher levels in MRAM, but higher speed makes it hard to keep a competitive cell size.
 

If you're starting with straight lines, you would need more cuts. For 36nm-pitch straight lines, 3 cuts were mentioned at the last SPIE (2015 papers 94260J by GlobalFoundries and 942606 by Synopsys). Of course, a tighter pitch would require more. Most recently, merging cuts, particularly diagonally, has been considered (there's a picture in paper 94260J), although I am not sure about the frequency of diagonal line crossings. Another style of SADP (spacer-is-dielectric, or SID) avoids cuts altogether, since the spacer patterns the dielectric between metal lines; it doesn't use straight lines.
 
I feel the key to the future of semis is in the application of new materials and architectures. There are several methods of achieving any result. Materials, architectures, and the application of software to new technologies are just what we can see on the horizon, with many surprises just over it. The ways of meeting any challenge are getting more diverse every day, as our knowledge base grows broader and deeper at an ever-increasing rate, with new technologies feeding on themselves.
 

As always, life is not black and white but shades of grey. I think you should rephrase the problem as: which applications will stay on Moore's Law scaling for the moment, and which will stop at 28nm? 0.18um and 65nm are still big analog nodes, so Moore scaling has already stopped at those nodes for several companies. For Intel CPUs, general-purpose ARM SoCs, and Bitcoin-mining ASICs, Moore's Law is clearly up and running. One of the questions is whether 28nm (FDSOI) will be a sweet spot for IoT edge applications with very low power requirements and low(er) volumes, or whether they will come back to Moore's Law with one of the iterations of the smaller technologies.
 