Did cost/transistor decrease at 14/16nm?

benb

Active member
Intel says the cost/transistor decreased at 14nm: Cost per transistor slide
Samsung says their cost/transistor increased at 14nm: Link
The article also states costs will rise (vs. 28nm) going forward. Developing 28nm SOI is their response.
I couldn't find anything solid about TSMC at 16nm, but one article suggested that like Samsung, using a 20nm metallization at 16nm means TSMC's cost per transistor will go up.

My take: Intel's apparent advantage at 14nm in cost per transistor should have driven an increase in foundry business, but that doesn't seem to be happening. So I guess Samsung or TSMC retain an overall lower cost per transistor at 14/16nm, albeit with Intel taking a stride toward narrowing that advantage.
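To make the metric we're all arguing about concrete: cost/transistor is just wafer cost spread over the good transistors on that wafer. A minimal sketch, with all wafer costs, densities and yields invented for illustration (none of these are real foundry numbers):

```python
import math

def cost_per_transistor(wafer_cost, die_area_mm2, transistors_per_die,
                        wafer_diameter_mm=300, yield_fraction=0.8):
    """Rough cost/transistor: wafer cost spread over good transistors.

    Uses the common gross-die approximation (wafer area / die area,
    minus an edge-loss term); real die-per-wafer math is more involved.
    """
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    gross_dies = (wafer_area / die_area_mm2
                  - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))
    good_dies = gross_dies * yield_fraction
    return wafer_cost / (good_dies * transistors_per_die)

# Hypothetical comparison: a mature planar node vs. a new FinFET node.
# All values below are placeholders, not quotes from any foundry.
c28 = cost_per_transistor(wafer_cost=3000, die_area_mm2=100,
                          transistors_per_die=1.0e9, yield_fraction=0.90)
c14 = cost_per_transistor(wafer_cost=7000, die_area_mm2=100,
                          transistors_per_die=2.1e9, yield_fraction=0.80)
print(f"28nm-ish: {c28:.2e} $/transistor")
print(f"14nm-ish: {c14:.2e} $/transistor")
```

The point of the toy numbers: if wafer cost (plus yield loss) rises faster than density, cost/transistor goes up even though the node is "smaller", which is exactly the Samsung/TSMC situation being described above.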
 

dl324

Member
My take: Intel's apparent advantage at 14nm in cost per transistor should have driven an increase in foundry business, but that doesn't seem to be happening. So I guess Samsung or TSMC retain an overall lower cost per transistor at 14/16nm, albeit with Intel taking a stride toward narrowing that advantage.
I suppose you have inside information regarding the relative complexity of the design rules for each process, the collateral each foundry has, the politics of selecting one foundry over another, ...?
 

benb

Active member
I don't have inside information. I think two factors matter: cost/transistor and availability. If you assume all three (Intel, TSMC and Samsung) have a 14/16nm process available (Intel and Samsung certainly do, TSMC most likely will soon), cost is decisive. Suppose Intel couldn't, or wouldn't, meet TSMC's or Samsung's pricing at 14nm; that would explain the lack of foundry wins. As their 14nm process matures (and their pain grows; 5% layoffs were announced recently), they may make a different choice.
 

benb

Active member
GlobalFoundries perspective here: "From GlobalFoundries’ perspective, Moore’s Law — which has allowed chip designers to double density while lowering transistor cost — no longer applies. Instead, “at a finer node we are getting higher performance at higher cost,” Teepe said."

I believe the article quote is stating that GF 14nm is more expensive than 28nm, and it sounds like they're saying this trend will continue beyond 14nm. This aligns with Samsung's perspective as well. The GF solution is to offer FD-SOI, which they claim delivers "almost 14nm FinFET performance at almost 28nm cost."
 

Staf_Verhaegen

Guest
Intel says the cost/transistor decreased at 14nm: Cost per transistor slide
Samsung says their cost/transistor increased at 14nm: Link
The article also states costs will rise (vs. 28nm) going forward. Developing 28nm SOI is their response.
I couldn't find anything solid about TSMC at 16nm, but one article suggested that like Samsung, using a 20nm metallization at 16nm means TSMC's cost per transistor will go up.

Someone with a bad/sick mind could even claim that Intel is the only company still doing real Moore's Law scaling, and that the rest are playing naming games, calling something 16/14 that actually should have been called 20FF.
I am not claiming that, though ;)
 

Paul McLellan

Active member
To me this is one of the big questions for the entire industry. We have never before been in an era where you can get a better die, but it costs more per transistor. We have always been in an era where going from 2 cores to 4 cores just meant waiting a couple of years. But if 4 cores cost twice as much as 2 cores, then why bother?
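The 2-core/4-core point can be put in numbers. A toy sketch (all figures invented; the arithmetic, not the values, is the point) of why flat cost/transistor breaks the "just wait for the next node" model:

```python
transistors = 2.0e9            # a hypothetical 2-core die

# Old regime: the next node halves cost/transistor,
# so doubling the transistor count keeps the chip price roughly flat.
cost_old_node = 10e-9          # $/transistor at node N (made-up)
cost_next_node = 5e-9          # $/transistor at node N+1 (made-up)
ratio_old = (2 * transistors * cost_next_node) / (transistors * cost_old_node)

# New regime: cost/transistor stays flat (or rises),
# so a 4-core die simply costs twice as much as the 2-core one.
cost_flat = 10e-9
ratio_new = (2 * transistors * cost_flat) / (transistors * cost_flat)

print(f"old regime: 4-core chip costs {ratio_old:.1f}x the 2-core chip")
print(f"new regime: 4-core chip costs {ratio_new:.1f}x the 2-core chip")
```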

The really big unknown is Intel's wafer cost. In much of their business they don't compete on price, so why would they bother to have a really competitive wafer cost? If Whole Foods says they will reduce prices by 10% over the next year, that is a lot more believable than Costco saying it, since if Costco could do that they probably already would have. So it is quite reasonable to believe Intel can reduce cost per transistor while none of the foundries can, since the foundries have no costs left to drive out.
 

Paul McLellan

Active member
Oh, and another thing is that TSMC is creating a new lower cost 16nm process. Off the top of my head I forget the letters that identify it. They supposedly are taking what they have learnt bringing up the two existing 16nm processes and cutting out unnecessary process steps, the aim being to make the new process cheaper per transistor than previous generations of process (such as the current sweet-spot at 28nm).
 

Mike Bryant

Guest
Oh, and another thing is that TSMC is creating a new lower cost 16nm process. Off the top of my head I forget the letters that identify it. They supposedly are taking what they have learnt bringing up the two existing 16nm processes and cutting out unnecessary process steps, the aim being to make the new process cheaper per transistor than previous generations of process (such as the current sweet-spot at 28nm).

I think this is where TSMC wins over all the other foundries, and over Intel. They recently announced an improved 40nm library, and the 28nm library has undergone so many major changes and iterations that you have to be very careful about what you are comparing with what when you talk about savings at smaller nodes. It would be very interesting to see a graph of 28nm transistor cost vs. time if anyone has the data; I suspect the steepness of its slope might surprise people.
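One standard way to model that hypothetical 28nm-cost-vs-time graph is a Wright's-law learning curve, where unit cost falls by a fixed percentage with every doubling of cumulative volume. A sketch with an invented progress ratio (not TSMC data):

```python
import math

def wright_cost(first_unit_cost, cumulative_units, progress_ratio=0.8):
    """Wright's law: cost falls to `progress_ratio` of its value for every
    doubling of cumulative production (0.8 => 20% drop per doubling)."""
    b = math.log(progress_ratio, 2)   # negative exponent
    return first_unit_cost * cumulative_units ** b

# Normalized cost as cumulative 28nm volume grows from 1x to 64x.
for doublings in range(7):
    volume = 2 ** doublings
    print(f"{volume:>3}x volume: relative cost = {wright_cost(1.0, volume):.2f}")
```

With a 0.8 progress ratio, six doublings of volume cut cost to roughly a quarter of its starting value, which is the kind of steep slope being suggested for a mature high-volume node.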
 

benb

Active member
With cost/transistor rising as you move to smaller nodes, we should see innovation shift to improving 28nm and larger nodes:
--Samsung's most advanced NAND is 40nm 3D NAND
--FD-SOI at 28nm is coming
--Intel pushing out 10nm until who knows when

Has the world gone mad ;-)
 

benb

Active member
ARM slide: Cost/transistor comparison including 28nm, 20nm and 14/16nm.

This is one of the best illustrations of how 28nm is currently minimum cost/transistor.

Newer version of the slide in this Economist article

Broadcom CEO McGregor slide confirms that cost/transistor is rising from 28nm forward.

The 10nm projection by IBS/Handel Jones is for a cost/gate reduction compared to 14/16nm, but still a higher cost than 28nm.

My take: three nodes of data is confirmation that we have seen the end of the economic leg of Moore's Law. It had a good run of 50 years.
 

Art Scott

New member
Thanks for any help; requesting your input. I'm developing an OPEN (free-to-anyone) tech spec for an M-KOPA ASYNC chip featuring low power and long battery life for ROW, a 7B-person TAM, in an inexpensive smartphone, plus a less expensive W3C RTC version, i.e. without cellular.
Any ballpark cost figure? $28M at 28nm? Currently: "...M-KOPA has also sold over 9,000 Huawei and Samsung smartphones in the $50 to $100 price range. It is now shifting over 1,000 smartphones per month..." If it's possible with ASYNC to get 10X (or more?) battery life at the same price, that would be good. I'm new to SemiWiki, so I hope this is an OK topic. For the product spec I need both the ASYNC technical knowledge and the cost knowledge (I also plan to post on the ASYNC thread I saw recently).
An 8/18/2016 article on the sweet-spot cost/transistor:
"... At 5nm, it will cost $500 million or more to design a 'reasonably complex SoC,' Johnson said. In comparison, it will cost $271 million to design a 7nm SoC, which is about 9 times the cost for a 28nm planar device, according to Gartner...."
 