10nm: Intel vs. TSMC

On Dec 1, 2014, a long-time Intel promoter published an article on Intel's 10nm:

"Intel's executives stressed (at the Nov 20 Investor meeting) that moving to next-generation manufacturing technologies is hard and getting harder"

"Intel refused to comment on its 10-nanometer plans at the 2014 investor meeting."

The author offered his own best-case estimate for Intel’s 10nm schedule:

"The first 10-nanometer PC parts...go into production in early 2016 for launch in mid-to-late 2016...10-nanometer Atom in 2017"

What's Going On With Intel Corporation's 10-Nanometer Process?

TSMC claims to have progressed faster than planned on 10nm. On Sep 30, TSMC's co-CEO made these statements:

Liu highlighted the march to 10nm, where the pace of innovation from IP certification and tools validation to manufacturing is happening much faster.

Liu said he expects 10nm customer tapeouts a year from now (second half of 2015) and risk production in the fourth quarter of next year. ... "Our goal is to enable customers' product ramp in 2016," Liu said.

TSMC Sees Fast Ramp for 16nm, 10nm Nodes - The Fuller View - Cadence Blogs - Cadence Community

Contrary to the prevalent notion, Intel may not have a lead at the 10nm node. TSMC may end up ahead of Intel.

At the 2014 IEDM later this month, TSMC is expected to disclose more 16FF+ details and updated 10nm progress.

 
Keep in mind that when referring to a given technology node, companies are talking about different dimensions. I presume that TSMC's 10-nm technology will have dimensions close to what IBM/SEC/ST/GF/UMC showed at VLSI'14, which had a metal pitch of 48nm and a gate pitch of 64nm. This is closer to Intel's 14-nm, with its metal pitch of 52nm and gate pitch of 70nm (of course Intel used unidirectional M1 while IBM et al. used bidirectional M1, which required triple patterning), than to what a 10-nm technology from Intel would look like (probably a metal pitch of ~40nm, which is the limit of double patterning, and a gate pitch of say 55nm). TSMC will probably push fin pitch to about 40nm (the limit of SADP), while Intel will need to do SAQP to get something close to 30nm.
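For what it's worth, here is a minimal sketch of that pitch bookkeeping in Python, using only the approximate pitches quoted above (the Intel 10-nm numbers are just my guesses from the previous sentence, not anything Intel has disclosed):

# "Logical density" taken as 1 / (gate pitch x minimum metal pitch).
# All pitch values are the approximate figures quoted in the post above.
nodes = {
    "Intel 14nm":            {"gate_pitch_nm": 70, "metal_pitch_nm": 52},
    "IBM/SEC/GF 14nm-class": {"gate_pitch_nm": 64, "metal_pitch_nm": 48},
    "Intel 10nm (my guess)": {"gate_pitch_nm": 55, "metal_pitch_nm": 40},
}

ref = "Intel 14nm"
ref_area = nodes[ref]["gate_pitch_nm"] * nodes[ref]["metal_pitch_nm"]

for name, p in nodes.items():
    area = p["gate_pitch_nm"] * p["metal_pitch_nm"]
    # A smaller GP x MP product means a higher "logical" density vs. the reference.
    print(f"{name}: GPxMP = {area} nm^2, relative density = {ref_area / area:.2f}x")

Nothing more sophisticated than multiplying two pitches is going on in these marketing comparisons, which is why the real-chip numbers later in this thread are worth checking.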
 
Also keep in mind that the author of the article you referenced has no idea what he is talking about. I'm sure his use of passive language limits his legal liability, but it also limits his credibility to next to nothing. I'm looking forward to Scott's posts on IEDM, or to posts from anyone else who attends. Thank you, SemiWiki people, for keeping the content real.
 
I would first like to see some real 14nm and 16nm production, and then we can also talk about 10nm. Please do not mention the Core M; that was clearly an early (premature) and very limited release, as per Intel's own admission.
 
I would first like to see some real 14nm and 16nm production, and then we can also talk about 10nm. Please do not mention the Core M; that was clearly an early (premature) and very limited release, as per Intel's own admission.

I agree. According to the article:

There's no denying that Intel's (NASDAQ: INTC) yield issues with its 14-nanometer manufacturing technology had impacted just about all of the company's product segments. For example, volume manufacturing of Broadwell processors, which was supposed to begin in late 2013, didn't actually start until the second quarter of 2014.

If Intel started "volume" manufacturing in Q2 2014, there should be millions of Broadwell-powered laptops and 2-in-1s on the shelves by now. The Apple A8 went into "volume" manufacturing in Q3 2014, and close to 100M iPhones have already shipped.
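As a rough back-of-envelope check, with an assumed die size and yield (both my own approximations, not TSMC or Apple figures), that kind of unit volume translates into wafers as follows:

import math

# Rough check of what ~100M phone chips means in 300mm wafers.
# Die size and yield are assumptions for illustration only.
wafer_diameter_mm = 300.0
die_area_mm2 = 89.0          # approximate A8 die size
yield_fraction = 0.8         # assumed yield
chips_needed = 100e6         # "close to 100M iPhones"

# Standard gross-die-per-wafer approximation with an edge-loss term.
gross_dies = (math.pi * (wafer_diameter_mm / 2) ** 2 / die_area_mm2
              - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))
good_dies = gross_dies * yield_fraction

wafers_needed = chips_needed / good_dies
print(f"~{gross_dies:.0f} gross dies/wafer, ~{good_dies:.0f} good dies/wafer")
print(f"~{wafers_needed / 1000:.0f}K wafers for {chips_needed / 1e6:.0f}M chips")

Call it very roughly 170K wafers, i.e. on the order of 30K wafers per month over a half-year ramp; that is roughly what "volume" should look like.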
 
Keep in mind that when referring to a given technology node, companies are talking about different dimensions. I presume that TSMC's 10-nm technology will have dimensions close to what IBM/SEC/ST/GF/UMC showed at VLSI'14, which had a metal pitch of 48nm and a gate pitch of 64nm. This is closer to Intel's 14-nm, with its metal pitch of 52nm and gate pitch of 70nm (of course Intel used unidirectional M1 while IBM et al. used bidirectional M1, which required triple patterning), than to what a 10-nm technology from Intel would look like (probably a metal pitch of ~40nm, which is the limit of double patterning, and a gate pitch of say 55nm). TSMC will probably push fin pitch to about 40nm (the limit of SADP), while Intel will need to do SAQP to get something close to 30nm.

The post above is typical of pro-Intel arguments. It makes the following points:


  1. A node is defined solely by a few pitch dimensions, mainly M1P and GP.
  2. Since Intel shows smaller dimensions, it is one or more nodes ahead of the foundries.
  3. Based on the foundries' pitch sizes, their 10nm is really a 14nm, their 16nm is merely a 20nm, etc., implying dishonest marketing fluff from the foundries.

This argument falls apart if some reality checks are applied. The so-called logical transistor density, derived from GP x M1P, puts Intel's 14nm at 37% denser; but a comparison of real chips, the 20nm A8X versus the 14nm Core M, reveals a 48% density disadvantage for Core M (a rough recomputation is sketched after the link below):

A8/Core-M comparison
https://www.semiwiki.com/forum/f2/samsung-strikes-chip-deal-apple-4864-4.html#post17087
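To make the 48% figure concrete, here is the back-of-envelope version, using approximate publicly reported transistor counts and die sizes (quoted from memory, so treat the exact ratio as illustrative rather than authoritative):

# Real-chip density: transistors per mm^2, from approximate public figures.
chips = {
    "Apple A8X (TSMC 20nm)":     {"transistors": 3.0e9, "die_mm2": 128.0},
    "Intel Core M (Intel 14nm)": {"transistors": 1.3e9, "die_mm2": 82.0},
}

density = {name: c["transistors"] / c["die_mm2"] / 1e6 for name, c in chips.items()}
for name, d in density.items():
    print(f"{name}: ~{d:.1f} Mtransistors/mm^2")

ratio = density["Apple A8X (TSMC 20nm)"] / density["Intel Core M (Intel 14nm)"]
print(f"A8X is ~{(ratio - 1) * 100:.0f}% denser than Core M on this crude metric")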

Transistor density has been Intel's obsessive focus for decades. At the 14nm node, however, Intel fails to demonstrate a lead in this critical metric, at least not with its first 14nm chip.

It appears that the misleading "logical density" was deliberately created to hide the fact that Intel has been falling behind in real density. Intel is the party engaged in overly inflated PR.

Intel advantages falsely concluded
https://www.semiwiki.com/forum/f2/samsung-strikes-chip-deal-apple-4864-4.html#post17079

TSMC’s processes vs. the Intel model
https://www.semiwiki.com/forum/f2/samsung-strikes-chip-deal-apple-4864-4.html#post17097
 
If Intel started "volume" manufacturing in Q2 2014, there should be millions of Broadwell-powered laptops and 2-in-1s on the shelves by now. The Apple A8 went into "volume" manufacturing in Q3 2014, and close to 100M iPhones have already shipped.

It seems the term "volume production" is a very loose one. TSMC first claimed they were in volume production of 20nm in the first quarter of 2014, but later said that they only shipped small amounts in the second quarter and large amounts in the third quarter.
 
The post above is typical of pro-Intel arguments. It makes the following points:


  1. A node is defined solely by a few pitch dimensions, mainly M1P and GP.
  2. Since Intel shows smaller dimensions, it is one or more nodes ahead of the foundries.
  3. Based on the foundries' pitch sizes, their 10nm is really a 14nm, their 16nm is merely a 20nm, etc., implying dishonest marketing fluff from the foundries.

If you track my posts, you will see I am anything but pro-Intel. My point was in fact to show that Intel needs to face greater problems in its 10-nm than the foundries do in their own definition of 10-nm. These are:

1) Smaller gate pitch. Anybody who has been in the trenches knows that scaling gate pitch while maintaining performance is very tough. Intel historically kept gate pitch and metal pitch close to each other. Their 14-nm was the first time they deviated from this, showing that they are seeing the problem. Now, if they want to push this further to 50-55nm, they will face even more challenges. With the foundries' definition, you would face those problems at 7-nm.

2) Fin pitch. The foundries' 10nm will be at the limit of SADP. Intel needs to do SAQP (or self-assembly). That part is OK; the question is how to remove the unwanted portions of the fins. Again, the foundries will see this at their 7-nm.

3) Metal pitch. Both will continue to use some form of double patterning. Intel chose to use SADP, which greatly limits design flexibility.

So, all in all, Intel at 10nm needs to address questions that the foundries need to answer at their 7nm; a rough sketch of the SADP/SAQP pitch math is below.
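A minimal sketch of that pitch math, assuming an ~80nm single-exposure 193i starting pitch (my assumption; real flows carry additional overheads, so treat these as order-of-magnitude numbers):

# SADP halves the lithographically printed (mandrel) pitch; SAQP quarters it.
litho_min_pitch_nm = 80.0   # assumed 193i single-exposure limit

sadp_pitch = litho_min_pitch_nm / 2   # self-aligned double patterning
saqp_pitch = litho_min_pitch_nm / 4   # self-aligned quadruple patterning

print(f"SADP final pitch: ~{sadp_pitch:.0f} nm (about the ~40nm fin-pitch limit above)")
print(f"SAQP final pitch: ~{saqp_pitch:.0f} nm (plenty of headroom for a ~30nm fin pitch)")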

That said, I am firmly against the foundries' decision to call their 20nm GR FinFET a 14/16nm technology. They could just as well have called it 20FF or whatever, the same way they created several versions of 28nm that are all still called 28nm. This was a marketing mistake that Intel used against the foundries.
 
I believe the overall density issue with Intel's processes vs. the leading foundries has been raised numerous times and is something well known in the industry. The rationale for this is based on the more aggressive speed targets historically demanded by its client and server processors. Such processor designs have a fair amount of custom, finely tuned datapaths and other embedded blocks for optimal performance that are more directly rewarded by such high-performance transistors. But the trade-off has typically been a relaxed pitch on the upper metal layers to support such performance targets. For more SoC-type designs, where integration, density, and cost are paramount, the layouts tend to be more interconnect limited, and thus Intel's historic process targets have not been as ideal.
 
That said, I am firmly against the foundries' decision to call their 20nm GR FinFET a 14/16nm technology. They could just as well have called it 20FF or whatever, the same way they created several versions of 28nm that are all still called 28nm. This was a marketing mistake that Intel used against the foundries.
I understand what you mean, but since they still managed to pack in more transistors moving from 20nm to 14nm/16nm, about 15% more they claim, at least on paper, I'm fine with that. The number is pretty much meaningless, we know that; even the 22nm Intel node has no 22nm features, and it is closer to the 28nm foundry node than to the 20nm one.
Had they called it 20nm FinFET, then Intel, instead of talking about metal and gate pitch leadership, would have talked about technology release leadership, and to be honest, at the end of the day, it is pretty much the same. In terms of perception, I prefer 16nm or 14nm.
 
It seems the term "volume production" is a very loose one. TSMC first claimed they were in volume production of 20nm in the first quarter of 2014, but later said that they only shipped small amounts in the second quarter and large amounts in the third quarter.

The key here is understanding the TSMC information flow. TSMC sets production goals like anybody else; the difference is that TSMC makes its wafer revenues public, so there is no hiding volumes. I believe the first 20nm wafer revenue was in June of 2014, so that is officially Q2. TSMC may have shoved some out the door just to make that claim, I don't know. But you can see the steep 20nm revenue ramp in Q3 and Q4, which will continue until Apple moves to 14/16nm.

Intel is not as transparent with its wafer revenue numbers, so you can't do an apples-to-apples comparison here. I am reviewing today's Intel NASDAQ presentation right now, and I think I need a translator and some Advil. ;-)
 
khaki threw a lot of technical detail at us.

But he avoided the question: why does Intel's "14"nm chip not show the 37%, or indeed any, density advantage over the "20"nm chip? Is Intel's so-called 14nm really a 20nm?
 
I believe the overall density issue with Intel's processes vs. the leading foundries has been raised numerous times and is something well known in the industry. The rationale for this is based on the more aggressive speed targets historically demanded by its client and server processors. Such processor designs have a fair amount of custom, finely tuned datapaths and other embedded blocks for optimal performance that are more directly rewarded by such high-performance transistors. But the trade-off has typically been a relaxed pitch on the upper metal layers to support such performance targets. For more SoC-type designs, where integration, density, and cost are paramount, the layouts tend to be more interconnect limited, and thus Intel's historic process targets have not been as ideal.

If I understand correctly, cpuarchx stated that Intel sacrificed some density to achieve the higher performance required by its PC/server processors.

I think the explanation is only partially valid, at best. Core M is designed for the mobile market and was first adopted in tablets; it is not particularly for PCs or servers. Core M and the A8 are the closest match we can find so far. I, too, was surprised by the lopsided comparison result. I assume Apple didn't fabricate the transistor counts of the A8 and A8X.
 
That's a question I will let Altera answer once they get their 14nm chips out of Intel's fab. The Apple A8 vs Intel Core M comparison, while informative, does not say whether the density disadvantage is due to ground rules or to design style. Sure, they address the same market, but there are many differences in their designs. A simple example: Apple used the dense SRAM cell (0.12 um2 as opposed to the bigger 0.157 um2) in their 28nm designs, while Intel used the larger of their two cells (0.108 um2 vs the 0.092 um2 they advertised in their VLSI'12 paper) in their 22nm.
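Just to illustrate how much the bit-cell choice alone moves the needle, here is a toy calculation using the cell areas quoted above and an arbitrary 32 Mbit of on-die SRAM (my assumption; array efficiency and periphery overhead are ignored, so these are lower bounds):

# Array area = number of bits x bit-cell area, converted from um^2 to mm^2.
BITS = 32 * 1024 * 1024   # assumed 32 Mbit of on-die SRAM
cells_um2 = {
    "28nm dense cell (0.120 um^2)":         0.120,
    "28nm larger cell (0.157 um^2)":        0.157,
    "22nm Intel cell used (0.108 um^2)":    0.108,
    "22nm Intel VLSI'12 cell (0.092 um^2)": 0.092,
}

for name, area_um2 in cells_um2.items():
    array_mm2 = BITS * area_um2 / 1e6
    print(f"{name}: ~{array_mm2:.1f} mm^2 for {BITS / 2**20:.0f} Mbit")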
 
That's a question I will let Altera answer once they get their 14nm chips out of Intel's fab. The Apple A8 vs Intel Core M comparison, while informative, does not say whether the density disadvantage is due to ground rules or to design style. Sure, they address the same market, but there are many differences in their designs. A simple example: Apple used the dense SRAM cell (0.12 um2 as opposed to the bigger 0.157 um2) in their 28nm designs, while Intel used the larger of their two cells (0.108 um2 vs the 0.092 um2 they advertised in their VLSI'12 paper) in their 22nm.

The fabless companies may do their own SRAM, but they all use the foundry's bit cells. I believe Apple used ARM SRAM with Samsung but did its own SRAM at 20nm using the TSMC bit cell; that may explain the density change. You must also understand that what is put in conference papers does not necessarily go into production. That is why I call BS on Intel's density-advantage claims based on outdated papers by TSMC or anybody else. And now that Intel is using conference papers for marketing nonsense, you can bet the technical content in those papers will be "adjusted" accordingly.
 
It seems the term "volume production" is a very loose one. TSMC first claimed they were in volume production of 20nm in the first quarter of 2014, but later said that they only shipped small amounts in the second quarter and large amounts in the third quarter.

The above description is true, based on TSMC’s 20nm revenue between Q2 and Q3.

That is, at 20nm, it took about 6 months to reach the volume of 50,000+ wafers per month, with adequate yields.

Commentators therefore infer that 16FF+ cannot deliver in high volume until the end of 2015 or early 2016 if production is to start in July.

I would offer the opinion that TSMC will produce 50K 16FF+ wafers a month in July next year, for two reasons:

The production date is likely to be a quarter earlier than the announced July date.

More importantly, 16FF+ is not exactly a new node; it is an enhanced process under the big umbrella of the 20/16nm node, with 90% of the equipment shared between 20nm and 16nm production. The time-consuming ramp of a new node is NOT needed for 16FF+.

TSMC's gradual approach at the 20/16 nm node

https://www.semiwiki.com/forum/f2/samsung-strikes-chip-deal-apple-4864-3.html#post17016

https://www.semiwiki.com/forum/f2/samsung-strikes-chip-deal-apple-4864-2.html#post16998
 
That's a question I will let Altera answer once they get their 14nm chips out of Intel's fab. The Apple A8 vs Intel Core M comparison, while informative, does not say whether the density disadvantage is due to ground rules or to design style. Sure, they address the same market, but there are many differences in their designs. A simple example: Apple used the dense SRAM cell (0.12 um2 as opposed to the bigger 0.157 um2) in their 28nm designs, while Intel used the larger of their two cells (0.108 um2 vs the 0.092 um2 they advertised in their VLSI'12 paper) in their 22nm.

I agree. Do you have any idea when Altera will start shipping 14nm? I have not seen 20nm Altera parts yet either. I will check with Xilinx too. I really want to see a teardown of the first FinFET FPGAs. That is going to happen, absolutely.
 
TSMC never made any noise about the superior density it achieved at 20nm. Apple indicated the A8 transistor count on one slide, out of several dozen, at the iPhone launch. Except for that one slide, Apple has never mentioned transistor counts or density.

It's Intel that makes density such a big deal. It started in November last year, when Intel published the infamous slides claiming a 35%-and-growing lead in density over the foundries; it was followed by numerous articles and forum discussions boasting about the lead, aided by the false accusation that the foundries have misrepresented their nodes, and rationalized by the misleading "logical density."

Pretty much, a virtual reality has been created to uniquely favor Intel's "superiority." But such aggressive PR may turn into yet another embarrassment next year when the foundries' 14/16nm chips hit the market.

I am still puzzled: what can Intel gain from this futile, if not counterproductive, PR offensive?

P.S.
Intel's obsessive focus on density is likely misplaced. In the mobile market, density is a secondary factor; low power and the integration of connectivity and multimedia functions are far more important. Density is completely irrelevant for wearable and IoT devices.
 