
Intel Foundry is way behind TSMC, but the goal is #2 by 2030

">20% density gain ... full-node scaling" = trigger warning for me :)
That's the reality nowadays -- the basic pitches that set cell size (M0 and CPP) are almost identical for N3/N2/A16/A14, and the gate density increases come mainly from other layout/library tweaks usually referred to as DTCO, with labels such as FlexFin, NanoFlex and NanoFlex Pro (and BSPD, and COAG, and SDB, and...) -- or from other "special" design rules only allowed in very specific layout regions, like the ones TSMC introduced in N2 to lower access resistance and parasitic capacitance in "digital-only" areas. Plus there's the fact that a nanosheet gives more drive current in a given area than a finFET, so minimum-size gates are faster and high-drive gates are smaller.

In other words "full-node scaling" is largely a fiction nowadays; it doesn't really mean scaling at all any more -- it means the next node with a different set of design rules and new DTCO enhancements, as opposed to a "half-node", which means the same process tweaked to improve PPA slightly (e.g. 10%)... :-(
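
To put a number on the DTCO point: here's a minimal back-of-envelope sketch in Python, with purely illustrative pitch and track figures (not actual foundry values), showing how a ~20% density gain can appear even when CPP and the metal pitch don't move at all -- the cell just gets shorter.

```python
# Back-of-envelope: logic density from cell geometry (illustrative numbers only).
# Std-cell area ~ (CPP x gate pitches per cell) x (track count x metal pitch).
# Holding CPP and metal pitch fixed, shrinking the track height -- a typical
# DTCO move, e.g. freeing routing tracks via backside power rails -- still
# raises density with zero "real" pitch scaling.

def cells_per_mm2(cpp_nm, metal_pitch_nm, tracks, gate_pitches_per_cell=3):
    """Rough count of small standard cells (e.g. a NAND2) per mm^2."""
    width_nm = cpp_nm * gate_pitches_per_cell
    height_nm = metal_pitch_nm * tracks
    area_um2 = width_nm * height_nm * 1e-6   # nm^2 -> um^2
    return 1e6 / area_um2                    # um^2 per mm^2 / cell area

old = cells_per_mm2(cpp_nm=48, metal_pitch_nm=23, tracks=6)  # 6-track library
new = cells_per_mm2(cpp_nm=48, metal_pitch_nm=23, tracks=5)  # 5-track library
print(f"density gain with identical pitches: {new / old - 1:.0%}")  # ~20%
```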
 
Is the number of pluses in 14nm correct? I thought there were only 5 pluses.
I think there are more in this case

+++++++ is Intel Style

But all of those "+" are what TSMC is doing

N5 > N5++ > N5+++ > N5++++ > N5+++++ > N5++++++ > N5+++++++ > N5++++++++ > ....... > N5++++++++++++++
N5 > N4 > N4P > N4X > N3B > N3E > N3P > N3X > ..... > A16 (who knows might be still 0.021um^2)

All are finFET, all are 0.021 um^2 SRAM; I just lost count of how many +s there are -- 8 / 9 / 10?
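
For scale, here's a quick sketch of what a bitcell stuck at 0.021 um^2 means in Mbit/mm^2; the array-efficiency factor is an assumption for illustration, not a foundry figure.

```python
# Rough SRAM density implied by a 0.021 um^2 high-density bitcell (illustrative).
# Real macros add sense amps, decoders and redundancy, so usable density is
# lower; the array efficiency below is an assumed placeholder.

bitcell_um2 = 0.021
raw_mbit_per_mm2 = (1e6 / bitcell_um2) / 1e6   # bits per mm^2, expressed in Mbit
array_efficiency = 0.70                        # assumption, not a quoted number
print(f"raw array:     {raw_mbit_per_mm2:.1f} Mbit/mm^2")                      # ~47.6
print(f"with overhead: {raw_mbit_per_mm2 * array_efficiency:.1f} Mbit/mm^2")   # ~33
```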
 

I don't think what TSMC is doing is quite equivalent to Intel's 14++++ execution. Each + for Intel 14nm actually decreased transistor density for logic, in order to improve performance characteristics of the transistors. (Easy example - look up Coffee Lake's density on 14++ vs. Kaby Lake's 14+).

In contrast, TSMC's various processes are still improving logic density (in general), while having side variants focused on performance and/or density. When you go from N5 to N3 to N2 to A16 - you're still getting denser logic.

However, the "side processes" -- N4 to N4P to N4X -- may have some +/- on density, though I think they're focused more on cost/performance than outright performance, unlike 14++++.

Some articles on Logic (vs. SRAM) density changes between TSMC nodes:
https://semiwiki.com/wikis/industry-wikis/tsmc-n3-process-node-3nm-wiki/ (Compares N3 to N5)
https://semiwiki.com/wikis/industry-wikis/tsmc-n2-process-technology-wiki/ (Compares N2 to N3, scroll down to bottom)
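
For anyone who wants to redo that kind of comparison themselves, the usual metric is just published transistor count divided by die area (MTr/mm^2). A minimal sketch -- the counts and areas below are placeholders to show the arithmetic, not actual Kaby Lake / Coffee Lake figures:

```python
# Logic-density comparisons are usually reported transistor count / die area.
# The numbers below are placeholders to illustrate the arithmetic, not
# measured Intel die data.

def mtr_per_mm2(transistors_millions: float, die_area_mm2: float) -> float:
    return transistors_millions / die_area_mm2

chips = {
    "variant A (density-tuned)":   mtr_per_mm2(1_750, 126.0),
    "variant B (frequency-tuned)": mtr_per_mm2(2_100, 158.0),
}
for name, density in chips.items():
    print(f"{name}: {density:.1f} MTr/mm^2")
```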
 
You mean add another not-really-big-enough-volume (and late to market!) process to the one they've already got -- plus you need different libraries and IP, which means more effort and investment... :-(

It works for TSMC because they have both the handful of huge-volume HPC customers for BSPD (e.g. A16) and a massive number of small-to-pretty-damn-big customers for FSPD (e.g. N2); the production volumes and revenue for both variants are easily big enough to justify the investment and IP generation by the (also massive) TSMC ecosystem.

I just don't see this working for Intel -- their least bad option is to do what they're doing now, which is to focus all their efforts on BSPD to try and grab a few of the HPC customers for 14A, always assuming they have enough fab capacity to meet demand. They can't compete against TSMC for the FSPD market (where they would also be very late to market, and with higher costs and lower yields), which means supporting a large number of diverse customers (which they're not set up to do) with a wide and deep IP ecosystem (which they don't have).

They're never going to win here; it would only increase their costs further and divert effort from BSPD.
You don't need bspdn in the first place, it's garbage
 
But it's not just about cost; the amount of extra resource -- meaning, engineers! -- needed to support both BSPD and FSPD processes is huge because so many things are different. For starters, the layouts are completely different, so all IP (both internally and externally sourced) has to be rebuilt from scratch, and it's not just a few layout tweaks, it's a major rethink -- plus the extraction is different, the thermal properties are very different, libraries (standard cell and SRAM) have to be redone from scratch, tool costs double, and customer support effort doubles. You also have to duplicate all the process qualification/reliability analysis because the processes are physically fundamentally different, and this alone is a massive effort and takes a lot of time and wafers to do.

Intel are already likely to be stretched in all these areas just to support BSPD, because traditionally they only had to support internal design teams (so crappy documentation is "OK"); much more effort/resource is needed to properly support external customers -- been there in the past, got the T-shirt. Suggesting that they could easily do all this again for FSPD is not credible; they'd end up with terrible support for both processes instead of barely adequate support for one. TSMC have been doing all this for years on multiple processes, but that doesn't mean Intel can do the same...

It doesn't matter how much Intel might *want* to support both; the question is whether they *can* support both -- and I don't think they can, at least not today.

There's also the question of why they would realistically want to do this, because all the things that FSPD customers are looking for -- fast TTM, strong IP ecosystem, low cost, high yield, high density, quick TAT -- are the things that TSMC is *very* good at (which is why everyone uses them) and Intel is historically bad at (and still not competitive today). Fighting an opponent on a battleground where they're strong and you're weak is never going to end well... :-(

People who don't understand the huge differences between the two processes are grossly underestimating the cost and difficulty of supporting both, see post from @MKWVentures above... ;-)
So we don't need Intel, and the US doesn't need semiconductor sovereignty.

All we need is TSMC's US factory.
 
I don't have the actual count, but there were a lot of missteps by the Intel design team on performance and density; I can't confirm whether it was the foundry or the design team that was messed up.

But at this moment in time I kind of disagree with what you mentioned.
 
I'm (honestly) happy to be corrected; I had read that density decreased from 14+ to 14++ (KBL to CFL), and I thought they made a similar trade-off for Comet Lake to get clocks up above 5 GHz (same core as the other chips).
 
No worries, the ++++++ has always been a running joke of mine -- sorry for making you question it. But I feel like semiconductor progress is getting very, very slow (I am an old man), and I just remember how fast it used to be. And because everyone out there made jokes about Intel's +++, that's the joke.
 
You don't need bspdn in the first place, it's garbage
Which just shows how little you understand about process technology -- meaning DTCO and layout and transistor access, as well as the raw metal/gate/contact pitches... :-(

For the right products -- which TSMC correctly identifies as "HPC with dense power grids and active cooling" -- BSPD offers significant advantages, which will get bigger with successive nodes because it's getting harder and harder to get low-resistance power connections down to the transistors with FSPD.

For many other products (the majority today?) it's currently not the right choice because the advantages are smaller and the cost is higher, and cooling becomes much more difficult or impossible -- for these conventional FSPD is the right choice.

When vertically-stacked-CMOS comes along in the 2030s the scales will tilt further in favour of BSPD because power access with FSPD will get even more difficult, but even this may still not eliminate FSPD completely.
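
To make the "low-resistance power connections" point concrete, here's a crude IR-drop sketch; every resistance and current value in it is an assumption picked for illustration, not a measured figure for any real process.

```python
# Crude IR-drop comparison for front-side vs backside power delivery.
# All values below are illustrative assumptions, not process data: the point
# is only that a long stack of thin front-side vias/metals has far more
# resistance than a short, thick backside connection.

def ir_drop_mv(current_a: float, resistance_mohm: float) -> float:
    return current_a * resistance_mohm   # amps x milliohms = millivolts

frontside_stack_mohm = 50.0   # assumed: power routed down through many thin layers
backside_stack_mohm = 10.0    # assumed: short nano-TSV + thick backside metal
local_current_a = 2.0         # assumed local current demand

print(f"FSPD drop: {ir_drop_mv(local_current_a, frontside_stack_mohm):.0f} mV")  # 100 mV
print(f"BSPD drop: {ir_drop_mv(local_current_a, backside_stack_mohm):.0f} mV")   #  20 mV
```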
 
Process progress used to be very fast in the days of Dennard scaling, when dimensions really did scale linearly with the headline node label, supply voltages also dropped, and the number of interconnect layers went up -- I still remember the delights of the first double-metal process, when you no longer needed high-resistance poly underpasses at crossovers... :-)
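
For anyone who didn't live through it, a tiny sketch of the idealized Dennard-scaling arithmetic being described here -- scale every dimension and the supply voltage by k ~ 0.7 per node, and power density stays flat even as everything gets faster:

```python
# Idealized Dennard scaling: linear dimensions and Vdd both scale by k (~0.7).
# Per-gate dynamic power ~ C * V^2 * f; with C ~ k, V ~ k and f ~ 1/k the
# power density (power per area) stays constant while gate area halves.

k = 0.7
area = k ** 2            # ~2x more gates in the same area each full node
cap = k                  # capacitance shrinks with dimensions
vdd = k                  # supply voltage scaled too -- the part that later stopped
freq = 1 / k             # gates get faster
power = cap * vdd ** 2 * freq   # per-gate dynamic power
print(f"area x{area:.2f}, power/gate x{power:.2f}, power density x{power / area:.2f}")
```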

The first big hit was when supply voltages stopped dropping rapidly, which meant power consumption no longer fell as much. The second big hit was when actual dimensions stopped following the node name, so that areas no longer halved with each new "full node" -- but they still dropped significantly, so PPA improved and cost dropped with each new node; chips still got cheaper even as functionality went up.

The third big hit is the one we're seeing now where the basic dimensions (gate/metal/via pitch) barely shrink at all because of fundamental manufacturing limits and stochastics, and so almost all the PPA advantage is coming from new design techniques (DTCO, COAG, SDB, nanosheet, BSPD...) and even more metal layers (21 in N2) -- I keep thinking that all the cards have already been played, but TSMC keep coming up with new cards to add to the deck... :-)

The really big problem now is that even if PPA continues to improve with each node the cost per transistor no longer reduces, so the cost of integrating a given function is static -- and if you use the new node to put more on each chip (e.g. in a mobile phone) the chip cost actually increases. This breaks the business model that has driven the entire semiconductor industry -- and electronic products which depend on it -- for the last 50 years... :-(
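
The cost-per-transistor point is easy to see with a back-of-envelope model; the wafer prices, die counts and transistor counts below are invented round numbers purely to show the arithmetic.

```python
# Sketch of why flat cost-per-transistor breaks the old business model.
# All prices, die counts and transistor counts are invented round numbers.

def cost_per_transistor(wafer_cost_usd, dies_per_wafer, transistors_per_die):
    return wafer_cost_usd / (dies_per_wafer * transistors_per_die)

# Older node: cheaper wafer, lower density.
old = cost_per_transistor(10_000, 500, 10e9)
# Newer node: ~1.3x the density, but the wafer also costs ~1.3x more.
new = cost_per_transistor(13_000, 500, 13e9)

print(f"cost/transistor: old {old:.2e} $, new {new:.2e} $")   # essentially equal
# So if you use the new node to put 1.3x the transistors on a same-size die,
# the die itself just costs ~1.3x more:
print(f"die cost: old ${10_000 / 500:.0f}, new ${13_000 / 500:.0f}")
```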
 
I think this has to do with competition as well -- there are no alternatives to TSMC for fabless companies, except for maybe Intel Products.
For the right products -- which TSMC correctly identifies as "HPC with dense power grids and active cooling" -- BSPD offers significant advantages, which will get bigger with successive nodes because it's getting harder and harder to get low-resistance power connections down to the transistors with FSPD.
Intel being an HPC customer, they went with it -- no surprise.
 
If you mean that TSMC can demand big margins for their bleeding-edge processes because there's no competition, then that's true, as their financial results show.

But then they've been doing that for many years now, and are investing the profits into more fabs and more process research for the next generation and the one after that, so it's kind of difficult to object to... ;-)

Fundamentally the reason is that all the things being done to keep these PPA advances coming keep putting the manufacturing costs of the wafers up -- more process steps, more metal layers, more expensive EUV masks, more expensive EUV steppers. In N2 some vias are now triple-patterned EUV... :-(

And yes, high-NA EUV will help with this (fewer masks and steps), but the astronomical cost of the machines and the lower throughput (anamorphic optics mean the maximum reticle area on silicon is halved) mean there probably isn't any cost saving yet (will there ever be?), which is why TSMC are holding fire on it for large-scale introduction.
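
The reticle-area halving is easy to quantify; here's a rough sketch (standard 26x33 mm full field vs the 26x16.5 mm high-NA half field, ignoring edge loss) just to show why full-field layers need roughly twice the exposures per wafer.

```python
# Rough exposure-count comparison for low-NA vs high-NA EUV fields.
# Uses the standard 26x33 mm full field and the 26x16.5 mm high-NA half
# field on a 300 mm wafer; edge/partial-field losses are ignored.

import math

def fields_per_wafer(field_w_mm, field_h_mm, wafer_d_mm=300):
    wafer_area_mm2 = math.pi * (wafer_d_mm / 2) ** 2
    return wafer_area_mm2 / (field_w_mm * field_h_mm)

low_na = fields_per_wafer(26, 33.0)     # ~82 exposures per full-field layer
high_na = fields_per_wafer(26, 16.5)    # ~165 exposures for the same layer
print(f"exposures per wafer: low-NA ~{low_na:.0f}, high-NA ~{high_na:.0f}")
```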
 