
TSMC N2 specs improve, while Intel 18A gets worse

Process development and customer qualification vehicles (or, in the case of early N2 customers, A0 steppings for their lead products). 5K WSPM isn't even particularly large from the perspective of a pilot line, and certainly nowhere near enough for HVM for anything but Apple's lowest-volume products.

The point is that TSMC is ramping N2 at a faster pace than they ramped N3, which is a proper milestone. Apple could certainly use N2 in the next iPhones, worst case by splitting production between N3 and N2 for the iPhone and iPhone Pro if demand exceeds supply. Apple can still claim first to the node, which is a big deal in the trenches.

In the past Apple would tape-out in December for the next family of iPhones. Remember, the Apple SoC is generally a derivative design based on the previous version so it is less complicated for TSMC to manufacture. Same thing now with the M1 series.

If you really want to know why TSMC is so conservative with process development it is because of Apple and their delivery dates. Prior to Apple, process delays were commonplace in the semiconductor industry. Not once in my memory has Apple delayed an iPhone launch.
 
The High-NA has T2T and pupil fill limitations at ~20 nm pitch where it was supposed to take over from low-NA, going by SPIE papers 120520G (ASML) and 1321505 (imec). Depending on the requirements, more passes may still be needed.
It was my understanding that High-NA would allow a smaller etch, but perhaps I have extrapolated this to mean that for a larger etch, it could also eliminate passes .... which may not be correct. Thanks.

@nghanayem ,

Thank you for taking the time to so completely explain some of these complex issues. I had to do more than a little searching on your terms to keep up with your line of thinking ;).
In the modern day, much of the scaling is done by shrinking the space between transistors rather than the size of transistors
Which is exactly why I was thinking that 18A would produce more dense chips than N2 ... but there is obviously something wrong in my thinking.
What are you talking about? TSMC is terrible about disclosing anything interesting beyond the marketing fluff (vague V-F curve for some ARM core in a like for like comparison, a non-specific full chip density uplift, min metal pitch, poly pitch, and uHD SRAM bitcell or SRAM macro scaling depending on which looks better). Intel consistently gives detailed electricals. Always gives the dimensions for basic 4T NAND logic gates, all SRAM bitcell sizes, metal layer pitches/metallization schemes.
Perhaps... but many articles I read reference the information TSMC provides and then beg off on the Intel metrics or leave a "?" in the column... so I am not sure why this is. It is also possible that I haven't been following the tech as closely the last 10 years as I did the previous 20 :). Furthermore, the metrics, as you point out, are largely not painting the whole picture (which I believe is intentional).
Also, N2 is around a year after 18A. You would sure hope N2 is better or TSMC would be behind for years even if you want to assume 14A never comes out.
AFAIK, N2 will be producing chips in "Late 2025", but I am not sure what "chips" they are talking about just yet. This is generally how I have said a process is "ready": when the first products using it are "ready". Intel's 18A will first be used on their Clearwater Forest server processor, I believe. If you look at when that processor is expected to launch, it is looking like Q3 2025 from the graphics. This is down from earlier expectations, where different quotes (some from Intel) were saying 2024. Since the most I can see right now is the iPhone 18 processor being targeted for N2, and that indeed is not due out until 2026, your prediction of "a year behind 18A" would be prophetic indeed.
Logic would dictate that back during the Bob Swan days 20/18A were originally intel "5nm" and were originally intended to be a N3 competitor rather than an N2 competitor.
They should just start calling them silly names like Android OS versions of old "Ice Cream Sandwich" as an example. Instead of calling it "18A", it could be "Ultimainium Supreme" ..... you know, like "Netburst" ;).

In all seriousness, the names are just useless as they are no longer indicative of much of anything that is useful. Once upon a time, CPUs on PCs were named after their clock frequency. That has certainly stopped (finally).
You don't understand properly. 7nm doesn't NEED EUV. TSMC N7 has no EUV and is just fine. EUV is an overly simplistic scapegoat for i10nm problems.
Perhaps. I would not presume to debate you seriously as it is silly for me to do so with someone as knowledgeable as you; however, I would contend that it is also impossible to prove that Intel would not have had an easier time with the 14++++ fiasco had they gone all-in with EUV earlier like they have now with High NA.

As someone that has led many large-scale and very complex product launches, I do absolutely agree that much of what makes or breaks a product is how you manage risk and police good process. The best technology in the world cannot save you if you muck these things up. The product is just doomed.
What piece of information in the past 6mo makes you think 14A has slipped 12mo?

Fair point. I think Intel has left enough between the lines with different statements to leave the statement of no slip intact even though 20A was eliminated. I don't have any hard evidence of a 14A slip.
Based on TSMC's statement of development time for new nodes increasing and still sticking to two process development teams, A14 will be launching products in that 2028/29 timeframe.
I have heard that products are planned for 2027; however, I can't fault your logic.
Bottom line: TSMC has a healthy dose of paranoia embedded in their company culture. Intel and Samsung do not.
... and perhaps it is this bottom line that still leaves me with a gut feeling that Intel will slip Clearwater Forest and 18A, while I am more inclined to believe the TSMC release timeline.

Still, one has to wonder why the iPhone 17 (which is rumored to be out around Sep 2025) fell back from N2 and now it seems only the iPhone 18 will use it.

Of course, other than Intel, no other firm will be producing chips on 18A in 2025 either.

Clearwater Forest may well be an ideal product for 18A. I am thinking that the higher-clocked Panther Lake may pose more issues though.
 
This is a common misunderstanding. Putting aside the transistor variation/process maturity part of the equation limiting frequency, SRAM Vmin, etc., there are two major issues. One, as you mentioned, is the growing problem of rising interconnect RC slowing down chips. The other is that in a post-Dennard world all parts of the transistor are not scaled linearly (and even if you did, factors like leakage would prevent you from getting the projected linear power reduction and frequency bump).

In the modern day, much of the scaling is done by shrinking the space between transistors rather than the size of the transistors themselves. Assuming you change nothing other than, say, reducing the space between devices, total performance and power characteristics would degrade. The reason for this is parasitic capacitance. An easy example to visualize is the gates of your NMOS and PMOS. Both are giant pillars of metal. Thus, they act as parallel plates and create parasitic capacitance that slows operation and increases power consumption. Now apply this to your individual fins/nanosheets, contacts, etc., and your various "DTCO" tricks end up degrading how the devices perform (of course assuming all else is equal). Parasitic cap linearly degrades your power and performance (power = (1/2)*f*C*V^2).

Of course, having every new process node be a regression would be unacceptable, so you see tons of innovation in materials and chemistry to not only claw the performance back, but even exceed the performance of the old node. People do things like increase channel strain, SiGe PMOS, shorten contacts, depopulate excess metal, add low-K spacers, and increase fin drive current so you can do fin depopulation.
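The linear dependence on parasitic capacitance can be sketched numerically. This is just a toy model of the power = (1/2)*f*C*V^2 relation above; the frequency, voltage, and capacitance values are made-up illustrative numbers, not real process data:

```python
# Toy model of dynamic switching power: P = (1/2) * f * C * V^2
# All numbers are illustrative, not real process figures.

def dynamic_power(f_hz, c_farads, v_volts):
    """Power dissipated charging/discharging a switched capacitance."""
    return 0.5 * f_hz * c_farads * v_volts ** 2

f = 3e9         # 3 GHz switching frequency (hypothetical)
v = 1.0         # 1.0 V supply (hypothetical)
c_base = 1e-15  # 1 fF of switched capacitance per node (hypothetical)

p_base = dynamic_power(f, c_base, v)
# Packing devices closer adds parasitic cap: +20% C -> +20% dynamic power
p_denser = dynamic_power(f, 1.2 * c_base, v)

print(p_denser / p_base)  # power degrades linearly with capacitance
```

Which is exactly why every DTCO shrink trick needs a matching material/chemistry innovation just to break even.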

Now with the context: yes, the leakage does indeed get worse as you scale. I don't remember the numbers, but I would not be shocked in the least if N3 has lower leakage than intel 7 even in spite of the N3 devices being smaller. Leakage reductions are not as large of a benefit to intel's products as they are to TSMC's most important customers (causing divergent priorities). There is also intel's thicker Gox, which comes with the penalty of channel control (see below). Finally, it isn't exactly a secret that an N3 transistor isn't really that much smaller than an intel 7 one. For example, the poly pitch has a 0.89 scale factor (which is worse than your historical full-node shrink of 0.7x). When most of your density gain is coming from cell height reductions, the leakage problem slows down.

My understanding is that the leakage getting worse on new nodes is more of a problem at lower power. For something like an intel desktop CPU, the power is scaling quadratically with voltage, linearly with frequency, and additively with leakage. Since that intel CPU would be using ultra-low threshold voltages, the leakage was always going to be abysmal, be it on 2"nm" or 65"nm" (exaggerated for dramatic effect, but you get the idea). RC of interconnects is, as you say, an unavoidable problem, and intel 7 has some advantages here. Just like with parasitic cap, the interconnect RC will impact all speeds equally (assuming you don't have to worry about dielectric breakdown/arcing of the ILD, which I am unsure how reasonable an assumption that is at ultra-high V). The thermal density thing is, as you say, a big issue. One that would presumably be much more of a limiter at higher V (on account of the power dissipation scaling the fastest with V).
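The scaling relationship described above (quadratic in V and linear in f for dynamic power, with leakage added on top) can be sketched with a toy model; every constant here is hypothetical, chosen only to show why leakage matters relatively more at low voltage and frequency:

```python
# Toy power model: P_total = P_dynamic + P_leakage
#   P_dynamic ~ C_eff * f * V^2  (quadratic in V, linear in f)
#   P_leakage ~ V * I_leak       (roughly linear in V for a fixed leakage current)
# All constants are made up for illustration only.

def power_split(v, f_hz, c_eff=1e-9, i_leak=0.5):
    p_dyn = c_eff * f_hz * v ** 2
    p_leak = v * i_leak
    return p_dyn, p_leak

# Low-voltage, low-frequency operating point: leakage is a big slice of the total
p_dyn, p_leak = power_split(v=0.6, f_hz=1e9)
low_v_leak_share = p_leak / (p_dyn + p_leak)

# High-voltage, high-frequency (desktop-style) point: dynamic power dominates
p_dyn, p_leak = power_split(v=1.3, f_hz=5e9)
high_v_leak_share = p_leak / (p_dyn + p_leak)

print(low_v_leak_share > high_v_leak_share)  # leakage share shrinks as V and f rise
```

In other words, the same leakage current that is a rounding error for a desktop part at high V/f can dominate the power budget of a low-voltage mobile part.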

That is expected behavior. The returns are often not as large at high voltages, but I can't really think of many (if any) instances where you saw large performance improvements at the low and mid range and a regression at the high range.

Major transistor architecture changes break performance. The performance then needs to be reengineered from scratch back into the process. I don't really have many good examples of things that broke performance during the SiO2 -> HfO2 or planar -> finFET transitions. One I can think of is channel definition on a finFET. A finFET has lower active area in the same footprint as planar (on account of the empty space between the fins). The way you do the S/D and well implants also changed significantly. So if you wanted your finFET to actually be better than your planar FET, you needed to put in the engineering work to get a tall enough fin, with good enough profiles, properly done implants, and a well-shaped S/D.

Pat G said something once whose analogy I really liked. He said something to the effect of how intel 3 is the end of the turbocharging era and how 18A is like the first EVs. It really is true, though: when you change device architectures, it upends the paradigm. Things that worked no longer work and you need to figure out how to reimplement them. As an example, placing your gate first was a standard practice as it allowed you to self-align the S/D to your gate and eliminate some nasty alignment-related issues. With the adoption of HKMG this was no longer possible. In theory, you could have done the old gate-last process to form your metal gate. The solution people actually use, though, is the well-known replacement gate scheme to get the best of gate first (self-aligned S/D) and gate last (HfO2 being able to be deposited after all the high-T operations that would have damaged it are complete). Considering 2nd-gen 22nm parts and everything since then have surpassed the efficiency of Sandy Bridge, I would just blame that on process maturity, or the 22nm performance vintage failing to hit the final performance targets intel wanted in time for Sandy Bridge's launch.

Another major factor that I can't believe I forgot to mention for why the Vmaxes are different: comparing across similar nodes, Intel always has thicker Gox than the thin gate devices for an equivalent TSMC process. Historically, when you look back at IEDMs from the 2000s/2010s (or even intel 4/3 versus intel 3-E), Intel's initial version of the process will have no thick gate devices. No thick gate means you can't natively do high voltage, which really restricts what you can do on the analog side of things. Although interestingly, you apparently don't "need" high voltage devices to do fancy analog stuff. I saw an interesting presentation from ISSCC last year on the topic that, if I am going to be honest, half flew over my head. But either way you cut it, doing high voltage without thick gate will be needed going forward. TSMC mentioned at their 2024 symposia that N2 won't have thick gate oxide support and that they made a set of tools to allow easier porting of pre-N2 analog IPs to N2. I have no clue how you would even begin to go about it while still having space for the gate metal between the nanosheets.

With that tangent done, Intel would get around the initial process not having a thick Gox by making the default Gox unusually thick. This gave intel chip designers high enough voltages to make the PLLs, FIVRs, clock trees, and interfaces they wanted for client CPUs. Then, like clockwork, at the next IEDM you would see intel report on the "SOC" version of the previous year's process with thick gate devices (and during Intel's mobile phone adventures you would see things like thin gate devices added to the SOC process for lowest standby power). The "SOC process" would then be used for various kinds of chipsets (PCH, mobo chipsets, WiFi, Bluetooth, sound, Thunderbolt, security chips, etc.) and some of the ATOM phone/tablet SOCs.

But we were talking about frequency and voltage scaling on intel nodes vs TSMC nodes, not Analog Devices (pun very much intended). Even though it isn't the primary motivation, that thicker Gox allows for higher voltages. Because all the transistors have this thicker Gox, rather than only the analog devices that need thick gate having it, your basic logic will (all else being equal) be able to operate at a higher Vmax. The thicker Gox does, however, degrade your control of the channel, hurting switching and leakage. It also makes it harder to scale poly pitch.



What are you talking about? TSMC is terrible about disclosing anything interesting beyond the marketing fluff (vague V-F curve for some ARM core in a like for like comparison, a non-specific full chip density uplift, min metal pitch, poly pitch, and uHD SRAM bitcell or SRAM macro scaling depending on which looks better). Intel consistently gives detailed electricals. Always gives the dimensions for basic 4T NAND logic gates, all SRAM bitcell sizes, metal layer pitches/metallization schemes.

Intel seems to like going into detail about their processes at VLSI during the early summer.

Most likely never. We will get some low-detail paper that will drop the poly pitch and the min metal pitches with other charts that we have more or less seen already. Best case, we also get a couple of papers on some high-speed SerDes on N2, but that is as good as we are likely to get from TSMC beyond what they have released to the public already (at least if every major TSMC node since like 10FF is anything to go off of).

TSMC has two main advantages for scaling SRAM. One is that their thinner Gox allows TSMC to more easily shrink their poly pitch. Two is that, historically, TSMC processes have better leakage and lower Vmin than an equivalent intel process. This allows smaller SRAM bitcells to maintain their data better. Also, N2 is around a year after 18A. You would sure hope N2 is better, or TSMC would be behind for years even if you want to assume 14A never comes out. TSMC also seems to have better periphery/macros than intel. Looking at recent history, intel 7 trailed N7 SRAM density by like 15% despite similar logic density, and intel 4/3 also had a 15% HD bitcell density disadvantage to N5/4 despite similar maximum logic densities.

There is also the elephant in the room. The old intel "7nm" was labeled as a N5 competitor and "5nm" was a N3 competitor. Per intel, intel 4/3 are the old "7nm" with various performance enhancements to narrow the gap with N3, as well as some foundry ecosystem enablement. This is reflected in reality, with intel 3 having N5 HD logic density, being behind on SRAM, far ahead on HP logic density, and superior to N4 on HPC power-performance. Logic would dictate that back during the Bob Swan days 20/18A were originally intel "5nm" and were originally intended to be a N3 competitor rather than an N2 competitor.

You don't understand properly. 7nm doesn't NEED EUV. TSMC N7 has no EUV and is just fine. EUV is an overly simplistic scapegoat for i10nm problems. I have written about the topic at length before, but I don't feel like repeating myself, so just dig around if you care. TL;DR: the problems were poor process definition even with the information intel had at the time, and poor risk management. Per Mark Phillips and prior SPIE papers, 18A doesn't really have very many multipatterned layers. There are some tip-to-tip issues, but nothing resolution related. So even if Intel brought in high-NA for 18A it wouldn't really do very much. Intel also showed off directional etch results which were pretty meh. But as the process matures, maybe it makes sense to change the one or two multi-pass layers MP mentioned to single pass with CD elongation.

What piece of information in the past 6mo makes you think 14A has slipped 12mo?

1. TSMC has never said that.
2. TSMC got their first high-NA tool earlier this year.
3. When intel thinks it is ready they will insert it, and when TSMC thinks it is ready they will insert it.
4. The earliest TSMC can reasonably insert high-NA would be at A14 (as N2/A16 are a pair). I doubt it would ever make sense to rip out existing low-NA tools for layers that might be simpler with high-NA. Since high-NA wasn't available for HVM 2-3 years ago, it missed the insertion window for N2/A16.
5. Based on TSMC's statement of development time for new nodes increasing and still sticking to two process development teams, A14 will be launching products in that 2028/29 timeframe.
6. In this day and age TSMC ramps processes to a higher maximum capacity than intel at peak ramp. TSMC needs to secure more tools to adopt high-NA than intel does.

For the above reasons I am not overly concerned for TSMC. Maybe ASML is ready in time for intel to ramp the version of 14A with high-NA and intel gets "the win". If that does occur, yes, that would be a nice thing for 14A wafer cost compared to N2/A16, but we aren't talking about some coup that will cause N2 utilization to collapse.
Sensei amazing post
 
I remember Intel being hailed as an EUV pioneer at the SPIE conferences in the 2010s. Intel received the first EUV system in 2013 and got EUV into production in 2023. Meanwhile, TSMC and Samsung started EUV production in 2019. I guess the term "pioneer" does not mean one was successful, just the first one to touch it?

HNA EUV is Déjà vu all over again for me. I believe it was in an investor call when Intel said they would have production HNA EUV wafers in 2027.
I don't really see the situations as similar. 10nm was designed with the intent of never inserting EUV (which, fair enough, because they intended to go into HVM years before even Samsung tried to adopt EUV). For 14A, intel has already demonstrated a non-high-NA fallback option and publicly said they will begin to adopt high-NA once it is mature enough to give a wafer cost reduction on whatever 14A layers they were considering it for, and that they expect this crossover will happen for initial 14A production.
The first ASML HNA EUV system arrived at Intel at the beginning of 2024 and production is in 2027? I guess it depends on what you mean by production. Does that include making a profit on HNA EUV wafers?
For what it is worth, the first production EUV systems started getting installed in 2016 and the first products made with EUV wafers were launching in 2019. Comparatively, high-NA is like 1/8th the challenge of getting first-gen EUV to the market. The only real changes are some parts of the optics subsystem, a strong incentive to complete the development of HVM-ready metal oxide resists, and the maximum field size reduction. N7+ only used EUV for 4 layers for contacts and the like; meanwhile N6 only had 5 layers. I genuinely don't know which layers intel is considering moving to high-NA on 14A, but I wouldn't be shocked if it was a similar story. When you are only talking about a few layers, you don't exactly need a bunch of high-NA tools. Common consensus is that TSMC needed mid single digits of tools for N7+ and N6 production. With Intel's client chips moving to having minimal content on the most advanced process technology, the internal wafer requirements strike me as unlikely to ever be as big as peak combined N7+/N6 production. It is for these reasons that Intel's high-NA plans don't exactly strike me as aggressive. Worst case, if things do go off the rails, there is a functioning fallback option to work with.
I'm not saying the EUV delays were all Intel's fault, ASML had a big role in at as well. What I am saying is that an IDM plays by different rules than a foundry. As a result, Intel views the semiconductor manufacturing world differently, especially under Pat Gelsinger. This works well when you are the undisputed technology leader. It does not work so well when you are not.
I agree with the sentiment of different incentive structures at an IDM vs a foundry driving some differences in what is considered "optimal" for the different businesses. Given Intel's specific/unusual product mix, it even drove practices that were optimal for Intel's business but would be considered strange to other IDMs. However, high-NA usage isn't one of these foundry vs IDM things, Dan. All chip designers love more design flexibility, all love lower cycle time, all love the simplicity of direct-print features, all love lower defect density, and every semi maker loves lower wafer costs. There is no ego about it; it is all about driving towards the solution that works the best for the designers and, by extension, the manufacturer.
I think we can all agree that trust is an important part of the semiconductor industry and both Intel and Samsung have breached our trust. Setting expectations is the cornerstone of trust and that is something Intel needs to prioritize, my opinion.
Absolutely. I think 5N4Y (assuming intel doesn't stumble right at the finish line) really is the minimum bar of entry to getting people to take Intel seriously. MediaTek said they were very happy with how Intel adapted to their needs and learned the 101 of being an excellent foundry supplier. Which is good, but things must go even better on traditional/advanced packaging, intel 3, and 18A. I am sure all of those early customers are fully expecting/accepting of an Intel that is learning with them, but every single customer has to have fewer sore spots than the last, and things need to get to the point of first-time-right.
The point is that TSMC is ramping N2 at a faster pace than they ramped N3 which is a proper milestone.
That may be true (very much believable, given almost all TSMC nodes are bigger and ramp faster than all others before), but it doesn't change the fact that 5K WSPM is peanuts. N2 will eventually get there, and based on all of the N2 fab phases TSMC is building, peak N2 ramp will definitely surpass peak N3 ramp even with N2's significantly higher mask layer count.
Apple could certainly use N2 in the next iPhones. Worst case by splitting production between N3 and N2 for the iPhone and iPhone Pro if demand exceeds supply. Apple can still claim first to the node which is a big deal in the trenches.
Unless TSMC has invented a time machine to take those wafers they plan to start in Q4 (which would come off the line the following Mar/Apr) and send them back one year into the past, 2025 N2 iPhones are literally impossible per TSMC's own statements. N2 production simply would have needed to start late last year if Apple wanted to launch product in Q3 2025.
Even if the 5K WSPM thing is true, it is completely insufficient for ramping even the pro-only iPhones. Customer qualification vehicles, F20's material to match the parameters of the F12 line (although at this point that may already be done), and a significant part of F20 technology development material would be running as hotboxes that would degrade the in-practice capacity of the line below 5K WSPM. My finger-in-the-air educated guess is that TSMC would need to be at 15-30K WSPM before starting iPhone pro-only production. Given N5 fab modules are about 28K a pop, TSMC would need F20 to have at least one phase full and maybe another phase partially full of equipment to really begin HVM. Right now they have a fraction of one phase.
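To make the back-of-the-envelope capacity math concrete, here is a rough sketch. Every input (die size, yield, unit volume) is a hypothetical assumption for illustration, not a TSMC or Apple figure:

```python
import math

# Rough wafers-per-month estimate for an iPhone-class SoC ramp.
# Every input here is a hypothetical assumption, not real TSMC/Apple data.

wafer_diameter_mm = 300
die_area_mm2 = 110        # assumed SoC die size
yield_fraction = 0.70     # assumed yield at ramp

# Crude gross-die count, ignoring edge loss and scribe lines
wafer_area_mm2 = math.pi * (wafer_diameter_mm / 2) ** 2
gross_die = int(wafer_area_mm2 / die_area_mm2)
good_die = int(gross_die * yield_fraction)

units_per_quarter = 20e6  # assumed "pro-only" launch-quarter volume
wafers_needed = units_per_quarter / good_die
wspm_needed = wafers_needed / 3  # spread over ~3 months of wafer starts

print(good_die, round(wspm_needed))  # -> 449 14848
```

Even with these made-up inputs, the estimate lands around the low end of that 15-30K WSPM ballpark, i.e. several fab phases' worth of equipment, not a 5K WSPM pilot line.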
In the past Apple would tape-out in December for the next family of iPhones. Remember, the Apple SoC is generally a derivative design based on the previous version so it is less complicated for TSMC to manufacture. Same thing now with the M1 series.
Cycle times are months longer than those good ol' days though, Dan. CC has reiterated this point time after time: N2 is a FAR more complex process than N3 (which already had much longer cycle times than N5 or N7 before production swapped over to the much simpler N3E). I wouldn't be shocked if the production-candidate stepping needed to happen 4-5Q before launch now (rather than 3Q before launch).
If you really want to know why TSMC is so conservative with process development it is because of Apple and their delivery dates. Prior to Apple, process delays were commonplace in the semiconductor industry. Not once in my memory has Apple delayed an iPhone launch.
I wouldn't really call TSMC super conservative. But you are spot on about Apple never missing an iPhone launch. Intel may, like Apple, always have a new final product every year, but never missing an original launch window is definitely not on Intel's resume.

Which is exactly why I was thinking that 18A would produce more dense chips than N2 ... but there is obviously something wrong in my thinking.
At this time, it is not publicly known what the minimum metal pitch of 18A is. Per intel, intel 4 + PowerVia has the same BEOL as 20A and a minimum metal pitch of 36nm (the same as intel 7). Granted, intel said there will be a line width reduction on 18A, but unless you are a moron like Dylan Patel and think min pitch will go down to 21nm (tighter than N3 or what people expect from N2) and that intel will switch from a Co liner with Cu fill to W with a liner for the metal lines, there is little reason to believe minimum metal pitch would go below the intel 4 value of 30nm. In theory, Intel could have chosen to keep minimum metal pitch the same as intel 4 and more heavily scaled the device so it could fit inside the smaller area. But that would be more difficult to yield, increase per-wafer cost, and have higher RC than a relaxed metal stack. Making your process have 7"nm"-class minimum metal pitches with 3"nm"-class density (and I will remind you that even the densest 3"nm" process, N3, is only 10-15% less dense than TSMC's own 2"nm"-class process N2) seems like a smart idea to minimize process risk, maximize performance, and accelerate TTM.
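The interplay between metal pitch, poly pitch, and cell density discussed above can be illustrated with simple pitch arithmetic. All pitches and track counts below are hypothetical, chosen only to show how a relaxed metal pitch can coexist with good density:

```python
# Back-of-the-envelope standard cell density from pitches.
#   cell height = track count * min metal pitch
#   cell width  ~ contacted poly pitch (CPP)
# All pitches and track counts are hypothetical, for illustration only.

def cells_per_mm2(cpp_nm, m_pitch_nm, tracks):
    """Very rough standard cell density from cell height x width (nm^2 -> mm^2)."""
    cell_area_nm2 = (tracks * m_pitch_nm) * cpp_nm
    return 1e12 / cell_area_nm2

# Hypothetical "old" node: 54nm CPP, 36nm metal pitch, 6-track cells
old = cells_per_mm2(54, 36, 6)
# Hypothetical "new" node: CPP scaled only 0.89x, same metal pitch, 5-track cells
new = cells_per_mm2(54 * 0.89, 36, 5)

print(round(new / old, 2))  # -> 1.35; most of the gain comes from cell height
```

With these toy numbers, a ~35% density gain is possible while barely touching the metal pitch at all, which is the DTCO point: you don't need leading-edge pitches to hit a leading-edge density number.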
Perhaps... but many articles I read reference the information TSMC provides and then beg off on the Intel metrics or leave a "?" in the column... so I am not sure why this is. It is also possible that I haven't been following the tech as closely the last 10 years as I did the previous 20 :). Furthermore, the metrics, as you point out, are largely not painting the whole picture (which I believe is intentional).
I have no clue what you are talking about in this case. Without a specific example, I will point out that most mainstream websites have actually no clue what they are talking about, or even what they are looking at, when it comes to semiconductor process technology.
AFAIK, N2 will be producing chips in "Late 2025", but I am not sure what "chips" they are talking about just yet.
iPhone 18 SOCs of course. You need to start making the silicon then if you want to have 10s of millions available on store shelves starting in September.
This is generally how I have said the process is "ready".... when the first products are "ready" that are using it. Intel's 18A will first be used on their Clearwater Forest server processor I believe. If you look at when that processor is expected to launch, it is looking like Q3 2025 from the graphics. This is down from earlier expectations where different quotes (some from Intel) were saying 2024.
Intel said they would be "manufacturing ready" in late 2024. They have never once committed to CWF or PNL launching in 2024 or even in early 2025. You have to keep in mind, when a product ships is different from when a process ships. Completed wafers need to go through sort, assembly, test, assembly at the system maker, validation, distribution, and all of the shipping in between those various steps and substeps. Considering intel taped out A0 for CWF and PNL early this year and launched PDK 1.0 mid this year, things look to be progressing well enough. The only black mark I can think of is that, if memory serves, intel said January for full HVM start (which is a month later than intel committed back in 2021 and than what was achieved on intel 4).
Since the most I can see right now is the iPhone 18 processor being targeted for N2, that indeed is not due out until 2026 which would make your prediction of "a year behind 18A" prophetic indeed.
There is nothing prophetic about it. TSMC publicly said they started N5 production in like Feb 2020, come September N5 iPhones launched. N3 has a longer process and was facing early yield issues; in the end they said they started production in Dec'22 and N3 iPhones launched in Sept'23. If TSMC says N2 production starts in Q4'25 and A16 around a year after that, then I have no clue why I wouldn't assume that N2 iPhones launch in Sept'26 and why final products with A16 chips in them wouldn't be launching in 2027.
They should just start calling them silly names like Android OS versions of old "Ice Cream Sandwich" as an example. Instead of calling it "18A", it could be "Ultimainium Supreme" ..... you know, like "Netburst" ;).
I wish people would just call their process node names by their internal names. I know GF, Micron, and Intel all have internal naming schemes, but I don't know if Samsung/TSMC do. Failing that, I would just use Greek letters followed by a dash, number, and letter. So maybe do something like intel 4 = alpha-1, intel 3 = alpha-2, intel 3-T = alpha-2T, 18A = beta-2, etc.
In all seriousness, the names are just useless as they are no longer indicative of much of anything that is useful. Once upon a time, CPUs on PCs were named after their clock frequency. That has certainly stopped (finally).
Not entirely, they follow some general trends. 5"nm" class is EUV finFET, 3"nm" is an extension of 5"nm" process with a gimmick. For N3 it was using finFLEX to get significantly better scaling for a given power-performance. For intel 3 it was a HUGE performance increase and significant power savings. For SF3E/SF3 it was reusing MEOL and BEOL from SF4 with a very rough around the edges GAA flow that was lacking major features. For 2"nm" class nodes the theme seems to be full GAA adoption with BSPDN either available at launch or soon thereafter.

Generally, processes within a class tend to have vaguely similar PPAs in spite of the names not technically meaning anything. Intel 7, N7, and 7LPP are remarkably close together. If 18A does indeed sit between N2 and N3P, then it is pretty darn close to N2. Unfortunately, at this time Samsung 2 is hard to place. Intel 4, N5/4, and SF4 are also very close together. I think the biggest divergence is maybe N3 or 16FF/10FF. Intel 3 is well-matched to N3E on performance and HP density but lags significantly on SRAM and HD logic, SF3 is reasonably well-matched on density but very far behind on electricals. TSMC 10FF and Samsung 10LPP are closely matched, as are 14LPP and 16FF. Intel 14nm on the other hand kind of awkwardly sits like 2/3 of the way between them.
Perhaps. I would not presume to seriously debate someone as knowledgeable as you; however, I would contend that it is also impossible to prove that Intel would not have had an easier time with the 14++++ fiasco had they gone all-in on EUV earlier, like they have now with High NA.

As someone who has led many large-scale and very complex product launches, I absolutely agree that much of what makes or breaks a product is how you manage risk and enforce good process. The best technology in the world cannot save you if you muck these things up; the product is just doomed.
I will use a point I often cite to explain the fallacy of this argument. Samsung went faster on EUV than TSMC, but people generally agree that TSMC had just the right adoption of EUV: a very limited layer count for the contact segment in 2019's N7+, with full adoption throughout the process flow on N5 in 2020. Intel 10nm stabilized in time for Ice Lake in 2019 and was high-yield at giga-fab scale in 2020 for Tiger Lake and Ice Lake-X. If Intel had wanted to use EUV for 10nm, then the absolute earliest 10nm could have come out, assuming no early challenges on this EUV version of the process, would have been 2019. This hypothetical EUV 10nm would have been cheaper to make, and Intel today would be in a better spot if that were the case (if for no other reason than building out EUV-capable fab shells back when Intel had tons of money and better Intel 7 margins to fund 5N4Y). But to say 10nm would have been less late if Intel had used EUV is completely wrong. Even under the most charitable timeline of events, 10nm gets to market at the same time it did in our actual timeline.
I have heard that products are planned for 2027; however, I can't fault your logic.
Since A16 only starts production in late 2026, and by extension products would be launching sometime in 2027 (my guess is 2027's M-series chip, or maybe just the M Ultra series, launching around mid '27), it would seem weird for TSMC to launch A14 iPhones in that same year.
 
Cycle times are months longer than in those good ol' days though, Dan. CC has reiterated this point time after time: N2 is a FAR more complex process than N3 (which already had much longer cycle times than N5 or N7 before production swapped over to the much simpler N3E).
Silly question - where is N3B on the complexity scale vs N3 and N3E?
 
Silly question - where is N3B on the complexity scale vs N3 and N3E?
Not a silly question, as the terminology is garbled depending on who you talk to. "N3B" is "N3". "N3B" is not an official term you will ever see in any of TSMC's official documentation; to them there is N3, and there is N3E and its successors. N3B is shorthand some people like to use to specifically call out the original N3 from the rest of the N3 family, so there is less ambiguity about what you are talking about. Generally I say N3 to mean what others call N3B, and "N3E family" to refer to N3E and its related processes (but I'm not perfect, and occasionally I say N3 as a catch-all for all of TSMC's 3"nm"-class process nodes).

Not that there is a huge PPA difference between N3 and N3E anyhow. Actual N3/N3B products released to market in 2023 had an optical bloat to shore up yield risk that, for all intents and purposes, made N3 near enough to equivalent in density to N3E (I'm not sure if Intel's 2024 products also use an optical bloat or are at the intended feature sizes, as I haven't looked through the teardowns yet). I also just think TSMC kind of did N3E dirty by overemphasizing certain aspects of N3 to make it look better than it was in practice, making N3E look worse by comparison.
 
Silly question - where is N3B on the complexity scale vs N3 and N3E?

Just between you and I Xebec, N3B is what Apple used, it is the first version of N3. Apple gets a custom version of TSMC processes by contract that are specific to Apple requirements. Apple has what you call a "most favored nation contract". Apple gets first access and best pricing. When TSMC speaks about process roadmaps they do not include the Apple versions which is cause for confusion with the media.

The word in the trenches is that Apple will use N3 again this year for iPhones but I have a hard time believing it. The Apple silicon group that I know and love is always first to new TSMC silicon. If not Apple it will be QCOM for the first time since 28nm and I would not want to be the Apple person who let that happen.

The rumors that Apple and other companies are passing on TSMC N2 due to cost are absurd. TSMC builds fabs based on customer orders. Last time the rumor was that Apple bought out all of the N3 capacity. Also nonsense. CC Wei is a shrewd businessman. No way does he leave N2 money on the table. We will know more on the January 16th investor call but remember, TSMC does not single out Apple when talking about roadmaps. They are the customer that must not be named. :ROFLMAO:
 
Cycle times are months longer than in those good ol' days though, Dan. CC has reiterated this point time after time: N2 is a FAR more complex process than N3 (which already had much longer cycle times than N5 or N7 before production swapped over to the much simpler N3E).

Are you saying I'm old? :ROFLMAO: Why is N2 more complex than N3? It has less EUV layers, correct? It is comparable to N3E but uses GAA versus FinFETs, correct? One thing I can tell you is that N2 is not more complex to design to based on the current PDKs. So what is the complexity you speak of?

Being old, I remember when FinFETs came to town. Complexity took on a whole new definition. I do not think GAA is in the same class of complexity as FinFETs were.
 
Just between you and I Xebec, N3B is what Apple used, it is the first version of N3. Apple gets a custom version of TSMC processes by contract that are specific to Apple requirements. Apple has what you call a "most favored nation contract". Apple gets first access and best pricing. When TSMC speaks about process roadmaps they do not include the Apple versions which is cause for confusion with the media.
Apple may have their own optimizations, but from the fab's perspective the process is largely indistinguishable. In fact, the initial process Apple uses is always strictly inferior to the more mass-market follow-up. If you look at the A14's N5 versus the N5P that all others, and Apple's A15, used, the process was under worse control and had higher capacitance. FWIW, Intel also used their own HPC-tuned version of N3/N3B.

The Apple silicon group that I know and love is always first to new TSMC silicon.
Correct, and they will. Once N2 enters production in Q4'25 (per TSMC's roadmap), they will be the very first to run N2 production material, with their A20 Pro SoCs that will power the iPhone 18 Pro and maybe even the A20 SoCs that will power the iPhone 18.
If not Apple it will be QCOM for the first time since 28nm and I would not want to be the Apple person who let that happen.
QCOM will not launch anything on N2 in 2025 (you can take that one to the bank), and if the past half decade is anything to go off of, there won't be any QCOM SoCs made on N2 launching in 2026 either. Same deal with MediaTek.
The rumors that Apple and other companies are passing on TSMC N2 due to cost are absurd.
Correct, nobody has passed on running their 2025 products on N2, because N2 doesn't even enter HVM until the very end of the year, and the first production wafer won't leave the fab until 2026.
TSMC builds fabs based on customer orders
When you have the position TSMC does, you would be stupid to do anything else.
Last time the rumor was that Apple bought out all of the N3 capacity. Also nonsense. CC Wei is a shrewd businessman. No way does he leave N2 money on the table. We will know more on the January 16th investor call but remember,
I don't know why CC would suddenly backtrack and say "actually, N2 HVM is not starting in Q4'25 anymore; HVM began last quarter for a four-quarter pull-in on our schedule." Heck, I would be willing to put serious money down that he won't even announce N2 getting pulled into this quarter. And if TSMC doesn't start production this very quarter, it is physically impossible for any real volume of final N2 products to launch this year at all, flat out.
TSMC does not single out Apple when talking about roadmaps. They are the customer that must not be named. :ROFLMAO:
TSMC doesn't single out any of their customers. It isn't standard practice for any foundry to talk about their customers.
Are you saying I'm old? :ROFLMAO:
That wasn't my intent, but I guess that is what I said :ROFLMAO:
Why is N2 more complex than N3?
The GAA process has many novel process modules that do not exist in finFET. Anybody who has seen even a basic HNS (horizontal nanosheet) process flow could tell you it is far more complex. There is also a lot of fancy etch/dep hardware that TSMC likely had to buy specially to get their GAA process to work.
It has less EUV layers, correct?
Definitely not. 0% chance.
It is comparable to N3E but uses GAA versus FinFETs, correct? One thing I can tell you is that N2 is not more complex to design to based on the current PDKs. So what is the complexity you speak of?
Just off the top of my head: forming the fins, the channels, the S/D, the work function/transistor VTs, the inner spacer, the nanowire release step, metal gate fill, etch loading effects, LLEs, etc. The only thing more or less the same as N3E is the BEOL. And even there, TSMC is greatly beefing up their MIM cap with "SHP-MiM" in order to edge out 2020's 10nm SuperFin "SuperMIM", and adopting a more expensive/more performant Cu process to replace the Al top metal layers TSMC has been using. I will also remind you that the FEOL is the majority of the total process flow (especially for non-HPC customers, where the number of metal layers is more modest).
Being old, I remember when FinFETs came to town. Complexity took on a whole new definition. I do not think GAA is in the same class of complexity as FinFETs were.
To design for them, sure, maybe. Outside my wheelhouse, so I don't have much to say. To make them is a different story. There was simply a bigger mountain to climb with SF3/2, 18A, and N2 than there ever was with 22nm, 14LPE, or 16FF. If GAA were as easy as finFET, it would probably have come at the 5"nm" class of node, or maybe been pushed to 3"nm" to not compound risk with the EUV insertion Samsung, TSMC, and Intel did.
 