TSMC N2 specs improve, while Intel 18A gets worse

lefty

Active member
TSMC revealed additional details about the N2 node at IEDM: 24% to 35% power reduction, 14% - 15% performance improvement at the same voltage, and 1.15X higher transistor density than N3E.
Previously, in 2023, the power reduction was only 25% to 30% and the speed gain 10% - 15%, while in 2022 the density improvement was only 10%.
Meanwhile, the specs for Intel 18A have gotten worse: previously 18A was a 26% improvement over Intel 3, now it's only 15%.
Ushering in the Angstrom Era with RibbonFET and PowerVia, Intel 20A will deliver up to a 15% performance per watt improvement [over Intel 3] and will be manufacturing-ready in the first half of 2024. Intel 18A delivers an additional 10% improvement and will be manufacturing-ready in the second half of 2024.
Intel 18A to deliver 15% performance per watt increase over Intel 3
 
TSMC revealed additional details about the N2 node at IEDM: 24% to 35% power reduction, 14% - 15% performance improvement at the same voltage, and 1.15X higher transistor density than N3E.
It is within their estimates of 10-15% and 25-30%
Previously, in 2023, the power reduction was only 25% to 30% and the speed gain 10% - 15%, while in 2022 the density improvement was only 10%.
Meanwhile, the specs for Intel 18A have gotten worse: previously 18A was a 26% improvement over Intel 3, now it's only 15%.

This one is so true; they played games in the PR, here it said up to 15%.
I think what happened is that 20A became 18A (just add the additional library and feature set) and the original 18A spec went to 18AP.
 
It is within their estimates of 10-15% and 25-30%

This one is so true; they played games in the PR, here it said up to 15%.
I think what happened is that 20A became 18A (just add the additional library and feature set) and the original 18A spec went to 18AP.
Reading between the lines of the recent co-CEOs' interview: PG was pushed out, partly because he "overstated" progress along the way.

------ below are quotes from https://www.crn.com/news/components...-ai-pc-data-center-and-foundry-efforts?page=9 ------

Zinsner (pictured) said one reason the board of directors chose him and Holthaus to lead the company is because of their emphasis on transparency when communicating progress on the comeback plan devised by Gelsinger.

“We are a little bit more on a say-do basis for we're more likely to tell you things as we've accomplished big milestones that are meaningful, as opposed to early indications of success. So that's our philosophy,” he said.

“I think that's why the board actually chose us to run the interim role, because we are so transparent and operate in that way. And I think the investors can expect us to do that as we go through earnings calls and so forth,” Zinsner added.
 
This one is so true; they played games in the PR, here it said up to 15%.
I think what happened is that 20A became 18A (just add the additional library and feature set) and the original 18A spec went to 18AP.
I've seen this claim before, and it actually makes no sense. That is not how anything works.

1) 20A "with the 18A" libraries is just 18A. They are the same bloody process, and I can't understand for the life of me why this fact eludes so many people? Per intel, 18A is 20A with some performance enhancements (up to 10%) with line width reductions, and foundry design enablement (figure 1). 18A is still a foundry process, and they can't just unwind the line width reduction this late in the game without breaking design collaterals and IPs, so 18A is 18A. Always has been and always will be. Even if the "performance enhancements" made to 18A reduced performance vs 20A by 99% it is still 18A by intel's definition of what 18A is.

2) You are assuming 18A is a 0% PPW uplift over 20A. I can't for the life of me understand why people would be so certain that intel was able to add enough performance elements/features to get 20A to 100% of the final performance target, with 18A only showing a 0% improvement over 20A. Just making up numbers to demonstrate my point: who's to say that 20A isn't 1.07x the PPW of intel 3 and 18A is 1.07x (both short on performance but 20A is more short), that 20A isn't 1.1x the PPW of intel 3 and 18A is 1.045x (both are short on performance but 18A is more short), or even the exact opposite of what you and others posit (that 18A is 10% over 20A but 20A is only 4.5% better than intel 3)?
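
To make the compounding explicit, here is a minimal Python sketch of the three made-up splits above. The numbers are the illustrative ones from this paragraph, not anything Intel has confirmed.

Code:
# Illustrative only: different ways a 20A/18A split could land relative to
# intel 3 PPW. These are the made-up numbers from the paragraph above.
scenarios = {
    "both land at 1.07x (20A misses its target by more)": (1.07, 1.07),
    "20A at 1.10x, 18A regresses to 1.045x": (1.10, 1.045),
    "20A at 1.045x, 18A adds its full 10%": (1.045, 1.045 * 1.10),
}

for name, (ppw_20a, ppw_18a) in scenarios.items():
    # Same headline 18A number can hide very different 20A-to-18A uplifts.
    print(f"{name}: 18A = {ppw_18a / ppw_20a:.3f}x of 20A, {ppw_18a:.3f}x of intel 3")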

3) People forget that intel 7 (figure 2), intel 4 (figure 3), and intel 3 (4/5) all exceeded the power and performance goals set in July 2021. By my count, iso power performance of intel 4 is 5.6% above where intel originally promised. If 18A is 1.15x over the intel 3 we got, then 18A is landing at 95.8% of the way to what intel guided at the beginning of their "5N4Y" journey. Is it good that 18A is 4.2% weaker than intel thought they could achieve in 2021? No, absolutely not. But it is hardly the grand fraud of 10% weaker than people make it out to be. Yeah, Intel's a little short on 18A. Go back to 2021 and nobody thought they could ever get within spitting distance of overhauling TSMC. The armchair quarterbacks even "prophesied" that they wouldn't even have intel 4 products launched, to say nothing of running at high yields by now. Despite it all, here they are with intel 4/3 running with good yield, and multiple 18A products set to launch in 2H25. Even with the regression, intel is set to surpass N3P-HPC and N3X PPA about one year before N2 or N3X products come to market.
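
As a sanity check on that arithmetic, here is a rough back-of-envelope sketch in Python. It assumes the uplifts compound multiplicatively and that intel 3's overshoot versus its 2021 target carries through from intel 4's; the small differences from the 5.6% and 95.8% figures above come down to rounding.

Code:
# Back-of-envelope reproduction of the "18A lands ~96% of the 2021 target" math.
# Assumes multiplicative compounding of iso-power performance uplifts.

i7_bonus = 1.04            # intel 7 came in ~4% faster than promised (Fig 2)
i4_promised = 1.20         # 2021 target: intel 4 = +20% over intel 7
i4_delivered = 1.215       # Fig 3: +21.5%, on top of the better intel 7

i4_overshoot = i4_delivered * i7_bonus / i4_promised
print(f"intel 4 vs its 2021 target: {i4_overshoot - 1:+.1%}")   # ~+5%; the post quotes 5.6%

# intel 3 delivered its promised +18%, but on top of the better intel 4,
# so the same overshoot carries through to intel 3.
i3_overshoot = i4_overshoot

target_18a = 1.15 * 1.10   # 2021: 20A = +15% over intel 3, 18A a further +10%
actual_18a = 1.15          # latest claim: 18A = 1.15x the intel 3 we actually got

fraction = actual_18a * i3_overshoot / target_18a
print(f"18A lands at {fraction:.1%} of the 2021 target")        # ~96%; the post quotes 95.8%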

Fig 1. Intel's 2021 claim for what "20A" is and what "18A" is



Fig 2. Intel squeezing out a 50mV power reduction on the intel 7 performance package used for 13th gen, and enabling CCG to convert this to a 4% iso V frequency boost over 12th gen.



Fig 3. Intel 4 demonstrating 21.5% iso power performance uplift over intel 7 versus the 20% promised in 2021. Intel 7 is also 4% faster than promised, so intel 4 is really 5.6% better than intel promised in 2021.



Fig 4. Intel 3's lower fin count HD library offering an 18% iso power performance uplift, rather than the 18% uplift coming from an HP-to-HP cell comparison. This 18% is also off of a 5.6% higher iso power performance baseline. This makes intel 3 24.6% better iso power versus the 2021 intel 4 target.


Fig 5. Intel 3 leakage reductions over intel 4 were not promised and allow for better SRAM and lower chip idle power (especially when low VT devices are being used)
 
TSMC revealed additional details about the N2 node at IEDM: 24% to 35% power reduction, 14% - 15% performance improvement at the same voltage, and 1.15X higher transistor density than N3E.
Previously, in 2023, the power reduction was only 25% to 30% and the speed gain 10% - 15%, while in 2022 the density improvement was only 10%.

A few years ago, several friends at TSMC mentioned that developing GAA was extremely challenging and that they still lacked confidence in it. This is one reason I believe TSMC was conservative back then. The top priority for N2 was to ensure the new architecture was functional. The performance and efficiency improvements from N3 to N2 just needed to be sufficient to satisfy the main customers. I suppose TSMC now has much more confidence in GAA, allowing them to offer more substantial advancements.
 
I suppose it is possible that Intel has turned over a new leaf and that 18A is on-track, yielding well, and performing well, fulfilling the 5N4Y plan in stellar fashion. I just get stuck recalling the history of 14+++++ and 10+++++. Fool me once, shame on you; fool me 8 times? Ya get what I am saying ;).

I also am basing my skepticism on my engineering background and decades of large, complex programs.

When you are aggressive in your requirements, the chance of failure increases exponentially with the degree of aggression. It is very common for original requirements to be relaxed as you approach production in order to reduce risk (in fact, I can't recall a single time this did not happen).

Also, Samsung has had serious issues yielding good GAA product. The GAA process appears to be much more involved than FinFET, with more steps, and more complex steps, needed to achieve it. Intel is also sticking with low NA for 18A (a good move by my conservative standards). This will also limit the overall outcome of the 18A process. (aside) While I am at it, talking about processes in terms of the label seems quite outdated. Are there ANY traces in Intel's 18A that are only 18A wide? Once upon a time this was a good metric, but surely we can come up with something more comprehensive today?

Just looking at the changes from Intel 3 to Intel 18A kinda makes me terrified for Intel.

So while I have absolutely no data to back my skepticism, it still remains .... a result of over a decade of history.
 
I suppose it is possible that Intel has turned over a new leaf and that 18A is on-track, yielding well, and performing well, fulfilling the 5N4Y plan in stellar fashion. I just get stuck recalling the history of 14+++++ and 10+++++. Fool me once, shame on you; fool me 8 times? Ya get what I am saying ;).

I also am basing my skepticism on my engineering background and decades of large, complex programs.

When you are aggressive in your requirements, the chance of failure increases exponentially with the degree of aggression. It is very common for original requirements to be relaxed as you approach production in order to reduce risk (in fact, I can't recall a single time this did not happen).

Also, Samsung has had serious issues yielding good GAA product. The GAA process appears to be much more involved than FinFET, with more steps, and more complex steps, needed to achieve it. Intel is also sticking with low NA for 18A (a good move by my conservative standards). This will also limit the overall outcome of the 18A process. (aside) While I am at it, talking about processes in terms of the label seems quite outdated. Are there ANY traces in Intel's 18A that are only 18A wide? Once upon a time this was a good metric, but surely we can come up with something more comprehensive today?
The best metric we have now was proposed by Intel, to calculate logic density based on pitch and cell height; these are just marketing numbers.
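
For context, this is presumably a reference to the weighted-cell density metric Intel (Mark Bohr) published in 2017, which blends NAND2 and scan flip-flop transistor densities. A minimal Python sketch, with made-up cell dimensions and cell widths purely for illustration:

Code:
# Sketch of Intel's 2017 weighted-cell logic density metric (Mark Bohr):
# density = 0.6 * NAND2 transistor density + 0.4 * scan flip-flop transistor density.
# All dimensions below are placeholders, not any real node's numbers.

def cell_area_um2(cpp_nm: float, cell_height_nm: float, width_in_cpp: int) -> float:
    """Standard-cell area: width (in contacted poly pitches) x cell height."""
    return (cpp_nm * width_in_cpp) * cell_height_nm * 1e-6  # nm^2 -> um^2

def weighted_density(cpp_nm, cell_height_nm, nand2_cpp=3, sff_cpp=20, sff_transistors=36):
    nand2_density = 4 / cell_area_um2(cpp_nm, cell_height_nm, nand2_cpp)
    sff_density = sff_transistors / cell_area_um2(cpp_nm, cell_height_nm, sff_cpp)
    # Transistors per um^2 is numerically the same as MTr per mm^2.
    return 0.6 * nand2_density + 0.4 * sff_density

# Made-up example: 50 nm CPP, 160 nm cell height.
print(f"{weighted_density(50, 160):.0f} MTr/mm^2")

The point being that the result is driven by contacted poly pitch, cell height, and the library cells chosen, not by whatever number happens to be in the node's name.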

Just looking at the changes from Intel 3 to Intel 18A kinda makes me terrified for Intel.
Why?
So while I have absolutely no data to back my skepticism, it still remains .... a result of over a decade of history.
Yes, not just you but everyone, including potential customers.
 
The best metric we have now was proposed by Intel, to calculate logic density based on pitch and cell height; these are just marketing numbers.


Why?

Yes, not just you but everyone, including potential customers.
Well, they are moving to GAA and BSPDN both from Intel 3 finFET. I feel that they may be using many more layers of multi-patterning than before, and I am guessing that for Clearwater Forest, the die sizes will be quite large. They were going to start off with 20A with more relaxed trace widths (I am guessing) then move on to 18A and achieve higher density. Now they have removed that stepping stone and are jumping straight into the water. Their libraries are all totally non-backwards compatible to my understanding so there really isn't much of a backup plan if things don't go well.

The recent departure of Pat G also doesn't give me great confidence that things are going all that rosy over on the 18A line. Why kick out Pat if 18A is going to be the company's savior ...... just as he said it would? I may be reading this all wrong of course, but there seems to be an odd smell to things at Intel.

As an example, Intel had a less-than-stellar introduction of desktop Arrow Lake. A day or two after reviews, Intel announced it would be releasing a major microcode update that would improve performance noticeably ...... which we now have no trace of, and rumors and leaks don't look that favorable for. In other words, it seems like an empty promise again.

Just my limited take on what is making me feel 18A is risky.
 
Well, they are moving to GAA and BSPDN both from Intel 3 finFET. I feel that they may be using many more layers of multi-patterning than before, and I am guessing that for Clearwater Forest, the die sizes will be quite large. They were going to start off with 20A with more relaxed trace widths (I am guessing) then move on to 18A and achieve higher density. Now they have removed that stepping stone and are jumping straight into the water. Their libraries are all totally non-backwards compatible to my understanding so there really isn't much of a backup plan if things don't go well.
Clearwater die size is 55mm2 on 18A, and around 114mm2 for Panther Lake, which are perfectly fine sizes for ramping the process
The recent departure of Pat G also doesn't give me great confidence that things are going all that rosy over on the 18A line. Why kick out Pat if 18A is going to be the company's savior ...... just as he said it would? I may be reading this all wrong of course, but there seems to be an odd smell to things at Intel.
Everyone is still shocked by his sudden departure; there is definitely something going on at Intel
As an example, Intel had a less-than-stellar introduction of desktop Arrow Lake. A day or two after reviews, Intel announced it would be releasing a major microcode update that would improve performance noticeably ...... which we now have no trace of, and rumors and leaks don't look that favorable for. In other words, it seems like an empty promise again.

Just my limited take on what is making me feel 18A is risky.
The ARL architecture is more mobile focused; add DLVR to it and it is really not suitable for desktop, plus there are the architectural choices that impacted it negatively
 
Just my limited take on what is making me feel 18A is risky.

My concern was that Intel did not take 20A into HVM. That and Samsung's tragic foray into GAA. It seems that TSMC is very confident of N2 and talk at IEDM was very GAA positive so I am a bit more confident now. I really thought that TSMC N2 was going to be a minor node but from what I am hearing at the conferences everyone is going to use it.
 
The ARL architecture is more mobile focused; add DLVR to it and it is really not suitable for desktop, plus there are the architectural choices that impacted it negatively

Honest question - what WAS the last desktop focused architecture?

On the AMD side, even since Zen 1 (2017) they've all been server focused (scale # of cores, and perf/watt). Perf/watt of course lends well to Mobile. Then Desktop gets whatever comes out of that. I think AMD's last desktop focused architecture was AMD FX which ironically smelled a lot like Pentium 4 Prescott.

On Intel, Core 2 (2006) and beyond seem to all have been mobile first (corporate market), with server getting a little more differentiation in terms of bus and cache. Sandy Bridge ended up really strong on desktop but was still clearly a mobile design. Haswell better on mobile than desktop (see Anandtech's review here), and Skylake same (minor gains over Haswell).

Alder lake and Raptor lake ended up highly competitive with Zen 3 and 4, and perhaps more desktop focused than all other recent architectures. But they offered huge gains in mobile experience (for corporate laptops the e-cores are awesome at running the 30-40+ security and housekeeping agents in the background that is typical these days), and allowed Intel to finally scale out servers appreciably after 14nm. But even Arrow Lake-S beats 14th gen in application performance overall*, despite losing SMT.

*https://www.techpowerup.com/review/intel-core-ultra-9-285k/30.html
(Review is from before recent performance improvement patches)

(Also not popular here but I think Arrow Lake would probably clock higher on Intel 7 Ultra than its current TSMC N3B node - so I think it's even got a slight disadvantage there on the desktop too).
 
Honest question - what WAS the last desktop focused architecture?
Alder Lake and Raptor Lake; we will get another one with Nova Lake with HB if the leaks are true
On the AMD side, even since Zen 1 (2017) they've all been server focused (scale # of cores, and perf/watt). Perf/watt of course lends well to Mobile. Then Desktop gets whatever comes out of that. I think AMD's last desktop focused architecture was AMD FX which ironically smelled a lot like Pentium 4 Prescott.

On Intel, Core 2 (2006) and beyond seem to all have been mobile first (corporate market), with server getting a little more differentiation in terms of bus and cache. Sandy Bridge ended up really strong on desktop but was still clearly a mobile design. Haswell better on mobile than desktop (see Anandtech's review here), and Skylake same (minor gains over Haswell).

Alder lake and Raptor lake ended up highly competitive with Zen 3 and 4, and perhaps more desktop focused than all other recent architectures. But they offered huge gains in mobile experience (for corporate laptops the e-cores are awesome at running the 30-40+ security and housekeeping agents in the background that is typical these days), and allowed Intel to finally scale out servers appreciably after 14nm. But even Arrow Lake-S beats 14th gen in application performance overall*, despite losing SMT.

*https://www.techpowerup.com/review/intel-core-ultra-9-285k/30.html
(Review is from before recent performance improvement patches)

(Also not popular here but I think Arrow Lake would probably clock higher on Intel 7 Ultra than its current TSMC N3B node - so I think it's even got a slight disadvantage there on the desktop too).
If we are comparing anything to Intel 7 Ultra, it's difficult to match the clock speed it offers
 
Clearwater die size is 55mm2 on 18A, and around 114mm2 for Panther Lake, which are perfectly fine sizes for ramping the process

Everyone is still shocked by his sudden departure; there is definitely something going on at Intel

The ARL architecture is more mobile focused; add DLVR to it and it is really not suitable for desktop, plus there are the architectural choices that impacted it negatively
From here: https://www.techpowerup.com/326966/...-first-18a-node-high-volume-product#g326966-1

All these tiles look quite large to me ..... and particularly large because of their highly rectangular aspect ratio (I am just thinking about what the die pattern might look like on the wafer). Am I missing something? Those die look pretty big to me.
 
Honest question - what WAS the last desktop focused architecture?

On the AMD side, even since Zen 1 (2017) they've all been server focused (scale # of cores, and perf/watt). Perf/watt of course lends well to Mobile. Then Desktop gets whatever comes out of that. I think AMD's last desktop focused architecture was AMD FX which ironically smelled a lot like Pentium 4 Prescott.

On Intel, Core 2 (2006) and beyond seem to all have been mobile first (corporate market), with server getting a little more differentiation in terms of bus and cache. Sandy Bridge ended up really strong on desktop but was still clearly a mobile design. Haswell better on mobile than desktop (see Anandtech's review here), and Skylake same (minor gains over Haswell).

Alder lake and Raptor lake ended up highly competitive with Zen 3 and 4, and perhaps more desktop focused than all other recent architectures. But they offered huge gains in mobile experience (for corporate laptops the e-cores are awesome at running the 30-40+ security and housekeeping agents in the background that is typical these days), and allowed Intel to finally scale out servers appreciably after 14nm. But even Arrow Lake-S beats 14th gen in application performance overall*, despite losing SMT.

*https://www.techpowerup.com/review/intel-core-ultra-9-285k/30.html
(Review is from before recent performance improvement patches)

(Also not popular here but I think Arrow Lake would probably clock higher on Intel 7 Ultra than its current TSMC N3B node - so I think it's even got a slight disadvantage there on the desktop too).
AMD really did a good job of designing a core architecture and supporting IOD to cover server and desktop while adequately handling laptop. Since they make the greatest margins on DC, I think this is a very good strategy.

ARL on the other hand seems like the broken child of Lunar Lake. While Lunar Lake had low latency access to its cache, ARL must go through interconnects to another tile IIRC. Lunar Lake shows great promise for thin and light laptops with its very good power efficiency. On the desktop, ARL struggles against AMD and even the 14700K and seems weak.

How does this bode for Clear Water Forest? Not sure, but the lack of SMT worries me in a DC chip.
 
I suppose it is possible that Intel has turned over a new leaf and that 18A is on-track, yielding well, and performing well, fulfilling the 5N4Y plan in stellar fashion. I just get stuck recalling the history of 14+++++ and 10+++++. Fool me once, shame on you; fool me 8 times? Ya get what I am saying ;).
I think that is valid. However, to be fair, the situation is far different from what it was. Back then, Intel's TD org was running on an insufficient R&D budget to keep pace with a TSMC or a Samsung (although Samsung is hard to say since we don't know what the TD wafer split is between memory and logic at Samsung). Additionally, after intel announced that 6-12 month delay to their "7nm", Intel has largely delivered what they said they would when they said they would. 10nm Super Fin was delivered on time and was a strong enhancement on top of the now-functional 10nm. Intel 7 was also on time and with some spare gas in the tank to allow for a meaningful refresh with 13th gen. Intel 4 was on time, exceeded the PPA targets intel suggested, was much more cost competitive, and had the high early yields that Intel was historically known for. Intel said intel 3 was ahead of schedule and enabled Xeon 6 E-core to start shipping final product to customers earlier than what Intel was originally guiding (Q1'24 vs Q2'24). As I had previously pointed out, Intel 3 was also better than intel originally guided, and is evidently yielding pretty well since Intel has launched big die Xeons on it/it is basically intel 4 with extra performance optimizations.

FWIW, it isn't really that long ago that TSMC had a fairly spotty execution record. 130nm had issues with the interconnect dielectric, and TSMC 65nm was late, so UMC was able to get there first (the first and only time UMC beat TSMC to a given node). 45nm got canceled because it was late to yield, and 40nm had poor early yields and a large PDK shake-up that threw off NVIDIA. 32nm-HP (HKMG), 32nm-LP (poly-Si gate), and 28nm-LP (poly-Si) were all canceled. From what I can gather, the early 28nm-HP (HKMG) ramp didn't live up to the standard TSMC normally sets (but given how horrible the gate-first processes turned out, TSMC did a pretty good job here when we grade them on a curve against the other foundries). 20nm had an ill-conceived process definition and poor value prop, so few people used it, and none of those customers used it long term. 16FF had a slow ramp, so Apple had to dual source with the faster-ramping Samsung. 10FF had low early yield and also became an orphan process, with fab space being converted to N7. In recent times the only public slip-up I can think of was N3 being a year late and requiring a redesigned FEOL/MEOL process (N3E) that isn't seamless for designers. Beyond that, N7 and N5 were REALLY well executed, and 28nm HKMG/16FF were solidly executed. N3E is also superb because TSMC was juggling the N3 issues with Apple and Intel breathing down their necks while also spinning up a new, from-the-ground-up process flow.

I say all that to say this. Teams evolve. TSMC has leveled up their game a lot over the past 10–15 years. I have little faith that 2000s TSMC would have been able to navigate the debacle that their SAC/SAGE process caused them anywhere near as well as 2020s TSMC did. Recent history seems to indicate that the woes of intel's process development department are behind them. If TSMC leveled up beyond what they were ever capable of in the past, I don't see why Intel can't level up to what they used to consistently demonstrate every time for the better part of two decades. IMO if 18A continues on a strong yield ramp, then I think you can take an intel process roadmap to the bank, the same way one would implicitly trust in a recent TSMC roadmap.
I also am basing my skepticism on my engineering background and decades of large, complex programs.

When you are aggressive in your requirements, the chance of failure increases exponentially with the degree of aggression. It is very common for original requirements to be relaxed as you approach production in order to reduce risk (in fact, I can't recall a single time this did not happen).
Agreed. That is the beauty in what intel is doing with 18A, no? Both Ann K in interviews and some SPIE papers talked about how Intel now uses quick-turn monitors and develops the different process segments in a modular manner, with parallel schemes, to make debugging issues easier and enable fallback options. They also had a stepwise adoption of BSPDN, with a risk reduction vehicle composed of an intel 4 FEOL mated to the 20A BEOL to derisk PowerVia, which they got to yields trailing intel 4 by a mere 2Q offset. They also used that opportunity to figure out how to do assembly and test, and how best to optimize chip design in the new BSPDN environment. Then there was also a small die Arrow lake CPU "tile" to further derisk things. Of course that never came to pass due to cost reduction reasons, but Intel claims 18A is in a good enough state that 20A Arrow lake doesn't really serve its purpose of derisking 18A anymore.

Historically, Intel put a focus on only doing one main new thing for a process node, plus maybe one smaller thing to reduce risk. It is good to see that Intel has gone back to what worked so well in the past, and what worked so well for TSMC in that multipatterning/early EUV era.
Also, Samsung has had serious issues yielding good GAA product. The GAA process appears to be much more involved than FinFET, with more steps, and more complex steps, needed to achieve it.
You would be correct here. GAA requires many novel process modules and also breaks things that work in finFET. The three examples I see the most in the literature are lower mobility from the change in the crystal orientation that the majority of the charge carriers move through, having enough space for your Gox and metal fill in the replacement metal gate process, and faults in your EPI S/D as the EPI growth fronts from each of the nanosheets collide to create strain-relaxing faults.
(aside) While I am at it, talking about processes in terms of the label seems quite outdated. Are there ANY traces in Intel's 18A that are only 18A wide?
If by traces you mean metal lines, then Intel's papers would indicate not. Various FEOL critical dimensions of in-production finFET and GAA processes are single-digit nanometers in size. But things like gate length and minimum feature size are far larger than 2nm. But that is a conversation that has been done to death, and it hasn't been true for anyone's process for decades at this point.
Their libraries are all totally non-backwards compatible to my understanding so there really isn't much of a backup plan if things don't go well.
That wasn't the case on intel 4 and 3, so I don't know why it would be assumed that 18A wouldn't offer a superset of 20A. Also, I don't really see your point here. If for example N2 is late, there isn't exactly a backup plan either. You can't just wave a magic wand and have your N2 chip become an N3P chip in under a year.
As an example, Intel had a less-than-stellar introduction of desktop Arrow Lake. A day or two after reviews, Intel announced it would be releasing a major microcode update that would improve performance noticeably ...... which we now have no trace of, and rumors and leaks don't look that favorable for. In other words, it seems like an empty promise again.
Intel products once again falling on their face is hardly intel foundry's fault. If anything, it is evidence that people should stop assuming intel foundry is at fault for every little issue at the wider intel, if products made at TSMC can disappoint just as badly as those made at Intel.

Honest question - what WAS the last desktop focused architecture?
Meteor Lake and Arrow lake platforms. The designs are suboptimal for mobile in the name of desktop scalability. At a recent investor meeting, they even mentioned how the current products and many of the immediate future products were designed with the intent of maximum performance at all costs, and now that intel products has to run its business as if it were a fabless firm, without the crutch of margin stacking, they are designing products in a more deliberate manner. For mobile, a design would ideally be monolithic for lowest power consumption. MTL/ARL are also expensive to produce (large die sizes and advanced packaging). For desktop parts that frequently carry higher margins, not a huge problem. In Intel's bread-and-butter mobile segment, very much a problem. If anyone doesn't want to believe me that these are platforms compromised by not being mobile focused, I point to Intel making the even more expensive lunar lake their main premium mobile offering, because MTL/ARL are not desirable laptop chips in your highest volume 15W-and-below or even 28W-and-below segments.
(Also not popular here but I think Arrow Lake would probably clock higher on Intel 7 Ultra than it's current TSMC N3B node - so I think it's even got a slight disadvantage there on the desktop too).
If we are comparing anything to Intel 7 Ultra, it's difficult to match the clock speed it offers
You would be wrong on this count. An Intel 7 version would clock far slower. Refer to this post for why it is physically impossible for an Intel 7 version to be better and how design has a larger impact on Fmax:
 
You would be wrong on this count. An Intel 7 version would clock far slower. Refer to this post for why it is physically impossible for an Intel 7 version to be better and how design has a larger impact on Fmax:

I read the post a few times actually, and I agree and understand it's physically impossible for an Intel 7 backport of ARL to have better perf/watt, but I'm not sure it would lose in all benchmarks/scenarios to an N3B version. I interpreted your post as saying the Fmax would be more limited in power-constrained scenarios, but not necessarily in "not power constrained" scenarios such as a small # of cores running at once (typical desktop application usage).

Intel 7 Ultra seems suited to higher clocks than N3B as it allows both higher max temperatures (115C allowed in BIOS vs 105C) and stable voltages (1.55V on 14900KS for 2 cores vs. 1.45V on 285K).

I think the trade off would be that while an Intel 7 version would have higher overall power consumption, it would actually clock faster for lightly threaded applications. (Idle power would also be higher, but that's less of an issue on the desktop, with both AMD and Intel desktops idling at higher power today than from the Sandy Bridge era..)

P.S. ARL P-Cores may not have much more in the way of logic transistor count than RPL; consider that they gain only 9% IPC while actually dropping SMT altogether. ARL E-cores though certainly are much heavier than RPL E-cores.
 
I think that is valid. However, to be fair, the situation is far different from what it was. Back then, Intel's TD org was running on an insufficient R&D budget to keep pace with a TSMC or a Samsung (although Samsung is hard to say since we don't know what the TD wafer split is between memory and logic at Samsung). Additionally, after intel announced that 6-12 month delay to their "7nm", Intel has largely delivered what they said they would when they said they would. 10nm Super Fin was delivered on time and was a strong enhancement on top of the now-functional 10nm. Intel 7 was also on time and with some spare gas in the tank to allow for a meaningful refresh with 13th gen. Intel 4 was on time, exceeded the PPA targets intel suggested, was much more cost competitive, and had the high early yields that Intel was historically known for. Intel said intel 3 was ahead of schedule and enabled Xeon 6 E-core to start shipping final product to customers earlier than what Intel was originally guiding (Q1'24 vs Q2'24). As I had previously pointed out, Intel 3 was also better than intel originally guided, and is evidently yielding pretty well since Intel has launched big die Xeons on it/it is basically intel 4 with extra performance optimizations.
I was under the impression that Intel 7 was more of an actual additional "+" to the existing enhanced 10nm process? Still, your point is valid. Intel has not had anywhere near the disastrous process releases that they did in the 14/10 nm era.

It is also equally true that while TSMC appears to be turning its gears quite effectively today, there was plenty of gunk in the process earlier.
If by traces you mean metal lines, then Intel's papers would indicate not. Various FEOL critical dimensions of in-production finFET and GAA processes are single-digit nanometers in size. But things like gate length and minimum feature size are far larger than 2nm. But that is a conversation that has been done to death, and it hasn't been true for anyone's process for decades at this point.
Yes, I agree. I just feel the need to rant about it once every few years now ;).

BSPDN throws yet another complexity in the density and power metrics IMO. Typically, the density metric has been more about the size of the transistor while BSPDN density is more about layout density. AFAIK there is a pretty big difference between the two.

It just further muddies the water on how to compare 2 processes today.
That wasn't the case on intel 4 and 3, so I don't know why it would be assumed that 18A wouldn't offer a superset of 20A. Also, I don't really see your point here. If for example N2 is late, there isn't exactly a backup plan either. You can't just wave a magic wand and have your N2 chip become an N3P chip in under a year.
Fair point.
Intel products once again falling on their face is hardly intel foundry's fault. If anything, it is evidence that people should stop assuming intel foundry is at fault for every little issue at the wider intel, if products made at TSMC can disappoint just as badly as those made at Intel.
On the contrary, I believe that Intel foundry has propped up poor Intel designs for quite some time. For many years (a couple of decades actually), I have been arguing that of course Intel processors are better than everyone else's. How hard is it to design a superior processor when you maintain a full 1 to 2 die shrinks ahead? Intel has had twice the transistor budget and twice the power budget compared to all others in the industry for a very long time (clear up to the fateful 14++++ saga I would say).

Lunar Lake, while a quite good thin-and-light processor, is fairly expensive for Intel. Arrow Lake failed to even match the competition from Zen 5 even with a process advantage (N3B vs N4P). I chalk this up entirely to Intel's design team losing their built-in process advantage from the past. I think they have come to depend on it.... perhaps to their own disadvantage now.
 
The recent departure of Pat G also doesn't give me great confidence that things are going all that rosy over on the 18A line. Why kick out Pat if 18A is going to be the company's savior ...... just as he said it would? I may be reading this all wrong of course, but there seems to be an odd smell to things at Intel.

Just my limited take on what is making me feel 18A is risky.
Yeah, neither the board nor the temporary CEOs are doing a good job of explaining why Gelsinger had to go, since the "core strategy remains intact". Trying to read the tea leaves, and hearing some comments from Zinsner indicating that the board is looking for "more incremental returns" on the foundry investment, I'm thinking the issue was the speed of the factory build-out.

I feel like Intel needs to keep moving on the construction of the shells in OH, but until they have a clear path to fill the two new AZ fabs I don't see them needing to put tools in the OH fabs. Assuming they are looking at a late 2027 launch (2 years after 18A), they really don't need to start putting tools in those shells before the beginning of 2027, assuming the AZ fabs are full. I suspect Gelsinger wasn't on board with slowing down the build-out. That's my best guess anyway. But like free advice, it is probably worth about as much as you paid for it. :)
 
From here: https://www.techpowerup.com/326966/...-first-18a-node-high-volume-product#g326966-1

All these tiles look quite large to me ..... and particularly large because of their highly rectangular aspect ratio (I am just thinking about what the die pattern might look like on the wafer). Am I missing something? Those die look pretty big to me.
Yes, actually the big tiles are intel 3 tiles with the L3 and IMC, and below those are 4 55mm2 tiles each containing 24 Darkmont cores and L2, so 3*4*24 = 288 cores
 
Yes, actually the big tiles are intel 3 tiles with the L3 and IMC, and below those are 4 55mm2 tiles each containing 24 Darkmont cores and L2, so 3*4*24 = 288 cores
Ahh. I see.

There are 5 large "tiles", 2 smaller, and 3 larger. Underneath the 3 larger "tiles" are 4 55mm2 darkmont "sub tiles" each having 24 cores.

So, 3 super tiles, 4 sub tiles each (12 subtiles total). Each subtile has 24 cores (12x24=288). Got it.

So Intel only has to yield silly big Intel 3 super-tiles and 12 x 55mm2 Darkmont sub-tiles for Clearwater forest.

Thanks for the clarification. The image had me a little confused.
 