
Intel Products Update

LNL is hilariously more efficient than ARL in the intended scenarios for LNL:
OK, got it, and great explanation.

So let's put that in the context of what Intel said: "Panther Lake will have the efficiency of Lunar Lake" (while competing in the same space).

How does that sound "good" for 18A vs N3B + new architecture? (Is this the new underpromise ethos at Intel?)

I understand many of the whys you've listed*; but 18A is an expensive process with quite a few tricks up its sleeve, and Intel should be learning about improving chiplet communications with each generation, among other things. It just sounds like a whole lot more tech and potential cost (+R&D for a new node too) to achieve 'the same efficiency'.

Shouldn't we be expecting *better* for this investment?

..

P.S. Overly simplistic, but I also recognize architecture 'efficiency' can be a very ill-defined term. Is it efficiency based on 90% idle, 10% work, or the opposite? Efficiency at 1.2 GHz or 5.0 GHz? What is the temperature of the SoC (hint: mobile is at a disadvantage because of lesser cooling)? Etc.

*Though a larger GPU can also be more efficient because you can clock lower to achieve same/better performance, and Arrow Lake's e-cores are also on the same die as the P-Cores. (Yes the e-core config is a bit different).
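The operating-point ambiguity above (1.2 GHz vs 5.0 GHz) can be made concrete with a toy dynamic-power model where voltage has to scale with frequency. The voltage/frequency ranges below are illustrative placeholders, not measured values for any real chip:

```python
# Toy DVFS energy model (illustrative numbers, not measured silicon).
# Dynamic power scales roughly as C * V^2 * f, and V must rise with f,
# so energy per operation grows steeply at high clocks.

def energy_per_op(freq_ghz, v_min=0.7, v_max=1.4, f_min=1.2, f_max=5.0, c=1.0):
    """Energy per operation in arbitrary units at a given clock."""
    # Assume voltage scales linearly with frequency across the usable range.
    v = v_min + (v_max - v_min) * (freq_ghz - f_min) / (f_max - f_min)
    power = c * v**2 * freq_ghz          # dynamic power, arbitrary units
    ops_per_sec = freq_ghz               # 1 op per cycle for simplicity
    return power / ops_per_sec           # reduces to c * v^2

low = energy_per_op(1.2)   # "efficient" operating point
high = energy_per_op(5.0)  # "performance" operating point
print(f"energy/op at 1.2 GHz: {low:.2f}, at 5.0 GHz: {high:.2f} "
      f"({high / low:.1f}x worse)")
```

With these made-up numbers the 5.0 GHz point costs 4x the energy per operation, which is why "efficiency" claims are meaningless without stating the operating point.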
 
I had the impression the main changes were EUV reduction and pitch relaxation.
The only pitch changes were at M1, going from 30nm -> 48nm, and the CPP, going from 47nm (optical bloat from the originally published 45nm) -> 48nm. On N5/N3E, TSMC does the poly cut before RMG and uses EUV to direct-print their contacts. With N3's 45nm gate pitch, TSMC came to the conclusion that direct-print contacts weren't possible, so they went to self-aligned contacts (SAC). But if you want to do SAC, your post-RMG poly-cut etch has no way to distinguish between gate W and contact W, so the cut needs to be done before RMG. If you do the cut before RMG, you run into the concern of making sure you leave enough space at the end so that even with the worst EPE you have room to fill in all of your WFMs (side-stepping this issue/the difficulty of the etch is why TSMC's metal gate cut on N5 was such a huge innovation). So they grew dielectric walls that were self-aligned to the area between the fins. For this to work, the walls have to be formed near the very beginning of the process. If memory serves, the N3 TEMs I have seen also showed multilayer contacts because of the unique topography of the wall and gate metal.

This approach also blew out cell height (mostly for SRAM), since you had a big wall between every fin. For N3 SRAM, the cell height was, funnily enough, worse than N5 despite MMP being scaled by 0.82x. It really is a beautiful and elegant solution: perfect end caps every time, no matter how big or small. Of course it is expensive and has some downsides, and given that its removal was the biggest change going from N3 -> N3E, I don't think it is a leap to guess this was what was causing the yield issues. So I suspect that in the future TSMC will prefer to refine their EPE and continue with un-self-aligned contacts so they can use their novel metal gate cut process.
Kind of funny that 16FF was an aborted attempt at SAC (final products still had the etch stop but weren't self-aligned, indicating dropping SAC was a last-minute call), 10FF had it but was troubled/unliked by customers, and pre-N3E N3 is an orphan process that will be lost to the sands of time. TSMC has seemingly been very unlucky in their various SAC escapades over the years.

This difference is why I say TSMC had to redesign N3E completely from scratch: SAC, and the SAGE that TSMC did to deal with the consequences of SAC, reach into every part of the process, all the way from the beginning to the contacts segment. The TSMC of 10 years ago, struggling to do compressive strain 8 years after everyone else, didn't have the technical muscle to pull that off. But the Fab 12 team of today did it, and on a compressed timetable. I cannot stress enough how impressive that is.

There are a variety of patents on the topic of self aligned gate end caps (SAGE) out there and they are very interesting. I would recommend giving them a read.
The nanosheet pitch design rules should be roughly half the smallest cell height (~65 nm), so it should be comparable to N16 MMP.
I don't think pitch is the problem; it is feature size and uniformity. Yeah, the pitch isn't too wild (albeit still beyond DUV direct print). But the nanowire thicknesses (especially for SRAM) are tiny. Literature examples have shown devices with the width of individual finFET fins, and the wires need to be straight, which is nontrivial with EUV.

OK, got it, and great explanation.

So let's put that in the context of what Intel said: "Panther Lake will have the efficiency of Lunar Lake" (while competing in the same space).

How does that sound "good" for 18A vs N3B + new architecture? (Is this the new underpromise ethos at Intel?)
That is "good" because they are doing that efficiency at a higher performance envelope and at a lower cost. That second bulleted list I put down explained reasons why an 18A LNL (or N3B PNL) would have worse efficiency than the LNL/PNL we know. LNL had a lot of bells and whistles that are expensive for Intel and OEMs and not what PC OEMs want to do on a mass-market product. If Panther Lake ditches many of said features, that would cover up some of the 18A goodness. Also, like I said, it isn't as if the gap between N3 and 18A is a canyon. If 18A sits somewhere between N3 and N2 for a variety of characteristics, well, that means 18A is fairly close to N3, since N3 -> N2 is one of the smallest improvements in the history of the industry.
I understand many of the whys you've listed*; but 18A is an expensive process with quite a few tricks up its sleeve,
Unless the cost per FET is worse, the wafer cost is irrelevant.
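That cost-per-FET point can be shown with back-of-envelope arithmetic: a pricier wafer can still yield cheaper transistors if density improves enough. Every number below is a hypothetical placeholder, not an actual TSMC or Intel figure:

```python
# Cost per billion transistors = wafer cost / (good dies * transistors per die).
# All inputs are made-up illustrative values.

def cost_per_btransistor(wafer_cost, mtr_per_mm2, die_mm2, yield_frac,
                         dies_per_wafer):
    good_dies = dies_per_wafer * yield_frac
    transistors_per_die = mtr_per_mm2 * die_mm2 / 1000  # in billions
    return wafer_cost / (good_dies * transistors_per_die)

# Same design shrunk onto a denser but more expensive node:
old = cost_per_btransistor(wafer_cost=17000, mtr_per_mm2=125, die_mm2=100,
                           yield_frac=0.90, dies_per_wafer=600)
new = cost_per_btransistor(wafer_cost=23000, mtr_per_mm2=200, die_mm2=62.5,
                           yield_frac=0.80, dies_per_wafer=960)
print(f"old node: ${old:.2f}/Btr, new node: ${new:.2f}/Btr")
```

Even with a ~35% higher wafer price and worse yield in this sketch, the denser node comes out cheaper per transistor, which is the sense in which wafer price alone tells you little.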
and Intel should be learning about improving chiplet communications with each generation, among other things. It just sounds like a whole lot more tech and potential cost (+R&D for a new node too) to achieve 'the same efficiency'.
What extra "tech"? From the design side, PNL isn't any more complex than LNL. If anything it is simpler (no PMICs, no on-package DRAM).
Shouldn't we be expecting *better* for this investment?
I mean, it isn't like AMD is pushing the envelope on low-power SoCs. And half of the AMD systems you can buy are SoCs from 4 years ago. Versus that, PNL blows the bulk of AMD's volume out of the water and will smoke AMD in real battery life (not this "run a synthetic benchmark at full load until the battery dies" nonsense that no customer actually does). Qualcomm does the same thing on mobile vs Apple and, to a lesser extent, MTK. "Hey guys, look at our multithreaded performance in a synthetic all-core benchmark for our PHONE CPU. So much better than Apple, who gives you fewer cores, and MTK, whose CPU doesn't burn your hand when you hold the phone." For desktop I get it, but I never understood that mindset in phones or laptops. They are supposed to be portable! But rant over. Hopefully Intel continues to move in that direction and doesn't regress to "muh performance" in the ultra-PORTABLE segment *cough, ADL being a battery life regression over TGL, cough*.
Arrow Lake's e-cores are also on the same die as the P-Cores. (Yes the e-core config is a bit different).
Not really. The big E-cores are for maximizing perf/area, while the LP E-cores are for best efficiency. LNL only has LP E-cores, while ARL's are fewer, on an older arch, and on N6, in addition to its big E-cores. ARL/MTL are the ultimate race-to-idle machines, with the LP E-cores there to maximize time in idle after the army of P and E cores blitzes through a workload at full blast ASAP. Meanwhile, LNL dynamically and quickly switches individual parts of the die on and off as needed and expends the minimum energy to get the task done, be that doing it fast with the P-cores or slowly with the E-cores. The designs' implementations couldn't be further apart. LNL really is the biggest design change in a laptop chip since Intel integrated the IMC on die. We just ignore that most of those changes were just aping what Apple has been doing for years on their mobile APs, but better late than never, as they say.
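The race-to-idle vs. run-slow distinction above can be sketched numerically for a fixed batch of work. All power and speed figures below are made up purely to illustrate the trade-off, not measurements of any real SoC:

```python
# Compare two scheduling philosophies for a fixed batch of work
# arriving once per time window (hypothetical numbers throughout).

WORK = 100.0    # abstract work units per window
WINDOW = 10.0   # seconds available before the next batch

def race_to_idle(burst_power=20.0, burst_speed=50.0, idle_power=0.5):
    """MTL/ARL style: blast through at full power, then sit in idle."""
    busy = WORK / burst_speed
    return burst_power * busy + idle_power * (WINDOW - busy)  # joules

def run_slow(slow_power=3.0, slow_speed=10.0):
    """LNL style: match speed to the deadline on efficient cores."""
    busy = WORK / slow_speed  # exactly fills the window here
    return slow_power * busy  # joules

print(f"race-to-idle: {race_to_idle():.1f} J, run-slow: {run_slow():.1f} J")
```

With these placeholder numbers the run-slow strategy wins; flip the power/speed assumptions and race-to-idle can win instead, which is exactly why the two philosophies coexist.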
 
The only pitch changes were at M1, going from 30nm -> 48nm, and the CPP, going from 47nm (optical bloat from the originally published 45nm) -> 48nm. On N5/N3E, TSMC does the poly cut before RMG and uses EUV to direct-print their contacts. With N3's 45nm gate pitch, TSMC came to the conclusion that direct-print contacts weren't possible, so they went to self-aligned contacts (SAC). But if you want to do SAC, your post-RMG poly-cut etch has no way to distinguish between gate W and contact W, so the cut needs to be done before RMG. If you do the cut before RMG, you run into the concern of making sure you leave enough space at the end so that even with the worst EPE you have room to fill in all of your WFMs (side-stepping this issue/the difficulty of the etch is why TSMC's metal gate cut on N5 was such a huge innovation). So they grew dielectric walls that were self-aligned to the area between the fins. For this to work, the walls have to be formed near the very beginning of the process. If memory serves, the N3 TEMs I have seen also showed multilayer contacts because of the unique topography of the wall and gate metal.

This approach also blew out cell height (mostly for SRAM), since you had a big wall between every fin. For N3 SRAM, the cell height was, funnily enough, worse than N5 despite MMP being scaled by 0.82x. It really is a beautiful and elegant solution: perfect end caps every time, no matter how big or small. Of course it is expensive and has some downsides, and given that its removal was the biggest change going from N3 -> N3E, I don't think it is a leap to guess this was what was causing the yield issues. So I suspect that in the future TSMC will prefer to refine their EPE and continue with un-self-aligned contacts so they can use their novel metal gate cut process.
Kind of funny that 16FF was an aborted attempt at SAC (final products still had the etch stop but weren't self-aligned, indicating dropping SAC was a last-minute call), 10FF had it but was troubled/unliked by customers, and pre-N3E N3 is an orphan process that will be lost to the sands of time. TSMC has seemingly been very unlucky in their various SAC escapades over the years.

There are a variety of patents on the topic of self aligned gate end caps (SAGE) out there and they are very interesting. I would recommend giving them a read.
I am a bit surprised that their SAC would get in the way of their gate cut. Granted that it's getting very tight there. The directly focused spot size of EUV is 25 nm, so it's too large for the contact sizes I was expecting.
 
Not really. The big E-cores are for maximizing perf/area, while the LP E-cores are for best efficiency. LNL only has LP E-cores, while ARL's are fewer, on an older arch, and on N6, in addition to its big E-cores. ARL/MTL are the ultimate race-to-idle machines, with the LP E-cores there to maximize time in idle after the army of P and E cores blitzes through a workload at full blast ASAP. Meanwhile, LNL dynamically and quickly switches individual parts of the die on and off as needed and expends the minimum energy to get the task done, be that doing it fast with the P-cores or slowly with the E-cores. The designs' implementations couldn't be further apart. LNL really is the biggest design change in a laptop chip since Intel integrated the IMC on die. We just ignore that most of those changes were just aping what Apple has been doing for years on their mobile APs, but better late than never, as they say.

I've been trying to reconcile this discussion and I see where I disconnected with you. I was thinking Arrow Lake-H/-U were just scaled down Arrow Lake-S. -S does not have any LP e-cores, while -H/-U have the additional LP e-cores as described, and on N6.

It was also a little confusing (to me) because LNL's LP e-cores are Skymont, just like Arrow Lake's "big" e-cores. While Arrow Lake's LP e-cores are a different arch.


I do see the power benefit on Lunar Lake though with the IMC being on the Compute Tile. I was mistaken thinking it was on the Platform I/O tile like Arrow Lake (And Ryzen).

..

What extra "tech". From the design side PNL isn't any more complex than LNL. If anything it is simpler (no PMICs no DRAM).
Sorry, I was referring to the process tech: backside power delivery and GAAFETs, to achieve no improvement in efficiency, based on Intel's wording of "power efficiency of Lunar Lake".

I suspect Panther Lake is going to be more expensive over its lifetime than Lunar Lake to manufacture, based on using a ~mature N3 node vs. ramping 18A. Nova Lake will get the benefit of a mature 18A node, of course.

..

I still think the bar is really low on Panther Lake because they're basically just describing a scaled-up Lunar Lake with no efficiency gains. I can understand why some might say "this is OK", though, as it's giving up the closely coupled memory (and perhaps more) and still achieving equivalent efficiency.

As a consumer/enthusiast -- Panther Lake (the CPU side) already sounds disappointing to me as a product as it doesn't appear to move the needle in any meaningful way that matters to an end user (the customer). If it doesn't matter to customers it won't help Intel.

..

I do agree AMD has been slow on SoCs, but they seem to be following 3-4 year cycles instead of 2 years like server/desktop. Strix and Strix Halo are pretty substantial improvements on their SoC recipe, and they have some pretty good-looking Zen 6 based APUs coming at the end of next year. RDNA4 also looks like a "pause" that focused on efficiency and will make its way down into the SoCs next generation too.
 

The problem with AMD is that they do not have solutions comparable to LNL.

I would choose a Panther Lake laptop for:
1. Significantly higher TOPS from the new NPU
2. 12 Xe3 cores (note that the B580 only contains 20 Xe2 cores)
3. The same efficiency as LNL, meaning it runs quiet with sufficient battery life when travelling
4. More CPU cores
5. More memory options

It is a very good package.
 
For AI, AMD Strix Halo is better due to the bigger memory bus; Panther Lake would be bandwidth-starved in AI workloads. As for @Xebec, regarding the architecture: ARL-H/MTL-H were designed with maximum IP reuse between them, which is why ARL-H has Crestmont LP E-cores on N6.
LNL is near-monolithic and an efficiency-focused uArch. It uses Skymont LP-E cores, which are just Skymont without the ring bus. Skymont is 1.1mm² on N3B while Lion Cove is nearly 3.4mm² on N3B, and in terms of IPC the difference between them is just 7%, with a clock difference of 23%. So for 3x the area of Skymont, the gains for Lion Cove are not impressive.

As for efficiency, I don't know how they are measuring it; if it's at idle then it's understandable, but if it's a heavy workload then it's bad.
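The perf/area point above can be put into numbers. Using the figures quoted in the post (7% IPC and 23% clock advantage for Lion Cove at roughly 3x the area of Skymont on N3B; these are the poster's estimates, not official die measurements):

```python
# Perf-per-area comparison from the quoted figures (poster's estimates).

skymont_area, lioncove_area = 1.1, 3.4   # mm^2 on N3B, as quoted above
perf_ratio = 1.07 * 1.23                 # IPC advantage x clock advantage
area_ratio = lioncove_area / skymont_area

print(f"Lion Cove: {perf_ratio:.2f}x perf for {area_ratio:.2f}x area "
      f"-> {perf_ratio / area_ratio:.2f}x perf/area vs Skymont")
```

By this rough accounting the P-core delivers well under half the per-area throughput of the E-core, which is the sense in which "the gains are not impressive" for 3x the silicon.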
 
For AI, AMD Strix Halo is better due to the bigger memory bus; Panther Lake would be bandwidth-starved in AI workloads. As for @Xebec, regarding the architecture: ARL-H/MTL-H were designed with maximum IP reuse between them, which is why ARL-H has Crestmont LP E-cores on N6.
LNL is near-monolithic and an efficiency-focused uArch. It uses Skymont LP-E cores, which are just Skymont without the ring bus. Skymont is 1.1mm² on N3B while Lion Cove is nearly 3.4mm² on N3B, and in terms of IPC the difference between them is just 7%, with a clock difference of 23%. So for 3x the area of Skymont, the gains for Lion Cove are not impressive.

As for efficiency, I don't know how they are measuring it; if it's at idle then it's understandable, but if it's a heavy workload then it's bad.
I think AI on Strix Halo is not a good option. If it's for edge inference using SLMs, then Intel models can already cover that. If it's for training, then Strix Halo is underpowered.

I would rather use a LNL laptop and a workstation simultaneously to cover all scenarios.

 
For AI, AMD Strix Halo is better due to the bigger memory bus; Panther Lake would be bandwidth-starved in AI workloads. As for @Xebec, regarding the architecture: ARL-H/MTL-H were designed with maximum IP reuse between them, which is why ARL-H has Crestmont LP E-cores on N6.
XYang - just to add: Strix Halo has a full 256-bit wide bus (LPDDR5X) while Panther Lake is very likely to be 128-bit LPDDR5X. Maybe a higher-clocked version of LPDDR5X on Panther, but not enough to offset the double-width bus. Panther Lake will probably only achieve 50-60% of the AI inference speed of AMD Strix Halo, barring some extra-large caches.
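The 50-60% estimate falls out of simple bandwidth arithmetic, since LLM decode is usually memory-bandwidth-bound (each generated token streams the active weights through memory). A hedged sketch, assuming LPDDR5X-8000 on a 256-bit bus for Strix Halo and LPDDR5X-9600 on a 128-bit bus for Panther Lake; the model size is an arbitrary example:

```python
# Bandwidth-limited decode estimate: tokens/s ~= bandwidth / weight bytes.
# Bus widths and transfer rates below are assumptions, not confirmed specs.

def bandwidth_gbs(bus_bits, mt_per_s):
    """Peak memory bandwidth in GB/s from bus width and transfer rate."""
    return bus_bits / 8 * mt_per_s / 1000

strix_halo = bandwidth_gbs(256, 8000)   # ~256 GB/s
panther    = bandwidth_gbs(128, 9600)   # ~153.6 GB/s

model_gb = 8.0  # e.g. a ~14B-parameter model at 4-bit quantization
print(f"Strix Halo ~{strix_halo / model_gb:.0f} tok/s, "
      f"PTL ~{panther / model_gb:.0f} tok/s "
      f"({panther / strix_halo:.0%} of Strix Halo)")
```

Under these assumptions Panther Lake lands at roughly 60% of Strix Halo's decode rate, consistent with the 50-60% range above; larger on-die caches would shift this only for models small enough to partly fit.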

@Xebec regarding the architecture: ARL-H/MTL-H were designed with maximum IP reuse between them, which is why ARL-H has Crestmont LP E-cores on N6.
LNL is near-monolithic and an efficiency-focused uArch. It uses Skymont LP-E cores, which are just Skymont without the ring bus. Skymont is 1.1mm² on N3B while Lion Cove is nearly 3.4mm² on N3B, and in terms of IPC the difference between them is just 7%, with a clock difference of 23%. So for 3x the area of Skymont, the gains for Lion Cove are not impressive.

As for efficiency, I don't know how they are measuring it; if it's at idle then it's understandable, but if it's a heavy workload then it's bad.

I've already accepted the beatdown from @nghanayem :). I was confusing the versions of ARL (though the Intel comment didn't specify what performance level of Arrow Lake they were referring to).
 
I may be looking at this wrong, but here's how I'm interpreting this as meh:

- Lunar Lake and Arrow Lake are both on N3B

- Lunar Lake and Arrow Lake share the same cores and architecture, which means efficiency is in theory the same*

If Panther Lake is a newer/updated architecture on a 'superior node', why would efficiency and performance equal to that pair be 'good'? Panther Lake should be *more* efficient than either LL or ARL, and/or more performant than LL or ARL. The wording chosen was "the efficiency of LL and the performance of ARL", which tells me it can only outperform LL at higher power than LL, and can't outperform ARL outright.

*There's nothing I know of that makes LL inherently more efficient than ARL other than its lower frequency (+ implied lower voltages). Arrow Lake could in theory (with the same core count and frequencies) achieve the same efficiency as LL. There are 'noise' differences such as NPU choices, iGPU choices, and cache differences, but I am discounting those here.
LNL has memory on package to improve efficiency. Also, the E-cores on LNL are the LP-E version of the Skymont cores. The LP-E cores are revised into a 4-core cluster. Both the LP-E cores and P-cores are in the same tile, which is on N3B.

ARL does not have memory on package. The E-cores on the ARL compute tile are just standard Skymont E-cores on N3B. The LP-E cores on the SoC tile are Crestmont-based (2-core cluster, same as the MTL version) on N6.

Intel said that for LNL they redesigned the LP-E cores into a 4-core cluster because the previous version (as in MTL) was not powerful enough to handle all background tasks; this led Intel Thread Director to signal Windows to power on the main compute tile and use the P or standard E cores there for some tasks. This means the LP-E cores were not doing their intended job of saving battery life in some cases. Another change for LNL is that they figured out how to turn on the LP-E cores (the low-power island) without turning on the P-cores, even though they are located in the same tile (so there is no need for compute cores in the SoC tile as in MTL & ARL).

Now, Panther Lake will have this new 4-core-cluster LP-E arrangement, same as LNL, but will also have standard E-core & P-core clusters as in Arrow Lake-H. Both the LP-E and E cores will be improved versions of the Skymont cores as well.

So, I can see how PTL can be as efficient as LNL (by using the LP-E cores for less intensive compute tasks) and as powerful as ARL-H (by using the P and standard E cores for compute-intensive tasks). All these cores are going to be on a new node with IPC improvements, so it could be overall better than LNL and ARL-H combined.

I hope that makes sense, at least this is how I understood what they meant.

EDIT - I just realized there were more discussions in this thread than I saw when I commented (not sure why). So, lot of the stuff I said here is already covered by others with even better explanation.
 
XYang - just to add: Strix Halo has a full 256-bit wide bus (LPDDR5X) while Panther Lake is very likely to be 128-bit LPDDR5X. Maybe a higher-clocked version of LPDDR5X on Panther, but not enough to offset the double-width bus. Panther Lake will probably only achieve 50-60% of the AI inference speed of AMD Strix Halo, barring some extra-large caches.



I've already accepted the beatdown from @nghanayem :). I was confusing the versions of ARL (though the Intel comment didn't specify what performance level of Arrow Lake they were referring to).
I agree on the bus part; PTL has too much AI compute for its bandwidth, and Strix Halo has more bandwidth than AI compute. Also, Intel's Panther Lake is based on Xe3 IP, and there are some leaks regarding it.
8 Xe2 cores on LNL on N3B is around 39mm², and Xe3 on PTL is around 54.68mm². Last I checked, N3B is denser than N3E in both SRAM and logic density. Also, there is 2x the L2 cache and 1.5x the Xe cores, which are better than Xe2.

Here is the LNL die shot, 8.58 × 16.27 mm².



 
Does GAA require more EUV layers than FinFET?

The answer to this question is: not really. TSMC N3E has 17 EUV layers and N2 has 18. Samsung is the winner for EUV layers, with 16 for 2nm, but TSMC is the density winner, which is impressive given N2 only uses 18 layers of EUV. Intel 18A is 17 layers and wins the performance crown by a decent margin.

TSMC is also the king of yield, which is impressive given the density advantage of N2. Last I heard, N2 yield was 80%+, which is 1.5-3x greater than what I have heard about Intel and Samsung. I still have a couple more calls to make, but this is what I see thus far.

The other issue is PDKs. The TSMC N2 PDK is rock solid, while the Intel 18A and Samsung 2nm PDKs are still a work in progress. Remember, TSMC has the most foundry PDK experience and a big pile of customers and partners driving PDK maturity, so it really is not a fair race, but it is what it is.
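For context on what those yield gaps imply, the classic Poisson defect model relates die area, defect density, and yield: Y = exp(-A * D0). A sketch with hypothetical figures (the die area and the "2x worse" comparison point are illustrative, not sourced):

```python
# Poisson yield model: yield = exp(-die_area * defect_density).
# Given a yield and die area, you can back out the implied D0.
import math

def poisson_yield(die_area_cm2, d0_per_cm2):
    return math.exp(-die_area_cm2 * d0_per_cm2)

def d0_from_yield(die_area_cm2, y):
    return -math.log(y) / die_area_cm2

# An 80% yield on a hypothetical 1 cm^2 die implies D0 ~ 0.22 defects/cm^2,
# while a 2x-worse 40% yield implies D0 ~ 0.92 defects/cm^2.
d0_good = d0_from_yield(1.0, 0.80)
d0_bad = d0_from_yield(1.0, 0.40)
print(f"D0 at 80% yield: {d0_good:.2f}/cm^2, at 40% yield: {d0_bad:.2f}/cm^2")
```

Note the non-linearity: a 2x yield gap corresponds to a ~4x gap in defect density at this die size, so headline yield ratios understate how far apart the processes actually are.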
 
I am a bit surprised that their SAC would get in the way of their gate cut. Granted that it's getting very tight there. The directly focused spot size of EUV is 25 nm, so it's too large for the contact sizes I was expecting.
The problem with SAC is etch. If you do your gate cut after RMG, your etch needs to eat W, but the contacts are also W. And the whole reason TSMC went with SAC was that it was getting too hard to place the contacts lithographically. If you wanted to try an etch mask over the contacts to protect them during the metal gate cut, you would run into the same EPE issues as placing the contacts, the very issues that led to wanting to self-align them. It would defeat the point of self-alignment.

For the various Intel and Samsung finFET nodes that have SAC (which I believe is all of them), the gate cut is before RMG, so there isn't that incompatibility. The problem at that point is just litho reliably placing the poly cut far enough away from the fins that there is space for the WFMs. For SF4 and Intel 4/3 this seems to work well enough. But GAA throws things for a loop because there is even less space for WFMs. Samsung "solved" the issue by not having WFMs or multi-VT, which is obviously not really a solution. Time will tell if 18A or SF1.4 use the gate cut pre-RMG (with SAC) or post-RMG (no SAC).
I do see the power benefit on Lunar Lake though with the IMC being on the Compute Tile. I was mistaken thinking it was on the Platform I/O tile like Arrow Lake (And Ryzen).
Minor correction: the IMC was on the SOC die rather than the io/PCH die.
Sorry, I was referring to the process tech: backside power delivery and GAAFETs, to achieve no improvement in efficiency, based on Intel's wording of "power efficiency of Lunar Lake".
Just because PNL isn't wildly more efficient than N3 doesn't mean 18A isn't an improvement. ARL on N3 wasn't meaningfully better than Intel 4 MTL and didn't have better performance than i7 desktop parts. BSPDN also isn't a gigantic performance enhancement; IMO the largest benefit is freeing up routing space, as well as being necessary for future process node scaling. As for GAA power/performance, it isn't as big of a boon as finFET was (just look at N2). That makes sense at a high level too: at the risk of oversimplifying, finFETs added 2 gate interfaces while GAA only adds 1, and you lose A LOT of carrier mobility.
I suspect Panther Lake is going to be more expensive over its lifetime
Maybe circa January '25 it was more expensive, but circa January '26 it shouldn't be. One would hope the head of CCG knows how much her chips cost.
than Lunar Lake to manufacture, based on using a ~mature N3 node vs. ramping 18A. Nova Lake will get the benefit of a mature 18A node, of course.
Since most laptop launches start in January, 18A should be very mature by the time PNL starts making up a large fraction of total unit shipments.
..

I still think the bar is really low on Panther Lake because they're basically just describing a scaled-up Lunar Lake with no efficiency gains. I can understand why some might say "this is OK", though, as it's giving up the closely coupled memory (and perhaps more) and still achieving equivalent efficiency.
I think this car metaphor I thought of while at work paints a decent picture. Take the biggest, baddest Dodge engine you can think of and slam it into a Pacifica (and I recently learned, yes, apparently that is a real car for some ungodly reason). Then take a BRZ or a tuned-up Miata. Which car will run a test circuit faster? On paper the Chrysler's got WAY more horsepower and probably better 0-60 and top speed. But the nimbler BRZ can corner much better with its better steering, transmission, balance, lower center of gravity, weight, etc. I'm not enough of a car guy to say which will be better (although I suspect it is very course-dependent), but I think it illustrates the idea: the node is only one part of the story. If a good node was all one needed for a good chip, every smartphone today would be using an Intel CPU. At the risk of getting a little lost in the sauce by returning to the car metaphor: if an Apple Mx SoC is the 911, then LNL is the Cayman, and PNL is the WRX. Yeah, the Cayman or 911 is sportier, but that doesn't mean the WRX is bad. The WRX is aiming for a different market, and it is still a beloved driver's car that is also more practical and affordable.

PNL is seemingly trying to take lessons learned from LNL and scale them across Intel's line up and bring that LNL goodness to the masses.
As a consumer/enthusiast -- Panther Lake (the CPU side) already sounds disappointing to me as a product as it doesn't appear to move the needle in any meaningful way that matters to an end user (the customer). If it doesn't matter to customers it won't help Intel.

..
I mean, they are cheaper to produce and will presumably be in cheaper/more laptops, which is nice. Because even though Intel is practically subsidizing OEMs to use LNL, LNL laptops aren't exactly cheap. But if as an enthusiast it doesn't light your fire, there is nothing I can do to change that; after all, nobody can tell you how you feel or how you should feel. I can certainly understand the disappointment after ARL turned out to be RPL yet again, but at lower power and more money. Getting RPL performance in 2025, 3 years after RPL, doesn't feel right. But if it gives that perf with less power and cost, I am all for it, because I think lower cost and power is what Intel needs to succeed.
I do agree AMD has been slow on SoCs, but they seem to be following 3-4 year cycles instead of 2 years like server/desktop. Strix and Strix Halo are pretty substantial improvements on their SoC recipe, and they have some pretty good-looking Zen 6 based APUs coming at the end of next year. RDNA4 also looks like a "pause" that focused on efficiency and will make its way down into the SoCs next generation too.
Performance of Strix and Halo isn't my problem. My problem is idle/low-utilization power, high minimum TDP, price, and availability. Are there improvements? Sure. But they are incremental. I feel AMD's limited die configurations shoot them in the foot, and they picked the wrong segments to target. I think they need a 2+2 config that fully ditches the race-to-idle mindset, has a separate PCH to minimize cost and maximize unit shipments from limited wafer allocation, a better low-power uncore, and more frequent updates to match OEM design cycles. I'm not a CPU designer, of course, so take my opinion with a grain of salt. But as it stands AMD doesn't have a real U- or V-series competitor, and that is, what, 80% of the laptop market? Niche high-performance parts aren't what the market wants, and Qualcomm has proved it in spite of all of the ankle weights they are dragging. If the rapid rise of Qualcomm doesn't show that AMD has been squandering the openings they've had in laptops, I don't know what does.
 
The answer to this question is: not really. TSMC N3E has 17 EUV layers and N2 has 18. Samsung is the winner for EUV layers, with 16 for 2nm, but TSMC is the density winner, which is impressive given N2 only uses 18 layers of EUV. Intel 18A is 17 layers and wins the performance crown by a decent margin.

TSMC is also the king of yield, which is impressive given the density advantage of N2. Last I heard, N2 yield was 80%+, which is 1.5-3x greater than what I have heard about Intel and Samsung. I still have a couple more calls to make, but this is what I see thus far.

The other issue is PDKs. The TSMC N2 PDK is rock solid, while the Intel 18A and Samsung 2nm PDKs are still a work in progress. Remember, TSMC has the most foundry PDK experience and a big pile of customers and partners driving PDK maturity, so it really is not a fair race, but it is what it is.
It still reminds me of classic Intel: performance-focused as always, with TSMC leading in density as expected, but Samsung is missing here 🤣.
I don't think N2 is $30K/wafer as the rumours say; that is too much. As for 18A, I expect it to be more expensive than N3. For Samsung, I don't have a clue.
 
It still reminds me of classic Intel: performance-focused as always, with TSMC leading in density as expected, but Samsung is missing here 🤣.
I don't think N2 is $30K/wafer as the rumours say; that is too much. As for 18A, I expect it to be more expensive than N3. For Samsung, I don't have a clue.

From what I know, Intel 18A is a little more expensive than TSMC N2, assuming yield is the same. Samsung is in the same range, but since they don't yield like TSMC does, it is a much bigger cost difference. Hopefully Intel 18A has a quick yield ramp for foundry customers.

I remember Samsung Foundry yield being so bad they sold good die versus wafers. It was hilarious! How do you spin that to customers? :ROFLMAO:
 