Intel Core Ultra is a dumpster fire of a brand - but (speculation on my part) maybe the rebrand happened because Pat wanted to focus on getting the Tick-Tock strategy going again.
Just a fun little aside. Intel tends to change branding when the new CPU has a lower clock speed than the previous generation (but higher performance).
Pentium launched at 66 MHz on the same node where the 486 had hit 100 MHz (0.8 micron?).
Core 2 topped out around 3 GHz when the Pentium 4 (Cedar Mill) was pushing toward 4 GHz on 65nm.
Core Ultra is the first gen-on-gen clock regression we've seen since the original Core series (5.7 GHz vs. 6.2 GHz).
(For leading products) Intel used 80x86 branding for 15 years, Pentium for 13 years, and Core 2/i-series for 16 years. An update is probably due, as almost another generation of engineers and marketing people has come and gone. The Core Ultra branding also comes with more focus on ‘balance’ - a faster iGPU, an NPU, etc. - thanks to the tile architecture.
Is it because GNR/SRF uses a new platform that needs validation? Would AMD Turin, which is a drop-in replacement for Genoa, see quicker uptake in that case?
I suspect that CWF (consisting of 12 compute tiles) will use some advanced packaging technology that is not yet mature. In that sense, PTL will show whether 18A is successful, and CWF will show whether both 18A and the advanced packaging are successful.
Yes, Intel will make plenty of other processors with the 18A node, but mass-producing the Clearwater Forest chips on time is paramount to building confidence in Intel Foundry for potential customers, and that's the key to Gelsinger's entire turnaround plan. It also marks the culmination of Gelsinger’s audacious and now seemingly last-ditch effort to develop five nodes in four years to spark a resurgence at the ailing chipmaker.
AMD has been raising prices over the last five years; they can buy all the capacity they want at a pure-play foundry with far higher volume capability.
On the consumer side, Strix Point is very profitable for AMD. Smaller OEMs get it priced above $230; Lenovo and Acer barely below $200.
And they have finally unloaded old inventories of 3-4-year-old dies, after the double mistake of first ordering too few and then too many when demand fell due to the Osborne effect.
If Clearwater Forest turns out very good, AMD will place huge orders at TSMC and sell them quickly, dropping prices.
Which is the predicament I see Intel in. At the end of the day, their biggest issue is the financial hole they are digging. Most of us here are engineers and tend to think that all Intel needs to do is regain process superiority. My contention is that Intel's actual problem is that even if it manages to catch up with TSMC from a technical standpoint, it will have done so by selling the farm to get there.
I just don't think Intel has the financial runway needed to fund the breakneck pace of process development it will take to catch up to TSMC, attract foundry customers, and build market share to the point of being profitable on those nodes.
I could be wrong of course. Intel has dug itself out of crazy situations before (Itanium and P4 come to mind); however, I don't recall Intel ever producing a CPU design without a process advantage (until the 14++++ and 10++++ debacle).
It’s usually the other way around. You produce smaller tiles first because yield is higher for smaller tiles (the chance of a die catching a defect rises quickly with die area). As you ramp and improve yields you start making bigger tiles.
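To make that intuition concrete, here is a minimal sketch using the classic Poisson yield model; the defect density and die areas are made-up illustrative numbers, not actual 18A figures:

```python
import math

def poisson_yield(die_area_cm2: float, d0_per_cm2: float) -> float:
    """Fraction of defect-free dies under the simple Poisson yield model Y = exp(-D0 * A)."""
    return math.exp(-d0_per_cm2 * die_area_cm2)

d0 = 0.5  # assumed defect density, defects per cm^2 (illustrative only)
for area in (0.5, 1.0, 2.0, 4.0):  # die area in cm^2
    print(f"{area:.1f} cm^2 die -> {poisson_yield(area, d0):.0%} defect-free")
```

With those assumed numbers, yield drops from roughly 78% for a 0.5 cm^2 die to about 14% for a 4 cm^2 die, which is why small tiles are the natural starting point on a fresh node.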
I certainly see the point from a defect-free-dies-per-wafer point of view; however, the smaller dies generally go into high-volume products that can't command the high prices needed to offset relatively poor yields, while for the larger server dies you can charge 10x and more.
I am not sure where the profitability curves for the two approaches would cross.
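As a rough way to see where that crossover might land, here is a toy per-wafer revenue comparison; the die sizes, prices, and defect density are all assumptions for illustration, not real Intel or AMD figures:

```python
import math

WAFER_AREA_CM2 = math.pi * 15.0 ** 2  # 300 mm wafer, ignoring edge exclusion

def good_die_revenue(die_area_cm2: float, price_per_good_die: float, d0: float) -> float:
    dies_per_wafer = WAFER_AREA_CM2 / die_area_cm2   # crude, ignores scribe lines and edge dies
    yield_frac = math.exp(-d0 * die_area_cm2)        # Poisson yield model
    return dies_per_wafer * yield_frac * price_per_good_die

d0 = 0.5  # assumed defects per cm^2 (illustrative only)
print(f"small client die: ${good_die_revenue(1.0, 200, d0):,.0f} per wafer")
print(f"large server die: ${good_die_revenue(6.0, 3000, d0):,.0f} per wafer")
```

With these made-up numbers the small cheap die actually wins per wafer because the big die's yield collapses; rerun it with a lower defect density and the big-die wafer pulls ahead, which is exactly the crossover in question.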
Intel appears to be banking on CWF, which has quite large dies, on a process with a bunch of new technologies in it. By your logic, this should not be done?
I am guessing you make this statement because you believe (maybe rightfully so) that Intel 18A yields are quite bad. If this is your assertion, then there is certainly some evidence to support this from Samsung's GAA experience on 3nm.
Intel 3 seems to have decent characteristics, but I have no idea what their yields are on that process.
Is it because GNR/SRF uses a new platform that needs validation? Would AMD Turin, which is a drop-in replacement for Genoa, see quicker uptake in that case?
Changing platforms slows uptake; drop-in is better. But the main issue is that servers in a fleet need to be identical to manage them efficiently, so you have to plan refreshes around the whole datacenter.