
AMD says Intel's 'horrible product' is causing Ryzen 9 9800X3D shortages

Fine product (now). Terrible launch. New platform which costs too much.

On laptops it looks much better, which lends credence to Arrow Lake being a mobile platform that was "scaled up" for desktop.

Intel really needs to get it together with Panther / Nova Lake. Can’t say this too many more times before this ship takes on too much water… Intel cannot under any circumstances cede more ground in laptops.
Agree. Losing share in desktop (which is shrinking anyway) isn't as big a deal; losing share in laptop, which is still strong, would be catastrophic. Lunar Lake is actually a pretty decent thin-and-light product, and Arrow Lake H is also decent. As you say, it looks as if the core design was targeted at the laptop market, but I think it is expensive for Intel to produce, and Intel can't afford to stay on the financial path it is currently on.

Worse than losing the laptop market, though, is the devastating loss Intel has had in DC. If AMD continues to dominate DC as it has in recent years (and looks to continue that domination through at least this year and into next), pretty soon AMD will be able to use its high-profit DC division to finance a loss leader in laptop to pressure Intel out of the market (as Intel has done to AMD in the past).

I know that Intel is introducing Clearwater Forest late this year (at least that is my most current info, though perhaps they will do Panther Lake first?), so there is hope for Intel if they can ramp that production up, make some revenue, and halt the steady loss of market share in DC.

Then there is the specter of the 2nm EPYC Venice, with double the cores per CCD and Zen 6 under the hood, about a year after the first CWF hits the shelves. With 384 Zen 6 cores, all getting a ~1.4x MT performance boost from SMT, even a very successful CWF may have a very short-lived victory.

Really though, all this comes down to the money for Intel. It isn't just a matter of IF they can produce a superior chip; it's whether they can make a profit doing it!
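The SMT math in the Venice paragraph above can be sketched as a quick back-of-the-envelope calculation. Note that the 384-core count and the ~1.4x SMT uplift are the rumored figures from the post, not confirmed product specs:

```python
# Back-of-the-envelope MT capacity estimate.
# The 1.4x SMT uplift and 384-core count are the rumored figures
# quoted above, not confirmed product specs.

def effective_mt_throughput(cores: int, smt_uplift: float = 1.4) -> float:
    """Multithreaded throughput in single-core equivalents with SMT enabled."""
    return cores * smt_uplift

print(effective_mt_throughput(384))  # ~537.6 core-equivalents
```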
 
The R&D for DC and other products all more or less depends on how good the core is, and that in turn depends on very desktop-workload-centric pathfinding.

And how consumer software is written largely guides how non-consumer software functions; e.g., you get much faster float math even if you only use it for integer number crunching.

So being in the lead on the desktop is what will determine how well received the CPU will be in the market niches below it in the technological hierarchy. Not to mention, gamers pay insane premiums for faster chips, bigger than any business user.
 
There is no 384-core Zen 6. We will have 192C/384T Zen 6 and 256C/512T Zen 6c; btw, Zen 6c will hit Q4 2026 and Clearwater Forest Q3/Q4 2025, so Clearwater Forest will have about a year to reign.
DMR is also 192C/384T P-core, with APX and more ISA features than Zen 6, on 18A.

The DC space will be decided by who has the better core/design and of course prices, as will the laptop market, which is the main money-maker for Intel.

From the rumors, Zen 6 is N3E and Zen 6c is N2.
 
EPYC Zen 6 "Venice" is supposed to have 64-core vs 32-core CCDs. Venice is going to be on a new socket that (IIRC) more than doubles the power the socket can deliver, with more bandwidth as well. While I can't find any information stating the actual core count of a future EPYC Zen 6 Dense, I simply assumed that they would be able to keep the same number of CCDs as Turin D.

Do you have other information you could link to?

I suspect that desktop Z6 and the non-dense EPYC cores may well all be on N3P (not E), while it is likely that EPYC Dense Z6C and possibly laptop Z6 will be on N2.
 
The R&D for DC and other products all more or less depends on how good the core is, and that in turn depends on very desktop-workload-centric pathfinding.

And how consumer software is written largely guides how non-consumer software functions; e.g., you get much faster float math even if you only use it for integer number crunching.

So being in the lead on the desktop is what will determine how well received the CPU will be in the market niches below it in the technological hierarchy. Not to mention, gamers pay insane premiums for faster chips, bigger than any business user.
Not sure I agree. DC is much more about feeding a ton of cores with bandwidth while keeping socket power below spec and getting the best performance from that combination. This is a growing market with large margins and is therefore key to any CPU maker's success (especially in x86).

The desktop market is small and shrinking. Intel has made a living here with a "who cares about power" attitude for some time. Only recently have they turned their attention to PPA, and IMO they are way behind the curve here.

Gamers are a niche within a small and shrinking market. While they are loud and active on forums, I suspect their financial impact on a CPU company is negligible.
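The feeding-the-cores point can be made concrete with a rough bandwidth-per-core estimate. The figures below (12 memory channels of DDR5-6000 shared across 192 cores) are illustrative assumptions, not any specific product's spec:

```python
# Rough peak-DRAM-bandwidth-per-core estimate.
# Channel count, transfer rate, and core count are illustrative
# assumptions, not any specific product's spec.

def bandwidth_per_core_gbs(channels: int, transfers_per_s: float,
                           cores: int, bytes_per_transfer: int = 8) -> float:
    """Peak DRAM bandwidth available per core, in GB/s."""
    total_gbs = channels * transfers_per_s * bytes_per_transfer / 1e9
    return total_gbs / cores

# 12 channels of DDR5-6000 (6.0e9 transfers/s, 8 bytes each) for 192 cores
print(bandwidth_per_core_gbs(12, 6.0e9, 192))  # -> 3.0 GB/s per core
```

That per-core number shrinks as core counts climb, which is why the reworked I/O die matters so much for the dense parts.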
 
Strix Point doesn't seem to be a loss leader at all. The OEM price is much higher than the previous generation product's, and AMD now gets chipset shortages at launch.
I agree... today. I was pointing out that since AMD is now making tons of profit in DC, they could potentially use that cash to trim margins in the laptop market, where Intel is currently making the most profit.

In other words, it is a financial strategic move more than a technical one.

If Intel can level the playing field in DC (they had been WAY behind here forever, and with the most recent releases from AMD and Intel they are now ONLY significantly behind), the financial game changes for both Intel and AMD.
 
One point: they are not massively behind in DC like they were two years ago with SPR; per the Phoronix review they are about 20% behind.
What they need is to improve performance, because the launch-day reviews hit bugs that can be ironed out for a bit more performance. GNR is a good step, not ground-shattering but very good, and there are different TCO considerations when choosing between Zen 5 and GNR.
 
Gamers are a niche within a small and shrinking market. While they are loud and active on forums, I suspect their financial impact on a CPU company is negligible.

They are a small market, but gamers are the trend-setters for the entire CPU/GPU industry. Not a single CPU/GPU design ever took the lead outside of the server/laptop space without being in the lead in high-performance desktop, with the exception of Intel Yonah, which came to desktop from the mobile Pentium.
 
I would say that by having the fastest CPU, AMD is seen by consumers as a premium brand. Intel used to be the premium brand, but now they have lost that status.
 
One point: they are not massively behind in DC like they were two years ago with SPR; per the Phoronix review they are about 20% behind.
What they need is to improve performance, because the launch-day reviews hit bugs that can be ironed out for a bit more performance. GNR is a good step, not ground-shattering but very good, and there are different TCO considerations when choosing between Zen 5 and GNR.
40% in dual socket. Still, how is 20% in single socket an OK thing?

Still, point taken. Intel is no longer 100% - 200% slower in DC. It's still crazy to me that Intel allowed the golden goose to cook!

They are a small market, but gamers are the trend-setters for the entire CPU/GPU industry. Not a single CPU/GPU design ever took the lead outside of the server/laptop space without being in the lead in high-performance desktop, with the exception of Intel Yonah, which came to desktop from the mobile Pentium.
I don't think this is where AMD is making their money, and the gaming architecture doesn't do much of anything for DC. Memory bandwidth and PPA do, though. It's hard to feed all those cores.

Alas, back in the Yonah days things were quite different from how they are today. I think Intel is in the process of re-aligning to the new reality.
Intel cut the prices of Xeon 6 CPUs. The prices are now more in line with AMD's Genoa. It looks like both AMD and Intel are engaged in a pricing war in the enterprise DC CPU segment.

I saw that as well. How are the prices compared to AMD's Turin?
 
Then there is the specter of the 2nm EPYC Venice, with double the cores per CCD and Zen 6 under the hood, about a year after the first CWF hits the shelves. With 384 Zen 6 cores, all getting a ~1.4x MT performance boost from SMT, even a very successful CWF may have a very short-lived victory.
Minor comment here - while Zen 6 is supposed to get a significantly reworked memory controller and I/O die, we don't know whether scaling significantly beyond the # of cores per socket today will scale well or poorly.

(I do think AMD's roadmap looks really strong at this point, and Intel is competing against both TSMC and AMD).
 
Well, we have AMD's 2026 roadmap, but not Intel's 2026 roadmap to compare it to.
 
April Fool's has begun.. :)

Assuming you're serious: 720p/1080p scenarios are perfect for testing gaming performance on CPUs because you want to guarantee CPU-limited scenarios in testing. There are a few reasons this is relevant:

1. You may upgrade your GPU in the future; if you're GPU-bound on an RTX 3060, you might not be when you get an RTX 5070 later. If you based your purchase decision on high-resolution average fps, you'll suddenly find your CPU lacking with a better GPU.

2. Testing a 30-second section of a game doesn't give you the full feel for the game; there will be some areas that are GPU-limited and some that are CPU-limited. Lowering the resolution guarantees more CPU limitation*.

3. 1% lows matter more than average fps, as they determine perceived smoothness. 1080p average fps ≈ 1% lows at 1440p/4K, for example.

4. You play simulators -- Microsoft Flight Simulator, X4 Foundations, Elite Dangerous, etc. -- and these are very CPU-limited, even at 4K with (in the case of X4) lower-end graphics cards. (*MSFS is a perfect example of a game that is both CPU- and GPU-limited at times.)

Other considerations:

A. Modern games are heavily dependent on upscaling now, which effectively lowers the real render resolution anyway, increasing demand on the CPU.

B. Some modern engines are pushing work back to the CPU -- Unreal Engine 5.x Lumen hits the CPU a lot.
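As a sketch of how the metrics in point 3 are computed from captured frame times (the sample numbers below are made up for illustration):

```python
def fps_stats(frame_times_ms):
    """Average fps and 1% low fps from a list of per-frame render times (ms)."""
    avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
    worst = sorted(frame_times_ms, reverse=True)   # slowest frames first
    n = max(1, len(worst) // 100)                  # worst 1% of frames
    one_pct_low = 1000 * n / sum(worst[:n])        # avg fps over the worst 1%
    return avg_fps, one_pct_low

# Made-up capture: mostly 10 ms frames with one 25 ms stutter
times = [10.0] * 99 + [25.0]
avg, low = fps_stats(times)
print(f"avg {avg:.1f} fps, 1% low {low:.1f} fps")  # avg ~98.5, 1% low 40.0
```

Reviewers compute the 1% low either as the average over the worst 1% of frames (as here) or as the inverted 99th-percentile frame time; the two are close but not identical.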
 
I don't think the results are an April Fool's joke.

I think it is better to test at 4K and 1440p, as those are the resolutions people actually use.

Attached is my monitor, an Asus ProArt 4K at 60 Hz. I was using it with a 4090 until I sold the card.
 

Attachments: Image_20250401222706.jpg