
Nvidia vs. AMD vs. Intel: Comparing AI Chip Sales

Daniel Nenni

Admin
Staff member

[Image: AI Chip Race 2023]


Nvidia has become an early winner of the generative AI boom.

The company reported record revenue in its second-quarter earnings report, with sales of AI chips playing a large role. Compared with its American competitors, what do the AI chip sales of Nvidia vs. AMD vs. Intel look like?

In this graphic, we use earnings reports from each company to see their revenue over time.

A Clear Leader Emerges

While the companies don’t report revenue for their AI chips specifically, they do share revenue for their Data Center segment.

The Data Center segment includes chips like Central Processing Units (CPUs), Data Processing Units (DPUs), and Graphics Processing Units (GPUs). The latter are preferred for AI because they can perform many simple tasks simultaneously and efficiently.
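As a loose illustration of that "many simple tasks simultaneously" idea, the sketch below contrasts element-wise work done one item at a time with the same work expressed as one bulk operation — the access pattern GPUs are built to exploit. NumPy's vectorized ops stand in for the hardware here, and the function names are ours, not from the article.

```python
import numpy as np

def scale_one_at_a_time(arr: np.ndarray, k: float) -> np.ndarray:
    """The serial view: visit each element in turn."""
    out = np.empty_like(arr)
    for i in range(arr.size):
        out[i] = arr[i] * k
    return out

def scale_in_bulk(arr: np.ndarray, k: float) -> np.ndarray:
    """The GPU-friendly view: one simple operation applied to every
    element at once (NumPy vectorization standing in for the hardware)."""
    return arr * k

# Both produce identical results; only the execution pattern differs.
x = np.arange(8, dtype=np.float32)
assert np.allclose(scale_one_at_a_time(x, 2.0), scale_in_bulk(x, 2.0))
```

On real hardware the bulk form is what a GPU parallelizes across thousands of cores; the loop form is stuck on one.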

Below, we show how quarterly Data Center revenue has grown for Nvidia vs. AMD vs. Intel.

[Chart: Quarterly Data Center revenue growth, Nvidia vs. AMD vs. Intel, through 2023]


Source: Nvidia, AMD, Intel. Quarters are based on the calendar year. In cases where revenue was revised at a later date, we have used the latest available revision. Intel integrated its Accelerated Computing Systems & Graphics (AXG) group into its Data Center Group in 2023. We have included revenue from the AXG group in the Data Center revenue for quarters prior to 2023 except for Q1 and Q2 2022, where revised Data Center revenue was provided by Intel.

Nvidia’s Data Center revenue has quadrupled over the last two years, and it’s estimated to have more than 70% of the market share for AI chips.

The company achieved dominance by recognizing the AI trend early, becoming a one-stop shop offering chips, software, and access to specialized computers. After hitting a $1 trillion market cap earlier in 2023, the stock continues to soar.

Competition: Nvidia vs. AMD vs. Intel

If we compare Nvidia vs. AMD, the latter has seen slower growth and lower revenue. Its MI250 chip was found to be 80% as fast as Nvidia’s A100 chip.

However, AMD has recently put a focus on AI, announcing its new MI300X chip with 192GB of memory, compared to the 141GB that Nvidia’s new GH200 offers. More memory reduces the number of GPUs needed and could make AMD a stronger contender in the space.
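To make the memory point concrete, here is a back-of-the-envelope sketch. The 1,000 GB model footprint is a made-up number for illustration; only the 192 GB and 141 GB per-chip figures come from the text, and real deployments also budget for activations, redundancy, and interconnect overheads.

```python
import math

def min_gpus(model_footprint_gb: float, gpu_memory_gb: float) -> int:
    """Minimum accelerators needed just to hold the model in memory,
    ignoring activation memory and parallelism overheads."""
    return math.ceil(model_footprint_gb / gpu_memory_gb)

footprint = 1000  # hypothetical model + state, in GB
print(min_gpus(footprint, 192))  # MI300X-class memory: 6 chips
print(min_gpus(footprint, 141))  # GH200-class memory:  8 chips
```

Fewer chips per model means lower system cost and less inter-GPU communication, which is the contender-strengthening effect the paragraph describes.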

In contrast, Intel has seen yearly revenue declines and has virtually no market share in AI chips. The company is better known for making traditional CPUs, and its foray into the AI space has been fraught with issues. Its Sapphire Rapids processor faced years of delays due to a complex design and numerous glitches.

Going forward, all three companies have indicated they plan to increase their AI offerings. It’s not hard to see why: ChatGPT reportedly runs on 10,000 Nvidia A100 chips, which would carry a total price tag of around $100 million.
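Those figures imply a unit price of roughly $10,000 per A100; the quick check below just multiplies it back out. The per-chip price is inferred from the article's totals, not independently sourced.

```python
chips = 10_000          # A100s reportedly behind ChatGPT
usd_per_chip = 10_000   # implied by the ~$100M total; an inference, not a quote
total_usd = chips * usd_per_chip
print(f"${total_usd:,}")  # $100,000,000
```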

As more AI models are developed, the infrastructure that powers them will be a huge revenue opportunity.

 
AMD and Nvidia seem to be focusing solely on LLMs and the higher end of AI that requires a significant amount of compute. But Intel is going in all directions. They are releasing CPUs with onboard AI accelerators like Xeon SPR and Core MTL, dedicated AI hardware (Gaudi), and GPU hardware. Nvidia seems not passionate about edge AI, and neither does AMD. Only time can tell who’s winning. But Intel seems to be winning either way.
 
Sure, Intel is going in all directions and AMD is only focusing on one. Ehm, sure thing.
[Attached image: table of MI300-series SKUs]
 
We are reading the same table, am I right? Those are 4 SKUs of a huge, high-power DC chip. Not a 10W laptop chip, a 5W phone chip, an automotive chip, or some small chip you slap inside some embedded application. Heck, it isn't even something that integrates with EPYC either. Is that to say MI300 is bad? No, that's for the market to decide. But MI300P is not going into next year's Galaxy, some DOD missile guidance system, or some random warehouse to power AI at the edge. I have to assume Xilinx has solutions for these sorts of applications beyond that random section of Phoenix, but those solutions aren't GPUs or from Radeon.
 
Kind of odd that companies like Google (who design/outsource their TPUs for training & inference) are missing from this summary.
 
I was reacting to this - "they are releasing CPU with onboard AI accelerator Like Xeon SPR and Core MTL." And he made it sound like Intel has this grand strategy for AI. That's not true. Both Nvidia and AMD have better product lineups geared for AI overall. Also, can you please tell me which Intel solutions you are talking about? Missile guidance, a 5W phone, laptop, or automotive chip? I honestly didn't hear of Intel making any of those.
 
I was more so talking about how “AI” isn’t going to just be ChatGPT and H100 data centers. It seems folks often forget about edge and industrial solutions.

As for Intel-specific AI solutions for these different environments: phones are obviously not something Intel really plays in anymore; ML there is done by your Apples, MTKs, and QCOMs. As for missile guidance systems and other small edge cases, I would have to assume Altera has an FPGA/ASIC portfolio just as robust as Xilinx, given how the number 2 FPGA maker keeps getting shoutouts from Intel in their earnings calls about PSG having record quarter after record quarter. Automotive would mostly be Mobileye. Laptop is their client products. I remember Intel touting the GNA in TGL, though it doesn’t seem like anything used it, and Intel seems to love to talk about the AI/ML capabilities of MTL.

At the recent financial event I only skimmed the AI part because it doesn’t really interest me, but funnily enough Pat said something to the tune of how Intel will be delivering more TOPS than anyone else on the back of how many client SoCs they will ship. Not sure if this claim was in relation to total TOPS or total TOPS on client platforms, though.
 
To be honest, I didn't research edge that much, so maybe you're right. But I suppose Xilinx and Altera are pretty much the same on the number of offerings. I might be wrong though. Well, yes, Intel loves to talk about AI and ML, but it hasn't come out yet, while AMD has chips right now that contain AI.
Second thing: I agree with you that AI won't just be H100 or MI300 type products. But I think Intel is again more talking than doing. Sapphire Rapids was a good example of how Intel executes. Sapphire Rapids is pretty good at AI, but it's weaker in all other areas. So was the silicon area for the AI worth it? If you ask me, I don't think so.
 

Anybody know how that's going? All I know is that Phoenix has this capability. Is it a TGL situation where basically nothing uses it, or is there lots of added functionality in various audio, video, communication, and creator apps for Phoenix owners? Also, is there a DirectX/OpenGL-like lib for client NPUs, or do folks need to program for the hardware?
Eh, I don't know about that take. Let's say no AMX and a weaker AVX-512 implementation were used; would that halve core area/power? I would assume no. On the other hand, the accelerators/extra instructions (both AI-adjacent and not) are certainly system sellers if your workload uses or can be programmed to use them. I have to think it also helps that the default H100 config is the SPR one, rather than what happened with A100 where Milan was the default. Although maybe like 6-12 months after launch this has changed to Grace, I don't know. My not-so-hot take is that what would have made SPR better is if it came out on time and competed with Milan.
 