
AMD Stock Tumbles After Unveiling New AI Chip. It Might Not Beat Nvidia’s Blackwell.

Daniel Nenni

Advanced Micro Devices is getting more optimistic about the long-term market size potential for artificial intelligence chips.

AMD CEO Lisa Su in June. Annabelle Chih/Bloomberg

Advanced Micro Devices stock is falling after the company launched its latest data center AI chip.

“We certainly have the best portfolio in the industry to address end-to-end AI,” AMD CEO Lisa Su said Thursday at the chip maker’s Advancing AI event in San Francisco.

During her keynote speech at the event, she announced AMD’s newest AI data center GPU, the Instinct MI325X. Graphics-processing units are well-suited for the parallel computations needed for AI.

Su said the MI325X incorporates 256 gigabytes of HBM3E memory, versus the 192 gigabytes of its predecessor. HBM, or high-bandwidth memory, is primarily used for artificial intelligence GPUs. Su noted that MI325X platform servers can outperform Nvidia’s rival GPU, the H200, by up to 40% in certain inference benchmarks. Inference is the process of generating answers from popular AI models.

Su said the MI325X is on track for production shipments in the fourth quarter and is expected to be available from AMD's partners, including Dell, Hewlett Packard Enterprise, and Supermicro, in the first quarter of 2025.

The Nvidia H200 may be a stale comparison, as it made its debut earlier this year. Nvidia's newer, more powerful Blackwell GPUs have just started reaching customers this quarter, and those are likely to far outperform AMD's MI325X, which is based on the same CDNA 3 microarchitecture as AMD's older MI300X GPU.

In late trading, AMD shares were down 4.8% to $162.75.

Su is getting more optimistic about the long-term market size potential for AI chips. AI demand has exceeded the company's expectations over the past year. She now expects the market for AI data center graphics processing units to grow by more than 60% a year and reach $500 billion by 2028. Last December, she predicted the market would exceed $400 billion by 2027.

 
On a positive note, Llama and its variants at least appear to work on AMD Radeon and MI accelerators now:


Slightly older news, as this is from July, but it is a potentially big deal: home users and small research teams will be able to use commodity AMD Radeon hardware (and MI accelerators) to build, learn with, and run (actually) open-source LLMs. Nvidia cements itself when your first and only experience is its hardware.
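
For anyone who wants to try this at home, here is a minimal sketch (mine, not from the linked article) of running a Llama-family checkpoint on an AMD GPU through PyTorch's ROCm build; the model name is illustrative and assumes you have access to the weights.

```python
# Minimal sketch: run a Llama-family model on an AMD GPU via PyTorch's ROCm build.
# Assumes: a ROCm-enabled PyTorch install, the transformers library, and download
# access to a Llama checkpoint (the model name below is illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # illustrative; any Llama variant works the same way

# ROCm GPUs show up under the "cuda" device name, so Radeon/MI parts need no special-casing here.
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(device)

inputs = tokenizer("Why might HBM capacity matter for LLM inference?", return_tensors="pt").to(device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The ROCm build of PyTorch deliberately reuses the "cuda" device naming, which is why the snippet above runs unmodified on either vendor's hardware.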
 
First it was AMD chasing Intel in x86; it took a long while and an Intel faceplant, but AMD is riding high there!

Can they catch Nvidia w/o an Nvidia stumble? There won't be a manufacturing stumble, as both their futures are tied to Taiwan and TSMC.
 
Using the same foundry makes it more interesting. It really is a design challenge with no process excuses. They even use the same packaging. Go AMD!
 
Can they catch Nvidia w/o an Nvidia stumble? There won't be a manufacturing stumble, as both their futures are tied to Taiwan and TSMC.
There are still opportunities for a manufacturing stumble. They can still design something that is challenging to manufacture (see Blackwell), or mispredict demand for something that requires packaging for which demand will outstrip capacity for the foreseeable future.

AMD is playing most markets conservatively, being slower than Apple or even Intel to adopt N3 in volume. That said, Nvidia isn't a super-fast mover here either.

All that to say there is a lot less mfg risk for AMD now than 5-7 years ago, but there's still some risk.
 
That the MI325X is an answer to the H200 and falls short of Blackwell should come as no surprise; AMD has been clear about this for some time. Is it better than the H200 on inference workloads? It's time to put up or shut up. Nvidia (and Intel) have been good about posting MLPerf results. AMD, not so much.

I'd argue that GPUs are not well suited to AI processing. They're better than CPUs, and the HPC community (which is the feeder for AI programmers) has gotten CUDA, and to a lesser extent other GPU-programming languages, entrenched. This matters for training, where the big dollars are in the big systems, but less so for inference. Nonetheless, the merchant-market, purpose-built inference solutions that can ingest ONNX-format models and run them more efficiently than a GPU have yet to gain traction despite their advantages. It's a lot like the RISC vs. x86 argument: we all know which is better, and we all know which prevailed.
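
As a concrete illustration of that ONNX hand-off (a hypothetical sketch, not any particular vendor's flow), the snippet below exports a throwaway PyTorch model to an ONNX file and runs it with ONNX Runtime; a purpose-built inference part would slot in as a different execution provider consuming the same file.

```python
# Sketch of the ONNX hand-off described above: export a model once, then let a
# runtime ingest the same file. Assumes the torch and onnxruntime packages; the
# tiny model is purely illustrative, standing in for a real trained network.
import torch
import onnxruntime as ort

model = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 4))
model.eval()

example_input = torch.randn(1, 16)
torch.onnx.export(model, example_input, "model.onnx", input_names=["x"], output_names=["y"])

# Inference accelerators plug in as "execution providers"; CPU is the lowest common denominator.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
result = session.run(["y"], {"x": example_input.numpy()})
print(result[0].shape)  # (1, 4)
```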

More thoughts on MI325X, Turin, Salina, and the rest of AMD's rah-rah event are here: https://xpu.pub/2024/10/11/amd-mi325x/
 