
Why has AMD managed to beat Intel, yet continues to lag further behind Nvidia?

Jin Chen

New member


AMD is set to become the first customer of TSMC’s 2nm process, beating Apple by at least half a year, an industry first.

Since Apple first became a TSMC customer in 2014 with the 20nm process, it has always been the first to adopt each new node. Even Qualcomm and HiSilicon, which have equally high demands for cutting-edge mobile processors, have never managed to tape out ahead of Apple.

Why Did Apple Fall Behind AMD?

In reality, Moore’s Law has slowed to a three-year cadence starting with the 5nm node. The first 5nm processor debuted in late 2020 with the iPhone 12. The first 3nm chip arrived in late 2023 with the iPhone 15.

Based on that timeline, the iPhone 18, due in late 2026, will be Apple’s first 2nm device. “Apple didn’t slow down—AMD suddenly sped up,” one analyst noted.
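As a quick sanity check on that cadence, here is a minimal sketch using only the dates in the article (the extrapolation is purely illustrative):

```python
# Node dates from the article: 5nm shipped late 2020, 3nm late 2023.
nodes = {"5nm": 2020, "3nm": 2023}
cadence = nodes["3nm"] - nodes["5nm"]  # three-year cadence

first_2nm_iphone = nodes["3nm"] + cadence  # 2026, matching the iPhone 18 claim
print(f"Expected first 2nm iPhone: late {first_2nm_iphone}")
```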

But being the first to adopt the world’s most advanced semiconductor process is no easy feat. Early in mass production, yields are extremely low. Customers must work closely with TSMC and commit to high wafer volumes, enabling TSMC to identify problems through big data and gradually improve yield rates.
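For intuition on why early ramps are so punishing, here is a minimal sketch of the classic Poisson die-yield model, Y = exp(-A * D0). The die area and defect densities below are illustrative assumptions, not TSMC figures:

```python
import math

def die_yield(area_cm2: float, defect_density: float) -> float:
    """Classic Poisson yield model: Y = exp(-A * D0).

    area_cm2:       die area in cm^2
    defect_density: defects per cm^2 (D0)
    """
    return math.exp(-area_cm2 * defect_density)

# Illustrative numbers only: a ~1 cm^2 server-class die,
# with defect density falling as the ramp matures.
die_area = 1.0
for d0 in (1.0, 0.5, 0.2, 0.1):  # defects/cm^2, early ramp -> mature
    print(f"D0={d0:.1f}/cm^2 -> yield ~{die_yield(die_area, d0):.0%}")
```

With these assumed numbers, yield climbs from roughly 37% early in the ramp to about 90% at maturity, which is why a lead customer eats so much scrap before the process stabilizes.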

For the past decade, Apple has consistently led this charge, pouring massive sums into using the processors in over 100 million iPhones each year as test cases and absorbing heavy losses from defective wafers in the early stages. In return, it enjoys the marketing halo of having the world’s most advanced chip in every new iPhone.

Other companies have typically waited their turn, letting Apple iron out the kinks before rushing in.

Yet AMD’s server processor wafer volume is estimated to be less than a fifth of Apple’s. Financially, it’s far less equipped to handle low early-stage yields. “No one understands why they’re so eager to be the guinea pig,” a former TSMC executive said.
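A rough back-of-the-envelope view of why volume matters here, with entirely hypothetical cost numbers (the article only says AMD’s volume is under a fifth of Apple’s):

```python
# Hypothetical early-ramp loss amortization. None of these dollar figures are
# real; they only show why a lower-volume customer feels early yield pain more.
early_ramp_loss = 1_000_000_000  # assumed fixed cost of scrapped wafers, $
apple_units = 100_000_000        # ~100M iPhones/year, per the article
amd_units = apple_units // 5     # "less than a fifth" of Apple's volume

for name, units in (("Apple", apple_units), ("AMD", amd_units)):
    print(f"{name}: ~${early_ramp_loss / units:.0f} of ramp loss per chip")
```

Under these assumptions the same ramp loss works out to roughly $10 per chip for Apple but $50 per chip for AMD, five times the per-unit burden.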

Most believe Lisa Su’s bold bet on 2nm is meant to pre-empt Intel’s upcoming 18A server chips, comparable in class to TSMC’s 2nm, which are supposed to enter mass production this year. Intel’s former CEO Pat Gelsinger had touted 18A as his ultimate weapon for reviving Intel’s fortunes.

As we now know, Gelsinger is gone, and optimism about Intel’s 18A has steadily faded. According to market research firm SemiAnalysis, AMD’s server revenue surpassed Intel’s for the first time in Q3 2023. Intel’s market cap has since fallen to just 60% of AMD’s.

Despite hopes that AMD’s GPUs might serve as a viable alternative to Nvidia’s, the gap between the two companies has only widened.


(Continue reading the full story on Tech Taiwan Substack.)
 
re: AMD vs Intel -- Intel defeated Intel.

For AMD GPUs, it's been an underinvestment loop for the last decade. AMD (rightly) prioritized CPUs in ~2015 for survival reasons. That meant reduced $$ going into GPU development, and a few years later they saw reduced sales as a result. Then the bean counters kept going for lower production quantities and SKUs to keep costs down, which limited their profit and further investment in the area. Also, the market for discrete GPUs has softened a bit over time, and AMD's own 'victory' in the console market has reduced margins a bit in this area too.

(AMD manufactured 1/3rd as many discrete GPUs in recent quarters as they used to a decade ago, source: https://www.tomshardware.com/tech-i...m-nvidia-as-gpu-shipments-rise-slightly-in-q4 )
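A toy model of that loop, with made-up coefficients, just to show how the feedback compounds over a decade:

```python
# Toy underinvestment-loop simulation. All coefficients are invented; the
# point is only that R&D cuts feed back into sales a few years later.
years = 10
rnd = 1.0       # GPU R&D budget, arbitrary units
sales = 1.0     # GPU sales, arbitrary units
reinvest = 0.5  # assumed fraction of sales flowing back into GPU R&D

for year in range(years):
    competitiveness = rnd       # products lag the budget that built them
    sales = 0.7 * sales + 0.3 * competitiveness  # sales drift toward competitiveness
    rnd = reinvest * sales      # next year's budget is sized to current sales
    print(f"year {year + 1}: R&D={rnd:.2f}, sales={sales:.2f}")
```

Both numbers decay together toward zero: each budget cut shrinks future sales, which justifies the next cut. That is the loop in one picture.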

P.S. The easiest place to see the divergence in AMD vs Nvidia GPUs is the Nvidia Maxwell generation. Before Maxwell's release, AMD and Nvidia regularly traded blows over who was 'king'. But with Maxwell (~2014), Nvidia improved perf/watt by about 40% in one generation without changing nodes (same TSMC 28nm as Kepler). AMD has yet to catch up on that perf/watt.
 
Yes, but that was partly thanks to the separation of consumer and compute architectures. Maxwell's FP64 was poor, roughly 10x worse than Kepler's, while AMD achieved 20-30x better FP64 performance (in FirePro cards). Still, it was definitely a good decision, and AMD later copied it with RDNA/CDNA...
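To put rough numbers on that, a small sketch using commonly cited spec-sheet throughput figures for flagship parts of that era (these TFLOPS values are approximate and are my addition, not from the post):

```python
# Approximate spec-sheet throughput (TFLOPS); treat values as ballpark only.
gpus = {
    "GTX Titan (Kepler, GK110)":  {"fp32": 4.5, "fp64": 1.5},    # 1/3 rate
    "GTX 980 (Maxwell, GM204)":   {"fp32": 4.6, "fp64": 0.144},  # 1/32 rate
    "FirePro W9100 (AMD Hawaii)": {"fp32": 5.2, "fp64": 2.6},    # 1/2 rate
}

for name, t in gpus.items():
    ratio = t["fp32"] / t["fp64"]
    print(f"{name}: FP64/FP32 = 1/{ratio:.0f}, FP64 = {t['fp64']} TFLOPS")

# Maxwell's FP64 is ~10x below Kepler's; Hawaii's is ~18x above Maxwell's,
# in the ballpark of the 20-30x figure mentioned above.
```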
 
I think the article is missing two other important considerations:
* Integrated graphics - reduced demand for discrete GPUs
* Software - AMD piggybacked on the huge x86 software ecosystem for both Windows and Linux, from both the OS and graphics perspective. AI GPUs require an entirely different software ecosystem.
 
Yes, they were carried by Intel's software ecosystem. With Nvidia that's not possible, so they are facing difficulties.
 