
Broadcom's Thor Ultra: A New Front in the AI Networking Wars Against Nvidia

Daniel Nenni

On October 14, 2025, Broadcom announced the launch of its Thor Ultra networking chip, a high-performance processor designed to connect hundreds of thousands of AI accelerators in massive data centers. This move directly challenges Nvidia's dominance in AI infrastructure, particularly its NVLink Switch and InfiniBand networking solutions, by offering superior scalability for distributed computing clusters essential for training and running large language models like those powering ChatGPT.

Key Details on the Thor Ultra Chip

  • Purpose and Capabilities: The Thor Ultra acts as a high-speed "traffic controller" for data flowing between closely packed chips within data centers. It enables operators to deploy far more accelerators (100,000 or more in a single cluster) while doubling bandwidth compared to prior generations. This is critical for AI workloads, where low-latency communication between processors determines efficiency.

  • Technical Edge: Built on advanced processes (likely 5nm or better, similar to Broadcom's prior Tomahawk series), it supports 800G Ethernet speeds and is optimized for "scale-out" architectures in hyperscale AI environments. Broadcom executives emphasized its role in building "large clusters" for distributed systems.

  • Availability: Shipping begins immediately, with early adopters including major cloud providers like Google (for whom Broadcom already custom-designs AI processors).

The launch builds on Broadcom's July 2025 release of the Tomahawk Ultra, which targeted intra-rack connections and could link four times as many chips as Nvidia's NVLink. Thor Ultra extends this to broader data center-scale networking, further solidifying Broadcom's position in the AI chip market, projected to reach $60–90 billion by 2027.

Context: The OpenAI Deal and Broader AI Push

This announcement comes just one day after Broadcom revealed a blockbuster partnership with OpenAI on October 13, 2025:
  • The companies will co-develop and deploy 10 gigawatts of custom AI accelerators (equivalent to powering over 8 million U.S. households) starting in the second half of 2026, ramping up to full deployment by 2029.

  • These "Tensor" chips, optimized for inference and networked via Broadcom's Ethernet stack, aim to reduce OpenAI's reliance on Nvidia GPUs, which currently dominate the market.

  • OpenAI CEO Sam Altman highlighted the need for integrated systems (compute, memory, and networking) to cut costs—estimated at $35 billion per gigawatt data center, mostly for chips.
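
Taken together, those figures imply a staggering build-out. A minimal back-of-the-envelope sketch, using the numbers quoted above (the ~1.2 kW average U.S. household draw is an assumption, not from the article):

```python
# Sanity-check the figures quoted above.
# Assumption: an average U.S. household draws roughly 1.2 kW
# continuously (about 10,500 kWh per year).

deployment_gw = 10        # planned custom-accelerator capacity, in GW
cost_per_gw = 35e9        # Altman's cited cost per gigawatt, in USD

total_cost = deployment_gw * cost_per_gw
print(f"Implied build-out cost: ${total_cost / 1e9:,.0f}B")   # $350B

avg_household_kw = 1.2    # assumed average household draw
households = deployment_gw * 1e6 / avg_household_kw
print(f"Household equivalent: ~{households / 1e6:.1f} million")  # ~8.3 million
```

At the assumed household draw, 10 GW works out to roughly 8.3 million households, consistent with the "over 8 million" figure in the announcement.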

Broadcom's AI revenue has surged: $4.4 billion in Q2 2025 (networking alone up 170% YoY), with forecasts of $19–20 billion for fiscal 2025 and $33 billion by 2026. The stock jumped 9–10% on the OpenAI news, pushing Broadcom's market cap above $1.5 trillion.

The Intensifying Broadcom-Nvidia Rivalry

Nvidia holds ~80–90% of the AI accelerator market, but Broadcom is carving out the networking layer—often called the "plumbing" of AI data centers. Here's a quick comparison:

[Attached image: Broadcom vs. Nvidia comparison table]

Broadcom doesn't compete directly in GPUs but powers alternatives (e.g., Google's TPUs) and now OpenAI's in-house designs. This "picks and shovels" strategy—supplying the infrastructure—positions Broadcom to capture 20–30% of the AI hardware pie without Nvidia's supply constraints.

Market Reaction and Outlook

  • Stock Impact: AVGO shares rose ~10% post-OpenAI deal, reflecting investor bets on diversification from Nvidia (NVDA), which has faced volatility from export restrictions and competition.

  • Broader Implications: As AI demand explodes (OpenAI alone committed to 33GW across partners like Nvidia, AMD, and Oracle), open standards like Ethernet could erode Nvidia's moat. Analysts see Broadcom's AI business growing 60% YoY, but risks include execution delays and U.S.-China trade tensions.
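
As a quick consistency check, the fiscal-year forecasts quoted above imply a growth rate in the same ballpark as the analysts' ~60% figure:

```python
# Implied YoY growth from the forecasts cited above
# ($19–20B in fiscal 2025 to $33B in fiscal 2026).
fy2025_low, fy2025_high = 19e9, 20e9
fy2026 = 33e9

growth_min = fy2026 / fy2025_high - 1   # 33/20 - 1 = 65%
growth_max = fy2026 / fy2025_low - 1    # 33/19 - 1 ≈ 74%
print(f"Implied YoY growth: {growth_min:.0%} to {growth_max:.0%}")
```

At 65–74%, the forecasts comfortably clear the ~60% analyst figure, so the numbers in the post hang together.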

This launch underscores how the AI chip wars are shifting from compute to connectivity. Broadcom's quiet ascent could make it the unsung hero (or Nvidia's biggest threat) in the race to build the next generation of intelligent infrastructure.

 