On October 14, 2025, Broadcom announced its Thor Ultra networking chip, a high-performance processor designed to connect hundreds of thousands of AI accelerators inside massive data centers. The move directly challenges Nvidia's dominance in AI infrastructure, particularly its NVLink Switch and InfiniBand networking products, with an Ethernet-based approach that scales to the distributed clusters needed to train and run large language models like those powering ChatGPT.
Key Details on the Thor Ultra Chip
- Purpose and Capabilities: The Thor Ultra acts as a high-speed "traffic controller" for data flowing between closely packed chips within data centers. It lets operators deploy far larger clusters (100,000 or more chips) while doubling bandwidth compared to the prior generation. This is critical for AI workloads, where low-latency communication between processors determines efficiency; the sketch after this list illustrates why link bandwidth dominates at that scale.
- Technical Edge: Built on advanced processes (likely 5nm or better, similar to Broadcom's prior Tomahawk series), it supports 800G Ethernet speeds and is optimized for "scale-out" architectures in hyperscale AI environments. Broadcom executives emphasized its role in building "large clusters" for distributed systems.
- Availability: Shipping begins immediately, with early adopters including major cloud providers like Google (for whom Broadcom already custom-designs AI processors).
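To make the bandwidth claim concrete, here is a minimal back-of-envelope sketch in Python. It assumes a ring all-reduce, a common gradient-synchronization pattern in distributed training; the model size, cluster size, and link speeds are illustrative assumptions, not Thor Ultra specifications.

```python
# Back-of-envelope: gradient synchronization time under a ring all-reduce.
# All numbers are illustrative assumptions, not Thor Ultra specifications.

GBIT_BYTES = 1e9 / 8  # bytes per gigabit

def allreduce_seconds(params: int, bytes_per_param: int,
                      num_chips: int, link_gbps: float) -> float:
    """Approximate one ring all-reduce: each chip sends and receives
    about 2*(N-1)/N times the gradient payload over its link."""
    payload = params * bytes_per_param
    traffic_per_chip = 2 * (num_chips - 1) / num_chips * payload
    return traffic_per_chip / (link_gbps * GBIT_BYTES)

# Hypothetical 1-trillion-parameter model with fp16 gradients (2 bytes each),
# synchronized across a 100,000-chip cluster.
for gbps in (400, 800):  # prior-generation link vs. an 800G-class link
    secs = allreduce_seconds(10**12, 2, 100_000, gbps)
    print(f"{gbps}G link: ~{secs:.0f} s per full gradient sync")
# 400G link: ~80 s per full gradient sync
# 800G link: ~40 s per full gradient sync
```

Production systems shard models and overlap communication with computation, so real numbers differ, but the proportionality is the point: doubling link bandwidth roughly halves the synchronization stall, and that saving compounds over millions of training steps.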
Context: The OpenAI Deal and Broader AI Push
This announcement comes just one day after Broadcom revealed a blockbuster partnership with OpenAI on October 13, 2025:
- The companies will co-develop and deploy 10 gigawatts of custom AI accelerators (equivalent to powering over 8 million U.S. households), starting in the second half of 2026 and ramping to full deployment by 2029.
- These "Tensor" chips, optimized for inference and networked via Broadcom's Ethernet stack, aim to reduce OpenAI's reliance on Nvidia GPUs, which currently dominate the market.
- OpenAI CEO Sam Altman highlighted the need for integrated systems (compute, memory, and networking) to cut costs, estimated at $35 billion per gigawatt of data center capacity, mostly for chips; the quick arithmetic below puts these numbers in perspective.
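A quick sanity check of those figures, using only the numbers quoted above (rough arithmetic, not an official estimate):

```python
# Rough arithmetic on the figures quoted in the article; not official estimates.

deployment_gw = 10            # size of the OpenAI-Broadcom deal
cost_per_gw_usd = 35e9        # Altman's ~$35B-per-gigawatt estimate
households = 8_000_000        # households the article says 10 GW could power

total_usd = deployment_gw * cost_per_gw_usd
print(f"Implied total build-out: ${total_usd / 1e9:,.0f}B")  # ~$350B

# Cross-check the household comparison: 10 GW spread over 8M households
# implies ~1.25 kW each, consistent with average U.S. household draw (~1.2 kW).
kw_each = deployment_gw * 1e6 / households
print(f"Implied continuous draw per household: {kw_each:.2f} kW")
```

At the quoted rate, the full 10 GW build-out implies on the order of $350 billion, which helps explain why OpenAI is spreading its compute commitments across multiple suppliers.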
The Intensifying Broadcom-Nvidia Rivalry
Nvidia holds ~80–90% of the AI accelerator market, but Broadcom is carving out the networking layer, often called the "plumbing" of AI data centers. Broadcom doesn't compete directly in GPUs but powers alternatives (e.g., Google's TPUs) and now OpenAI's in-house designs. This "picks and shovels" strategy, supplying the infrastructure rather than the accelerators themselves, positions Broadcom to capture 20–30% of the AI hardware pie without Nvidia's supply constraints.
Market Reaction and Outlook
- Stock Impact: AVGO shares rose ~10% post-OpenAI deal, reflecting investor bets on diversification from Nvidia (NVDA), which has faced volatility from export restrictions and competition.
- Broader Implications: As AI demand explodes (OpenAI alone has committed to 33 GW across partners like Nvidia, AMD, and Oracle), open standards like Ethernet could erode Nvidia's moat. Analysts see Broadcom's AI business growing 60% YoY, but risks include execution delays and U.S.-China trade tensions.