OpenAI taps Broadcom to build its first AI processor in latest chip deal

Daniel Nenni

OpenAI Partners with Broadcom for Custom AI Chips
This is a confirmed development in the AI hardware space. On October 13, 2025, OpenAI announced a major collaboration with Broadcom to design and produce its first in-house AI processors, marking a significant step toward reducing its reliance on third-party chips like those from Nvidia. The deal underscores the intensifying race among tech giants to build custom silicon for AI workloads amid skyrocketing demand for computational power.

Key Details of the Deal:
  • Timeline and Scope: OpenAI will handle the chip design, while Broadcom takes on development and deployment. Production is slated to begin in the second half of 2026, with an initial rollout of custom chips equivalent to 10 gigawatts of capacity. For context, that power draw is comparable to the electricity needs of over 8 million U.S. households, or roughly five times the output of the Hoover Dam (a quick back-of-envelope check of those figures appears after this list).

  • Financials: The agreement's value hasn't been publicly disclosed, but earlier rumors tied it to a roughly $10 billion Broadcom order for custom AI accelerators, potentially covering millions of units. Unlike the recent AMD deal (which includes stock options worth up to 10% of that company) and the Nvidia deal (up to a $100 billion investment plus 10 gigawatts of data-center systems), Broadcom is not providing equity or direct funding to OpenAI.

  • Technical Specs: The chips are expected to feature a systolic-array architecture optimized for AI inference (running trained models), high-bandwidth memory (HBM, possibly HBM3E or HBM4), and manufacturing on TSMC's 3nm process node. This setup is tailored to OpenAI's specific workloads, like powering ChatGPT and future models; a minimal sketch of the systolic-array idea follows this list.
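
Neither OpenAI nor Broadcom has published the design, but the core of a systolic-array inference accelerator is a grid of multiply-accumulate units that stream operands to their neighbors each cycle. Below is a minimal Python/NumPy sketch of an output-stationary systolic array computing a matrix multiply; the function name, shapes, and dataflow are illustrative assumptions, not details from the announcement.

```python
import numpy as np

def systolic_matmul(A, B):
    """Cycle-by-cycle simulation of an output-stationary systolic array.

    A is (M, K), B is (K, N). Each of the M*N processing elements (PEs)
    holds one accumulator; A streams in from the left edge, B from the top
    edge, and every PE forwards its operands to the next PE each cycle.
    """
    M, K = A.shape
    K2, N = B.shape
    assert K == K2, "inner dimensions must match"

    acc = np.zeros((M, N))    # one accumulator per PE (the output stays put)
    a_reg = np.zeros((M, N))  # operand each PE forwards to its right neighbor
    b_reg = np.zeros((M, N))  # operand each PE forwards to the PE below

    # A[i, k] reaches PE(i, j) at cycle i + j + k, the same cycle B[k, j]
    # arrives there, so the last useful cycle is (M-1) + (N-1) + (K-1).
    for t in range(M + N + K - 2 + 1):
        new_a = np.zeros((M, N))
        new_b = np.zeros((M, N))
        for i in range(M):
            for j in range(N):
                if j == 0:
                    # Left edge: inject row i of A, skewed by the row index.
                    k = t - i
                    a_in = A[i, k] if 0 <= k < K else 0.0
                else:
                    a_in = a_reg[i, j - 1]
                if i == 0:
                    # Top edge: inject column j of B, skewed by the column index.
                    k = t - j
                    b_in = B[k, j] if 0 <= k < K else 0.0
                else:
                    b_in = b_reg[i - 1, j]
                acc[i, j] += a_in * b_in  # multiply-accumulate in place
                new_a[i, j] = a_in        # hand operands to neighbors next cycle
                new_b[i, j] = b_in
        a_reg, b_reg = new_a, new_b
    return acc

A = np.random.rand(4, 6)
B = np.random.rand(6, 5)
assert np.allclose(systolic_matmul(A, B), A @ B)
```

The appeal for inference hardware is that operands only hop between adjacent PEs, so most of the die can be multiply-accumulate logic rather than long wires and cache, the same trade-off Google's TPU-style designs make.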

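The household and Hoover Dam comparisons in the first bullet are easy to sanity-check. The snippet below uses ballpark public figures (about 10,500 kWh per year for an average U.S. household and roughly 2.08 GW of nameplate capacity for the Hoover Dam); those constants are my assumptions, not numbers from the announcement.

```python
DEAL_GW = 10                       # announced rollout: 10 gigawatts of capacity
HOUSEHOLD_KWH_PER_YEAR = 10_500    # ballpark average U.S. household consumption
HOOVER_DAM_GW = 2.08               # approximate nameplate capacity

avg_household_kw = HOUSEHOLD_KWH_PER_YEAR / (365 * 24)  # ~1.2 kW average draw
households = DEAL_GW * 1_000_000 / avg_household_kw     # convert GW to kW, divide by per-home draw
print(f"~{households / 1e6:.1f} million households")    # ~8.3 million
print(f"~{DEAL_GW / HOOVER_DAM_GW:.1f}x Hoover Dam")    # ~4.8x
```

Both results land close to the figures quoted above: a bit over 8 million households and roughly five times the dam's output.
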
Strategic Context:
OpenAI's move aligns with broader industry trends:
  • Diversification from Nvidia: With Nvidia's GPUs in short supply and high demand, companies like Google (using Broadcom for its TPUs) and Amazon (with Trainium chips) are developing in-house alternatives to cut costs and improve efficiency.

  • Recent OpenAI Deals: This follows a 6-gigawatt supply agreement with AMD last week and Nvidia's massive investment disclosure, highlighting OpenAI's aggressive infrastructure buildout—potentially hundreds of billions in spending for global data centers.

  • Market Impact: Broadcom's stock surged over 10% on the announcement, reflecting investor optimism about AI revenue growth (projected to "improve significantly" in fiscal 2026). However, analysts doubt this will immediately threaten Nvidia's dominance, given the complexities of scaling custom chip production.

OpenAI CEO Sam Altman emphasized the partnership's importance: "Partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI's potential." Funding for the project may come from a mix of investor rounds, Microsoft backing, and future revenues, though details remain unclear.

This deal positions OpenAI to potentially control more of its AI stack, from models to hardware, accelerating advancements in generative AI.

 
Either way, whether it's OpenAI or Nvidia, TSMC wins. In fact, OpenAI/AVGO will probably pay more for wafers than Nvidia does. GO TSMC!

Nvidia is TSMC's #2 customer with an estimated 18% of revenue, while Broadcom is #6 at about 5%.
 