OpenAI CEO rumored to secure AI chip and server supply in low-key visit to Taiwan

Daniel Nenni

Admin
Staff member

Credit: AFP

OpenAI is intensifying efforts to expand its artificial intelligence computing infrastructure, relying heavily on Taiwan's semiconductor supply chain. On September 30, OpenAI CEO Sam Altman conducted a discreet visit to Taiwan, holding separate meetings with TSMC and Foxconn. The discussions focused on collaboration in AI chip manufacturing and AI server infrastructure development.

The company's flagship project, Stargate, represents a major US initiative to build a network of AI data centers. Under this plan, OpenAI aims to construct five large data centers across the US. Three of these facilities are being developed together with Oracle, while the remaining two are in partnership with Japan's SoftBank. As Foxconn is Oracle's largest supplier of AI servers, Altman's engagement with the firm is seen as critical to securing production capacity to meet increasing hardware demands. SoftBank CEO Masayoshi Son has also reportedly visited Foxconn, highlighting the supplier's central role in this multinational effort.

OpenAI's increasing server needs and plans for proprietary chip development

With the continuous expansion of new data center sites, OpenAI expects its demand for servers to climb rapidly. Although Foxconn remains the primary supplier in terms of shipment volume, OpenAI has brought additional partners such as Quanta on board to handle growing hardware requirements beyond Foxconn's original scope.

In parallel with scaling server supply, OpenAI is advancing its own AI chip design to lessen dependency on Nvidia GPUs. According to reports, OpenAI formed an AI ASIC design team in 2024 and is collaborating with Broadcom to create chips using TSMC's advanced 3nm manufacturing process. These chips will feature sophisticated Chip-on-Wafer-on-Substrate (CoWoS) packaging technology combined with high-bandwidth memory (HBM), with mass production anticipated to begin in 2026.

Altman's visit likely covered updates on cooperation progress and offered an opportunity to deepen understanding of TSMC's cutting-edge process capabilities and CoWoS packaging capacity, both crucial for meeting OpenAI's ambitious infrastructure goals.

 

Altman makes AI deal with Samsung, SK to buy chips, build data centers in Korea



Is there a way to estimate the wafer volume required to produce 900,000 DRAM chips per month?


"OpenAI estimated that the Stargate Project will need up to 900,000 high-performance DRAM chips per month, including high-bandwidth memory semiconductors. The two companies said they are planning to overhaul their production to meet Stargate's demand."
 
I'll take a stab at backing into the wafer count:

Current high-end DRAM for GPUs is 2 GB / 16 gigabit per chip, with 3 GB / 24 gigabit right around the corner. I can't find die sizes for 2 GB chips (advanced nodes, and a lack of teardowns I can source), but older/smaller GDDR6 chips used to be around 60-70 mm² (per sources Grok found).

If we assume 900,000 of the ~70 mm² chips (~8.3 mm × 8.3 mm), the calculator gives ~876 dies per 300 mm wafer, so > 1,020 wafers/month on 300 mm wafers
or
If we assume 100 mm² chips (more complex signalling, pushing the density further, etc.), a 10 mm × 10 mm die gives ~600 dies per wafer, or ~1,500 wafers/month

Note this assumes perfect yields - raise these numbers accordingly...

Die per Wafer calc: https://www.silicon-edge.co.uk/j/index.php/resources/die-per-wafer
(A calc link from Fred Chen on an earlier thread no longer works)
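For anyone who wants to play with the numbers, here is a minimal Python sketch of the same back-of-the-envelope arithmetic. It uses the common analytic die-per-wafer approximation rather than the silicon-edge calculator (which also models edge exclusion and scribe lines), so its die counts come out a bit higher than the 876 / 600 figures above; the 90% yield case is purely an illustrative assumption.

import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    # Common analytic approximation for square dies; ignores edge exclusion
    # and scribe-line losses, so it reads slightly high versus the
    # silicon-edge calculator linked above.
    d = wafer_diameter_mm
    gross = math.pi * d**2 / (4 * die_area_mm2) - math.pi * d / math.sqrt(2 * die_area_mm2)
    return int(gross)

def wafer_starts_per_month(chips_per_month, die_area_mm2, yield_fraction=1.0):
    # Wafer starts needed to hit a monthly chip target at a given die yield.
    good_dies_per_wafer = dies_per_wafer(die_area_mm2) * yield_fraction
    return chips_per_month / good_dies_per_wafer

if __name__ == "__main__":
    target = 900_000  # chips per month, per the quoted figure
    for area in (70.0, 100.0):  # die-size assumptions from the post, in mm^2
        dpw = dies_per_wafer(area)
        ideal = wafer_starts_per_month(target, area)             # perfect yield
        with_yield = wafer_starts_per_month(target, area, 0.9)   # assumed 90% yield
        print(f"{area:.0f} mm^2: ~{dpw} dies/wafer, "
              f"~{ideal:,.0f} wafers/month ideal, ~{with_yield:,.0f} at 90% yield")

Plugging in a smaller die (for example the ~35 mm² suggested later in the thread) roughly halves the wafer count, as you'd expect.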
 
Great analysis. Even if we double your estimate of 1,020 or 1,500 monthly wafers, the numbers are large but still not huge, especially if SK Hynix and Samsung are splitting the orders from OpenAI.
 
How many HBMs? (which drives a lot more wafer starts, since each HBM stack is built from multiple DRAM dies)
 
The question is, though, will SK & SEC (Micron too) take the bait and ramp DRAM output beyond the 15~20% we've been doing for the past 7 or 8 years?
 
We can expect about half that die size (~35 mm² rather than ~70 mm²), so we can expect about half that many wafers/month.
 
On OpenAI's own website, it says 900,000 wafer starts per month instead of 900,000 chips. Which one is true?

 
I'd trust OpenAI's website before Yahoo Finance any day. On the other hand, let's examine the quote on the website a little more:

"Through these partnerships, Samsung Electronics and SK hynix plan to scale up production of advanced memory chips, targeting 900,000 DRAM wafer starts per month at an accelerated capacity rollout, critical for powering OpenAI's advanced AI models."
The website does say wafer starts, but notice that the joint Samsung / Hynix plan does not state that all of these *additional* wafer starts are for OpenAI; it implies they're for the "advanced memory chip" market in general.

As usual, you have to critically examine every financial press article about the semiconductor market and projects.
 