
OpenAI CEO rumored to secure AI chip and server supply in low-key visit to Taiwan

Daniel Nenni


Credit: AFP

OpenAI is intensifying efforts to expand its artificial intelligence computing infrastructure, relying heavily on Taiwan's semiconductor supply chain. On September 30, OpenAI CEO Sam Altman conducted a discreet visit to Taiwan, holding separate meetings with TSMC and Foxconn. The discussions focused on collaboration in AI chip manufacturing and AI server infrastructure development.

The company's flagship project, Stargate, represents a major US initiative to build a network of AI data centers. Under this plan, OpenAI aims to construct five large data centers across the US. Three of these facilities are being developed together with Oracle, while the remaining two are in partnership with Japan's SoftBank. As Foxconn is Oracle's largest supplier of AI servers, Altman's engagement with the firm is seen as critical to securing production capacity to meet increasing hardware demands. SoftBank CEO Masayoshi Son has also reportedly visited Foxconn, highlighting the supplier's central role in this multinational effort.

OpenAI's increasing server needs and plans for proprietary chip development

With the continuous expansion of new data center sites, OpenAI expects its demand for servers to climb rapidly. Although Foxconn remains the primary supplier in terms of shipment volume, the company has brought additional partners like Quanta on board to handle the growing hardware requirements beyond Foxconn's original scope.

In parallel with scaling server supply, OpenAI is advancing its own AI chip design to lessen dependency on Nvidia GPUs. According to reports, OpenAI formed an AI ASIC design team in 2024 and is collaborating with Broadcom to create chips using TSMC's advanced 3nm manufacturing process. These chips will feature sophisticated Chip-on-Wafer-on-Substrate (CoWoS) packaging technology combined with high-bandwidth memory (HBM), with mass production anticipated to begin in 2026.

Altman's visit likely covered updates on cooperation progress and offered an opportunity to deepen understanding of TSMC's cutting-edge process capabilities and CoWoS packaging capacity, both crucial for meeting OpenAI's ambitious infrastructure goals.

 

Altman makes AI deal with Samsung, SK to buy chips, build data centers in Korea



Is there a way to estimate the wafer volume required to produce 900,000 DRAM chips per month?


"OpenAI estimated that the Stargate Project will need up to 900,000 high-performance DRAM chips per month, including high-bandwidth memory semiconductors. The two companies said they are planning to overhaul their production to meet Stargate's demand."
 
I'll take a stab at backing into the wafer count:

Current high-end DRAM for GPUs is 2 GB (16 Gbit) per chip, with 3 GB (24 Gbit) right around the corner. I can't find die sizes for 2 GB chips (advanced nodes, lack of teardowns I can source), but older/smaller GDDR6 chips used to be around 60-70 mm² (per sources Grok found).

If we assume 900,000 chips at ~70 mm² each (~8.3 mm × 8.3 mm), that's ~876 dies per 300 mm wafer, or >1,020 wafers/month;
or
if we assume 100 mm² chips (more complex signalling, pushing the density further, etc.), a 10 mm × 10 mm die gives ~600 dies per wafer, or >1,500 wafers/month.

Note this assumes perfect yields - raise these numbers accordingly...

Die per Wafer calc: https://www.silicon-edge.co.uk/j/index.php/resources/die-per-wafer
(A calc link from Fred Chen on an earlier thread no longer works)
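
Here's a minimal Python sketch of that arithmetic, using the classic die-per-wafer approximation (the linked silicon-edge calculator refines the geometry, which is why it returns ~876 and ~600 rather than the slightly higher numbers below); the 3 mm edge exclusion and the yield default are assumptions:

import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0, edge_exclusion_mm=3.0):
    # Classic approximation: pi*d^2/(4*S) - pi*d/sqrt(2*S),
    # square dies, no scribe lanes; d shrinks by the assumed edge exclusion.
    d = wafer_diameter_mm - 2 * edge_exclusion_mm
    s = die_area_mm2
    return int(math.pi * d * d / (4 * s) - math.pi * d / math.sqrt(2 * s))

def wafers_per_month(chips_per_month, die_area_mm2, yield_fraction=1.0):
    # Wafer starts needed to hit a monthly chip target at a given yield.
    good_dies_per_wafer = dies_per_wafer(die_area_mm2) * yield_fraction
    return math.ceil(chips_per_month / good_dies_per_wafer)

for area in (70, 100):
    print(area, dies_per_wafer(area), wafers_per_month(900_000, area))
# 70 mm2  -> ~890 dies/wafer -> ~1,010 wafers/month
# 100 mm2 -> ~610 dies/wafer -> ~1,470 wafers/month

Dropping yield_fraction to a more realistic 0.85 pushes the 70 mm² case to roughly 1,190 wafers/month.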
 

Great analysis. Even if we double your estimate of 1,020 or 1,500 monthly wafers, the numbers are large but still not huge, especially if SK Hynix and Samsung are splitting the orders from OpenAI.
 

How many HBMs? (Those drive a lot more wafer starts.)
 
The question, though, is whether SK and SEC (Micron too) will take the bait and ramp DRAM output beyond the 15-20% we've been doing for the past 7 or 8 years.
 
We can expect about half that die size (~35 mm²), so about half that many wafers per month.
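
Plugging that assumed ~35 mm² die into the sketch above:

print(dies_per_wafer(35), wafers_per_month(900_000, 35))
# ~1,830 dies/wafer -> ~490 wafers/month at perfect yield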
 
On OpenAI's own website, it says 900,000 wafer starts per month instead of 900,000 chips. Which one is true?

 
I'd trust OpenAI's website before Yahoo Finance any day. On the other hand, let's examine the quote on the website a little more:

Through these partnerships, Samsung Electronics and SK hynix plan to scale up production of advanced memory chips, targeting 900,000 DRAM wafer starts per month at an accelerated capacity rollout, critical for powering OpenAI’s advanced AI models.
The website does say wafer starts, but notice that the joint Samsung/Hynix plan does not state that all of these *additional* wafer starts are for OpenAI; it implies they're for the "advanced memory chip" market in general.

As usual, you have to critically examine every financial press article about the semiconductor market and projects.
 
From OpenAI official announcement web link you provided:

"Through these partnerships, Samsung Electronics and SK hynix plan to scale up production of advanced memory chips, targeting 900,000 DRAM wafer starts per month at an accelerated capacity rollout, critical for powering OpenAI’s advanced AI models."

It would be incredible, if not impossible, for that number to actually represent 900,000 “wafers.”

And this is in addition to all the contracts and commitments Samsung and SK Hynix have already signed with other customers for the next several years.

There is no publicly announced capacity figure for Samsung or SK hynix. A Google search yielded an estimate of SK hynix's total monthly 300 mm wafer capacity at about 440,000 as of the fourth quarter of 2024, across all memory products (DRAM, NAND flash, and the HBM used for AI chips). Do we have an expert on Samsung, SK hynix, and Micron HBM capacity?
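
For rough scale, treating that unverified 440,000 figure as given:

target_wspm = 900_000          # announced DRAM wafer starts per month
sk_hynix_total_wspm = 440_000  # unverified Q4-2024 estimate, all memory types
print(target_wspm / sk_hynix_total_wspm)  # ~2.0x SK hynix's entire reported capacity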
 
They mentioned 900,000 wafers per month in the press release. That's more than half of all DRAM wafers currently produced per month worldwide, not just HBM, so the volume is incredible. Most of the news outlets reporting on this in Korea are assuming this volume is the expectation by 2030, riding on the growth trajectory of AI over the next few years. As blueone pointed out above, the press release on the OpenAI website doesn't say it is exclusively for OpenAI, but in the media:


[Screenshots of Korean news coverage quoting the 900,000-wafer figure]

The top presidential adviser was quoted saying 900,000 wafers in 2029, and the article above suggests that's more than twice the entire current global production capacity of HBM. They'll need a lot of new fab space in the near future to accommodate such volumes, that's for sure!
 

Amazing and confusing. Thanks for posting this. 20 MW? That's about 200 state-of-the-art AI racks. Now that's a slow start.

It's getting to be that any announcement from OpenAI raises my skepticism level a lot.
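
For what it's worth, the 200-rack figure checks out if you assume ~100 kW per state-of-the-art AI rack (an assumption; current racks run roughly 100-130 kW):

site_power_kw = 20 * 1000  # 20 MW facility
kw_per_rack = 100          # assumed draw of a modern AI rack
print(site_power_kw / kw_per_rack)  # -> 200 racks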
 
That's just throwing a bone to Korea, to say they're investing locally; the average Joe has zero grasp of the size and scale of such an investment. It's paltry even considering that the Korean domestic market isn't exactly huge.

What is more interesting to me, however, is the discussion involving Samsung C&T and Samsung Heavy Industries collaborating with OpenAI on floating datacenters that use the ocean for cooling. To date, only very limited-size systems have been trialed, but Microsoft's experiment showing its servers were 8 times more reliable than onshore systems is definitely worth looking into. Their trial concluded that the cost benefits from the improved reliability exceeded the additional cost of deploying the datacenter in the ocean, and such datacenters can be deployed without real-estate concerns and with easier hookup to subsea fiber lines, the latter of which is ideal for a country like Korea. If there's one thing the Koreans excel at, it's seafaring structures and ships!
 

For AI processors, the DRAM content would almost certainly be mostly HBM, probably 12-high stacks (in volume in 2-3 years, to match the Nvidia/AMD deals). It could easily be a deal where OpenAI acquires the HBM and Nvidia/AMD use that supply for the products OpenAI has ordered (saving $$), which ensures HBM supply for those deals. 12-high HBM could drive a lot of wafer starts.
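
Back-of-envelope on why 12-high stacks drive wafer starts; every figure below is an assumption for illustration, not a confirmed spec:

dies_per_stack = 12        # "12-high" HBM stack of DRAM core dies
stacks_per_gpu = 8         # typical of current high-end accelerators
gpus_per_month = 100_000   # hypothetical build rate
hbm_dies = dies_per_stack * stacks_per_gpu * gpus_per_month
print(f"{hbm_dies:,} DRAM dies per month")  # 9,600,000 for HBM alone
# At an assumed ~600 good dies per 300 mm wafer, that's ~16,000 wafers/month.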
 