Exclusive: OpenAI set to finalize first custom chip design this year

soAsian

Active member
"TSMC is manufacturing OpenAI's AI chip using its advanced 3-nanometer process technology. The chip features a commonly used systolic array architecture with high-bandwidth memory (HBM) - also used by Nvidia for its chips - and extensive networking capabilities, sources said."

Would this impact the relationship between Nvidia and OpenAI? Correct me if I'm wrong, but didn't Jensen say that Nvidia started CUDA because he heard that OpenAI was using their GPUs to train, and Nvidia decided to throw its support behind OpenAI?

 
"A typical tape-out costs tens of millions of dollars and will take roughly six months to produce a finished chip, unless OpenAI pays substantially more for expedited manufacturing. There is no guarantee the silicon will function on the first tape out and a failure would require the company to diagnose the problem and repeat the tape-out step."

Good grief.

Those Cadence licenses are why these guys are burning through piles of money!

Good grief.
 
Would this impact the relationship between Nvidia and OpenAI? Correct me if I'm wrong, but didn't Jensen say that Nvidia started CUDA because he heard that OpenAI was using their GPUs to train, and Nvidia decided to throw its support behind OpenAI?

Throughout the history of semiconductors there has been price gouging. Intel is a good example, and look where they are today. Arm is having its RISC-V moment. Nvidia will be in the same boat ($40K chips!?!?!), and OpenAI has already been DeepSeeked. The semiconductor industry is all about offering better price/performance, and that requires competition.

I don't recall exactly when the concept of "frenemies" or "coopetition" came to the semiconductor industry, but I remember discussing it in many meetings. Has it ever really worked? Not in my experience. I do remember many times when it did not end well.
 
"TSMC is manufacturing OpenAI's AI chip using its advanced 3-nanometer process technology. The chip features a commonly used systolic array architecture with high-bandwidth memory (HBM) - also used by Nvidia for its chips - and extensive networking capabilities, sources said."

Would this impact the relationship between Nvidia and OpenAI? Correct me if I'm wrong, but didn't Jensen say that Nvidia started CUDA because he heard that OpenAI was using their GPUs to train, and Nvidia decided to throw its support behind OpenAI?
I don't understand your post. CUDA was originally defined about 20 years ago.

Those Cadence licenses are why these guys are burning through piles of money!

You can't be serious. Are you? The most expensive Cadence license I've heard about is approximately $15K per year, and there are annual support fees, but these costs are relatively trivial compared to what OpenAI is probably paying engineers.
 
I don't understand your post. CUDA was originally defined about 20 years ago.



You can't be serious. Are you? The most expensive Cadence license I've heard about is approximately $15K per year, and there are annual support fees, but these costs are relatively trivial compared to what OpenAI is probably paying engineers.

lol

Cadence licenses are several orders of magnitude more expensive than that.
 
SAN FRANCISCO/NEW YORK, Feb 10 (Reuters) - OpenAI is pushing ahead on its plan to reduce its reliance on Nvidia (NVDA.O) for its chip supply by developing its first generation of in-house artificial-intelligence silicon.

The ChatGPT maker is finalizing the design for its first in-house chip in the next few months and plans to send it for fabrication at Taiwan Semiconductor Manufacturing Co (2330.TW), sources told Reuters. The process of sending a first design through a chip factory is called "taping out."


 
Last year TSMC's C.C. Wei mentioned that a new customer gave them an incredibly ambitious projection. Internally, some TSMC executives were skeptical of the young man because the quantity was far too big.
 
I don't understand your post. CUDA was originally defined about 20 years ago.



You can't be serious. Are you? The most expensive Cadence license I've heard about is approximately $15K per year, and there are annual support fees, but these costs are relatively trivial compared to what OpenAI is probably paying engineers.

That is a license to do what?

That has to be the most basic one, with no support.
 
"A typical tape-out costs tens of millions of dollars and will take roughly six months to produce a finished chip, unless OpenAI pays substantially more for expedited manufacturing. There is no guarantee the silicon will function on the first tape out and a failure would require the company to diagnose the problem and repeat the tape-out step."

Good grief.


Good grief.
What!!! Since when can we go from zero to a finished chip in six months?
 
That said, I feel like the most interesting part of this story is that Richard Ho is in charge of ASIC design. He is an authority on ML algorithms for chip design, so I guess OpenAI is after hardware too.
 
It all started to make sense when Sam said he wants to spend "$7 trillion" on A.I.:
  1. $500 billion for the Stargate project for data centers/power
  2. Custom-built chips for OpenAI
 
It all started to make sense when Sam said he wants to spend "$7 trillion" on A.I.:
  1. $500 billion for the Stargate project for data centers/power
  2. Custom-built chips for OpenAI
I've never read anything that Altman says that has made any sense.
 