"TSMC is manufacturing OpenAI's AI chip using its advanced 3-nanometer process technology. The chip features a commonly used systolic array architecture with high-bandwidth memory (HBM) - also used by Nvidia for its chips - and extensive networking capabilities, sources said."
Would this impact the relationship between Nvidia and OpenAI? Correct me if I'm wrong, but didn't Jensen say that Nvidia started CUDA because he heard that OpenAI was using their GPUs to train, and Nvidia decided to throw its support behind OpenAI?
"A typical tape-out costs tens of millions of dollars and will take roughly six months to produce a finished chip, unless OpenAI pays substantially more for expedited manufacturing. There is no guarantee the silicon will function on the first tape out and a failure would require the company to diagnose the problem and repeat the tape-out step."
Throughout the history of semiconductors there has been price gouging. Intel is a good example, and look where they are today? Arm is having their RISC-V moment. Nvidia will be in the same boat ($40k Chips!?!?!) and OpenAI has already been DeepSeeked. The semiconductor industry is all about offering better price/performance and that requires competition.
I don't recall exactly when the concept of "frenemies" or "coopetition" came to the semiconductor industry, but I remember discussing it in many meetings. Has it ever really worked? Not in my experience. I do remember many times when it did not end well.
You can't be serious. Are you? The most expensive Cadence license I've heard about is approximately $15K per year, and there are annual support fees, but these costs are relatively trivial compared to what OpenAI is probably paying engineers.
I don't understand your post. CUDA was originally defined about 20 years ago.
SAN FRANCISCO/NEW YORK, Feb 10 (Reuters) - OpenAI is pushing ahead on its plan to reduce its reliance on Nvidia (NVDA.O) for its chip supply by developing its first generation of in-house artificial-intelligence silicon.
The ChatGPT maker is finalizing the design for its first in-house chip in the next few months and plans to send it for fabrication at Taiwan Semiconductor Manufacturing Co (2330.TW), sources told Reuters. The process of sending a first design through a chip factory is called "taping out."
Last year TSMC’s C.C. Wei mentioned that a new customer gave them an incredibly ambitious projection. Internally, some TSMC executives expressed skepticism about the young man because the quantity was way too big.
Although I feel the most interesting part of this story is that Richard Ho is in charge of ASIC design. He is an authority on ML algorithms for chip design, so I guess OpenAI is going after hardware too.