
Samsung Foundry nabs Nvidia

benb

Well-known member

Samsung shares rise after Nvidia's Huang flags tie-up on new AI chips​

By Hyunjoo Jin
March 16, 2026, 5:20 PM CDT. Updated 12 hours ago.


A Samsung Electronics logo and a computer motherboard appear in this illustration taken August 25, 2025. REUTERS/Dado Ruvic/Illustration

March 17 (Reuters) - Shares of Samsung Electronics (005930.KS) rose as much as 5% on Tuesday after Nvidia (NVDA.O) CEO Jensen Huang said the South Korean company was producing Nvidia's new artificial intelligence chips.
The news fuelled expectations that Samsung's foundry division, which makes logic chips for customers including Tesla (TSLA.O), Apple (AAPL.O) and Samsung's phone division, may be able to turn around as early as next year after posting billions of dollars in annual losses in recent years, analysts said.

At Nvidia's GTC developer conference in California on Monday, Huang unveiled Nvidia's new AI inference processor based on technology from chip startup Groq.

"I want to thank Samsung who manufactures the Groq LP30 chip for us and they're cranking as hard as they can," he said, adding the chips were in production and would be shipped in the second half of this year.
Samsung also showcased the Nvidia chips made using its 4-nanometer manufacturing process at the GTC.
Samsung shares were up 4.3% at 196,800 won as of 0252 GMT, after earlier reaching 198,000 won. The broader market (.KS11) was up 2.7%.

Sohn In-joon, an analyst at Heungkuk Securities, said he expected Samsung's foundry business would be able to reach breakeven later next year. But he said weak demand from mobile phones, stemming from surging memory chip prices, could weigh on foundry earnings.
Advanced Micro Devices (AMD.O) CEO Lisa Su will meet Samsung Electronics Chairman Jay Y. Lee in South Korea on Wednesday, media reports said, with eyes on whether the two would discuss cooperation in memory chips and logic semiconductors.

 
It looks like Samsung is making the Groq 3 LPU (Language Processing Unit) inference chip for Nvidia. Certainly SRAM heavy but also definitely a "logic" chip:


The Groq 3 incorporates SRAM, which offers faster speeds than DRAM. Memory bandwidth ranges from 22 terabytes per second (TB/s) to as high as 150 TB/s—about seven to 45 times faster than high-bandwidth memory (HBM). This significantly reduces latency in AI token generation compared with HBM-based systems. Samsung Foundry is producing the chips. “I would like to thank Samsung Electronics for manufacturing the LPU chips,” Mr. Huang said. “Samsung is currently producing as many as possible.”
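The quoted multiples are easy to sanity-check. A quick sketch, assuming an HBM3e package delivers roughly 3.3 TB/s (my ballpark assumption, not a figure from the article):

```python
# Back-of-the-envelope check of the quoted bandwidth multiples.
# Assumption: one HBM3e package delivers roughly 3.3 TB/s
# (a commonly cited ballpark, not a number from the article).
hbm_tbps = 3.3

sram_low_tbps = 22.0    # quoted lower bound for the Groq 3 SRAM fabric
sram_high_tbps = 150.0  # quoted upper bound

print(f"low:  {sram_low_tbps / hbm_tbps:.1f}x HBM")   # ~6.7x, matching "about seven"
print(f"high: {sram_high_tbps / hbm_tbps:.1f}x HBM")  # ~45.5x, matching "45 times"
```

So the "seven to 45 times faster than HBM" claim works out if you assume a single HBM stack as the baseline rather than a full multi-stack GPU package.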
 
This Groq 3 initiative by Nvidia is a curiosity to me. All this time Nvidia knew inference processing was coming up fast (they probably had more intelligence about where AI was going than any organization on the planet), and they knew GPUs don't have a great architecture for inference, yet they had to do this weird acqui-hire to get a leadership inference design quickly? Nvidia admittedly does have a stellar track record with acquisitions (their entire networking strategy, now the highest revenue networking product group in the industry (supposedly), is based on two acquisitions), but the LPU is just another processor, albeit one with an avant-garde flexible HW processing flow chip architecture. I can see Intel needing SambaNova after PatG (and others) hollowed out Intel's R&D for anything that wasn't an x86 CPU, but Nvidia? I expected more from Nvidia internally.
 
Nvidia admittedly does have a stellar track record with acquisitions (their entire networking strategy, now the highest revenue networking product group in the industry (supposedly), is based on two acquisitions), but the LPU is just another processor, albeit one with an avant-garde flexible HW processing flow chip architecture.

The issue is rapid evolution and specialization of processor architectures. Two things happened about 2-3 years ago that radically changed what AI chip companies need to build:
* Transformer-based MoE LLMs became huge - big enough to focus hardware optimizations specifically on their general structure.
* Disaggregation - DeepSeek showed the practical performance benefits of splitting prefill and decode phases of transformers. That resulted in a rapid rethink of AI processor clusters and software.

We're now seeing distinct hardware / processors developed/tuned for prefill and decode, linked together via fast networking and KV memory / caches.
* NVIDIA - Rubin CPX for prefill, Groq 3 LPU for decode, lots of special networking and storage for KV cache.
* Intel - NVIDIA Blackwell prefill, Gaudi 3 for decode
* Cerebras/Amazon - Trainium 3 prefill, Cerebras decode.

But the real magic is the rack level software that makes it all happen, and does intelligent resource allocation and routing. I think we're going to see a couple more generations of rack level innovation with increasingly better tuned hardware, for the way current frontier models work.
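The disaggregated setup described above can be sketched in a few lines. This is a toy illustration of the idea, not any vendor's actual API; all the names here (PrefillPool, DecodePool, Router) are made up:

```python
# Toy sketch of disaggregated inference scheduling: prefill and decode
# run on separate hardware pools, linked by a handed-off KV cache.
from dataclasses import dataclass, field

@dataclass
class Request:
    req_id: int
    prompt_tokens: int
    kv_cache: list = field(default_factory=list)

class PrefillPool:
    """Compute-heavy phase: ingest the whole prompt, emit a KV cache."""
    def run(self, req: Request) -> Request:
        req.kv_cache = [f"kv{i}" for i in range(req.prompt_tokens)]
        return req

class DecodePool:
    """Bandwidth-heavy phase: generate tokens one at a time from the KV cache."""
    def run(self, req: Request, max_new: int) -> list:
        assert req.kv_cache, "decode needs a prefilled KV cache"
        return [f"tok{i}" for i in range(max_new)]

class Router:
    """Rack-level orchestration: route each phase to the right pool."""
    def __init__(self):
        self.prefill, self.decode = PrefillPool(), DecodePool()
    def serve(self, req: Request, max_new: int) -> list:
        return self.decode.run(self.prefill.run(req), max_new)

tokens = Router().serve(Request(req_id=1, prompt_tokens=4), max_new=3)
print(tokens)  # ['tok0', 'tok1', 'tok2']
```

The interesting engineering is hidden in the KV-cache handoff and the routing policy, which is exactly the rack-level software layer being discussed.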
 
Last edited:
When I read about the complexity of AI hardware, software, administration, and datacenter requirements (power, cooling, etc.), it still looks to me like the future of enterprise AI is in and with the cloud computing companies. Even if big corporations wanted to do their own AI, how can they compete for workers at any of these levels with the tech companies? I don't think they can. That Amazon deal with Cerebras gives me a lot more confidence in Cerebras's future.
 
As I was reading the description of the Groq 3 LPU, I couldn't help but think it sounded a lot like a Cerebras compute unit. And then Kevin's post showed both doing decode in separate announced deals.

Mr Blue, I get the feeling Jensen chose this path because his other teams were already working balls-out on other projects.
 
It looks like Samsung is making the Groq 3 LPU (Language Processing Unit) inference chip for Nvidia. Certainly SRAM heavy but also definitely a "logic" chip:


The Groq 3 incorporates SRAM, which offers faster speeds than DRAM. Memory bandwidth ranges from 22 terabytes per second (TB/s) to as high as 150 TB/s—about seven to 45 times faster than high-bandwidth memory (HBM). This significantly reduces latency in AI token generation compared with HBM-based systems. Samsung Foundry is producing the chips. “I would like to thank Samsung Electronics for manufacturing the LPU chips,” Mr. Huang said. “Samsung is currently producing as many as possible.”
Must be a big chip, i.e., hard to yield.
 
As I was reading the description of the Groq 3 LPU, I couldn't help but think it sounded a lot like a Cerebras compute unit. And then Kevin's post showed both doing decode in separate announced deals.
Have you seen this paper?

Mr Blue, I get the feeling Jensen chose this path because his other teams were already working balls-out on other projects.
I have no doubt Jensen keeps his staff busy, but we're talking about the richest company on the planet. Money is essentially no object. Hiring is whatever and whomever it takes. I'm skeptical. As unbelievable as it sounds, it feels like they missed a big transition while focusing on training. Nah, can't be.
 


On August 15, 2023, Groq announced that it had chosen Samsung Foundry’s SF4X process to manufacture its next‑generation LPU (Language Processing Unit) silicon. Back then, people speculated that Samsung’s new Taylor, Texas fab would be the production site.

Four months ago, in December 2025, Nvidia initiated an acqui-hire of key Groq personnel and licensed Groq's technology, in a deal valued at about $20 billion. Jensen Huang's GTC announcement is a continuation of Groq's earlier direction.

Because Samsung’s Taylor, Texas fab will not be ready until the end of 2026, it is possible that Groq’s LP30 chips are currently being manufactured in Samsung’s Korean fabs.
 
Samsung Foundry Nabs Nvidia? :ROFLMAO:

Groq started with GF 14, which really is Samsung 14, so it tracks that they would use Samsung 4nm. Groq is an N-1 company with limited volumes, so Samsung is an okay place for them. I was told that Groq is currently designing to TSMC N3. Can anyone confirm? I will do some more digging.

Nvidia's foundry team will probably have some influence over that. They are next level smart with PDKs.
 
Samsung Foundry Nabs Nvidia?

The logic is:

  1. Groq selected Samsung SF4X in August 2023.

  2. Nvidia did an acqui-hire of key Groq personnel and licensed Groq's technology in December 2025.

  3. That's why Samsung nabs Nvidia, or rather, Nvidia selected Samsung for Groq's LP30, as reported.

You can say it's a bit of a stretch, but compared to many even more serious current events and the bizarre logic behind them, it's OK.
 
Groq is an N-1 company with limited volumes so Samsung is an okay place for them.
Groq already lost confidence in themselves last year: https://www.investing.com/news/comp...-to-500-million--the-information-93CH-4158309

Investing.com -- AI chipmaker Groq has significantly reduced its 2025 revenue projections from more than $2 billion to over $500 million within the past month, according to documents viewed by The Information.

The startup, which is competing with Nvidia (NASDAQ:NVDA) in the AI chip market, had shared the higher projection with investors early this year, around the same time it secured a $1.5 billion deal with Saudi Arabia to expand its business in that country.

When questioned about the revenue figures, Groq’s Chief Operating Officer Sundeep Madra initially denied both numbers, stating, "I think you’re misinformed. The numbers are wrong."

The dramatic reduction in projected revenue suggests Groq may be facing challenges in securing data center space as it works to sell its hardware to large companies and foreign governments.
 
The startup, which is competing with Nvidia (NASDAQ:NVDA) in the AI chip market, had shared the higher projection with investors early this year, around the same time it secured a $1.5 billion deal with Saudi Arabia to expand its business in that country.

Sadly, considering the ongoing chaos in the Middle East, this $1.5 billion Saudi Arabia deal along with many other AI and datacenter related deals in the region, may be in a shaky situation.
 
The dramatic reduction in projected revenue suggests Groq may be facing challenges in securing data center space as it works to sell its hardware to large companies and foreign governments.

My take is that Groq only had a partial solution for data-center scale inference. The Groq chips are great at simple, fast, low-latency decode (MoE execution), but don't have enough memory or the right memory management (KV store and caches) for optimized long-context prefill, where raw memory bandwidth isn't as important. Plus they lacked the rack/pod level resource management, routing, and memory tiering orchestration to make it all work efficiently.

Add in that they had to spin up and operate their own data centers to sell their solutions, since there weren't any CSP / hyperscaler takers for their raw boards / racks without trialing on real hardware. I don't think they directly ran into a data center power / capacity issue, but they did hit a cost-of-operations vs. revenue wall.
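The "not enough memory for long-context prefill" point is easy to quantify with the standard KV-cache sizing formula. A rough sketch; the model configuration here is a hypothetical Llama-70B-like setup with grouped-query attention, not Groq's actual numbers:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    """Size of the K and V caches for one request.
    The leading factor of 2 accounts for storing both K and V;
    bytes_per_elem=2 assumes fp16/bf16 cache entries."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

# Hypothetical 70B-class config: 80 layers, 8 KV heads (GQA),
# head_dim 128, at a 128K-token context.
size = kv_cache_bytes(n_layers=80, n_kv_heads=8, head_dim=128, seq_len=131_072)
print(f"{size / 2**30:.1f} GiB per request")  # 40.0 GiB per request
```

Tens of GiB of KV cache per long-context request is far beyond what fits in on-die SRAM, which is why an SRAM-heavy decode chip still needs a separate prefill tier and external KV storage.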
 