What will happen with Micron?

Arthur Hanson

The US will not want to rely on foreign suppliers for memory, especially HBM. Will this be enough to keep Micron a viable and cost-effective producer of memory? Also, do readers feel Micron will be a leader or a follower in the advanced memory technologies that will be needed more and more as AI and ML continue to advance at a blistering pace? The market is already pricing in a better future; is it right?
 
Because Micron has a node advantage, they already have a cost advantage (all else being equal). As for Micron's talk of making, if memory serves, 40% of their DRAM in the USA… well, Micron is the scrappiest memory company around. Heck, I would even say the scrappiest semiconductor manufacturer around. The other two needed massive state backing and mega-corporations to get where they are today; Hynix went bankrupt something like twice and needed the government to intervene. Micron's current CEO also has quite the resume and seems pretty shrewd. If they say they can make the economics work with some CHIPS Act goodness, I believe them. As for HBM, Arthur, you must remember that while the margins are good, volumes fall well behind other DRAM segments, so I would not bank Micron's success or failure on HBM alone. As for who will lead HBM, I have no reason not to expect it will continue to be SK hynix, as they presumably continue to be first to introduce each new generation of HBM.
 
Thanks for the information, any further elaboration is welcome and appreciated.
 
Considering the big 2 are South Korean, I really don't see that the US has a security-of-supply issue in memory (DRAM or NAND).

Hynix has the HBM lead today, but there is nothing but time and application stopping Micron and Samsung from catching up. By 2025 Hynix will likely still be the HBM leader, but Samsung's and Micron's market shares will not be far behind.

Also, as nghanayem has pointed out, HBM is only a small part of the total DRAM market, even though its profile is very high these days. HBM will struggle to hit even 10% of total bit output in '24. So it is a "nice to have" for the manufacturers, but if they want to keep the fabs utilised and make money, they need the hyperscalers, smartphone makers and PC OEMs to pick up their demand for "standard" DRAM next year.

Memory always has been and always will be a cyclical business. The future looks good because we are now off the bottom of the last cycle, even if this turnaround has so far been supply-driven rather than demand-driven.

 
If you believe in AI + silicon substrates (interposers, chiplets, SIPs, etc), then HBM is huge. The $$$ going into fabs and packaging is changing the game.
 
This is a really hard question to answer. NVIDIA has been using HBM since 2016 (the P100 GPU), so HBM itself is not really a new product in AI. What really changed is how important large capacity plus wide bandwidth has become. ChatGPT showed that large AI models work remarkably well, so everyone is rushing to HBM to train big models. But how big? 175B parameters (GPT-3) was proven to be great; will 500B, 1T, even 10T pay off as well? If big models keep improving, then without a strong HBM position Micron risks losing out on a rapidly growing market.
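To put rough numbers on "how big": here is a back-of-envelope sketch where every figure is an illustrative assumption, not a sourced one (roughly 16 bytes of training state per parameter for mixed-precision Adam, 80 GB of HBM per H100-class GPU, activations ignored):

```python
# Back-of-envelope: training-state memory vs. model size.
# Assumption: ~16 bytes of state per parameter for mixed-precision Adam
# (fp16 weights 2B + fp16 grads 2B + fp32 master weights 4B + fp32 Adam
# moments 8B); activations excluded. 80 GB of HBM per GPU assumed.
BYTES_PER_PARAM = 16
HBM_PER_GPU_GB = 80

for params_b in (175, 500, 1_000, 10_000):   # billions of parameters
    state_gb = params_b * BYTES_PER_PARAM    # 1e9 params * B/param / 1e9 B/GB
    gpus = state_gb / HBM_PER_GPU_GB
    print(f"{params_b:>6}B params -> {state_gb:>9,} GB of state "
          f"(~{gpus:,.0f} GPUs just to hold it)")
```

Even under these crude assumptions, a 10T-parameter model needs thousands of GPUs just to hold its training state, which is the demand pull behind the HBM rush.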

I'd also like to point out that HBM inventories are really hard to manage. HBM needs special care that DRAM makers never had to take before: specialized memory dies with large pads (I/O and power) plus base dies from a foundry. It also requires customers to design their chips to support HBM (I/Os and memory controllers), and the stacks need to be packaged at foundries (CoWoS, etc.). That means large CAPEX could backfire if large AI models stop delivering better results every year, because you cannot sell HBM into client products...
 
If you believe in AI + silicon substrates (interposers, chiplets, SIPs, etc), then HBM is huge. The $$$ going into fabs and packaging is changing the game.
I understand the ultra-bull case, but I'm not convinced yet. Also, it's not likely to happen in this market cycle.

In the meantime, if "AI" servers continue to take share and budget from "general" servers, total DRAM bit demand will be lower.
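A toy illustration of that substitution effect, where every price and capacity is an assumption made up for the example: for a fixed budget, one expensive 8-GPU AI box can displace many DDR-heavy 2S servers, taking total DRAM bits down even as HBM bits go up.

```python
# Toy budget-substitution example: why AI servers can lower total DRAM
# bit demand. All prices and capacities are illustrative assumptions.
BUDGET = 300_000                                   # fixed server budget, USD
GENERAL = {"price": 15_000, "dram_gb": 1024}       # 2S server, ~1 TB DDR
AI = {"price": 300_000, "dram_gb": 8 * 80 + 512}   # 8x80 GB HBM + host DDR

general_bits = (BUDGET // GENERAL["price"]) * GENERAL["dram_gb"]  # 20 servers
ai_bits = (BUDGET // AI["price"]) * AI["dram_gb"]                 # 1 server
print(general_bits, ai_bits)   # 20480 GB vs 1152 GB for the same spend
```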
 
If you believe in AI + silicon substrates (interposers, chiplets, SIPs, etc), then HBM is huge. The $$$ going into fabs and packaging is changing the game.
I'm not convinced it is as big as you expect. Let's take the iPhone 14 as 6 GB (and, just based on my knowledge of Galaxy phones, I am pretty sure the average flagship phone has more RAM than an iPhone). Then let's say the average new PC is 16 GB. That means 2 PCs and 8 iPhones have as much RAM as an H100. Heck, even modern 2S servers have RAM measured in terabytes rather than double-digit gigabytes; you would need 13 H100s per 2S server for the H100s to have more RAM, and I don't even know if a ratio like that is something you would do in a training server. Back to the client side: in 2022, 74M PCs and 1.4B phones were shipped. Even if nobody bought another server CPU again, I don't think NVIDIA has sold anywhere near enough H100s to get close to the DDR or LPDDR markets in volume. The packaging on a base die is also not value that the memory makers get to capture.
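Spelling out that arithmetic (all capacities are the ones assumed in the post above, not measured figures):

```python
# Sanity check of the RAM comparison above.
# All capacities are assumptions from the post, not sourced figures.
H100_HBM_GB = 80      # H100 HBM capacity
IPHONE_GB = 6         # assumed iPhone 14 RAM
PC_GB = 16            # assumed average new-PC RAM
SERVER_DDR_GB = 1024  # assumed 2S server with ~1 TB of DDR

print(2 * PC_GB + 8 * IPHONE_GB)         # 80 -> 2 PCs + 8 iPhones = one H100
print(SERVER_DDR_GB / H100_HBM_GB)       # 12.8 -> ~13 H100s per 2S server
# 2022 client DRAM shipped under these assumptions, in GB:
print(74e6 * PC_GB + 1.4e9 * IPHONE_GB)  # ~9.6e9 GB, dwarfing H100 HBM volume
```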
 
HBM uses less power (yes, also less latency, but the power is the big thing for me). IMO, much of the packaging will happen in less expensive locations than TSMC.

We are personally betting on this, but we are not in the 64-bit market, training, or cellphones. We are focused on EDA, IP, and relatively high-performance ASICs on the edge, and on not melting the chips. As an example, we have young engineers working on a quad 32-bit RISC-V + exotic memory (L3) + HBM.

Costs are astronomical now, but they will come down in the future. We are in the business of R&D, at the IoT level. You guys are on the cutting edge (servers and cellphones); we are looking at processing on the edge. In R&D, you target the future. For 32-bit, I believe utilizing HBM is a no-brainer. This is supplemental to the larger DRAM.
 
As an example, we have young engineers working on a quad 32-bit RISC-V + exotic memory (L3) + HBM.
That's an interesting project. HBM comes with high bandwidth but high (slow) latency as well. A quad 32-bit RISC-V sounds to me like a latency-sensitive SoC, yet it uses HBM. Maybe the HBM is there to be used as buffers (no direct access from the RISC-V cores), or maybe the SoC's cache subsystem is strong enough to hide the slow HBM latency? Network-infrastructure SoCs, perhaps?
 
This is what we are doing (a rough sketch of the resulting hierarchy follows below)...

L1 embedded with each core
L2 shared by 4 cores on a common tile with pads included
L3 made up of exotic RAM (probably MRAM)
Then a fast fetch from a short-stack HBM with 2 or 4 dies surrounding it
Pads going to DRAM as well
Logic/CPU done on 40nm, 22nm, 16nm, 14nm (we automigrate and adjust)
180nm die acting as the PC board inside the SIP, with active circuitry

We use our own tools (except for open source simulators and Yosys), PDKs, and IP.

The goal is to replace FPGAs with high-performance SIPs at an NRE of < $3M, including a shared mask.

Note: I doubt we are the only ones doing this. Our spin is that we are mostly automated and foundry-agnostic.
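Here is the rough sketch promised above: a minimal average-memory-access-time model of that stack, where every hit fraction and latency is an illustrative assumption (we have published no such numbers). The point is that if the L1/L2/MRAM-L3 levels absorb almost all accesses, the slow HBM first access raised earlier in the thread gets hidden:

```python
# Minimal AMAT model of the SIP memory hierarchy described above.
# Hit fractions and latencies are illustrative assumptions only.
HIERARCHY = [
    # (level, fraction of accesses served here, latency in ns)
    ("L1 (per core)",         0.90,    1),
    ("L2 (per 4-core tile)",  0.06,    5),
    ("L3 (exotic RAM/MRAM)",  0.03,   30),
    ("HBM short stack",       0.009, 120),
    ("External DRAM",         0.001, 200),
]

# Average memory access time = sum over levels of (fraction * latency).
amat = sum(frac * lat for _, frac, lat in HIERARCHY)
print(f"AMAT ~= {amat:.2f} ns")  # ~3.4 ns if the upper levels hold
```

Under these made-up numbers the HBM and DRAM tiers contribute only ~1.3 ns of the ~3.4 ns average, which is the argument for pairing latency-sensitive cores with HBM behind a strong cache subsystem.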
 
One more thing: it is possible that matrix multiplication can be handled by us Neanderthals. We will see...
 