
Is single-die DRAM gone in favor of stacked-die?

jms_embedded

Active member
If I buy DRAM for my computer today, is any of it single-die, or are all the latest DRAM chips stacked-die? I've looked around and it looks like stacked-die is common, but I wasn't sure if it has completely displaced single-die DRAM.
 

Daniel Payne

Moderator
Several companies have announced stacked DRAM chips:
 

smeyer0028

Guest
Sort of relevant to this: I think some of the new AMD processors stack SRAM on top of the CPU die. The result is that those chips cannot be overclocked because of heat problems.
 

Daniel Payne

Moderator
Yes, I'm amazed at the thermal challenges of stacked chips; it's quite an engineering effort to keep these stacked-die systems operating reliably.
 
Consumer DIMMs do not use stacking. You might find stacked DRAM if you buy a smartphone.

BTW, "stacked memory" covers a lot: HBM is stacked (used in expensive devices like the A100), and in a sense ePOP-packaged APs are also stacked.
 

Paul2

Active member
DDR5 DIMMs already use stacked dies in a package, as I understand it.

With that, there are fewer and fewer incentives to keep DRAM off-package. If you get stacked dies at the cost of a single-die package anyway, why do you even need memory on a DIMM?
 

jms_embedded

Active member
In what context? In a smartphone that makes sense; in a computer I would think it would help with fast local memory, but you'd need DIMMs to add more memory.
 

Paul2

Active member

In a PC context as well. With DDR5 stacked packages it's already possible to put far more memory on a single DIMM than an average user will need. So keeping DIMMs will mostly make sense for servers, and not even all of them. Nvidia put 192 GB of HBM3 on a package; that's more than what 9 out of 10 mainstream servers have.

And for consumer devices, more integration is always good. I've never seen anyone who enjoys hand-routing touchy DDR lines. With the DDR lanes gone, I think an average laptop motherboard could lose a few layers, get smaller, and use cheaper material.
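The capacity argument above can be sketched with back-of-envelope arithmetic. All the numbers below (die density, stack height, package and rank counts) are illustrative assumptions, not vendor specifications:

```python
# Back-of-envelope DIMM capacity with single-die vs. 3DS (stacked) packages.
# Every parameter here is an illustrative assumption, not a vendor spec.

def dimm_capacity_gb(die_gbit, dies_per_stack, packages_per_rank, ranks):
    """Total DIMM capacity in gigabytes."""
    total_gbit = die_gbit * dies_per_stack * packages_per_rank * ranks
    return total_gbit / 8  # 8 bits per byte

# Single-die packages: assumed 16 Gbit dies, 10 packages per rank, 2 ranks.
single = dimm_capacity_gb(16, 1, 10, 2)
# Same layout, but 8-high stacked packages.
stacked = dimm_capacity_gb(16, 8, 10, 2)

print(single, stacked)  # 40.0 320.0
```

Under these toy numbers, stacking alone multiplies per-DIMM capacity by the stack height, which is the sense in which one DIMM can exceed what most users need.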
 

mgoldsmith1979

Guest
I recall Samsung discussing stacked DDR4 dice within a DIMM, using TSVs rather than wirebonds, similar to HBM DRAM dice. I agree that die stacking means you get more capacity per DIMM, but my understanding is that these dice are separate ranks on the same channel, so your bandwidth into the die stack is the same for a single die as for a stack. In that context, additional DIMMs make sense only for additional channels, not as additional ranks-per-channel (because you can collapse those onto a single DIMM). It would be interesting to see something like SO-DIMMs used in servers to reduce physical area, but I don't know if those have RDIMM capability.
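The bandwidth-versus-capacity distinction being made here can be shown with a toy calculation; the DDR5-4800 transfer rate and 64-bit channel width below are assumed example figures, not a claim about any specific product:

```python
# Peak channel bandwidth depends on transfer rate and bus width, not on how
# many ranks (or stacked dies) share the channel. Illustrative numbers only.

def channel_bw_gbs(mt_per_s, bus_bits):
    """Peak bandwidth of one memory channel in GB/s."""
    return mt_per_s * (bus_bits / 8) / 1000

bw_one_rank = channel_bw_gbs(4800, 64)   # e.g. DDR5-4800, 64-bit channel
bw_two_ranks = channel_bw_gbs(4800, 64)  # extra ranks add capacity, not peak BW

print(bw_one_rank, bw_two_ranks)  # 38.4 38.4
```

Since the rank count never enters the formula, adding stacked ranks grows capacity while the peak bandwidth per channel stays fixed, which is why extra DIMMs only pay off as extra channels.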

Certainly Apple seems to think that consumers of all stripes won't regret the lack of post-purchase DRAM upgradability. Whenever they launch their "M2 Ultra" solution to replace the Xeon, it will be interesting to see whether they retain that design decision or allow the professional market to keep the post-purchase choices it currently enjoys.

The server market requires the option to expand capacity and replace failing modules without replacing the entire system. But again, that model may be questioned in the future if the majority of DRAM sits in remote memory pools rather than local DIMMs.
 