Yes, I'm amazed at the thermal challenges for stacked chips. What an engineering effort to keep these stacked-die systems operating reliably.

Sort of relevant to this: I think some of the new AMD processors stack SRAM on top of some CPU dies. The result is that those chips cannot be overclocked because of heat problems.
DDR5 DIMMs already use stacked dies in a package, as I understand it.

Consumer DIMMs do not use stacking. You might find stacked DRAM if you buy a smartphone.
BTW, stacked memory means a lot. HBM is stacked (and used for expensive devices like the A100), and in a sense ePoP-packaged APs are also stacked.
With that, there is less and less incentive to keep DRAM off-package. If you get stacked dies at the cost of a single-die package anyway, why do you even need memory on a DIMM?
In what context? In a smartphone that makes sense; in a computer I would think it would help with local fast memory, but you'd need DIMMs to add external memory.
I recall Samsung discussing stacked DDR4 dice within a DIMM, using TSVs rather than wirebonds, so similar to HBM DRAM dice. I agree that die stacking means more capacity per DIMM, but my understanding is that these dice sit on separate ranks of the same channel, so your bandwidth into the die stack is the same for a single-high as for a double-high stack. In that context, additional DIMMs make sense only for additional channels, not as additional ranks per channel (because you can collapse those onto a single DIMM). It would be interesting to see something like SO-DIMMs used in servers to reduce physical area, but I don't know if those have RDIMM capability.

In a PC context as well. It's already possible to put way, way more memory on a single DIMM with DDR5 stacked packages than an average user will need. So keeping DIMMs will mostly make sense for servers only, and not even all of them. Nvidia put 192 GB of HBM3 on a package; that's more than what 9 out of 10 mainstream servers have.
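The ranks-vs-channels point above can be sketched with a quick back-of-the-envelope calculation. The DDR5-4800 figures below are illustrative assumptions, not measurements from any particular system:

```python
# Sketch: peak bandwidth of a DDR channel is set by its transfer rate
# and bus width. Ranks (or stacked dice) sharing a channel add capacity
# but time-share the same wires, so peak bandwidth does not grow.
# Assumed figures: DDR5-4800 on a 64-bit DIMM channel.

CHANNEL_WIDTH_BITS = 64    # one DDR5 DIMM (two 32-bit subchannels)
TRANSFER_RATE_MT_S = 4800  # DDR5-4800: mega-transfers per second

def channel_bandwidth_gb_s(width_bits: int = CHANNEL_WIDTH_BITS,
                           rate_mt_s: int = TRANSFER_RATE_MT_S) -> float:
    """Peak bandwidth of a single channel in decimal GB/s."""
    return (width_bits / 8) * rate_mt_s * 1e6 / 1e9

def system_bandwidth_gb_s(channels: int, ranks_per_channel: int = 1) -> float:
    # Peak bandwidth scales with channel count only; ranks_per_channel
    # is deliberately unused in the bandwidth math.
    return channels * channel_bandwidth_gb_s()

print(channel_bandwidth_gb_s())                       # 38.4 GB/s per channel
print(system_bandwidth_gb_s(2, ranks_per_channel=1))  # 76.8 GB/s
print(system_bandwidth_gb_s(2, ranks_per_channel=4))  # still 76.8 GB/s
```

Doubling the die stack quadruples the ranks in the last call, yet the peak number doesn't move; only adding channels (more DIMM slots wired to the controller) raises it.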
And for consumer devices, more integration is always good. I've never seen anyone who likes routing super-touchy DDR lines by hand. With the DDR lanes gone, I think an average laptop motherboard could lose a few layers, get smaller, and use cheaper material.