If I buy DRAM for my computer today, is any of it single-die, or are all the latest DRAM chips stacked-die? I've looked around and it seems stacked-die is common, but I wasn't sure whether it has completely displaced single-die DRAM.
Sort of relevant to this: I think some of the new AMD processors stack SRAM on top of some CPU dies. The result is that those chips can't be overclocked because of heat problems.
DDR5 DIMMs already use stacked dies in a package, as I understand it.
With that, there is less and less incentive to keep DRAM off-package. If you get stacked dies at the cost of a single-die package anyway, why do you even need memory on a DIMM?
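To make the "stacked dies in a package" point concrete, here is a rough back-of-the-envelope sketch of DIMM capacity; the die density, stack height, and device counts are illustrative assumptions, not any particular vendor's spec:

```python
# Rough capacity arithmetic for a DDR5 DIMM built from stacked (3DS) packages.
# All values below are illustrative assumptions, not a specific product spec.

DIE_DENSITY_GBIT = 16      # assumed monolithic DDR5 die density (Gbit)
STACK_HEIGHT = 4           # assumed dies per stacked package (2-, 4-, 8-high exist)
DEVICES_PER_RANK = 8       # x8 devices filling a 64-bit (non-ECC) data bus
RANKS = 2                  # dual-rank module

capacity_gb = DIE_DENSITY_GBIT * STACK_HEIGHT * DEVICES_PER_RANK * RANKS / 8
print(f"DIMM capacity: {capacity_gb:.0f} GB")   # 128 GB with these assumptions
```

Swapping in taller stacks or denser dies scales the number linearly, which is why a single stacked-package DIMM can reach capacities far beyond what a desktop user needs.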
In what context? In a smartphone that makes sense; in a computer I would think it would help with local fast memory, but you'd still need DIMMs to add external memory.
In a PC context as well. With DDR5 stacked packages it's already possible to put far more memory on a single DIMM than the average user will ever need, so keeping DIMMs will mostly make sense for servers, and not even all of them. Nvidia put 192 GB of HBM3 on a package; that's more than 9 out of 10 mainstream servers have.
And for consumer devices, more integration is always good. I've never seen anyone enjoy routing touchy DDR traces by hand. With the DDR lanes gone, I think an average laptop motherboard could lose a few layers, get smaller, and use cheaper material.
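As a rough illustration of why on-package memory is so attractive, here is a small sketch comparing the peak bandwidth of a single HBM3 stack against one socketed DDR5 channel; the per-pin data rates are nominal spec values used only for illustration:

```python
# Peak bandwidth per interface: an on-package HBM3 stack vs. a socketed DDR5 channel.
# Nominal data rates; treat the numbers as illustrative, not a product comparison.

def bandwidth_gbps(bus_width_bits: int, data_rate_mtps: int) -> float:
    """Peak bandwidth in GB/s = bus width in bytes * transfers per second."""
    return bus_width_bits / 8 * data_rate_mtps / 1000

hbm3_stack   = bandwidth_gbps(1024, 6400)  # 1024-bit stack at 6.4 Gb/s/pin ≈ 819 GB/s
ddr5_channel = bandwidth_gbps(64, 5600)    # 64-bit channel at DDR5-5600 ≈ 45 GB/s

print(f"HBM3 stack:   {hbm3_stack:6.1f} GB/s")
print(f"DDR5 channel: {ddr5_channel:6.1f} GB/s")
print(f"Ratio: ~{hbm3_stack / ddr5_channel:.0f}x per interface")
```

The wide, short on-package interface is what off-package DIMMs can't match, regardless of capacity.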
I recall Samsung discussing stacked DDR4 dice within a DIMM, using TSVs rather than wirebonds, so similar to HBM DRAM dice. I agree that die stacking means more capacity per DIMM, but my understanding is that the stacked dice share the channel as extra ranks, so your bandwidth into the die stack is the same for a single-high as for a double-high stack. In that context, additional DIMMs make sense only for additional channels, not as additional ranks per channel (because you can collapse those onto a single DIMM). It would be interesting to see something like SO-DIMM used in servers to reduce the physical area, but I don't know if those have RDIMM capability.
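The ranks-vs-channels distinction is worth spelling out: stacking puts more dice (capacity) behind one interface, but peak bandwidth only grows with channel count. A minimal sketch, assuming a nominal DDR5-4800 channel:

```python
# Sketch of the ranks-vs-channels point: extra ranks time-share a channel's bus,
# so only extra channels add peak bandwidth. DDR5-4800 figure is illustrative.

CHANNEL_BW_GBPS = 4800 * 8 / 1000   # 64-bit channel at 4800 MT/s = 38.4 GB/s

def peak_bandwidth(channels: int, ranks_per_channel: int) -> float:
    """Capacity scales with ranks, but peak bandwidth scales with channels only."""
    _ = ranks_per_channel   # ranks share the bus; they add no peak bandwidth
    return channels * CHANNEL_BW_GBPS

print(peak_bandwidth(channels=1, ranks_per_channel=1))  # 38.4 GB/s
print(peak_bandwidth(channels=1, ranks_per_channel=4))  # still 38.4 GB/s
print(peak_bandwidth(channels=2, ranks_per_channel=1))  # 76.8 GB/s
```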
Certainly Apple seems to think that consumers of all stripes won't regret the lack of post-purchase DRAM upgradability. Whenever they launch their "M2 Ultra" solution to replace the Xeon, it will be interesting to see whether they retain that design decision or let the professional market keep the post-purchase choices it currently enjoys.
The server market requires the option to expand capacity and replace failing modules without replacing the entire system. But that model may be questioned in the future if the majority of DRAM ends up in remote memory pools rather than local DIMMs.