Will we see DDR5 memory (device) and memory controller (IP) in the near future? According to Cadence, which has released the industry's first test chip integrating DDR5 memory controller IP, fabricated in TSMC's 7nm process and achieving a 4400 megatransfers per second (MT/s) data rate, the answer is clearly yes!
Let's come back to DDR5 (in fact, a preliminary version of the DDR5 standard being developed in JEDEC) and the memory controller achieving 4400 MT/s. This means that the DDR5 PHY IP is running at 4.4 Gb/s, quite close to 5 Gb/s, the speed achieved by the PCIe 2.0 PHY ten years ago, in 2008. At that time, it was the state of the art for a SerDes, even if engineering teams were already working to develop faster SerDes (8 Gb/s for SATA 3 and 10 Gb/s for Ethernet). Today, the DDR5 PHY will be integrated in multiple SoCs, initially those targeting the enterprise market: servers, storage, and data center applications.
These applications are known to demand ever more data bandwidth and larger memories. But we know that, in the data center, power consumption has become the #1 cost source, leading to incredibly high electricity bills and more complex cooling systems. If you widen the memory controller's data bus while increasing the speed at the same time (the case with DDR5) but with no power optimization, you may end up with an unmanageable system!
This is not the case with the new DDR5 protocol, as the energy per bit (pJ/b) has decreased. The need for much higher bandwidth translates into a wider data bus (128 bits), and the net result is that power consumption stays the same as it was for the previous protocol (DDR4). In summary: a larger data bus times a faster PHY is compensated by lower energy per bit, keeping power constant. The net result is higher bandwidth!
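The trade-off above can be sketched numerically. The bus widths and data rates come from the text; the energy-per-bit figures below are hypothetical placeholders (chosen to make the two powers come out equal, which is the compensation the text describes), not published numbers.

```python
def interface_power_w(energy_pj_per_bit: float, bus_width_bits: int, rate_gbps: float) -> float:
    """Power (W) = energy per bit (pJ/b) x bits moved per second."""
    bits_per_second = bus_width_bits * rate_gbps * 1e9
    return energy_pj_per_bit * 1e-12 * bits_per_second

def bandwidth_gb_s(bus_width_bits: int, rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: bytes per transfer x transfers per second (in G/s)."""
    return bus_width_bits / 8 * rate_gbps

# DDR4: 64-bit bus at 3.2 Gb/s; DDR5: 128-bit bus at 4.4 Gb/s (per the text).
# The pJ/b values are illustrative only.
ddr4_power = interface_power_w(22.0, 64, 3.2)   # ~4.5 W
ddr5_power = interface_power_w(8.0, 128, 4.4)   # ~4.5 W: wider and faster, same power
print(bandwidth_gb_s(64, 3.2), bandwidth_gb_s(128, 4.4))  # 25.6 vs 70.4 GB/s
```

With the bus width doubled and the rate raised, the pJ/b figure must drop by roughly the same combined factor for power to stay flat; that is the knob DDR5 turns.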
You have probably heard about other emerging memory interface protocols, like High Bandwidth Memory 2 (HBM2) or Graphics DDR5 (GDDR5), and may wonder why the industry would need another protocol like DDR5.
The answer is complexity, cost of ownership, and wide adoption. It's clear that the DDRn protocols, as well as LPDDRn, have been dominant and have seen the widest adoption since their introduction. Why should DDR5 enjoy the same future as a memory standard?
If you look at HBM2, it is a very smart protocol: the data bus is incredibly wide while the clock rate stays fairly low (a 1024-bit-wide bus delivers 256 GB/s of bandwidth)… except that you need to implement 2.5D silicon technology, by means of an interposer. This is a much more complex technology leading to much higher cost, due to the packaging overhead of building in 2.5D, and also because of the lower production volume for the devices, which theoretically leads to a higher ASP.
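The wide-and-slow arithmetic is easy to check. A sketch: the HBM2 per-pin rate of 2 Gb/s below is inferred from the 256 GB/s figure quoted above and the 1024-bit bus, and the DDR5 numbers are those from the text.

```python
def bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and per-pin data rate."""
    return bus_width_bits / 8 * pin_rate_gbps

# HBM2: very wide bus, modest per-pin rate.
hbm2 = bandwidth_gb_s(1024, 2.0)   # 256.0 GB/s
# DDR5 as described in the text: far narrower, but faster per pin.
ddr5 = bandwidth_gb_s(128, 4.4)    # 70.4 GB/s
```

Same formula, opposite corners of the width/speed design space; HBM2 buys its bandwidth with pins (and an interposer to route them), DDR5 buys it with per-pin speed.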
GDDR5X (standardized in 2016 by JEDEC) targets a transfer rate of 10 to 14 Gbit/s per pin, clearly a higher speed than DDR5, but it requires re-engineering the PCB compared with the other protocols. That sounds more complex and certainly more expensive. Last point: if HBM2 has been adopted for systems where the bandwidth need is such that you can afford the extra cost, GDDR5X fills a gap between HBM2 and DDR5, which sounds like the definition of a niche market!
If your system allows you to avoid it, you shouldn't select a protocol seen as a niche. The lower the adoption, the lower the production volume, and the lower the competitive pressure on device ASP; the risk of paying a higher price per megabyte of DRAM is real.
If you have to integrate DDR5 in your system, most probably because your need for higher bandwidth is crucial, the Cadence DDR5 memory controller IP will offer you two very important benefits: low risk and fast time to market (TTM). Considering that early adopters have already integrated the Cadence IP in TSMC 7nm, the risk becomes much lower. Bringing a system to market faster than your competitors is clearly a strong advantage, and Cadence offers this TTM benefit. Last point: the Cadence memory controller IP has been designed to offer high configurability, to fit your application's needs.