Once upon a time, RAM technology was the driver of the semiconductor process. DRAM products were the first to be designed on the newest technology node, and DRAM was used as a process driver. That was 30 years ago, when the most aggressive process nodes ranged between 1 um and 1.5 um (1,500 nm!). Then in the 1990s the Synchronous Dynamic Random Access Memory (SDRAM) was introduced, and Double Data Rate (DDR) was specified in June 2000: the DDR SDRAM was born, and our PCs, laptops and smartphones still run on this DDR architecture. The DDR4 specification was issued in 2014 (it took 10 years to finalize), and the industry consensus is that DDR4 will be the last version; don't ever expect to see DDR5.
Does that mean that DDR4 will disappear? Yes… and no! In fact, DDR4-based systems will be developed for a long time, probably up to 2020 and perhaps later. Why? The reason is pricing. After commanding a premium price at introduction, DRAM pricing declines toward the low range ($ per sq mm of silicon), the product becomes widely used, and the price eventually stabilizes at a low point. In marketing, such a product is called a commodity, like beans or nails!
What about new systems targeting applications like networking, servers, graphics or the next smartphones? All of these are memory hungry, and if you want to offer higher performance than the previous version (and you do!), you need to increase the memory interface bandwidth. To do so, you have two options: use a wider data bus (as with the Wide I/O architecture, for example) or increase the clock speed. Increasing the clock speed of a parallel data bus with a separate clock line (the DDRn architecture) inevitably hits a feasibility limit, around 4 Gbps for data buses like DDRn. Let's take a look at two emerging technologies, Hybrid Memory Cube (HMC) and High Bandwidth Memory (HBM), to understand how these architectures could support performance-hungry applications.
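The trade-off between the two options comes down to simple arithmetic: peak bandwidth is bus width times per-pin data rate. A minimal sketch, using illustrative numbers (a 64-bit DDR4-style module at 3.2 Gbps per pin, and a hypothetical 512-bit wide bus at a much slower 0.8 Gbps per pin, neither taken from this post):

```python
def bandwidth_gbytes(bus_width_bits: int, per_pin_gbps: float) -> float:
    """Peak bandwidth in GB/s for a parallel memory interface:
    bus width (bits) x per-pin data rate (Gbps) / 8 bits per byte."""
    return bus_width_bits * per_pin_gbps / 8

# A DDR4-style 64-bit module at 3.2 Gbps per pin:
print(bandwidth_gbytes(64, 3.2))   # 25.6 GB/s
# A Wide I/O-style 512-bit bus at only 0.8 Gbps per pin:
print(bandwidth_gbytes(512, 0.8))  # 51.2 GB/s
```

The wide-bus option doubles the bandwidth here even though each pin toggles four times slower, which is exactly why it sidesteps the ~4 Gbps per-pin feasibility wall mentioned above.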
To write this post I have used resources from Cadence's website. The company, strongly involved in the memory controller and PHY IP market since the Denali acquisition, is developing IP to support the post-DDR4 technologies. You can find the related links at the end of this post.
First, we should note that these two protocols, HBM and HMC, are based on either 3D or 2.5D technologies. DDR4 is not only the last DDR, it's also the last protocol based on two dimensions only. To simplify, we can say that 3D is when you package multiple chips (8 memories + 1 logic IC in the above example) using Through Silicon Vias (TSV) to interconnect them, while 2.5D uses a silicon interposer to interconnect two ICs (an IC could itself be a 3D package). This silicon interposer is equipped with micro-bumps (50 um size) allowing the 2.5D device to be connected to a traditional PC board (for example).
Hybrid Memory Cube: 3D + SerDes → 360 GB/s
HMC (Figure 4) is being developed by the Hybrid Memory Cube Consortium and has already reached production. The architecture essentially combines SerDes-based, high-speed logic process technology with a stack of through-silicon-via (TSV) bonded memory die. According to the Hybrid Memory Cube Consortium, a single HMC can deliver more than 15X the performance of a DDR3 module and consume 70% less energy per bit than DDR3. So, Hybrid Memory Cube is based on ultra-high-speed SerDes I/O (10, 12.5 or 15 Gbps today, 25 Gbps for the next release), and the memory chip maker supplying the "cube" also integrates a logic die. Once implemented on a classical board, the memory cube is interfaced with a SoC through several very high speed SerDes. The semiconductor industry has used SerDes-based interconnects for many years (PCI Express and Ethernet protocols, to name a few); special care needs to be taken when implementing the interconnects on the board, but feasibility is well established.
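To see how a handful of SerDes lanes adds up to hundreds of GB/s, here is a minimal sketch. The link and lane counts below (4 links of 16 full-duplex lanes each) are illustrative assumptions, not figures from this post; only the 15 Gbps lane rate is quoted above:

```python
def serdes_bandwidth_gbytes(links: int, lanes_per_link: int,
                            lane_rate_gbps: float,
                            full_duplex: bool = True) -> float:
    """Aggregate SerDes bandwidth in GB/s: each lane carries
    lane_rate_gbps; full-duplex lanes move data in both directions."""
    directions = 2 if full_duplex else 1
    return links * lanes_per_link * lane_rate_gbps * directions / 8

# Hypothetical cube: 4 links x 16 lanes x 15 Gbps, full duplex:
print(serdes_bandwidth_gbytes(4, 16, 15.0))  # 240.0 GB/s aggregate
```

Scaling the lane rate to 25 Gbps in the next release, or adding links, is how a cube can push aggregate bandwidth into the hundreds of GB/s that the consortium advertises.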
Such an architecture, providing the highest possible bandwidth (up to 360 GBytes/s today), is attractive for networking applications, but the cost per bit is also the highest, which is not a deal-breaker for this type of business-dedicated application…
HBM: 2.5D + Wide Data Bus → 256 GB/s
HBM (above figure) is another emerging memory standard, defined by the JEDEC organization. HBM was developed as a revolutionary upgrade for graphics applications. Expected to be in mass production in 2015, the HBM standard applies to stacked DRAM die, and is built using TSV technologies to support bandwidth from 128 GB/s to 256 GB/s. JEDEC's HBM task force is now part of the JC-42.3 Subcommittee, which continues to work to define support for up to 8-high TSV stacks of memory on a 1,024-bit wide data interface. In October 2013, the Subcommittee published JESD235: High Bandwidth Memory (HBM) DRAM, which uses a wide-interface architecture to achieve high-speed, low-power operation. Please note that HBM is still a parallel protocol and that a logic die is inserted between the SoC and the stacked memory dies.
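The quoted 128-256 GB/s range falls straight out of the 1,024-bit interface. A minimal sketch, assuming per-pin rates of 1 and 2 Gbps (inferred from the quoted bandwidth range, not stated in this post):

```python
def wide_bus_bandwidth_gbytes(bus_width_bits: int, per_pin_gbps: float) -> float:
    """Wide-bus bandwidth in GB/s: bus width (bits) x per-pin rate (Gbps) / 8."""
    return bus_width_bits * per_pin_gbps / 8

# 1,024-bit HBM interface at 1 Gbps per pin -> the low end quoted above:
print(wide_bus_bandwidth_gbytes(1024, 1.0))  # 128.0 GB/s
# At 2 Gbps per pin -> the high end:
print(wide_bus_bandwidth_gbytes(1024, 2.0))  # 256.0 GB/s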
I suggest you listen to the Whiteboard Wednesday (7/7/2015) where Lou Ternullo gives a live presentation about specialty memories; it lasts less than 5 minutes and is very helpful.
This white paper from Cadence, "Five Emerging DRAM Interfaces You Should Know for Your Next Design," will certainly help you deepen your knowledge of these emerging protocols, while "3D Memory Landscape Takes Shape" specifically addresses the 3D-related architectures. The table below is extracted from this paper:
One or several of these technologies will eventually replace DDR4, but DDR4 will be used for a long time, especially because it's the last protocol iteration. Cumulated DDR4 memory controller IP sales weighed more than $100 million in 2014 (source: IPnest) and will generate several hundred million dollars during 2015-2020. But IP vendors have to prepare for the future, and Cadence will have to support some of these emerging technologies.
By Eric Esteve from IPNEST