Before we jump into the specifics, let us understand what’s driving custom solutions in the high performance computing and networking space: the growing demand for core capacity and greater performance, driven by the level of parallelism and multitasking required to handle enormous volumes of data traffic. According to market research, core counts have grown from just a few cores to 60+ cores. Memory and network bandwidth requirements must, by default, increase to keep pace with this growth in core capacity and performance. The same market research shows memory bandwidth climbing from roughly 10GBps to roughly 400GBps, and network IP traffic growing from 90 exabytes to close to 300 exabytes.
HIGH BANDWIDTH MEMORY (HBM2) CONTROLLER AND PHY
All these factors are pushing the need for custom processors, custom SoCs and specialized memories, like HBM, in the high performance computing and networking market segments. Several high performance applications demand high bandwidth memory access. Some examples are data centers, networking, artificial intelligence, augmented reality and virtual reality, cloud computing, neural networks and several other high end applications. An HBM solution is ideal for these applications for three key reasons:
- It currently supports a huge bandwidth of up to 256GBps
- It improves the power efficiency per pin
- It offers a massive reduction in space, resulting in a form factor reduction of the end product

Open-Silicon’s first HBM2 IP subsystem in 16FF+ is silicon-proven at a 2Gbps data rate, achieves bandwidths up to 256GBps, and is being deployed in many custom SoCs. However, the data-hungry, multicore processing units needed for machine learning require even greater memory bandwidth to feed the processing cores with data. Keeping pace with the ecosystem, Open-Silicon’s next generation HBM2 IP subsystem is ahead of the curve at 2.4Gbps in 16FFC, achieving bandwidths exceeding 300GBps.
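The bandwidth figures quoted here follow directly from HBM2’s interface width: a JEDEC HBM2 stack presents a 1024-bit data bus (8 channels × 128 bits), so peak bandwidth is simply the per-pin data rate times the bus width, divided by 8 to convert bits to bytes. A quick arithmetic check (nothing vendor-specific, just the standard interface width):

```python
# Peak HBM2 bandwidth per stack: per-pin data rate (Gb/s) x bus width (bits) / 8.
# HBM2 uses a 1024-bit-wide interface per stack (8 channels x 128 bits each).
def hbm2_peak_bandwidth_gbps(data_rate_gbps, bus_width_bits=1024):
    """Return peak bandwidth in GB/s for one HBM2 stack."""
    return data_rate_gbps * bus_width_bits / 8

print(hbm2_peak_bandwidth_gbps(2.0))  # 256.0 GB/s (first-generation subsystem)
print(hbm2_peak_bandwidth_gbps(2.4))  # 307.2 GB/s (next-generation, >300GBps)
```

The same formula explains why pushing the PHY data rate is the most direct lever on total memory bandwidth: the bus width is fixed by the standard, so bandwidth scales linearly with the per-pin rate.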
The 7nm custom SoC platform is based on a PPA-optimized HBM2 IP subsystem supporting data rates of 3.2Gbps and beyond, achieving bandwidths exceeding 400GBps. It supports JEDEC HBM2.x and includes a combo PHY that supports both JEDEC standard HBM2 and non-JEDEC standard low latency HBM. High speed SerDes IP subsystems (112G and 56G SerDes) enable extremely high port density for switching and routing applications, and high bandwidth inter-node connections in deep learning and networking applications. The DSP subsystem is responsible for detecting and classifying camera images in real time: video frames or images are captured in real time and stored in HBM, then processed and classified by the DSP subsystem using a pre-trained deep neural network (DNN).
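The capture-store-classify flow just described can be sketched as a simple loop. This is an illustrative sketch only: the names (`capture_frame`, `classify`, `HBM_BUFFER`) are hypothetical placeholders, not part of any Open-Silicon API, and in the real platform these steps run on the DSP subsystem against frames held in HBM.

```python
# Hypothetical sketch of the real-time classification flow: frames are
# captured, buffered in HBM, then classified by a pre-trained DNN on the DSP.
from collections import deque

HBM_BUFFER = deque(maxlen=8)  # stands in for frame storage in HBM


def capture_frame(i):
    """Placeholder for a real-time camera capture."""
    return {"id": i, "pixels": [0] * 16}


def classify(frame):
    """Placeholder for DNN inference running on the DSP subsystem."""
    return "object" if frame["id"] % 2 == 0 else "background"


labels = []
for i in range(4):
    HBM_BUFFER.append(capture_frame(i))      # frame lands in HBM
    labels.append(classify(HBM_BUFFER[-1]))  # DSP classifies from HBM
print(labels)
```

The point of the buffering stage is that HBM’s bandwidth lets the DSP consume frames as fast as the cameras produce them, so inference never stalls waiting on memory.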
One application that goes hand-in-hand with high performance computing is AI. AI is revolutionizing and transforming virtually every industry in the digital world. Advances in computing power and deep learning have enabled AI to reach a tipping point toward major disruption and rapid advancement. Custom SoC platforms enable AI applications through training in deep learning and high speed inter-node connectivity, by deploying high speed SerDes, a deep neural network DSP engine, and a high speed, high bandwidth memory interface with High Bandwidth Memory (HBM) within a 2.5D system-in-package (SiP). Open-Silicon’s silicon-proven custom SoC platform is centrally located within this ecosystem.
Open-Silicon is a system-optimized ASIC solution provider that innovates at every stage of design to deliver fully tested IP, silicon and platforms. To learn more, please visit www.open-silicon.com.