How PCI Express 6.0 Can Enhance Bandwidth-Hungry High-Performance Computing SoCs
by gruggles on 04-12-2021 at 2:00 pm

What do genome sequencing, engineering modeling and simulation, and big data analytics have in common? They’re all bandwidth-hungry applications with complex data workloads. High-performance computing (HPC) systems deliver the parallel processing capabilities to generate detailed and valuable insights from these applications. To break through any bandwidth limitations, HPC SoCs need the fast data transfer and low latency that high-speed interfaces like PCI Express® (PCIe®) provide. With each new generation of PCIe delivering double the bandwidth of its predecessor, the latest iteration, PCIe 6.0, promises to be a boon for compute-intensive applications.

The HPC solutions that transform high volumes of data into valuable knowledge can be deployed in the cloud or in on-premises data centers. Either way, they demand compute, networking, and storage technologies with high performance and low latency, as well as artificial intelligence (AI) prowess. PCIe 6.0, expected to be released sometime this year, should help ease the bandwidth limitations that HPC SoCs constantly face. The I/O bus specification will provide:

  • An increased data transfer rate of 64 GT/s per pin, compared to 32 GT/s per pin for PCIe 5.0 (see the bandwidth sketch after this list)
  • Power efficiency via a new low-power state
  • Cost-effective performance
  • Backward compatibility with previous generations
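
To put the doubling in perspective, here is a rough back-of-the-envelope sketch (my own tally, not text from the article or the specification) of raw per-direction bandwidth for recent PCIe generations on a x16 link; it ignores 128b/130b and FLIT encoding overhead, so delivered throughput is somewhat lower.

# Rough per-direction bandwidth estimate for recent PCIe generations.
# Raw line rates per lane (GT/s) from the published specs; encoding and
# FLIT/packet overhead are ignored, so real throughput is a bit lower.
LINE_RATE_GT_S = {"3.0": 8, "4.0": 16, "5.0": 32, "6.0": 64}

def raw_bandwidth_gbytes(gen: str, lanes: int = 16) -> float:
    """Raw per-direction bandwidth in GB/s for a given generation and link width."""
    gbits_per_s = LINE_RATE_GT_S[gen] * lanes   # each transfer carries one bit
    return gbits_per_s / 8                      # bits -> bytes

for gen in LINE_RATE_GT_S:
    print(f"PCIe {gen} x16: ~{raw_bandwidth_gbytes(gen):.0f} GB/s per direction")
# PCIe 6.0 x16: ~128 GB/s per direction (~1 Tb/s raw), double PCIe 5.0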

Faster data transfer via PCIe 6.0 will result in faster computations for HPC, as well as for cloud computing and AI applications. For example, as an AI algorithm is trained, data needs to move back and forth quickly across multiple processors. PCIe 6.0 will remove bottlenecks so data can flow freely and training can complete more quickly. The HPC landscape is currently dominated by hyperscale data centers, whose disaggregated computing structure provides the most powerful HPC capabilities for applications like AI engines. PCIe 6.0 will benefit these deployments by supporting more efficient disaggregated computing.

Another emerging application for PCIe 6.0 is storage, namely the solid-state drives (SSDs) used in data centers. Advances in SSD manufacturing, including stacked die, have increased storage capacity, and in doing so have pushed the limits of 4-lane PCIe form factors. PCIe 6.0 will open the door to the bandwidth and fast data transfer needed to take full advantage of the increased capacity.

New Architecture Brings New Challenges

PCIe 6.0 does introduce a new architecture, moving from the non-return-to-zero (NRZ) signaling with two logic levels used in previous generations to Pulse Amplitude Modulation with four levels (PAM-4). Because each PAM-4 symbol carries two bits instead of one, the encoding doubles the data transfer rate and bandwidth without doubling the signaling frequency. The latest generation also introduces forward error correction (FEC) to address the higher raw bit error rate (BER) that comes with the new signaling. FEC traditionally adds latency; however, PCI-SIG has defined a "lightweight FEC" with retry buffers and a cyclic redundancy check (CRC) to maintain low latency. Another change in this iteration is the move to FLIT (flow control unit) mode, which also supports low latency and high efficiency.
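
To make the two-bits-per-symbol point concrete, here is a toy Gray-coded PAM-4 mapper; the bit-to-level assignment and normalized voltage levels are illustrative placeholders, not the exact electrical values defined by the PCIe 6.0 specification.

# Toy PAM-4 mapper: two bits per symbol, Gray-coded so adjacent voltage
# levels differ by a single bit (a mis-detected symbol costs at most one bit).
# Levels are normalized placeholders, not the PCIe 6.0 electrical values.
GRAY_PAM4 = {
    (0, 0): -3,   # lowest level
    (0, 1): -1,
    (1, 1): +1,
    (1, 0): +3,   # highest level
}

def pam4_encode(bits):
    """Map a flat bit list (even length) to a list of PAM-4 levels."""
    assert len(bits) % 2 == 0, "PAM-4 consumes bits in pairs"
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

# NRZ would need 8 symbol slots for these 8 bits; PAM-4 needs only 4,
# which is how the data rate doubles at the same symbol (baud) rate.
print(pam4_encode([1, 0, 0, 1, 1, 1, 0, 0]))   # -> [3, -1, 1, -3]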

Figure: PCIe 6.0 moves from the NRZ structure to PAM-4 for faster data transfer and higher bandwidth

Transitioning to this new architecture from earlier PCIe generations will involve some design considerations. For example, the receiver architecture for the PAM-4 PHY is based on an analog-to-digital converter, which calls for optimizing analog and digital equalization to achieve the best power efficiency regardless of the channel. Given the massive data pipe involved, potentially up to 1 Tb/s (roughly 128 GB/s) moving in each direction over a x16 link, proper management of this data is critical. Another consideration is testbench development for verification, which should be as efficient as possible while still accounting for factors like functional coverage.
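
As a loose illustration of what "digital equalization" means in an ADC-based receiver, here is a toy feed-forward equalizer (a short FIR filter) sketch; the tap values and sample data are arbitrary placeholders, not the adaptive DSP algorithm used in any real PHY.

# Toy feed-forward equalizer (FFE): a short FIR filter of the kind an
# ADC/DSP-based PAM-4 receiver can apply to undo channel loss (ISI).
# Tap values are arbitrary placeholders, not any real adaptation result.
def ffe(samples, taps=(-0.1, 1.0, -0.25)):
    """Apply a 3-tap FIR equalizer (pre/main/post cursor) to a list of ADC samples."""
    out = []
    for i in range(len(samples)):
        acc = 0.0
        for k, tap in enumerate(taps):
            j = i - (k - 1)              # taps centered on the main cursor
            if 0 <= j < len(samples):
                acc += tap * samples[j]
        out.append(acc)
    return out

# A lone +1 symbol smeared by the channel into its neighbors (ISI) ...
received = [0.1, 0.9, 0.3, 0.0]
# ... is sharpened back toward the ideal [0, 1, 0, 0] pattern.
print([round(x, 2) for x in ffe(received)])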

Complete IP Solution for PCIe 6.0

Synopsys, which has long been a key contributor to PCI-SIG workgroups, has unveiled a complete IP solution that enables early development of PCIe 6.0 SoC designs. Synopsys DesignWare® IP for PCIe 6.0 is built on the silicon-proven DesignWare IP for PCIe 5.0 and supports the latest features of the upcoming specification. As such, the solution is designed to address the bandwidth, latency, and power-efficiency demands of HPC, AI, and storage SoCs. The solution consists of:

  • The DesignWare Controller for PCIe 6.0, which utilizes a MultiStream architecture consisting of multiple interfaces to provide the lowest latency with maximum throughput for all data transfer sizes. Available in a 1024-bit architecture, the controller allows designers to achieve 64 GT/s x16 bandwidth while closing timing at 1 GHz (the datapath arithmetic is sketched after this list).
  • The DesignWare PHY for PCIe 6.0, which provides unique, adaptive digital signal processing (DSP) algorithms that optimize analog and digital equalization for maximum power efficiency across backplane, network interface cards, and chip-to-chip channels. With its diagnostic features, the PHY enables near-zero link downtime. Its placement-aware architecture minimizes package crosstalk and allows dense SoC integration for x16 links.
  • Verification IP, which uses a native SystemVerilog/UVM architecture that can be integrated, configured, and customized with minimal effort to help accelerate testbench development while providing a built-in verification plan, sequences, and functional coverage.
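
As a quick sanity check on the controller numbers above, the sketch below shows why a 1024-bit datapath closing timing at 1 GHz lines up with a raw 64 GT/s x16 link; this is back-of-the-envelope arithmetic that ignores FLIT and protocol overhead, not a description of the DesignWare controller internals.

# Why a 1024-bit datapath at 1 GHz matches a 64 GT/s x16 link (raw numbers,
# ignoring FLIT/protocol overhead; illustrative arithmetic only).
line_rate_gbps_per_lane = 64          # PCIe 6.0: 64 GT/s ~= 64 Gb/s raw per lane
lanes = 16
link_gbps = line_rate_gbps_per_lane * lanes      # 1024 Gb/s per direction

datapath_bits = 1024
clock_ghz = 1.0
datapath_gbps = datapath_bits * clock_ghz        # 1024 Gb/s through the core

assert link_gbps == datapath_gbps                # the widths line up
print(f"x16 link: {link_gbps} Gb/s, 1024-bit datapath @ 1 GHz: {datapath_gbps:.0f} Gb/s")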

Data Makes the World Go ’Round

It’s a data-driven world, and this will only intensify in the coming years. By 2025, according to IDC estimates, worldwide data will grow to 175 zettabytes, with as much of this data residing in the cloud as in data centers. That’s more than a fivefold increase from 33 zettabytes in 2018. While the early adopters of PCIe 6.0 are anticipated to be hyperscalers and other HPC SoC designers, the newest standard promises to eventually gain traction among designers working on edge, mobile, and automotive applications. Having led the shift to PCIe 5.0 with hundreds of design wins, Synopsys is helping designers get a head start on PCIe 6.0 designs with a complete PCIe 6.0 IP solution and deep expertise in high-speed SerDes IP. As bandwidth demands increase, designers of PCIe 6.0-based applications can be well-positioned to keep the data moving.

By Priyank Shukla, Staff Product Marketing Manager, High-Speed SerDes IP, and Gary Ruggles, Sr. Staff Product Marketing Manager, Solutions Group
Also Read:

Why In-Memory Computing Will Disrupt Your AI SoC Development

Using IP Interfaces to Reduce HPC Latency and Accelerate the Cloud

USB 3.2 Helps Deliver on Type-C Connector Performance Potential
