Key Takeaways
- The webinar on PCIe Multistream Architecture addresses the challenges of moving massive datasets for AI workloads efficiently.
- Multistream allows multiple independent data flows to travel in parallel, significantly improving throughput and reducing latency compared to traditional PCIe.
- At 64 GT/s, PCIe 6.0 delivers 256 GB/s of bidirectional bandwidth on a x16 link, while PCIe 7.0 doubles this to 512 GB/s, enabling disaggregated AI architectures.
- The architecture minimizes jitter in GPU communication, accelerates model convergence, and supports secure multi-tenancy with per-stream encryption.
- Understanding PCIe Multistream Architecture is essential for AI system designers to build scalable, sustainable, and secure AI platforms.

In the race to power ever-larger AI models, raw compute is only half the battle. The real challenge lies in moving massive datasets between processors, accelerators, and memory at speeds that keep up with trillion-parameter workloads. Synopsys tackles this head-on with its webinar, How PCIe Multistream Architecture is Enabling AI Connectivity at 64 GT/s and 128 GT/s, set for November 18, 2025, at 9:00 AM PST. This 60-minute session, led by Diwakar Kumaraswamy, a veteran SoC architect with over 15 years in high-speed interconnects, targets engineers and system designers building the next wave of AI infrastructure.
At the heart of the discussion is PCIe Multistream Architecture, a fundamental redesign that breaks away from the single-stream limitations of earlier PCIe generations. In traditional PCIe, all data packets, whether from storage, networking, or GPU memory, share a single serialized path. This creates bottlenecks during bursty AI traffic, such as gradient updates in distributed training or real-time inference across multiple streams. Multistream changes the game by allowing multiple independent data flows to travel in parallel over the same physical link. Each stream gets its own error handling, ordering rules, and quality-of-service controls, dramatically improving throughput and reducing latency.
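To make the idea concrete, here is a minimal Python sketch of per-stream queuing, not the actual PCIe protocol or Synopsys implementation: each stream keeps its own ordering domain, and a simple arbiter moves packets from the independent queues onto the shared physical link. The stream names and round-robin scheduler below are illustrative assumptions only.

```python
# Toy model (not the PCIe spec): independent per-stream queues sharing one link.
# Each stream is its own ordering domain, so a stall in one queue does not
# hold up traffic waiting in the others.

from collections import deque

class MultistreamLink:
    def __init__(self, stream_ids):
        # One independent queue per stream: separate ordering and QoS domains.
        self.queues = {sid: deque() for sid in stream_ids}

    def enqueue(self, stream_id, packet):
        self.queues[stream_id].append(packet)

    def schedule(self):
        """Round-robin one packet per ready stream onto the shared link."""
        scheduled = []
        for sid, queue in self.queues.items():
            if queue:
                scheduled.append((sid, queue.popleft()))
        return scheduled

link = MultistreamLink(stream_ids=["gpu_p2p", "nvme", "network"])
link.enqueue("gpu_p2p", "gradient_chunk_0")
link.enqueue("nvme", "read_completion_7")
link.enqueue("network", "rdma_payload_3")
print(link.schedule())
# [('gpu_p2p', 'gradient_chunk_0'), ('nvme', 'read_completion_7'), ('network', 'rdma_payload_3')]
```

The contrast with a single serialized path is that here no one traffic class can create head-of-line blocking for the others, which is the ordering and quality-of-service benefit the webinar attributes to Multistream.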
The webinar will contrast this with legacy designs and show how Multistream unlocks the full potential of PCIe 6.0 (64 GT/s) and PCIe 7.0 (128 GT/s). At 64 GT/s, a single x16 link delivers 256 GB/s bidirectional bandwidth, enough to feed an entire rack of GPUs without throttling. Double that to 128 GT/s in PCIe 7.0, and you’re looking at 512 GB/s per link, a leap that makes disaggregated AI architectures viable. Think GPU clusters spread across racks, NVMe storage pools serving petabytes to language models, or 800G Ethernet backhauls, all connected with microsecond-level coherence.
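For readers who want to check the arithmetic, the headline figures fall straight out of the line rate and lane count. This is a raw line-rate sketch only; FLIT framing, FEC, and protocol overhead trim the usable payload bandwidth somewhat.

```python
# Raw PCIe link bandwidth from transfer rate and lane count.
# GT/s counts transfers, and each transfer moves one bit per lane
# (PAM4 simply packs two transfers into each signaling symbol).

def pcie_bandwidth_gbps(transfer_rate_gt_s: float, lanes: int) -> float:
    """Return per-direction bandwidth in GB/s for a PCIe link."""
    bits_per_second = transfer_rate_gt_s * 1e9 * lanes
    return bits_per_second / 8 / 1e9  # bits/s -> GB/s

for gen, rate in [("PCIe 6.0", 64), ("PCIe 7.0", 128)]:
    per_dir = pcie_bandwidth_gbps(rate, lanes=16)
    print(f"{gen} x16: {per_dir:.0f} GB/s per direction, {2 * per_dir:.0f} GB/s bidirectional")
# PCIe 6.0 x16: 128 GB/s per direction, 256 GB/s bidirectional
# PCIe 7.0 x16: 256 GB/s per direction, 512 GB/s bidirectional
```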
Diwakar will dive into the mechanics: how 1024-bit datapaths at 1 GHz clocks enable 2x performance across transfer sizes, how PAM4 signaling and FLIT-based error correction tame signal loss over long traces, and how adaptive equalization in the PHY layer keeps power in check. He'll also cover practical integration: connecting user logic to Multistream controllers, managing timing closure in complex SoCs, and verifying compliance with advanced test environments.
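A quick sanity check, under the simplifying assumption of one full datapath word transferred per clock with no stalls, shows why a 1024-bit internal datapath at 1 GHz is sized to keep pace with a x16 link at 64 GT/s.

```python
# Why a 1024-bit datapath at 1 GHz matches a 64 GT/s x16 link:
# internal throughput must at least equal the serialized line rate per direction.

datapath_bits = 1024          # controller datapath width in bits
clock_hz = 1e9                # 1 GHz datapath clock
internal_gbps = datapath_bits * clock_hz / 8 / 1e9
print(f"Internal datapath: {internal_gbps:.0f} GB/s per direction")   # 128 GB/s

line_rate_gt_s = 64           # PCIe 6.0
lanes = 16
link_gbps = line_rate_gt_s * 1e9 * lanes / 8 / 1e9
print(f"x16 link at 64 GT/s: {link_gbps:.0f} GB/s per direction")     # 128 GB/s
# The two match. Sustaining 128 GT/s (PCIe 7.0) per direction requires either
# a faster datapath clock or a wider/duplicated datapath to reach 256 GB/s.
```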
For AI system designers, the implications are profound. Multistream minimizes jitter in all-to-all GPU communication, accelerates model convergence, and supports secure multi-tenancy through per-stream encryption. It enables chip-to-chip links in SuperNICs, smart SSDs, and AI switches, all while cutting idle power by 20–30% through efficient low-power states. This isn’t just about speed—it’s about building scalable, sustainable, and secure AI platforms.
Bottom line: As PCIe 8.0 takes shape on the horizon, this webinar positions Multistream as the cornerstone of future AI connectivity. Whether you're designing edge inference engines or exascale training clusters, understanding this architecture is no longer optional; it's essential. The session promises not just theory, but actionable insights to future-proof your designs in an AI-driven world.
Also Read:
Synopsys and NVIDIA Forge AI Powered Future for Chip Design and Multiphysics Simulation
Podcast EP315: The Journey to Multi-Die and Chiplet Design with Robert Kruger of Synopsys