Key Takeaways
- AI's exponential growth is elevating memory to a first-class design concern, with custom High Bandwidth Memory (cHBM) becoming essential in multi-die architectures.
- The transition from 2.5D to 3D packaging delivers significant gains in bandwidth and power efficiency, while customization raises a familiar tension between performance and interoperability.
- There is an urgent need for faster-moving standards bodies to create interoperable HBM frameworks, with UCIe emerging as the key standard for die-to-die (D2D) IP.
AI’s exponential growth is transforming semiconductor design—and memory is now as critical as compute. Multi-die architecture has emerged as the new frontier, and custom High Bandwidth Memory (cHBM) is fast becoming a cornerstone in this evolution. In a panel session at the Synopsys Executive Forum, leaders from AWS, Marvell, Samsung, SK Hynix, and Synopsys discussed the future of cHBM, its challenges, and the collective responsibility to shape its path forward.
Pictured above: Moderator: Will Townsend, Moor Insights & Strategy; Panelists: Nafea Bshara, VP/Distinguished Engineer, AWS; Will Chu, SVP & GM Custom Cloud Solutions Business Unit, Marvell; Harry Yoon, Corporate EVP, Products and Solutions Planning, Samsung; Hoshik Kim, SVP/Fellow, Memory Systems Research, SK Hynix; John Koeter, SVP, IP Group, Synopsys.
The Rise of cHBM in Multi-Die Design
HBM has already propelled AI performance to new heights. Without it, the AI market wouldn't be where it is today. The evolution from 2.5D to 3D packaging brings a nearly 8x improvement in bandwidth and up to a 5x gain in power efficiency, transformative numbers for data-intensive workloads. Custom HBM further optimizes performance, reducing I/O area by at least 25% compared to standard HBM implementations. But with this comes a familiar tension: performance versus interoperability.
The Customization Dilemma
While hyperscalers may benefit from custom configurations—given they control the entire stack—the broader industry risks fragmentation. As panelists noted, every time memory innovation went too custom (e.g., HBM1, RDRAM), interoperability and industry adoption suffered. Without shared standards, memory vendors face viability issues, and smaller players risk exclusion. Custom HBM must not become a barrier to collaboration.
The Need for Speed in Standardization
A major takeaway from the panel was the urgency for faster-moving standards bodies. JEDEC has traditionally led HBM standardization, but given the pace of AI, panelists discussed how a new, more agile standards body could accelerate interoperable HBM frameworks. The industry also needs validated, standardized die-to-die (D2D) IP to preserve ecosystem harmony while scaling performance. The UCIe standard is fast establishing itself as that D2D IP standard.
Implications for Memory Vendors
Memory vendors are in a tough spot. On one hand, custom HBM demands more features and integration flexibility; on the other, it erodes volume leverage and introduces supply chain risks. To stay competitive, vendors must support both standard and semi-custom memory—while collaborating more deeply with SoC architects, EDA tool providers, and packaging experts.
The Power of the Ecosystem
To unlock cHBM’s full potential, ecosystem-wide collaboration is non-negotiable. Synopsys is playing a central role in system-level enablement, offering integrated IP, power modeling, thermal simulation, and system-level co-design tools. Only through coordinated efforts can companies navigate packaging complexity, ensure test coverage, and deliver performant, scalable AI systems.
The 3D Packaging Imperative
3D packaging is the physical foundation of multi-die design. Compared to 2.5D solutions, 3D integration supports significantly higher bandwidth and tighter physical proximity between logic and memory. These benefits come with challenges, however: thermal hotspots, through-silicon via (TSV) congestion, and signal integrity issues must all be carefully managed. Architects must co-design silicon and packaging to meet AI’s escalating demands.
From Custom to Computational HBM
The panel reached consensus on one transformative idea: the future isn’t just custom HBM. Computational HBM may be a better way to describe cHBM. This paradigm emphasizes workload-aware partitioning across logic and memory, where memory becomes an active participant in AI processing, not just a passive storage layer. Right now, hyperscalers may be the main drivers for cHBM. But, unlike proprietary custom approaches, computational HBM can scale across markets—cloud, edge, automotive—and thrive through standardization and reuse.
Summary
cHBM holds tremendous promise, but how the industry moves forward will determine whether it accelerates innovation or impedes it. With standardization, agile packaging integration, and coordinated ecosystem efforts, computational HBM can power the next generation of intelligent systems. With an ecosystem-aligned vision and execution, the industry will be well on its way to multi-die design success.
The “cHBM for AI” panel session recording can be accessed on-demand from here.