
In a strategic move that could reshape the future of AI data center design, SiFive, a leading developer of RISC-V processor IP and compute subsystems, has announced plans to integrate NVIDIA’s NVLink Fusion interconnect technology into its high-performance data center platforms. This collaboration bridges the open-architecture innovation of the RISC-V ecosystem with NVIDIA’s industry-leading high-bandwidth interconnects, creating new opportunities for scalable, efficient, and customizable AI infrastructure.
At its core, the partnership is about unlocking seamless, high-speed communication between SiFive’s RISC-V CPUs and NVIDIA’s GPUs and accelerators. NVLink Fusion, NVIDIA’s rack-scale interconnect technology, enables coherent linking of CPUs, GPUs, and other accelerators at extremely high bandwidth. By adopting NVLink Fusion, SiFive’s compute platforms will be able to connect directly to NVIDIA accelerators, eliminating the traditional bottlenecks of PCIe-based CPU-to-GPU communication and enabling data center architects to build tightly coupled heterogeneous systems optimized for the demands of AI workloads.
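To make the bandwidth gap concrete, here is a rough back-of-envelope sketch. The bandwidth figures are illustrative public ballpark numbers (roughly PCIe Gen5 x16 versus an NVLink-class link), not figures from the announcement, and the model ignores latency and protocol overhead:

```python
# Back-of-envelope: idealized time to move a block of data between CPU and
# GPU memory over a PCIe-class link vs. an NVLink-class coherent fabric.
# Bandwidth values below are illustrative assumptions, not from the source:
#   PCIe Gen5 x16 : ~64 GB/s per direction
#   NVLink-class  : ~900 GB/s per direction (order of magnitude)

def transfer_time_ms(size_gb: float, bandwidth_gb_s: float) -> float:
    """Idealized transfer time in milliseconds (no latency or overhead)."""
    return size_gb / bandwidth_gb_s * 1000.0

size_gb = 8.0  # e.g., a shard of model weights or activations
pcie_ms = transfer_time_ms(size_gb, 64.0)
nvlink_ms = transfer_time_ms(size_gb, 900.0)
print(f"PCIe-class: {pcie_ms:.1f} ms, NVLink-class: {nvlink_ms:.1f} ms")
```

Even in this simplified model, the order-of-magnitude difference in link bandwidth translates directly into how often a CPU can feed or synchronize with accelerators without stalling them, which is the bottleneck the coherent interconnect is meant to remove.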
Why This Matters
Artificial intelligence workloads, especially large language models (LLMs), recommendation engines, and real-time analytics, are rapidly outpacing conventional data center designs. These workloads demand not only high throughput but also efficient data movement and power-optimized compute architectures. Traditional x86- or Arm-based CPUs paired with discrete accelerators over PCIe can struggle to deliver the low latency and high bandwidth required at scale, especially as models grow and power costs skyrocket.
SiFive’s RISC-V IP is prized for its configurability and power efficiency. Customers can tailor processor designs to specific workload requirements, tuning for performance per watt and overall system efficiency, advantages that are increasingly valuable in hyperscale environments. Integrating NVLink Fusion expands this value proposition by giving RISC-V CPUs a direct, coherent, high-performance path to the acceleration layer of modern AI systems.
NVLink Fusion itself is designed to address the needs of next-generation AI “factories”: data centers that treat AI compute as a first-class workload rather than a specialized add-on. The technology offers rack-scale performance and a unified interconnect fabric that scales across hundreds of compute units, significantly improving the performance-per-watt equation for AI training and inference across distributed systems.
Strategic Implications for RISC-V
For years, RISC-V has been touted as the future of open compute architecture, offering an alternative to proprietary instruction sets such as x86 and Arm. However, one of the obstacles RISC-V has faced in the high-performance AI space is ecosystem maturity, especially around the high-speed interconnects and software support that large data center players demand.
By aligning with NVIDIA’s NVLink Fusion ecosystem, SiFive helps overcome those barriers. Now, RISC-V processors can participate as “first-class citizens” in rack-scale designs with compute-intensive accelerators, supported by the broader NVIDIA stack, including CUDA-based libraries and orchestration tools. This increases RISC-V’s attractiveness for cloud providers, hyperscalers, and custom silicon designers who previously might have defaulted to x86 or Arm platforms due to ecosystem inertia.
In the announcement, SiFive President and CEO Patrick Little emphasized the shift toward co-design in AI infrastructure, where open, customizable CPUs are built from the ground up alongside accelerators and interconnects. NVIDIA CEO Jensen Huang echoed this sentiment, framing the partnership as a way to bring coherent, high-bandwidth NVLink into the RISC-V world and enable flexible, scalable AI systems.
Broader Industry Context
This collaboration also signals a broader trend in the semiconductor and data center industries: a move away from one-size-fits-all hardware toward heterogeneous, domain-optimized architectures. Hyperscalers and enterprise data centers alike are investing in bespoke solutions that match compute resources to specific workload profiles — whether that’s training next-generation AI models, delivering low-latency inference, or supporting mixed-use enterprise services.
In addition, NVIDIA’s strategy with NVLink Fusion, licensing the interconnect for integration with third-party CPUs, expands its ecosystem beyond systems built entirely in-house. By bringing partners like SiFive into the fold, NVIDIA strengthens the adoption of its rack-scale architecture as a de facto standard for high-performance AI infrastructure.
Bottom Line: The integration of NVIDIA NVLink Fusion into SiFive’s RISC-V data center platforms represents a key milestone for open-architecture AI computing. It combines the flexibility and power efficiency of customizable RISC-V designs with the high-throughput, low-latency fabric needed to unify CPUs and accelerators in modern AI systems. As AI models continue to grow in complexity and scale, such innovations may redefine how data centers are architected — enabling not just faster performance, but smarter, more efficient infrastructure tailored to the real-world needs of AI.
Also Read:
Tiling Support in SiFive’s AI/ML Software Stack for RISC-V Vector-Matrix Extension
RISC-V Extensions for AI: Enhancing Performance in Machine Learning
SiFive Launches Second-Generation Intelligence Family of RISC-V Cores