Can RISC-V Help Recast the DPU Race?
by Jonah McLeod on 08-26-2025 at 10:00 am

Key Takeaways

  • ARM has displaced Intel and AMD in the Data Processing Unit (DPU) market, capturing the ecosystem with its efficient cores used in SmartNICs.
  • The DPU market is projected to grow significantly, from $1.5 billion in 2023 to $9.8 billion by 2032, driven by increasing data generation and the need for efficient data management.
  • RISC-V's rise in DPUs presents an opportunity to redefine the category, offering a customizable alternative to ARM's fixed licensing model and enhancing architectural choice for vendors.

ARM’s Quiet Coup in DPUs

The datacenter is usually framed as a contest between CPUs (x86, ARM, RISC-V) and GPUs (NVIDIA, AMD, custom ASICs). But beneath those high-profile battles, another silent revolution has played out: ARM quietly displaced Intel and AMD in the Data Processing Unit (DPU) market.

DPUs — also called SmartNICs — handle the “plumbing” of the datacenter. They offload networking by managing packet processing, TCP/IP, and RDMA. They handle storage services such as compression, encryption, and NVMe-over-Fabrics (NVMe-oF). They enforce security isolation, a critical requirement in multi-tenant cloud environments where trust boundaries are constantly tested. And they take responsibility for orchestration tasks that would otherwise burn valuable CPU cycles.
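
The offload model described above can be sketched as a toy descriptor queue: the host posts work items and a DPU-side worker completes them, freeing host CPU cycles for application code. This is an illustration only; the operation names, the XOR "encryption", and the one-byte checksum are placeholders for real offloads such as inline AES and full TCP/IP processing.

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class WorkDescriptor:
    op: str        # "checksum" or "encrypt" in this toy model
    payload: bytes

def dpu_process(desc: WorkDescriptor) -> bytes:
    """Stand-in for work a DPU performs so the host CPU doesn't have to."""
    if desc.op == "checksum":
        # Trivial additive checksum; real DPUs offload full TCP/IP and RDMA.
        return (sum(desc.payload) & 0xFF).to_bytes(1, "big")
    if desc.op == "encrypt":
        # XOR "encryption" as a placeholder for inline AES offload.
        return bytes(b ^ 0x5A for b in desc.payload)
    raise ValueError(f"unknown op {desc.op!r}")

# Host side: post descriptors instead of doing the work itself.
wq = Queue()
wq.put(WorkDescriptor("checksum", b"\x01\x02\x03"))
wq.put(WorkDescriptor("encrypt", b"hello"))

results = []
while not wq.empty():
    results.append(dpu_process(wq.get()))

print(results[0].hex())  # additive checksum of 01+02+03 -> "06"
```

The design point is the split itself: the host only touches descriptors, while per-byte work happens on the other side of the queue.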

NVIDIA (BlueField via Mellanox), Marvell (OCTEON), AMD (Pensando), and Broadcom all adopted ARM cores for their DPUs. The reason was straightforward: ARM cores were small, power-efficient, licensable, and already embedded in networking silicon. By the time Intel reacted with its Infrastructure Processing Unit (IPU) program, ARM had already captured the ecosystem and set the standard.

Market Context: Why Now?

The global Data Processing Unit (DPU) market is projected to grow from $1.5 billion in 2023 to approximately $9.8 billion by 2032, reflecting a robust compound annual growth rate (CAGR) of 22.8% (Dataintelo Consulting Pvt. Ltd., 2024). Dataintelo attributes this growth to the exponential rise in data generation and the need for efficient data management and processing solutions across industries. At present, ARM cores power the overwhelming majority of DPU shipments, while Intel continues to promote its IPUs but has yet to gain broad market traction.

Meanwhile, RISC-V already has momentum in adjacent domains. Storage controllers from companies like Seoul-based Fadu — which integrates RISC-V cores into its enterprise SSD controllers for I/O scheduling and latency optimization — and SiFive use RISC-V to accelerate I/O. Orchestration and security processors also frequently rely on lightweight RISC-V designs such as OpenTitan. These are natural adjacencies to the DPU role. At the same time, geopolitics favors diversification: China in particular is accelerating sovereign RISC-V adoption, and DPUs are exactly the kind of infrastructure component where sovereignty matters.

The combination of market expansion, ARM’s lock-in, and hyperscalers’ desire for architectural alternatives sets the stage for a serious RISC-V entry into DPUs.

RISC-V’s Opportunity in DPUs

Unlike ARM, RISC-V offers an open ISA that companies can tailor to their exact workloads (Wevolver, RISC-V vs. ARM, 2023). This is especially relevant for DPUs, which integrate diverse functional blocks: networking engines for packet flows, storage accelerators for compression and NVMe-oF, security modules for isolation, and control-plane CPUs for orchestration. RISC-V allows vendors to adapt each of these roles with custom instructions instead of relying on ARM’s fixed roadmaps.
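
Part of what makes this tailoring practical is opcode space the RISC-V base specification explicitly reserves for vendors (the custom-0 through custom-3 major opcodes). As a sketch, the snippet below packs and unpacks an R-type instruction word in that space; the "packet-hash accelerator" role and the funct values are invented for illustration.

```python
# Major opcode the RISC-V spec reserves for vendor-defined instructions.
CUSTOM_0 = 0b0001011

def encode_rtype(opcode, rd, funct3, rs1, rs2, funct7):
    """Pack a 32-bit R-type RISC-V instruction word from its fields."""
    return (funct7 << 25) | (rs2 << 20) | (rs1 << 15) | \
           (funct3 << 12) | (rd << 7) | opcode

def decode_opcode(insn):
    """Extract the major opcode, bits [6:0]."""
    return insn & 0x7F

# Hypothetical vendor instruction, e.g. a packet-hash accelerator op:
insn = encode_rtype(CUSTOM_0, rd=10, funct3=0, rs1=11, rs2=12, funct7=1)
assert decode_opcode(insn) == CUSTOM_0
```

Because these opcodes are guaranteed never to be claimed by future standard extensions, a DPU vendor can ship such instructions without breaking toolchain or ISA compatibility.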

Today’s DPUs often use clusters of ARM Cortex-A cores (ranging from Cortex-A53 to A72) to handle control-plane and lightweight compute functions (Marvell, OCTEON 10 Technical White Paper, 2023). Here, RISC-V offers several advantages:

  • Customization: Vendors can tune instruction sets for specialized workloads instead of following a fixed roadmap.
  • Multithreading: Some RISC-V vendors, such as Akeana, support simultaneous multithreading (SMT) with up to four threads per core (Electronics360, 2024), improving throughput and utilization in workloads with high memory or I/O latency, such as networking and packet processing.
  • Vector extensions: Recent RISC-V vector extensions map naturally to packet processing, cryptography, and storage acceleration.
  • Matrix extensions: Emerging matrix extensions extend programmability into AI inference and security. Startup Simplex Micro’s architecture integrates scalar, vector, and matrix execution within a time-scheduled framework, leveraging RISC-V’s extensibility to deliver deterministic performance across diverse AI and HPC workloads.
  • No royalties: RISC-V avoids ARM licensing fees while maintaining compatibility with open-source stacks such as Linux, TensorFlow, and PyTorch.
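
As a concrete example of a vectorizable networking flow, the one's-complement checksum used in IPv4 headers (RFC 1071) is a per-word reduction that a vector unit can stride across. The sketch below is plain scalar Python, shown only to make the data-parallel structure visible; the header bytes are the widely cited textbook example.

```python
def ipv4_checksum(header: bytes) -> int:
    """One's-complement sum of 16-bit words, as in RFC 1071.
    The per-word loop is exactly the kind of data-parallel work
    a RISC-V vector unit can process many lanes at a time."""
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    while total > 0xFFFF:                  # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Well-known example header (checksum field zeroed at bytes 10-11):
hdr = bytes.fromhex("450000730000400040110000c0a80001c0a800c7")
print(hex(ipv4_checksum(hdr)))  # -> 0xb861
```

Crypto rounds and compression literals have the same shape: long runs of independent, fixed-width element operations, which is why they vectorize well.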

Enter RISC-V’s Scalar-to-Matrix Roadmap

What makes this moment interesting is not just another IP vendor’s pitch, but the way RISC-V itself has evolved. The ISA began by addressing scalar compute — small, efficient cores for microcontrollers, embedded systems, and simple Linux-capable processors. Over the past few years, RISC-V has steadily added vector extensions, enabling data-parallel acceleration that maps naturally onto networking, storage, and cryptographic workloads. Most recently, the roadmap has expanded to include matrix extensions, designed to bring AI inference and other matrix-math-heavy tasks into the same ISA framework.

Table 1. RISC-V Companies Advancing Unified Scalar/Vector/Matrix Architectures

Company       | Focus                  | Differentiator
SiFive        | General-purpose + AI   | Early vector adoption, strong ecosystem support
Andes         | Embedded + DSP/Vector  | Broad portfolio, DSP + vector extensions for AI/IoT
Akeana        | Datacenter-class CPUs  | First RISC-V mover with SMT (4 threads) + matrix engine
Ventana       | Server-class CPUs      | Hyperscaler-aligned, clear path to vector workloads
Simplex Micro | Unified pipeline       | Novel scalar/vector/matrix integration, latency-tolerant multithreading
SemiDynamics  | Configurable HPC cores | Advanced vector + memory subsystem customization
XiangShan     | Open-source research   | Academic/industry project exploring unified designs

This progression — scalar to vector to matrix — mirrors the way DPUs are being asked to perform. DPUs must handle scalar control-plane logic, vectorizable packet and crypto flows, and increasingly matrix-oriented inference tasks for telemetry and security. In other words, the RISC-V roadmap provides the full ingredient set for a truly programmable DPU.
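
A rough sense of how a single packet path can touch all three compute classes, using invented weights, thresholds, and a toy "anomaly score" model:

```python
def handle_packet(payload, weights, threshold=1.0):
    """Toy packet path touching scalar, vector, and matrix work.
    All values here are made up; the point is that the whole path
    can stay on one ISA rather than being split across a scalar
    core and fixed-function engines."""
    # Scalar: control-plane branch on a header-like field.
    if payload[0] == 0x00:
        return None  # drop

    # Vector: element-wise transform (stand-in for crypto/compression).
    masked = [b ^ 0xA5 for b in payload]

    # Matrix: tiny matrix-vector product scoring telemetry features
    # (stand-in for an inference step flagging anomalous flows).
    features = [len(payload), sum(masked) / len(masked)]
    scores = [sum(w * f for w, f in zip(row, features)) for row in weights]
    return "suspect" if max(scores) > threshold else "ok"

verdict = handle_packet(b"\x01\x02\x03",
                        weights=[[0.1, 0.0], [0.0, 0.01]])  # -> "suspect"
```

A DPU built on fixed scalar cores plus hardwired accelerators can run each stage, but only a programmable scalar/vector/matrix pipeline lets vendors change what each stage computes after tape-out.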

Several companies are now pursuing this vision. Akeana, with its SMT-enabled designs and AI matrix computation engines, represents one of the first movers applying RISC-V directly to datacenter-class compute. Ventana Micro Systems is building server-class RISC-V processors with a clear path from scalar to vector workloads, aligning with hyperscaler requirements. SemiDynamics in Europe is focused on configurable vector cores tailored for data-intensive and AI-centric applications.

SiFive has emphasized Linux-capable RISC-V cores with vector support, targeted at HPC and infrastructure. Andes Technology has extended its cores with vector and DSP capabilities for embedded acceleration. Simplex Micro is explicitly developing a unified scalar/vector/matrix architecture with programmable extensions aimed at spanning edge to datacenter-class infrastructure solutions. At the research level, XiangShan in China is already experimenting with scalar and vector unification under one architecture.

Leapfrogging or Reinforcing ARM?

The question is not simply whether RISC-V can replace ARM, but whether it can expand the DPU definition itself. ARM’s current dominance in DPUs relies on scalar cores plus fixed accelerators. RISC-V provides an avenue to leapfrog by blending scalar, vector, and matrix programmability into one platform. This does not have to come at ARM’s expense — indeed, ARM could even adopt RISC-V vector and matrix extensions to strengthen its own DPU position.

Why This Matters

For the broader industry, RISC-V’s rise in DPUs offers a rare chance to reset the playing field. Instead of being restricted by ARM’s licensing model, companies can bend the architecture to their needs. This is especially relevant for hyperscalers, who want to optimize power, performance, and sovereignty. RISC-V also avoids monopoly dynamics: rather than a single vendor dictating the roadmap, an open ecosystem fosters multiple paths forward (SiFive, 2023).

With RISC-V, a company like Qualcomm or any major vendor would find itself in the driver’s seat — able to design a unique, custom CPU optimized for its DPU architecture, rather than depending on ARM’s licensing terms and roadmap. This independence could be a critical differentiator as DPUs become central to datacenter infrastructure.

The timing is right. AI-driven datacenter fabrics are exploding, and DPUs are no longer just about networking. They are about orchestrating compute, storage, and AI flows. In that world, a DPU that combines scalar, vector, and matrix programmability looks far more attractive than one that only integrates scalar ARM cores and fixed-function engines.

A Broader Opening

Just as ARM spotted and exploited the DPU opportunity to outflank Intel and AMD, RISC-V now offers the chance to redefine the category. Instead of fighting NVIDIA head-on in GPUs or trying to revive CPUs, vendors can leapfrog with a programmable DPU platform that reimagines datacenter infrastructure. It would be a comeback story — not by repeating old battles, but by opening a new front.

Final Thought

The industry often frames RISC-V as a CPU story — whether it can replace ARM or x86 — or as an edge IoT play. Yet the more disruptive opportunity may lie in the datacenter’s control plane. ARM built a DPU franchise that Intel and AMD never anticipated, and now RISC-V has a chance to redefine the category with vector and matrix programmability. Ultimately, ARM and RISC-V may coexist in DPUs — with ARM maintaining its incumbency and RISC-V offering an open, customizable alternative — giving vendors and hyperscalers greater architectural choice as the market matures.
