SiFive Launches Second-Generation Intelligence Family of RISC-V Cores
by Kalar Rajendiran on 09-18-2025 at 6:00 am

Key Takeaways

  • SiFive is the leading independent supplier of RISC-V processor IP, with over two billion devices using its designs across various applications.
  • The second generation of SiFive's Intelligence Family introduces the X100 series alongside upgrades to the X200, X300 and XM lines, focusing on high performance and low power for edge to data center applications.
  • The X-Series cores serve two main purposes: as standalone vector CPUs for AI inference and as Accelerator Control Units to manage data movement in accelerators.
  • SiFive's new X160 core outperforms Arm's Cortex-M85, delivering up to 2.3× its inference performance within the same silicon area and power budget.
  • The company provides a comprehensive AI software ecosystem, facilitating rapid development and deployment of AI algorithms, enhancing the appeal of RISC-V for next-generation processing.

SiFive, founded by the original creators of the RISC-V instruction set, has become the leading independent supplier of RISC-V processor IP. More than two billion devices already incorporate SiFive designs, ranging from camera controllers and SSDs to smartphones and automotive systems. The company no longer sells its own chips, choosing instead to license CPU IP and collaborate with silicon partners on development boards. This pure-play IP model allows SiFive to focus on innovation across its three core product families: Performance for high-end applications, Essential for embedded control, and Intelligence for AI-driven compute. The company also has an Automotive family of products with auto-grade safety and quality certifications.

The company recently announced the second generation of its Intelligence Family of processor IP cores, a complete update of its AI-focused X-Series. The new portfolio introduces the X100 series alongside upgrades to the X200, X300, and XM lines, designed for low power and high performance in a small footprint across applications from the far edge to the data center.

SiFive 2nd Gen Intelligence Family

On the eve of the AI Infra Summit 2025, I chatted with SiFive’s Martyn Stroeve, Vice President of Corporate Marketing, and Marisa Ahmad, Product Marketing Director, to gain the following deeper insights.

Two Popular X-Series Use Cases

While very flexible and versatile, the second-generation X-Series targets two distinct use cases. The first is as a standalone vector CPU, where the cores handle complex AI inference directly without the need for an external accelerator. A leading U.S. semiconductor company has already licensed the new X100 core for its next-generation edge-AI system-on-chips, relying on the core’s high-performance vector engine to process filters, transforms, and convolutions efficiently.
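As a rough illustration of the workload class involved (a generic sketch, not SiFive code), a 1-D convolution is just a repeated multiply–accumulate, exactly the loop pattern a vector engine such as the X100's executes natively. The signal and filter values below are arbitrary examples:

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution via repeated multiply-accumulate,
    the loop pattern a RISC-V vector (RVV) engine maps onto its lanes.
    Illustrative sketch only; not SiFive library code."""
    n, k = len(signal), len(kernel)
    return np.array([np.dot(signal[i:i + k], kernel)
                     for i in range(n - k + 1)])

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
w = np.array([0.25, 0.5, 0.25])   # simple smoothing filter (arbitrary)
print(conv1d(x, w))               # [2. 3. 4. 5.]
```

The same multiply–accumulate structure underlies the filters, transforms, and convolutions mentioned above, which is why a wide vector unit can carry edge-AI inference without an external accelerator.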

The second and increasingly critical application is as an Accelerator Control Unit. In this role, the X-Series core replaces the discrete DMA controllers and fixed-function state machines that traditionally orchestrate data movement in accelerators. Another top-tier U.S. semiconductor customer has adopted the X100 core to manage its industrial edge-AI accelerator, using the processor’s flexibility to control the customer’s matrix engine accelerator and to handle corner-case processing.

The Rising Importance of Accelerator Control

AI systems are becoming more complex, with vast data sets moving across heterogeneous compute fabrics. Conventional accelerators deliver raw performance but lack flexibility, often suffering from high-latency data transfers and complicated memory-access hardware. SiFive’s Accelerator Control Unit concept addresses these pain points by embedding a fully programmable scalar/vector CPU within the accelerator itself. This design simplifies programming, reduces latency, and makes it easier to adapt to new AI models without extensive hardware redesign, an area where competitors such as Arm have scaled back their investment. Here is a link to a video discussing how Google is leveraging SiFive’s first-generation X280 as an AI Compute Host, providing flexible programming combined with the Google MXU accelerator in the data center.

SiFive Accelerator Control Unit
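To make the division of labor concrete, here is a hypothetical software model of the Accelerator Control Unit role (the `matrix_engine` function and tile size are invented stand-ins, not a SiFive API): the programmable CPU tiles the problem, dispatches full tiles to a fixed-function matrix engine, and handles the ragged-edge corner cases itself, the job a hard-wired DMA state machine struggles with:

```python
import numpy as np

def matrix_engine(a_tile, b_tile):
    """Stand-in for a fixed-function matrix accelerator that only
    handles full fixed-size tile pairs. Hypothetical model."""
    return a_tile @ b_tile

def acu_matmul(a, b, tile=4):
    """The Accelerator Control Unit role in software: tile the matmul,
    dispatch full tiles to the engine, and process ragged tail tiles
    (the corner cases) on the programmable CPU itself."""
    m, _ = a.shape
    _, n = b.shape
    out = np.zeros((m, n))
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            a_blk, b_blk = a[i:i + tile, :], b[:, j:j + tile]
            if a_blk.shape[0] == tile and b_blk.shape[1] == tile:
                out[i:i + tile, j:j + tile] = matrix_engine(a_blk, b_blk)
            else:
                # corner case: tail tile handled by the control CPU
                out[i:i + a_blk.shape[0], j:j + b_blk.shape[1]] = a_blk @ b_blk
    return out

a = np.arange(30.0).reshape(5, 6)
b = np.arange(42.0).reshape(6, 7)
assert np.allclose(acu_matmul(a, b), a @ b)
```

Because the dispatch logic is ordinary software, supporting a new model shape means changing this loop, not respinning silicon.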

Four Key Innovations in the Second Generation

SiFive’s new Intelligence cores introduce four standout enhancements. First are the SSCI and VCIX co-processing interfaces, high-bandwidth links that provide direct access to scalar and vector registers for extremely low-latency communication with attached accelerators.

SiFive New 2nd Gen Accelerator Interfaces

Second is a hardware exponential unit, which reduces the common exp() operation from roughly fifteen instructions to a single instruction, an especially valuable improvement given that exponential operations are second only to multiply–accumulate in AI compute workloads.

SiFive New Exponential Function
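To see why exp() shows up so often, consider softmax, which is ubiquitous in attention and classifier layers and calls exp() once per element. A generic sketch (arbitrary logit values; the one-instruction-versus-fifteen figure is SiFive's claim, not something this code measures):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax: one exp() per element. SiFive's
    claim is that each such exp evaluation becomes a single
    instruction on the new cores, instead of roughly fifteen."""
    z = np.exp(x - np.max(x))   # the exp-heavy step
    return z / z.sum()

logits = np.array([2.0, 1.0, 0.1])
p = softmax(logits)
print(p)                        # probabilities, summing to 1.0
```

Every token of every attention layer repeats this pattern, so shrinking each exp() by an order of magnitude in instruction count compounds quickly.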

Third is a new memory-latency tolerance architecture, featuring deeper configurable vector load data queues and a loosely coupled scalar–vector pipeline to keep data flowing even when memory access is slow. Finally, the family adopts a more efficient memory subsystem, replacing private L2 caches with a customizable hierarchy that delivers higher capacity while using less silicon area.

Performance Compared to Arm Cortex-M85

SiFive highlighted benchmark data showing that the new X160 core delivers roughly twice the inference performance of Arm’s Cortex-M85 at comparable silicon area. Using MLPerf Tiny v1.2 workloads such as keyword spotting, visual wake-word detection, image classification, and anomaly detection, the X160 demonstrated between about 148% and more than 230% of the Cortex-M85’s performance while maintaining the same footprint. This two-times advantage underscores SiFive’s claim that its second-generation Intelligence cores can outpace the best current Arm microcontroller-class AI processors without demanding more die area or power budget.

SiFive X160 vs Arm Cortex M85 Performance

A Complete AI Software Stack

Hardware is supported by a robust, RISC-V-based AI software ecosystem. The stack includes an MLIR-based compiler toolchain, a SiFive-tuned LLVM backend, and a neural-network graph analyzer. A SiFive Kernel Library optimized for vector and matrix operations integrates with popular frameworks such as TensorFlow Lite, ONNX, and PyTorch. Customers can prototype on QEMU, FPGA, or RTL/SystemC simulators and seamlessly transition to production silicon, allowing rapid deployment of AI algorithms on SiFive’s IP.

SiFive 2nd Gen Intelligence Software Stack
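As an illustration of the kind of primitive such a kernel library optimizes (a generic sketch of int8 quantized arithmetic as used by TensorFlow Lite and ONNX quantized models, not the SiFive Kernel Library API), here is a quantized dot product accumulated in int32 and dequantized at the end:

```python
import numpy as np

def quantize(x, scale):
    """Symmetric int8 quantization, the scheme common to TensorFlow
    Lite / ONNX quantized models. Generic sketch, not SiFive code."""
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

def qdot(a_q, b_q, a_scale, b_scale):
    """Int8 dot product accumulated in int32 -- the core primitive a
    vector-optimized kernel library provides -- then dequantized."""
    acc = np.dot(a_q.astype(np.int32), b_q.astype(np.int32))
    return acc * a_scale * b_scale

a = np.array([0.5, -1.0, 0.25])
b = np.array([1.0, 0.5, -2.0])
a_s, b_s = 0.01, 0.02
approx = qdot(quantize(a, a_s), quantize(b, b_s), a_s, b_s)
assert abs(approx - np.dot(a, b)) < 0.05   # close to the float result
```

A framework like TensorFlow Lite lowers its quantized layers to exactly this kind of primitive, which is where a hand-tuned vector kernel library earns its keep.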

Summary

By marrying a mature software platform with cutting-edge vector hardware, SiFive’s second-generation Intelligence Family positions RISC-V as a compelling alternative for next-generation AI processing. The new products all feature enhanced scalar and vector processing, and, in the case of the XM series, matrix processing capabilities designed for modern AI workloads. All of these cores build on the company’s proven fourth-generation Essential architecture, providing the reliability valued by automotive and industrial customers while adding advanced features for AI workloads from edge to data center.

With initial design wins at two leading U.S. semiconductor companies and momentum across industries from automotive to data centers, the Intelligence Gen 2 products stand ready to power everything from tiny edge devices to massive training clusters, setting a new performance bar by outclassing Arm’s Cortex-M85 in key AI inference tasks.

Access the press announcement here.

To learn more, visit SiFive’s product page.

Also Read:

Podcast EP197: A Tour of the RISC-V Movement and SiFive’s Contributions with Jack Kang

Enhancing RISC-V Vector Extensions to Accelerate Performance on ML Workloads

Enabling Edge AI Vision with RISC-V and a Silicon Platform
