When Semidynamics added support for int4 and fp8 data types to its RISC-V processors, it signaled a clear intent to target AI inference, with hundreds or perhaps thousands of concurrent threads kept busy by its advanced caching and pipelining scheme. Two recent announcements around Embedded World 2025 reinforce that positioning for RISC-V AI applications: a partnership with Baya Systems on network-on-chip (NoC) technology, and support for the ONNX Runtime as part of a widening software offering for its RISC-V processors.
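Narrow data types matter because quantized weights cut memory traffic and let more values flow per vector operation. A minimal NumPy sketch of symmetric int4 weight quantization shows the basic idea; the function names are illustrative, not Semidynamics APIs, and real int4 inference packs two values per byte with per-channel or per-block scales.

```python
import numpy as np

def quantize_int4(w: np.ndarray):
    """Symmetric per-tensor quantization to the int4 range [-8, 7].

    Illustrative sketch only, not the Semidynamics implementation.
    """
    scale = float(np.abs(w).max()) / 7.0  # map the largest magnitude to 7
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the quantized values."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, scale = quantize_int4(w)
w_hat = dequantize(q, scale)
# Rounding error is bounded by half a quantization step
print(np.abs(w - w_hat).max() <= scale / 2 + 1e-6)
```

The trade-off is visible directly: each weight shrinks from 32 bits to 4, at the cost of a bounded reconstruction error set by the scale.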
WeaverPro enables rapid optimization of low-latency interconnects
At the RISC-V core level, Semidynamics has invested enormous effort in creating a deeper, out-of-order RISC-V pipeline and keeping it busy at all times; it is a key part of the company's scalability story. But what happens when an application calls for many smaller cores working together? An increasingly popular option for AI inference is a sea of small RISC-V cores, providing faster detection and more intense region-of-interest processing within the same system architecture.
In these applications, interconnects between cores and from cores to memory immediately rise to the top of the list of designer concerns. The aggregate bandwidth flowing across an AI inference chip can be staggering, even with efforts to strategically distribute memory in smaller blocks to get it closer to processing units. Hand-designed interconnects easily become bottlenecks with more than a few cores participating.
Baya Systems is a relatively new player in the NoC space with its next-generation WeaveIP, and it is gaining ground rapidly thanks to its software-driven, system-level optimization technology, WeaverPro. The partnership between Semidynamics and Baya targets broader scalability into the HPC space, but it also gives designers excellent tools for AI inference chip design. WeaverPro has two components: CacheStudio, which analyzes the cache and memory hierarchy under load, and FabricStudio, which analyzes and optimizes NoC parameters against actual workloads. Together, the tools give designers working with Semidynamics RISC-V processors an efficient path to high-bandwidth, low-latency interconnects optimized for AI inference applications.
Moving highly portable ONNX models onto RISC-V
Laying down RISC-V hardware is one thing, but the crucial factor in AI inference design is the ability to map a software model onto that hardware and optimize the configuration. As AI inference proliferates, designers frequently turn to open-source AI models to speed their prototyping cycles with proven code. ONNX originated as a joint effort between Microsoft and Facebook to define a common format for AI models, and it now serves as a lingua franca for import and export between AI frameworks.
The ONNX Runtime can be thought of as a microkernel for AI, accelerating AI models through interfaces to integrated hardware-specific libraries. Semidynamics extended its Kernel Library with ONNX Runtime support so that models can exploit its RISC-V processors efficiently. The library includes primitives for matrix multiplication, transposition, activation functions, and more, speeding development and optimization of RISC-V AI applications.
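The kinds of primitives such a kernel library exposes can be sketched in NumPy. The names below are illustrative stand-ins, not Semidynamics APIs; on the actual hardware these would be implemented with RISC-V vector instructions and hardware-specific tiling.

```python
import numpy as np

def kernel_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Matrix multiplication, the workhorse operation of AI inference."""
    return a @ b

def kernel_transpose(a: np.ndarray) -> np.ndarray:
    """Transposition, used to reorder weights for memory-friendly access."""
    return a.T

def kernel_relu(a: np.ndarray) -> np.ndarray:
    """A typical activation function, applied elementwise."""
    return np.maximum(a, 0.0)

# A single dense layer expressed with the primitives above
x = np.array([[1.0, -2.0]], dtype=np.float32)              # input activations
w = np.array([[0.5, -1.0], [2.0, 3.0]], dtype=np.float32)  # layer weights
y = kernel_relu(kernel_matmul(x, kernel_transpose(w)))
print(y)  # negative pre-activations are clamped to zero by the ReLU
```

A runtime like ONNX Runtime walks the model graph and dispatches each node to primitives of this kind, which is why a tuned hardware-specific kernel library translates directly into inference throughput.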
ONNX support is part of a broader effort, the Aliado RISC-V SDK, which bundles enhanced software for Semidynamics RISC-V processors. Many RISC-V tools come from the robust open-source ecosystem; Semidynamics gathers those, plus its hardware-specific processor enhancements, into a single environment, saving designers time.
Semidynamics resources for RISC-V AI applications
Semidynamics is carving out a powerful niche in RISC-V AI applications, addressing the whole product: hardware and software ready to go, so designers can focus on adding value on top. The SMD ONNX Runtime and a Model Zoo for Semidynamics RISC-V processors, along with the Aliado Quantization Recommender and the Aliado SDK, are available for download at:
https://semidynamics.com/software
More information on the partnership with Baya Systems and support for the ONNX Runtime is available in the Semidynamics newsroom.