Key Takeaways
- The webinar highlighted Akeana's leadership in high-performance RISC-V-based SMT-capable IP cores, emphasizing their unique position in the evolving compute landscape.
- Simultaneous Multi-Threading (SMT) is framed as a solution to increasing compute density demands across various sectors such as edge AI, data centers, and automotive processing.
- Akeana's product lineup includes multiple series of cores supporting SMT, with implementations that allow up to four threads per core, catering to diverse market needs.
An Akeana-hosted webinar on Simultaneous Multi-Threading (SMT) provided a comprehensive deep dive into the technical, commercial, and strategic significance of SMT in the evolving compute landscape. Presented by Graham Wilson and Itai Yarom, the session was not only an informative overview of SMT architecture and use cases, but also a strong endorsement of Akeana's unique position as a leader in high-performance RISC-V-based SMT-capable IP cores.
Akeana is a venture-funded RISC-V startup founded in early 2021 by industry leaders. Their semiconductor IP offerings include low-end microcontroller cores, mid-range embedded cores, and high-end laptop/server cores, along with coherent and non-coherent interconnects and accelerators.
Akeana frames SMT as a solution born out of necessity, driven by increasing compute density demands in edge AI, data center inference, automotive processing, and more. As heterogeneous SoC architectures become the standard, efficient management of compute resources is essential. SMT addresses this by allowing multiple independent threads to run concurrently on the same core, enabling higher resource utilization and reducing idle cycles, particularly when latency or memory fetch bottlenecks arise.
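The utilization argument above can be made concrete with a toy model. The sketch below simulates one pipeline shared by N threads, where each unit of work is a short compute burst followed by a memory stall; with one thread the core idles during stalls, while additional SMT threads fill those idle cycles. All numbers (cycle counts, the `simulate` function itself) are illustrative assumptions, not figures from the webinar.

```python
# Toy model of why SMT raises utilization: a single thread stalls on
# memory fetches, while interleaved threads can fill those idle cycles.
# All cycle counts here are illustrative, not Akeana data.

def simulate(num_threads, compute_cycles=2, stall_cycles=6, work_items=100):
    """Return core utilization when `num_threads` share one pipeline.

    Each work item costs `compute_cycles` of execution followed by a
    `stall_cycles` memory wait. Alone, a thread leaves the core idle
    during stalls; with SMT, other threads issue during those waits.
    """
    # Total cycles of useful work across all threads.
    useful = num_threads * work_items * compute_cycles
    # Wall time for one thread running alone: compute + stall per item.
    per_thread = work_items * (compute_cycles + stall_cycles)
    # Stalls of one thread overlap with compute of the others, so total
    # time is bounded below by the useful work itself.
    total = max(per_thread, useful)
    return useful / total

print(f"1 thread:  {simulate(1):.0%} utilization")  # stalls dominate
print(f"2 threads: {simulate(2):.0%} utilization")
print(f"4 threads: {simulate(4):.0%} utilization")  # pipeline saturated
```

With these assumed latencies, utilization climbs from 25% with one thread to 100% with four, which mirrors the motivation for Akeana's four-threads-per-core configurations.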
The presenters made a compelling case for SMT's relevance beyond its traditional niche in networking. They emphasized how companies like NVIDIA and Tesla are now openly embracing SMT in their in-house SoCs for AI workloads, citing improvements in performance-per-watt and latency management. This shift signals broader industry validation for SMT, especially as AI systems grow more complex and thread-aware execution becomes essential. Historically, SMT has been available on x86 processors but not on Arm-based designs, which general-purpose cloud compute instances favored in order to optimize sellable cores per watt. As the infrastructure landscape shifts toward accelerated computing built on heterogeneous compute elements, SMT is back in demand as a valuable capability.
A highlight of the webinar was Akeana's multi-tiered product lineup: the 100 series (32-bit embedded), the 1000 series (consumer/automotive performance), and the 5000 series (high-performance out-of-order designs), all offering SMT as a configuration option. Notably, their SMT implementation supports up to four threads per core across both in-order and out-of-order microarchitectures. This flexibility is crucial for customers balancing power, area, and throughput requirements across diverse markets.
Graham and Itai reinforced that SMT is more than just a performance booster—it is a key enabler of system-level efficiency. In multi-threaded SoC configurations, SMT allows a CPU to manage not only main application workloads but also real-time housekeeping tasks such as system management, interrupt handling, and accelerator coordination. The example of networking applications combining USB and Ethernet threads illustrated how SMT reduces the need for separate CPUs, lowering BOM and energy use.
Akeana’s team also emphasized how SMT contributes to safety and redundancy. In automotive contexts, running dual software instances on separate threads allows for fault detection and safe-state recovery. Similarly, in AI training clusters, redundancy via SMT enhances system resilience without duplicating silicon.
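The dual-instance redundancy idea can be sketched in software: run the same safety-relevant computation as two independent threads, compare the results, and fall back to a safe state on mismatch. This is a minimal illustration of the concept only; the function names (`checksum`, `redundant_run`) are hypothetical, and real automotive deployments rely on hardware-level isolation rather than a Python script.

```python
# Sketch of dual-instance fault detection: the same workload runs as two
# independent threads and their results are compared. A mismatch triggers
# the safe-state path. Illustrative only, not Akeana's implementation.
import threading

def checksum(data):
    # Stand-in for a safety-relevant computation.
    acc = 0
    for x in data:
        acc = (acc * 31 + x) & 0xFFFFFFFF
    return acc

def redundant_run(workload, data):
    results = [None, None]

    def instance(slot):
        results[slot] = workload(data)

    threads = [threading.Thread(target=instance, args=(i,)) for i in (0, 1)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    if results[0] != results[1]:
        # In a real system this would transition to a defined safe state.
        raise RuntimeError("fault detected: entering safe state")
    return results[0]

value = redundant_run(checksum, range(1000))
```

Running both instances as SMT threads on one core, as described in the webinar, provides this redundancy without duplicating silicon.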
From a technical perspective, the discussion on out-of-order vs. in-order SMT implementations was informative. Itai clarified that out-of-order SMT enables further instruction-level parallelism by optimizing across threads, while in-order SMT is more deterministic and lightweight—making it well-suited for real-time embedded applications.
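A toy issue model can make this trade-off tangible. In the sketch below, an in-order core pays every load stall serially, while an out-of-order core hides each stall behind whatever independent instructions follow it. The model (the `issue_cycles` function, the three-cycle stall, the 'c'/'S' trace encoding) is an assumed simplification for illustration, not a description of Akeana's pipelines.

```python
# Toy model: cycles to retire one instruction stream, issuing one op per
# cycle. 'c' is an independent compute op; 'S' is a load whose dependents
# stall for `stall` extra cycles. Illustrative simplification only.

def issue_cycles(stream, out_of_order, stall=3):
    """In-order: every stall is paid serially and blocks all later ops.
    Out-of-order: independent trailing ops execute during the stall
    window, so only the uncovered part of each stall is exposed."""
    if not out_of_order:
        return len(stream) + stall * stream.count('S')
    t = 0
    for i, op in enumerate(stream):
        t += 1
        if op == 'S':
            trailing = len(stream) - i - 1   # independent ops available
            t += max(0, stall - trailing)    # exposed stall cycles
    return t

trace = list("ccSccccScc")
print(issue_cycles(trace, out_of_order=False))  # 16 cycles
print(issue_cycles(trace, out_of_order=True))   # 11 cycles
```

The in-order result is also perfectly predictable from the trace alone, which is the determinism that makes in-order SMT attractive for real-time embedded use.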
Another key insight was the secure threading implementation Akeana provides. Features such as isolated register files, secure context switching, and telemetry support indicate a mature approach to protecting multi-threaded workloads, a necessity in safety-critical and edge environments.
Performance benchmarks presented by Akeana were particularly impressive: a 20-30% uplift in SPEC scores using SMT (even in out-of-the-box configurations) and over 2x performance boosts in data movement-intensive tasks underscore SMT's real-world benefits. Notably, these gains come on top of cores with strong single-thread performance to begin with.
It is clear from the webinar content that SMT is not just about higher CPU compute performance. SMT also enables more efficient data movement and connectivity, which is vital for advanced AI SoCs that combine a range of heterogeneous cores and hardware accelerators. SMT is therefore a solution that must be considered when orchestrating the interplay between compute and data movement from memory or other I/O ports.
The Q&A section highlighted growing market interest: over half of Akeana’s customers now request SMT, particularly in automotive, edge inference, and data center workloads. Moreover, the firm currently stands alone among independent RISC-V IP vendors in offering configurable SMT as licensable soft IP—underscoring its first-mover advantage.
Bottom line: the webinar succeeded not only in demystifying SMT, but also in positioning Akeana as a pioneering force in bringing high-performance, secure, and scalable multi-threading capabilities to the RISC-V ecosystem. As compute demands continue to intensify, SMT will likely evolve from a niche capability into a foundational feature—and Akeana appears well-positioned to lead that transition.
You can see a replay of the webinar here.
Also Read:
CAST Webinar About Supercharging Your Systems with Lossless Data Compression IPs
cHBM for AI: Capabilities, Challenges, and Opportunities
Memory Innovation at the Edge: Power Efficiency Meets Green Manufacturing