Video EP10: An Overview of Mach42’s AI Platform with Brett Larder
by Daniel Nenni on 09-19-2025 at 10:00 am

In this episode of the Semiconductor Insiders video series, Dan is joined by Brett Larder, co-founder and CTO at Mach42. Brett explains what Mach42’s AI technology can do and the benefits of using the platform to quickly analyze designs to find areas that may be out of spec and require more work. He describes the way Mach42 trains AI models and discusses some of the benefits for tasks such as IP reuse and design iteration.

Contact Mach42

The views, thoughts, and opinions expressed in these videos belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview with Adam Khan of Diamond Quanta
by Daniel Nenni on 09-19-2025 at 6:00 am



Key Takeaways

  • Diamond Quanta is bringing diamond out of the lab and into manufacturable devices today — converting decades of promise into practical impact for semiconductors, optics, and quantum. What was once thought ‘unattainable’ is becoming inevitable.
  • The company’s platform centers on proprietary doping and annealing methods that enable both n- and p-type behavior in diamond, supporting real devices (diodes, FETs, and quantum emitters).
  • Early collaborations with industry and research partners focus on high-temperature, high-voltage operation and reliability, targeting use in aerospace/defense, energy, and next-gen computing.
  • At Diamond Quanta, we call this vision The Physics of Forever — unlocking the enduring properties of diamond to enable a new era of performance and reliability.

Adam Khan is a vanguard in diamond semiconductor technology, celebrated for his foresight and expertise in the industry. As the founder of AKHAN Semiconductor, he was instrumental in innovating lab-grown diamond thin-films for a myriad of applications, from enhancing the durability of smartphone screens and lenses with Miraj Diamond Glass® to bolstering the survivability of aircraft with Miraj Diamond Optics®.

Tell us about your company.

Diamond Quanta is pioneering engineered diamond as a practical semiconductor and quantum material platform.

Our team combines decades of proprietary diamond growth and processing expertise with business development, IP strategy, and financial leadership. Our founders have committed capital and sweat equity, reflecting grit and full-time commitment. The goal: deliver devices that run cooler, last longer, and perform in places silicon, SiC, and GaN struggle—think high temperature, high field, high radiation, and high frequency environments. We’re building a platform that spans power electronics (diodes and FETs), quantum photonic sources, and ruggedized optical/sensor components. This platform embodies The Physics of Forever — our mission to make engineered diamond the foundation for the next era of electronics, optics, and quantum technologies, with physics-informed machine learning (ML) accelerating breakthroughs.

What problems are you solving?

Modern power and sensor systems are hitting thermal and reliability walls. Wide-bandgap incumbents have extended performance, but at the highest voltages, temperatures, and power densities, margins are thin. Diamond’s unique properties — a combination of ultra-wide bandgap, thermal conductivity, breakdown field, and carrier velocity — offer new headroom. Our focus is manufacturable doping and activation, so diamond can move from materials promise to device reality. For customers, this translates into significant economic value: up to 70% BOM savings, better reliability, and reduced cooling/qualification costs. In practice, this means up to 50% fewer cooling components are required in system designs, directly reducing weight, complexity, and cost. This is why leading OEMs are already engaging with Diamond Quanta — the industry cannot afford to wait.

What application areas are your strongest?

Our beachhead is display coatings, proving manufacturability and customer pull with Tier-1 glass suppliers. Beyond this zero-step, three near-term areas are:

  1. Power electronics for aerospace/defense, energy, and mobility where high-temperature, high-voltage switches reduce size, weight, and cooling needs.
  2. Quantum photonics with diamond color centers that enable secure comms, sensing, and computing.
  3. Extreme-environment sensing and optics such as high-temp pressure/current sensors and radiation-hard windows.

What keeps your customers up at night?

Reliability at temperature, efficiency under brutal duty cycles, and qualification risk. Many are boxed in by thermal budgets, derating, and complex cooling. They want devices that survive heat and radiation with predictable lifetime models—and they want a path to volume without a science-project supply chain.

What does the competitive landscape look like and how do you differentiate?

We respect SiC and GaN—they unlocked a generation of power density. Our differentiation is the engineered diamond device stack: co-doping, activation/defect-management anneals, and physics-first modeling. This enables both n- and p-type device functionality at higher breakdown and hotter junction operation while remaining compatible with mainstream fab flows. Compared to SiC and GaN, diamond offers >2x thermal conductivity and 10x higher heat tolerance, which translates into fewer design trade-offs at scale. We also maintain a strong IP portfolio across doping, annealing, and device architectures. Given recent M&A in coatings and semiconductor materials, we see optics as a divestiture option and the broader platform as a strategic acquisition target.
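
To make the thermal-conductivity claim concrete, here is a minimal sketch of a one-dimensional junction-temperature estimate (Tj = Ta + P × Rθ, with Rθ = t / (k × A) for a simple slab). The conductivity values are textbook order-of-magnitude figures and the geometry is invented for illustration; none of it is Diamond Quanta device data.

```python
# Simplified 1-D junction-temperature estimate: Tj = Ta + P * R_theta,
# where R_theta = t / (k * A) for a slab of thickness t, conductivity k, area A.
# Conductivity values are illustrative order-of-magnitude numbers, not vendor data.

def junction_temp(power_w, k_w_per_mk, thickness_m=100e-6, area_m2=1e-6, ambient_c=25.0):
    """Return junction temperature (deg C) for heat flowing through a substrate slab."""
    r_theta = thickness_m / (k_w_per_mk * area_m2)  # thermal resistance, K/W
    return ambient_c + power_w * r_theta

for name, k in [("GaN", 130.0), ("SiC", 370.0), ("diamond", 2000.0)]:
    print(f"{name:8s} k={k:6.0f} W/m*K -> Tj = {junction_temp(50.0, k):6.1f} C at 50 W")
```

Higher substrate conductivity directly lowers Rθ, which is why the same dissipated power yields a much cooler junction on diamond in this toy model — fewer cooling components for the same thermal budget.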

What new features/technology are you working on?

For optics and quantum, coatings serve as a zero-step, proving manufacturability from the start. Beyond that, our current development spans three fronts:

  • Process integration: Ion-implantation-based co-doping with pulsed-laser and high-temperature anneals designed to minimize defect complexes while activating dopants.
  • Device prototypes: Next-gen Schottky and PiN diodes, followed by FET topologies that exploit diamond’s breakdown and thermal transport.
  • Quantum photonics: Engineered emitters and coupling structures targeting brighter, more uniform sources for integrated photonics.

How do customers normally engage with your company?

We run structured evaluation and co-development programs: NDAs and problem statements → sample/device evaluation or compact model sharing → joint reliability plans → pre-production pilots. For quantum photonics, we offer early-access engagements around emitter performance and packaging. For power, we collaborate on application-specific stress profiles and targets (voltage class, Tj, SOA, RDS(on)/VF, switching loss).

What results can you share today?

We’ve demonstrated device-relevant doping/activation and early diode behavior at temperatures where SiC and GaN derate. Independent labs and partners have validated high electron mobility (>555 cm²/V·s) and reduced defect scattering in co-doped diamond, as published in a peer-reviewed MRS Advances paper (Feb. 2025).[1] Building on this validation, our customer engagements show how these advances translate into system economics: up to 70% BOM savings, improved reliability, and fewer cooling components.

What’s next?

Our focus is converting prototypes to qualified parts in a few focused voltage/current classes, expanding our foundry-friendly process modules, and broadening our partner ecosystem—from epi and substrates to packaging and test. The through-line is the same: engineered diamond devices that simplify thermal design and push performance per watt in regimes that matter.

How can interested teams engage?

If you’re wrestling with heat, reliability, or extreme environments in power or sensing—or need practical quantum photonic sources—let’s compare requirements and agree on a pilot plan. We bring the materials, process, and device stack; you bring the mission profile. We have active customer discovery and development engagements (i.e., 20+ MNDAs, 2 MoUs, and a JTEA / SOW). Join the wave — Diamond Quanta is moving fast from promise to product. Let’s define your pilot plan now and help shape the next era of performance. Be part of The Physics of Forever.

Why did you join Silicon Catalyst and what are your goals in their 24-month program?

We joined Silicon Catalyst because it represents What’s Next in semiconductors — a platform proven to help deep-tech startups move from breakthrough science to market adoption. For Diamond Quanta, it’s not about incubation, it’s about accelerating impact through a network of industry partners, investors, and mentors.

Our goals in the 24-month program are clear: validate our engineered diamond platform in customer systems, secure early design-ins with Tier-1 partners, and build the operational and investor readiness to scale from prototypes into production.

Silicon Catalyst amplifies our mission — The Physics of Forever — making diamond the enduring foundation for electronics, optics, and quantum technologies that last longer, run cooler, and redefine performance.

  1. Khan, A.H., Kim, T.S., “Advanced co-doping techniques for enhanced charge transport in diamond-based materials,” MRS Advances, Feb. 2025. https://doi.org/10.1557/s43580-025-01206-x

Also Read:

CEO Interview with Andrew Skafel of Edgewater Wireless

Cutting Through the Fog: Hype versus Reality in Emerging Technologies

CEO Interview: John Chang of Jmem Technology Co., Ltd.


Rise Design Automation Webinar: SystemVerilog at the Core: Scalable Verification and Debug in HLS
by Daniel Nenni on 09-18-2025 at 10:00 am



Key Takeaways

  • High-Level Synthesis (HLS) delivers not only design productivity and quality but also dramatic gains in verification speed and debug – and it delivers them today.
  • Rise Design Automation uniquely enables SystemVerilog-based HLS and SystemVerilog verification, reusing proven verification infrastructure.
  • The webinar features insights from verification methodology architect Mark Glasser, with HLS expert Mike Fingeroff presenting the technical content and a live demonstration.
  • Attendees will learn how to unify design, verification, and debug across abstraction levels without duplicating effort.

Register Here For Replay

High-Level Synthesis (HLS) and raising design abstraction have been proven to deliver significant productivity and value to design teams — faster design entry, improved architectural exploration, and tighter system-level integration. These benefits are real, but experienced users and teams often cite a different advantage as the most valuable: verification.

By enabling earlier testing, running regressions 30×–1000× faster than RTL, and simplifying debug, HLS can dramatically accelerate verification. The challenge, however, is that existing HLS flows rely on C++ or SystemC, often leaving verification disconnected from established SystemVerilog/UVM environments. This gap forces teams to bridge methodologies on their own and uncover problems only after RTL is generated — slowing adoption and raising risk.

Rise Design Automation addresses this directly by making SystemVerilog a first-class citizen in HLS. In collaboration with SemiWiki, Rise will host a webinar that demonstrates how teams can apply familiar SystemVerilog and UVM methodologies consistently from high-level models through RTL, simplify debug, and unify design and verification across abstraction levels. The live event takes place on Wednesday, October 8, 2025, from 9–10 AM Pacific Time.

The Webinar Presenters:

The session begins with Mark Glasser, a distinguished verification architect and methodology expert. Mark co-invented both OVM and UVM and is the author of the recently published book, Next Level Testbenches: Design Patterns in SystemVerilog and UVM (2024). He will provide historical and forward-looking context on how verification methodology has evolved and the needs driving the move to higher abstraction.

The majority of the session will be presented by Mike Fingeroff, Chief of HLS at Rise DA. With over 25 years of experience and as the author of The High-Level Synthesis Blue Book, Mike specializes in HLS, SystemVerilog, SystemC, and performance modeling. He will deliver the technical deep dive and a live demonstration of Rise’s flow.

Key Topics

The webinar will address how Rise enables:

  • SystemVerilog for HLS – untimed and loosely timed modeling and the constructs synthesized into RTL.
  • Verification continuity – applying SystemVerilog methodologies consistently from high-level models through RTL.
  • Mixed-language and mixed-abstraction simulation – automatically generated adapters that bridge high-level and RTL models, and how to mix and match them in verification, including UVM.
  • Advanced debug features – HL↔RTL correlation, transaction-level waveforms, RTL signal visibility, and synthesized assertions and coverage.
  • Familiar debug practices – including $display support and line-number annotations for RTL signals.

A highlight of the session will be a live demonstration, where attendees will see a design example progress from high-level verification through RTL, showcasing methodology reuse and debug continuity.

To Learn More

If you’re looking to accelerate verification, reduce duplicated effort, and understand how to apply your existing SystemVerilog/UVM expertise in an HLS context, this webinar will step you through the code.

Register Here For Replay

Don’t miss the opportunity to see how SystemVerilog at the core of HLS can streamline your design process and verification flow.

About Rise Design Automation

Our mission at Rise Design Automation is to raise the level of abstraction of design and verification beyond RTL and to see it adopted at scale across the industry, transforming how designs are done for years to come: Adoption at Scale with Innovation at Scale.

Also Read:

Moving Beyond RTL at #62DAC

Generative AI Comes to High-Level Design

Upcoming Webinar: Accelerating Semiconductor Design with Generative AI and High-Level Abstraction


CEO Interview with Barun Kar of Upscale AI
by Daniel Nenni on 09-18-2025 at 10:00 am


Barun Kar is CEO of Upscale AI. He is also the co-founder of Auradine and previously served as COO. Barun has over 25 years of experience leading R&D organizations to deliver disruptive products and solutions, resulting in multi-billion-dollar revenue. Barun was on the founding team at Palo Alto Networks and served as the company’s Senior Vice President of Engineering where he spearheaded two acquisitions and led five post-merger integrations. Prior to that, Barun oversaw the entire Ethernet portfolio at Juniper Networks.

Tell us about your company

Upscale AI is developing open-standard, full-stack turnkey solutions for AI networking infrastructure. We’re redesigning the entire AI networking stack for ultra-low latency, offering next-level performance and scalability for AI training, inference, generative AI, edge computing, and cloud-scale deployments. Upscale AI just raised a $100 million seed round, and we look forward to using this funding to bring our solutions to market and define the future of scalable, interoperable AI networking.

What problems are you solving?

It’s becoming more challenging for network infrastructure to keep up with AI/ML model sizes, inferencing workloads, token generation rates, and frequent model tuning with real-time data. To meet today’s networking challenges, the industry needs scalable, high-performance networking infrastructure built on open standards. Upscale AI’s open standard solutions meet the latest bandwidth requirements and low latency needs of AI workloads, while also offering customers more scalability. Plus, Upscale AI is providing companies with much more flexibility and interoperability compared to the closed, proprietary solutions that dominate the market today.

What application areas are your strongest?

Upscale AI’s silicon, systems, and software are specifically optimized to meet AI requirements today and in the future. Our ultra-low latency and high bandwidth networking fabric will not only drive the best xPU performance, but will also offer a huge reduction in total cost of ownership at the data center level. Our unified NOS, which is based on SAI/SONiC open standards, makes it easy for companies to scale their infrastructure as needed and perform in-service network upgrades to maximize uptime. Additionally, our networking and rack scale solutions enable companies to host an array of AI compute without vendor lock-in.

What keeps your customers up at night?

Increasing network bandwidth demands have put a lot of pressure on infrastructure to deliver high bandwidth, low latency, and reliable interconnectivity. While you often hear about how powerful AI applications are now accessible to anyone with an internet connection, AI network infrastructure remains limited to companies with a lot of capital. Furthermore, even hyperscalers and AI neocloud providers with deep pockets are limited to closed, proprietary solutions for AI network infrastructure. These companies don’t like being locked into a closed ecosystem. Upscale AI is giving its customers a new level of flexibility with our portfolio that is built using UALink, Ultra Ethernet, SONiC, SAI, and other cutting-edge open source technologies and open standards.

What does the competitive landscape look like and how do you differentiate?

Today there is no established AI network player offering an alternative to proprietary solutions. AI networking innovation should not be locked into a closed ecosystem. We strongly believe at Upscale AI that open standards are the future, and we’re working to democratize AI network infrastructure by pioneering open-standard networking technology. Our portfolio gives companies bring-your-own-compute flexibility to help realize the full potential of AI. The result is a truly differentiated full-stack solution: engineered for AI-scale networking, vertically integrated from product to support, diversified for optionality, and built on open standards to power the next wave of AI infrastructure growth.

What new features/technology are you working on?

We’re working to bring to market full stack AI networking infrastructure, including robust silicon, systems, and software. Stay tuned for more updates on what’s coming out next.

How do customers normally engage with your company?

Upscale AI has a direct salesforce working with hyperscalers, neocloud providers, and other companies in the AI networking space. Prospective customers can reach out to our team via our website: https://upscaleai.com/.

Also Read:

CEO Interview with Adam Khan of Diamond Quanta

CEO Interview with Nir Minerbi of Classiq

CEO Interview with Russ Garcia with Menlo Micro


Eric Xu’s Keynote at Huawei Connect 2025: Redefining AI Infrastructure
by Daniel Nenni on 09-18-2025 at 8:00 am


At Huawei Connect 2025, held in Shanghai, Eric Xu, the Rotating Chairman of Huawei, delivered a keynote speech that laid out the company’s ambitious roadmap for AI infrastructure, computing power, and ecosystem development. His speech reflected Huawei’s growing focus on building high-performance systems that can support the next generation of artificial intelligence while advancing self-reliant technology development.

Setting the Stage

Xu began his keynote by reflecting on the rapid evolution of AI models and how breakthroughs over the past year have pushed the boundaries of computing. He noted that the increasing complexity of large models, particularly in inference and recommendation workloads, demands not just more powerful chips, but fundamentally new computing architectures. According to Xu, AI infrastructure needs to be both scalable and efficient—capable of handling petabyte-scale data and millisecond-level inference.

He also reminded the audience of the five key priorities he had previously outlined, such as the need for sustainable compute power, better interconnect systems, and software-hardware co-optimization. This year’s keynote built upon those principles and introduced Huawei’s vision for its next-generation systems.

New Products and Roadmap

One of the most significant parts of Xu’s speech was the unveiling of Huawei’s updated roadmap for chips and AI computing platforms. Over the next three years, Huawei will roll out several generations of Ascend AI chips and Kunpeng general-purpose processors. Each generation is designed to increase performance and density while supporting the growing needs of training and inference workloads.

Xu introduced the TaiShan 950 SuperPoD, a general-purpose computing cluster based on Kunpeng processors. It offers pooled memory, high-performance storage, and support for mission-critical workloads such as databases, virtualization, and real-time analytics. The design is intended to support diverse computing needs, with significant improvements in memory efficiency and processing speed.

On the AI side, Xu announced the Atlas 950 and Atlas 960 SuperPoDs. These are high-density AI compute systems capable of scaling to tens of thousands of AI processors. The upcoming Atlas 960 SuperCluster will combine over one million NPUs and deliver computing power measured in zettaFLOPS. This marks a shift toward ultra-large-scale AI systems, designed to handle foundation models, search, recommendation, and hybrid workloads.
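
A quick back-of-envelope check (my arithmetic on the article’s figures, not Huawei’s published specifications) shows what zettaFLOPS-scale aggregate compute implies per device: with roughly one million NPUs, each part must sustain on the order of a petaFLOPS, which is plausible for low-precision AI arithmetic.

```python
# Back-of-envelope: aggregate cluster compute = nodes * per-node FLOPS.
# Figures below are illustrative assumptions, not Huawei specifications.
nodes = 1_000_000                 # "over one million NPUs"
target_zettaflops = 1.0           # 1 ZFLOPS = 1e21 FLOPS
per_node_flops = target_zettaflops * 1e21 / nodes
print(f"Required per-NPU throughput: {per_node_flops:.2e} FLOPS "
      f"(~{per_node_flops / 1e15:.0f} PFLOPS, plausible at low precision)")
```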

To enable this, Huawei developed UnifiedBus, a proprietary interconnect that supports high-bandwidth, low-latency communication between nodes. It also supports memory pooling and intelligent task coordination. According to Xu, this interconnect is critical for scaling AI systems efficiently and supporting hybrid PoDs that combine AI, CPU, and specialized compute.

Open Source and Ecosystem Strategy

Another core element of the keynote was Huawei’s strong push toward openness. Xu announced that the company will fully open-source its core AI software stack, including its CANN compiler and virtual instruction set. Toolchains, model kits, and the openPangu foundation models will also become available to developers and partners by the end of the year.

This move toward open-source infrastructure is part of Huawei’s strategy to lower adoption barriers and encourage collaboration across the AI ecosystem. Xu emphasized that AI innovation cannot happen in silos, and by opening up its tools and platforms, Huawei hopes to enable more organizations to build on its technology.

Strategic Implications

Xu’s keynote also carried strategic overtones, reflecting Huawei’s response to geopolitical challenges and technology restrictions. With limited access to advanced semiconductor manufacturing, Huawei is shifting its focus toward system-level innovation—building powerful infrastructure using available nodes while maximizing performance through architecture and software.

The message was clear: Huawei is betting on large-scale infrastructure, hybrid compute systems, and interconnect innovation to maintain competitiveness in AI. The company aims to provide alternatives to traditional U.S.-centric AI platforms and chip providers, especially in markets seeking greater technological independence.

Bottom line: Eric Xu’s keynote at Huawei Connect 2025 outlined a bold vision for the future of AI infrastructure. From SuperPoDs and interconnect breakthroughs to open-source initiatives, Huawei is positioning itself as a central player in the next phase of AI development. If the company can execute its ambitious roadmap and foster a strong ecosystem, it may reshape the global AI landscape—especially in regions looking to build homegrown compute capabilities.

The full transcript is here.

Also Read:

MediaTek Dimensity 9500 Unleashes Best-in-Class Performance, AI Experiences, and Power Efficiency for the Next Generation of Mobile Devices

AI Revives Chipmaking as Tech’s Core Engine

MediaTek Develops Chip Utilizing TSMC’s 2nm Process, Achieving Milestones in Performance and Power Efficiency

Global Semiconductor Industry Outlook 2026
by Admin on 09-18-2025 at 6:00 am

Semiconductor and Beyond 2026 PWC

PwC’s comprehensive report, “Semiconductor and Beyond,” provides a strategic outlook on the global semiconductor industry amid rapid transformations driven by AI, geopolitical tensions, and supply chain shifts. Structured into four sections—Foreword, Demand Analysis, Supply Analysis, and What’s Next?—the 104-page document offers insights for industry leaders, policymakers, and businesses navigating this critical sector. Authored by experts like Global Semiconductors Leader Glenn Burm, it emphasizes semiconductors’ role in powering innovation across automotive, server/network, home appliances, computing devices, and industrial applications.

The Foreword sets the stage, highlighting the industry’s evolution shaped by AI advancements, increased government investments in domestic production, and efforts to enhance supply chain resilience. Semiconductors are deemed indispensable for sectors like automotive, healthcare, and energy, with priorities shifting toward technology sovereignty and risk mitigation amid export controls and trade alliances.

In the Demand Analysis, titled “Semiconductors Power Innovation and Everyday Life,” PwC projects the semiconductor market to grow from $0.6 trillion in 2024 to over $1 trillion by 2030, at a CAGR of 8.6%. This growth is fueled by end-market dynamics across five key sectors. Server and Network leads with an 11.6% CAGR, driven by generative AI services and data center expansions. Automotive follows at 10.7%, propelled by electric vehicles (EVs), autonomous driving, and advanced driver-assistance systems (ADAS), with EV semiconductor content growing at 11.3% CAGR versus -3.1% for internal combustion engines. Home Appliances anticipates a 5.6% CAGR, boosted by AI-powered devices like smart refrigerators and TVs, with connectivity chips growing at 11.6%. Computing Devices expects 8.8% growth, led by AI-capable chips in high-end smartphones (75.6% CAGR) and PCs (29.3% CAGR). Industrial applications project 8.8% growth, supported by renewable energy (13.4% CAGR), smart agriculture (17.3%), and medical devices (5.3%). The analysis uses bubble charts to categorize applications by market CAGR and content growth, identifying “leap” opportunities in areas like automotive HPC and AI connectivity.
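
For readers who want to verify the arithmetic, the projections follow the standard compound-annual-growth-rate formula, CAGR = (end/start)^(1/years) − 1. A quick sketch using the report’s headline numbers:

```python
# CAGR = (end_value / start_value) ** (1 / years) - 1
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

# PwC figures: ~$0.6T in 2024 growing past $1T by 2030 (6 years).
rate = cagr(0.6, 1.0, 2030 - 2024)
print(f"Implied CAGR: {rate:.1%}")          # ~8.9%, in line with the stated 8.6%
print(f"$0.6T at 8.6%/yr for 6 years: ${0.6 * 1.086**6:.2f}T")  # ~$0.98T
```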

The Supply Analysis, “The Race for Semiconductor Supremacy,” examines value chain dynamics, noting U.S. dominance in design, Asia’s fabrication edge, and Southeast Asia’s packaging leadership. Geopolitical shifts are reshaping chains, with investments focusing on leading-edge nodes. Foundry capacity is expected to rise 1.7x by 2030, with ≤7nm nodes growing at an 8% CAGR. High-bandwidth memory (HBM) penetration surges from 14% to 40%, with the HBM market growing at a 27.8% CAGR. The materials market grows to $97 billion at a 5.1% CAGR, while advanced packaging reaches $76 billion at 10.6%. Regional shifts show China at 7.5% CAGR, the USA at 12.9%, and ASEAN at 9.9%. Wide-bandgap materials like GaN (53.5% CAGR) and SiC (27.0%) highlight power electronics growth.

The “What’s Next?” section explores opportunities beyond 2030, evaluating technologies like AI integration and advanced materials. It assesses feasibility via maturity scores, market potential by 2030 size and CAGR, and investment scales over five years. Quantitative insights guide entrants on uncertainties and future dynamics.

Overall, PwC’s report underscores the semiconductor industry’s pivotal role in global innovation and economic security. With AI accelerating demand and geopolitics disrupting supply, companies must prioritize resilience, diversification, and R&D. The outlook warns of disruptions but highlights growth in AI, EVs, and renewables, urging strategic adaptations for supremacy. As semiconductors evolve, PwC positions itself as a partner for sustainable growth, offering expertise in supply chain optimization and innovation.

Here is the full report.


SiFive Launches Second-Generation Intelligence Family of RISC-V Cores
by Kalar Rajendiran on 09-18-2025 at 6:00 am


SiFive, founded by the original creators of the RISC-V instruction set, has become the leading independent supplier of RISC-V processor IP. More than two billion devices already incorporate SiFive designs, ranging from camera controllers and SSDs to smartphones and automotive systems. The company no longer sells its own chips, choosing instead to license CPU IP and collaborate with silicon partners on development boards. This pure-play IP model allows SiFive to focus on innovation across its three core product families: Performance for high-end applications, Essential for embedded control, and Intelligence for AI-driven compute. The company also has an Automotive family of products with auto-grade safety and quality certifications.

The company recently announced the second generation of its Intelligence Family of processor IP cores, a complete update of its AI-focused X-Series. The new portfolio introduces the X100 series alongside upgrades to the X200, X300, and XM lines, designed for low power and high performance in a small footprint, for applications from the far edge to the data center.

On the eve of the AI Infra Summit 2025, I chatted with SiFive’s Martyn Stroeve, Vice President of Corporate Marketing, and Marisa Ahmad, Product Marketing Director, to gain deeper insights.

Two Popular X-Series Use Cases

While very flexible and versatile, the second-generation X-Series targets two distinct use cases. The first is as a standalone vector CPU, where the cores handle complex AI inference directly without the need for an external accelerator. A leading U.S. semiconductor company has already licensed the new X100 core for its next-generation edge-AI system-on-chips, relying on the core’s high-performance vector engine to process filters, transforms, and convolutions efficiently.

The second and increasingly critical application is as an Accelerator Control Unit. In this role, the X-Series core replaces the discrete DMA controllers and fixed-function state machines that traditionally orchestrate data movement in accelerators. Another top-tier U.S. semiconductor customer has adopted the X100 core to manage its industrial edge-AI accelerator, using the processor’s flexibility to control the customer’s matrix engine accelerator and to handle corner-case processing.

The Rising Importance of Accelerator Control

AI systems are becoming more complex, with vast data sets moving across heterogeneous compute fabrics. Conventional accelerators deliver raw performance but lack flexibility, often suffering from high-latency data transfers and complicated memory access hardware. SiFive’s Accelerator Control Unit concept addresses these pain points by embedding a fully programmable scalar/vector CPU within the accelerator itself. This design simplifies programming, reduces latency, and makes it easier to adapt to new AI models without extensive hardware redesign—an area where competitors such as Arm have scaled back their investment. Here is a link to a video discussing how Google is leveraging SiFive’s first-generation X280 as an AI compute host, providing flexible programming combined with the Google MXU accelerator in the data center.

Four Key Innovations in the Second Generation

SiFive’s new Intelligence cores introduce four standout enhancements. First are the SSCI and VCIX co-processing interfaces, high-bandwidth links that provide direct access to scalar and vector registers for extremely low-latency communication with attached accelerators.

Second is a hardware exponential unit, which reduces the common exp() function operation from roughly fifteen instructions to a single instruction, an especially valuable improvement given that exponential function operations are second only to multiply–accumulate in AI compute workloads.
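
To see why a one-instruction exp() matters, consider softmax, which sits inside every attention layer and calls exp() once per score element, so exponential work scales with activation size much as multiply-accumulates scale with weights. A minimal NumPy illustration of the operation count (generic code, not SiFive’s implementation):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax: one exp() per element of x."""
    shifted = x - x.max(axis=-1, keepdims=True)  # subtract row max to avoid overflow
    e = np.exp(shifted)                          # the exp-heavy step
    return e / e.sum(axis=-1, keepdims=True)

scores = np.random.randn(8, 128, 128)            # e.g. attention scores: 8 heads, 128x128
probs = softmax(scores)
print(f"exp() evaluations in this one softmax: {scores.size:,}")  # 131,072
```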

Third is a new memory-latency tolerance architecture, featuring deeper configurable vector load data queues and a loosely coupled scalar–vector pipeline to keep data flowing even when memory access is slow. Finally, the family adopts a more efficient memory subsystem, replacing private L2 caches with a customizable hierarchy that delivers higher capacity while using less silicon area.

Performance Compared to Arm Cortex-M85

SiFive highlighted benchmark data showing that the new X160 core delivers roughly twice the inference performance of Arm’s Cortex-M85 at comparable silicon area. Using MLPerf Tiny v1.2 workloads such as keyword spotting, visual wake-word detection, image classification, and anomaly detection, the X160 demonstrated performance ranging from about 148% to over 230% of the Cortex-M85’s while maintaining the same footprint. This two-times advantage underscores SiFive’s claim that its second-generation Intelligence cores can outpace the best current Arm microcontroller-class AI processors without demanding more die area or power budget.

A Complete AI Software Stack

Hardware is supported by a robust RISC-V AI software ecosystem. The stack includes an MLIR-based compiler toolchain, a SiFive-tuned LLVM backend, and a neural-network graph analyzer. A SiFive Kernel Library optimized for vector and matrix operations integrates with popular frameworks such as TensorFlow Lite, ONNX, and PyTorch. Customers can prototype on QEMU, FPGA, or RTL/SystemC simulators and seamlessly transition to production silicon, allowing rapid deployment of AI algorithms on SiFive’s IP.

Summary

By marrying a mature software platform with cutting-edge vector hardware, SiFive’s second-generation Intelligence Family positions RISC-V as a compelling alternative for next-generation AI processing. The new products all feature enhanced scalar and vector processing, and, with the XM series specifically, matrix processing capabilities designed for modern AI workloads. All of these cores build on the company’s proven fourth-generation Essential architecture, providing the reliability valued by automotive and industrial customers while adding advanced features for AI workloads from edge to data center.

With initial design wins at two leading U.S. semiconductor companies and momentum across industries from automotive to data centers, the Intelligence Gen 2 products stand ready to power everything from tiny edge devices to massive training clusters, while setting a new performance bar by outclassing Arm’s Cortex-M85 in key AI inference tasks.

Access the press announcement here.

To learn more, visit SiFive’s product page.

Also Read:

Podcast EP197: A Tour of the RISC-V Movement and SiFive’s Contributions with Jack Kang

Enhancing RISC-V Vector Extensions to Accelerate Performance on ML Workloads

Enabling Edge AI Vision with RISC-V and a Silicon Platform


Simulating Gate-All-Around (GAA) Devices at the Atomic Level
by Daniel Payne on 09-17-2025 at 10:00 am


Transistor fabrication has spanned the gamut from planar devices to FinFET to Gate-All-Around (GAA) as silicon dimensions have decreased in the quest for higher density, faster speeds, and lower power. Process development engineers use powerful simulation tools to predict and even optimize transistor performance for GAA devices. Dr. Philippe Blaise, Principal AE at Silvaco, delivered a webinar on the topic of simulating GAA devices using their tool, Victory Atomistic, with quantum-level precision.

An overview diagram shows the different FET device structures in 3D cross-sections.

The big challenge is simulating a GAA FET and predicting its performance at the nanoscale with quantum effects, in a tool that is easy for a TCAD engineer to use, fast enough to enable exploration and optimization, and accurate enough to produce trusted SPICE models. Silvaco uses the Non-Equilibrium Green’s Function (NEGF) approach in their device simulation.

Using Victory Atomistic looked straightforward: the user chooses a material from a database and defines the device structure, then the simulator solves for density using the NEGF equations and for potential using Poisson’s equation, producing an output IV curve.
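
Because the NEGF density depends on the potential and the Poisson potential depends on the density, solvers of this kind iterate the two to self-consistency. Below is a deliberately toy sketch of that fixed-point loop; the density and Poisson updates are crude stand-ins I invented to show the loop structure, not Victory Atomistic’s actual physics:

```python
import numpy as np

def self_consistent_iv_point(v_gate, n_sites=40, tol=1e-8, max_iter=200, mixing=0.3):
    """Toy 1-D self-consistent loop in the spirit of NEGF-Poisson coupling.

    The 'density' and 'Poisson' updates are hypothetical stand-ins chosen so the
    loop converges; they illustrate the iteration, not real Green's-function physics.
    """
    phi = np.full(n_sites, v_gate)                  # initial potential guess
    for _ in range(max_iter):
        n = np.exp(-np.abs(phi))                    # stand-in for NEGF charge density
        phi_new = v_gate - 0.5 * n                  # stand-in for a Poisson solve
        if np.max(np.abs(phi_new - phi)) < tol:     # converged: density and potential agree
            break
        phi = phi + mixing * (phi_new - phi)        # damped update to avoid oscillation
    return n.mean()                                 # stand-in for one point on the IV curve

print(self_consistent_iv_point(0.5))
```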

Dr. Blaise showed the details of the device template for a nanowire using a diamond crystal structure in silicon, with lengths, number of gates, doping, oxide, and contacts. Users also choose the type of solver, voltage ranges, temperature, and output options. The goal of Victory Atomistic is to make atomic-scale TCAD simulation easy to use by keeping the complexity inside the tool.

Simulation results showed the desired accuracy, along with options for silicon bands and orientation. Victory Atomistic uses the Low-Rank Approximation (LRA) optimization technique to make the complex, quantum-level simulations run in a short period of time on conventional computers.

SiGe materials were simulated using Virtual Crystal Approximation (VCA). Transition-metal dichalcogenide (TMD) monolayers were also simulated for MoS2. Electron-phonon scattering effects were modeled next. Victory Visual was demonstrated as a graphical post-processing tool, showing 90,000 atoms loaded.

Simulation results for a GAA FET device showed accurate results across a temperature range of 300K to 2K.

Summary

GAA FET devices can be accurately modeled and simulated using the NEGF algorithm, so that TCAD engineers can predict and optimize a new process technology. Silvaco has made their Victory Atomistic tool easy to use, and results are produced quickly, with the option to employ multiple CPUs. Victory Visual helps to graphically show results, making analysis more intuitive. Simulating a process technology before running wafers saves time and money, so why not give modern TCAD tools a try?

View the entire webinar after registering online here.



AI Revives Chipmaking as Tech’s Core Engine
by Daniel Nenni on 09-17-2025 at 8:00 am


A century ago, 391 San Antonio Road in Mountain View, California, housed an apricot-packing shed. Today, it’s marked by sculptures of diodes and a transistor, commemorating the 1956 founding of Shockley Semiconductor Laboratory—the birthplace of Silicon Valley. William Shockley, co-inventor of the transistor, aimed to craft components from silicon, but his firm flopped. Yet, his “traitorous eight” employees defected in 1957 to launch Fairchild Semiconductor nearby. This group included Gordon Moore and Robert Noyce, who later co-founded Intel, and Eugene Kleiner, pioneer of venture capital firm Kleiner Perkins. Most Silicon Valley giants trace their lineage to these early innovators.

Semiconductors revolutionized computing. Pre-semiconductor era computers relied on bulky, unreliable vacuum tubes. Semiconductors, solid materials controlling electrical flow, offered durable, versatile alternatives. Silicon enabled mass production of transistors, diodes, and integrated circuits on single chips, processing and storing data efficiently.

In 1965, Moore observed that transistor density on chips doubled annually (later adjusted to every two years), an observation dubbed Moore’s Law. This drove exponential progress: from 200 transistors per square millimeter in 1971 to 150 million in AMD’s 2023 MI300 processor. Smaller transistors switched faster, fueling breakthroughs like personal computers, the internet, smartphones, and artificial intelligence.
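
A quick check on the doubling cadence implied by those two density figures (my arithmetic on the article’s numbers):

```python
import math

# Transistor density figures from the article.
d_1971, d_2023 = 200.0, 150e6           # transistors per mm^2
years = 2023 - 1971
doublings = math.log2(d_2023 / d_1971)  # ~19.5 doublings
print(f"{doublings:.1f} doublings in {years} years "
      f"-> one doubling every {years / doublings:.1f} years")
# ~2.7 years, in the ballpark of Moore's revised two-year cadence
```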

Chipmakers once dominated tech. In the 1970s, IBM integrated chips, hardware, and software into unparalleled dominance. The 1980s saw Microsoft’s software-only model thrive, but Intel’s processors remained vital. By 2000, Intel ranked sixth globally by market cap. Post-dotcom bust, however, firms like Google and Meta overshadowed hardware, commoditizing chips. Venture capitalist Marc Andreessen famously declared software was “eating the world” in 2011.

AI’s surge has reversed this. Training LLMs demands immense computation. Pre-2010, AI training compute doubled every 20 months, aligning with Moore’s Law. Since then, it doubles every six months, spiking chip demand. Nvidia, specializing in AI-suited GPUs, now ranks third-most valuable globally. Since late 2023, chipmaker stocks have outpaced software firms for the first time in over a decade.

Beyond training, AI inference, responding to queries, requires efficient, bespoke chips. General-purpose processors fall short, prompting tech giants to design custom silicon. Apple, Amazon, Microsoft, and Meta invest heavily; Google deploys more proprietary data-center processors than anyone except Nvidia and Intel. Seven of the world’s top ten firms now make chips.

Sophistication hinges on process nodes—feature sizes under 7nm define cutting-edge AI chips. Yet, over 90% of manufacturing uses 7nm or larger nodes for everyday devices like TVs, fridges, cars, and tools.

The 2021 COVID-19 chip shortage exposed vulnerabilities in global supply chains: design in America, equipment in Europe/Japan, fabrication in Taiwan/South Korea, packaging in China/Malaysia. Governments responded with subsidies—America’s $50 billion CHIPS Act in 2022, followed by $94 billion from the EU, Japan, and South Korea. Geopolitics complicates matters: U.S. export bans limit China’s access to advanced tech, prompting Beijing’s restrictions on key materials like gallium and germanium.

Yet, technological hurdles loom larger than political ones, argues Economist Global Business Writer Shailesh Chitnis. For decades, shrinking transistors boosted performance without proportional energy hikes. Now, denser chips and massive AI models drive soaring power use. Data centers could consume 8% of U.S. electricity by 2030.

To sustain exponential gains, innovations are essential. Incremental steps include hardware-software integration, like optimizing algorithms for specific chips. Radical shifts involve alternatives to silicon, such as gallium nitride for efficiency, or neuromorphic computing mimicking brain analog processes over digital ones. Optical computing, using light for faster data transfer, and quantum chips for complex simulations also promise breakthroughs.

AI’s demands are putting silicon back at tech’s heart, echoing Valley origins. As computation needs explode, chipmaking’s evolution will dictate future innovation, balancing efficiency, geopolitics, and sustainability. The apricot shed’s legacy endures—silicon’s story is far from over.

You can read the full article here.

Also Read:

Advancing Semiconductor Design: Intel’s Foveros 2.5D Packaging Technology

Revolutionizing Processor Design: Intel’s Software Defined Super Cores

Intel’s Commitment to Corporate Responsibility: Driving Innovation and Sustainability


The IO Hub: An Emerging Pattern for System Connectivity in Chiplet-Based Designs
by Bernard Murphy on 09-17-2025 at 6:00 am


In chiplet-based design we continue the march of Moore’s Law by scaling what we can put in a semiconductor package beyond the boundaries of what we can build on a single die. This style is already gaining traction in AI applications, high performance computing, and automotive, each of which aims to scale out to highly integrated systems meeting performance, power, cost, and reliability goals. The technology challenge then is to build effective communications infrastructure between those chiplets.

UCIe is mentioned frequently as the standard for connectivity between chiplets. That standard is important, but it is only the bottom layer of the communication stack. In the modern era of networks-on-chip (NoCs), networks must also handle packetized communication, congestion, and quality of service, within chiplets and between chiplets. This prompts a deeper dive into Arteris’ recently announced collaboration with AMD, in which FlexGen smart NoC IP cooperates with AMD’s Infinity Fabric. Commercial and proprietary network co-existence has arrived.

Commercial NoC IP jumps a hurdle

For a long time NoC IP was an either/or decision: build and use your own in-house NoC IP, or buy a commercial IP. The choice was easy for new ventures or design teams who had outgrown earlier in-house options and who prioritized differentiation in core functions rather than in the communication fabric: buy a commercial option from an IP supplier and work with that vendor to ensure it keeps up as requirements evolve. Still, some semiconductor houses, particularly big compute vendors like AMD and Intel, continue to see value in their proprietary NoC IP and have not been obvious candidates for a commercial option.

Chiplet-based design seems to have scaled that wall. Highly optimized in-house communication IPs such as Infinity Fabric continue to be central in coherent compute chiplets but now AMD’s endorsement of using FlexGen smart NoC IP between chiplets has shown co-existence to be a very viable option.

Andy Nightingale (VP Product Management and Marketing at Arteris) told me that the term IO Hub is now commonly used to represent an emerging architectural pattern: a structure that bridges coherent fabrics (such as Infinity Fabric) with heterogeneous subsystems. It reflects a new realization that one unified fabric architecture may not be the optimal strategy across chiplet-based systems.

I asked Andy why some of their top-tier customers are turning to Arteris for this capability. Why not just build their own IO Hub? The answer reflects what you’ll often hear when design houses choose a commercial solution over an in-house option: they want to focus in-house resources on their core competencies, using a proven partner to handle off-chiplet communication. A co-existence solution meets that objective.

Digging a little deeper

The physical connectivity between chiplets will most commonly be wires rather than active elements (active connectivity is also possible, but I am told that style is more expensive). Traffic management through the IO Hub is therefore handled through distributed control from chiplet interfaces (network interface units for Arteris IPs) to the hub. In the IO hub use-case, FlexGen is optimized for non-coherent, high-bandwidth data flows like HBM, PCIe, and AI accelerators.

Effectively managing this structure – off-chip interconnect topology, adding buffers and adapters, scaling wide datapaths, managing QoS – is a complex task that will probably demand iteration as the larger design evolves. That task usually falls to a senior NoC designer and consumes weeks of effort. Arteris’ FlexGen smart NoC technology acts as a virtual senior NoC engineer to automate this function, providing up to 10x productivity improvement, 30% shorter wirelength, and 10% reduced latency compared with manual design, according to announced customer benchmarks.

Expanding Arteris reach

Arteris’ FlexGen NoC IP for non-coherent networks and Ncore for coherent networks are already well-established, particularly in new ventures, AI, automotive and many other applications. Arteris have announced a range of co-existence collaborations including:

  • Adoption by AMD to augment Infinity Fabric for AI chiplets
  • Accepted in the Intel Foundry AI + Chiplet Alliance
  • Adopted by Samsung LSI to use FlexNoC/FlexGen alongside proprietary fabric in mobile and AI SoCs
  • Adopted by NXP using Ncore and FlexNoC for automotive, integrating safety and accelerators in ASIL D systems
  • Publicly announced benchmarks from SiMa.ai showing increased productivity and reduced wirelengths

It’s now quite clear that Arteris can augment proprietary NoC solutions in addition to providing comprehensive NoC fabric solutions, as already demonstrated by widespread adoption in multiple markets. Impressive. You can learn more about Arteris HERE.

Also Read:

Arteris Simplifies Design Reuse with Magillem Packaging

Arteris at the 2025 Design Automation Conference #62DAC

Arteris Expands Their Multi-Die Support