

Anirudh Keynote at CadenceLIVE 2025 Reveals Millennium M2000
by Bernard Murphy on 05-29-2025 at 6:00 am


Another content-rich kickoff covering a lot of bases under three main themes: the new Millennium AI supercomputer release, a moonshot towards full autonomy in chip design exploiting agentic AI, and a growing emphasis on digital twins. Cadence President and CEO Anirudh Devgan touched on what is new today, as well as market directions beyond EDA and systems design into physical AI (robots, drones) and sciences AI (molecular design). Jensen Huang (NVIDIA) joined Anirudh for a fireside chat preceding this keynote, and Satya Nadella (Microsoft) provided a video endorsement, as did Charlie Kawwas (President of Semiconductor Solutions at Broadcom), reinforcing that Cadence is both serving and partnering with the world leaders in tech.

Millennium M2000 Release

Millennium M2000 is the next generation of the Cadence AI hardware acceleration platform, built on Nvidia Blackwell. For those keeping careful track, Anirudh and Jensen announced this platform in the fireside chat immediately before this keynote but I’m covering it here. Jensen also announced that he is going to buy 10 systems. Quite an endorsement.

I wrote about the first-generation Millennium Enterprise Multiphysics Platform last year, when it was clear it would immediately benefit computational fluid dynamics (CFD). Given how pervasive AI has become throughout the Cadence EDA product line, it is now apparent that Millennium will have an increasing role in chip design.

Hardware acceleration is a fundamental tier in the Cadence strategy, initially for (Boolean) logic simulation but now also in support of numeric simulation, which is where Millennium shines. Accelerating CFD is an obvious application for aerodynamic modeling, datacenter cooling, and hydrodynamic modeling for ships. In biosciences, molecular similarity screening can greatly reduce an initial pool of potential therapies, weeding out candidates with possibly strange behaviors or toxicities, before advancing to more detailed lab testing.
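
To make the similarity-screening idea concrete, here is a minimal, hypothetical sketch (Python with the open-source RDKit library; the molecules, threshold, and decision rule are illustrative assumptions, not Cadence's actual flow). Candidates whose fingerprint similarity to a compound with a known liability exceeds a threshold are weeded out before lab testing:

    # Toy molecular similarity screen (illustrative only)
    from rdkit import Chem
    from rdkit.Chem import AllChem, DataStructs

    reference = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")         # hypothetical reference with a known liability
    candidates = {"cand_A": "CC(=O)Nc1ccc(O)cc1", "cand_B": "CCO"}  # hypothetical candidates

    def fingerprint(mol):
        # 2048-bit Morgan (ECFP4-style) fingerprint
        return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

    ref_fp = fingerprint(reference)
    for name, smiles in candidates.items():
        sim = DataStructs.TanimotoSimilarity(ref_fp, fingerprint(Chem.MolFromSmiles(smiles)))
        verdict = "weed out" if sim > 0.4 else "advance to lab testing"   # 0.4 is an arbitrary cutoff
        print(f"{name}: Tanimoto {sim:.2f} -> {verdict}")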

In semiconductor design, Cadence’s Cerebrus Intelligent Chip Explorer has been driving significant improvements in PPA through AI, from chip level to system level using numeric simulation methods, making it a natural partner for the M2000. Other EDA applications across the Cadence tool suite, from 3D-IC and packaging design, thermal and signal integrity modeling, to analog design and analysis, all benefit from AI that can be further accelerated and scaled on the M2000 system.

A Moonshot to Full Autonomy in Chip Design

Anirudh positions AI advances in EDA along the lines of the SAE automotive autonomy model, from level 1 (basic autonomy) to level 5 (full autonomy). It is a nice analogy and a useful way to grade progress. Cadence started building their JedAI platform more than 5 years ago to centralize data from spec through manufacturing, as a mechanism to support generative AI throughout the design cycle and across designs. Now AI is a daily reality in the design flows they support, to the point that he feels much of what they offer is already at level 2 or 3. Advances he announced in this talk stretch to levels 3 and 4, thanks in part to a big investment in agentic AI, which makes more complex chains of reasoning possible.

Level 5 – full autonomy – he acknowledges is a moonshot, but like all moonshots it is worth attempting to see how far they can get. They can advance on multiple fronts: partly through RTL generation, maybe through a Copilot-style approach (assisted generation with RAG); partly through leveraging proven IP – Cadence now has a rapidly growing IP catalog to support this direction; partly through AI-generated C, for which Cadence can use their proven C-to-RTL technologies; and partly through generating testbenches, both UVM and Perspec. The following keynote from Uri Frank at Google touched on joint work between Cadence and Google in this area.

On leveraging proven IP, Cadence continues to invest significantly in growing their own IP catalog. Anirudh mentioned this area now has one of the biggest R&D teams in the company and an expanding portfolio across protocol IP and compute IP, including their Tensilica family and NPUs and their recent NeuroEdge introduction (on which I will write more in a following blog). They also recently announced their intent to acquire Secure-IC, a well-respected company in all areas of hardware security, from design to deployment and eventual decommissioning (I wrote about them recently).

Agentic AI is now available in Integrity 3D-IC, Cerebrus AI Studio, and Virtuoso Studio. I’m looking forward to seeing applications in functional verification – maybe next year?

Continued Focus on Digital Twins

In semiconductor design digital twins are a way of life. A critical component here is hardware-assisted logic verification. Last year customers added almost 460 billion gates in new Palladium/Protium capacity, easy to understand when you consider the sizes of designs, particularly AI designs, being built around the world today. It’s also not surprising to learn that the Palladium emulation platform, built on a Cadence-designed custom chip, is itself pacing those design sizes at 120 billion transistors per chip and 16 trillion transistors in a rack. A super-sized chip to verify super-sized chips!
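
A quick back-of-the-envelope check on those capacity figures (my arithmetic, with an assumed design size purely for illustration):

    # Sanity check of the Palladium/Protium numbers quoted above
    transistors_per_chip = 120e9          # 120 billion transistors per emulation chip
    transistors_per_rack = 16e12          # 16 trillion transistors per rack
    print(transistors_per_rack / transistors_per_chip)   # ~133 emulation chips per rack

    added_capacity_gates = 460e9          # new capacity added by customers last year
    assumed_design_gates = 10e9           # assume a ~10-billion-gate AI design (my assumption)
    print(added_capacity_gates / assumed_design_gates)   # room for ~46 such designs concurrently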

Beyond semiconductor design, in the physical and bio world digital twins are less widely used, in part because it is more difficult to capture all the complexity of the mechanical, chemical and ambient constraints that must go along with that modeling. Difficult but becoming more approachable thanks to AI, agentic AI, and AI hardware accelerators.

One area that is advancing quite rapidly is digital twins for datacenters. The Cadence Reality Digital Twin Platform is becoming a reality (😀) for datacenter design and upgrades, where the cost of power and cooling is an everyday headline topic given the growing volume of AI accelerator hardware. Thermal management depends critically on very detailed analysis and recommendations to control hot spots and cooling flows, whether ambient, forced air, or liquid. Rack placements, cooling unit placements, and vent placements all depend on optimized modeling. AI to mitigate the impact of AI on power – physician heal thyself indeed. Cadence is closely partnered here with Nvidia.

Digital twins are becoming equally important for aircraft design, drone design (now pervasive in many applications), and robot design for automated factories, warehouses, and hospital logistics support. And continuing of course for the design of cars, trucks, and other transportation options. Semiconductors and systems supporting these use cases will have much tighter power envelopes and significantly more mixed-signal content to support all the sensors these systems require, playing nicely to Cadence strengths from chip design up through system design.

Big picture views, hot AI investment, focus on growing markets in tech and partnering with tech leaders on system-level applications. Looks promising to me!



Semiconductor Market Uncertainty
by Bill Jewell on 05-28-2025 at 2:00 pm

2025 Semiconductor Market Forecast

WSTS reported 1st quarter 2025 semiconductor market revenues of $167.7 billion, up 18.8% from a year earlier and down 2.8% from the prior quarter. The first quarter of 2025 was weak for most major semiconductor companies. Ten of the sixteen companies in the table below had declines in revenue versus 4Q 2024, ranging from -0.1% for Broadcom to over 20% for STMicroelectronics and Kioxia. Six companies reported revenue increases, ranging from 1.5% for Texas Instruments to 12% for Nvidia. The outlook for 2Q 2025 is mixed. Nine of the fourteen companies providing guidance expect revenue growth in 2Q 2025 over 1Q 2025, with the highest, 14.6%, from SK Hynix. MediaTek expects flat revenue. Four companies expect revenue declines, the largest being Kioxia at 10.7%.

In their conference calls with analysts, most of the companies cited economic uncertainty due to tariffs as a factor in their outlook. Companies dependent on the automotive and industrial markets are seeing recoveries. Strong AI demand is driving growth at Nvidia and the memory companies.

Key end equipment drivers of the semiconductor market are projected to slow in 2025 versus 2024. The server market was a major driver in 2024 with 73% growth in dollar value, according to IDC. 2025 is expected to show healthy growth, but at a much slower rate of 26% in dollars. IDC forecasts smartphone units will grow only 2.3% in 2025, down from 6.1% in 2024. PCs are the only major driver expected to show an increase in growth rate in 2025, at 4.3%, up from 1.0% in 2024, according to IDC. The end of support for Windows 10 and increased AI computing should drive PC growth despite tariff uncertainty. Worldwide production of light vehicles is projected to decline 1.7% in 2025 following a 1.6% decline in 2024, according to S&P Global Mobility. Again, tariffs were cited as the major reason for the decline.

The International Monetary Fund (IMF) reduced its outlook for global GDP in April. The IMF cited the uncertainty around tariffs as the primary reason for the reduction. The IMF expects world GDP growth to decelerate by half a percentage point, from 3.3% in 2024 to 2.8% in 2025. Both advanced economies and emerging/developing economies will see an overall slowing of growth. The biggest growth deceleration in terms of percentage point change is expected in the U.S. (down 1.0), China (down 1.0) and Mexico (down 1.8).

Recent forecasts for global semiconductor market growth in 2025 range from our 7% at Semiconductor Intelligence to 14% at TechInsights. Although TechInsights has the highest forecast, they project a moderate tariff impact would lower growth to 8% and a severe tariff impact would lower it to 2%.

Our 7% forecast for 2025 is primarily based on the uncertainty about tariffs. Tariffs may not affect semiconductors directly but could have a significant impact on key drivers such as automotive and smartphones. The weakness in the semiconductor market could carry into 2026, resulting in low single-digit growth.

Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry – manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Also Read:

Semiconductor Tariff Impact

Weak Semiconductor Start to 2025

Thanks for the Memories



Design-Technology Co-Optimization (DTCO) Accelerates Market Readiness of Angstrom-Scale Process Technologies
by Kalar Rajendiran on 05-28-2025 at 10:00 am

Sassine Holding an 18A Test chip

Design-Technology Co-Optimization (DTCO) has been a foundational concept in semiconductor engineering for years. So, when Synopsys referenced DTCO in their April 2025 press release about enabling Angstrom-scale chip designs on Intel’s 18A and 18A-P process technologies, it may have sounded familiar—almost expected. But to dismiss it as “more of the same” would be to overlook just how far DTCO has come, and how dramatically Synopsys has elevated it. To gain deeper insights, I spoke with Prasad Saggurti, Executive Director of Product Management for Foundation and Security IP, and Ashish Khurana, Executive Director of R&D for Foundry Ecosystem at Synopsys.

DTCO: From Tactical Method to Strategic Enabler

In its earliest form, DTCO focused on adapting design techniques to meet the constraints of shrinking nodes. It was often a reactive, back-end effort to align standard cells and process rules with emerging technology limits. But as Moore’s Law encountered physical and economic headwinds, DTCO evolved into something far more comprehensive—an integrated, predictive approach to co-developing process and design in parallel.

Today, DTCO plays a central role in defining not just how chips are built, but what technologies are viable. And Synopsys, through its close collaboration with Intel, has taken it to a level where it’s shaping the future of Angstrom-era silicon.

DTCO Delivers Early on Intel 18A

The evolution of DTCO was in the spotlight during the 2025 Intel Foundry Direct Connect event. In an on-stage appearance alongside Intel CEO Lip-Bu Tan, Synopsys CEO Sassine Ghazi reached into his pocket, pulled out a chip, and held it up for the audience. That chip, he explained, was a Synopsys test chip built on Intel’s 18A process and had been produced a year earlier. It was proof that deep DTCO integration delivers real, early silicon results.

Such early silicon readiness would have been unthinkable in the traditional flow. It was made possible only because of a close, continuous DTCO collaboration between Synopsys and Intel—spanning process definition, tool enablement, IP development, and design methodology refinement.

DTCO in Action: RibbonFET, PowerVia, and the Intel 18A Breakthrough

This transformation is best illustrated through the tangible gains achieved during the Intel 18A development. Synopsys worked closely with Intel to align its design tools with Intel’s RibbonFET transistor architecture, enabling a reduction in timing closure cycles. This streamlined convergence and boosted productivity for design teams using the 18A platform.

At the same time, DTCO was instrumental in optimizing PowerVia, Intel’s backside power delivery system. By leveraging PowerVia-aware floorplanning within Synopsys’ place-and-route tools, the collaboration delivered improvements in power efficiency—a result of co-optimized IR drop management and floorplan restructuring enabled by early-stage modeling.

PICO: DTCO’s Evolution into Full-Stack Optimization

To manage this expanded scope, Synopsys introduced PICO—short for Process-IP-Co-Optimization. PICO represents a structured, pre-silicon flow that spans process assumptions, cell library development, IP integration, toolchain validation, and even 3DIC packaging studies. It ensures that all components, from transistors to tools, are developed in tandem under real-world constraints.

With PICO:

  • TCAD simulations inform device models early in the cycle.
  • Design rule validation occurs before masks are built.
  • IP is co-architected with process and performance trade-offs in mind.
  • CAD tools are aligned with structures like RibbonFET from the outset.

The Enablement Readiness Cycle: Getting to Market Faster

This all feeds into the enablement readiness cycle—a core strategy for delivering validated design flows, certified IP, and process-aligned methodologies in sync with foundry technology ramps. For Intel 18A, Synopsys’ tools and libraries were ready before silicon. This closed-loop cycle is central to achieving fast, low-risk product development at Angstrom-scale nodes.

Summary

Modern-day DTCO is a competitive strategy for the Angstrom-scale era. Through strategic, collaborative partnerships between foundries and design-enablement ecosystem partners such as Synopsys, DTCO has become a full-stack, front-loaded discipline capable of delivering real silicon on cutting-edge process nodes well ahead of schedule.

To learn more, visit Synopsys’ DTCO Solutions page.

Also Read:

Intel Foundry is a Low Risk Alternative to TSMC

Intel’s Foundry Transformation: Technology, Culture, and Collaboration

Intel’s Path to Technological Leadership: Transforming Foundry Services and Embracing AI



Optimizing an IR for Hardware Design. Innovation in Verification
by Bernard Murphy on 05-28-2025 at 6:00 am


Intermediate representations (IRs) between high-level languages (C++, AI frameworks, etc.) and machine language are both commonplace (witness LLVM) and a continuing active area of research. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and lecturer at Stanford, EE292A) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick, LLHD: A Multi-level Intermediate Representation for Hardware Description Languages, was published at PLDI 2020 and has 24 citations. The authors were all at ETH Zurich; Tobias Grosser is now an associate professor at Cambridge (UK).

We wrote recently on optimizing simulation performance through ML-trained compiler options. This paper instead looks at whether performance (or net design flow throughput) can be improved if the compiler generates an IR (intermediate representation) better optimized to needs across the design flow. This is in line with ongoing compiler research for software in other domains such as AI. Simulation compilers already apply such methods; optimization techniques here may suggest additional opportunities for performance tuning.

Paul’s view

I am often asked what Cadence’s view is on leveraging innovations from the open-source compiler community in our logic simulator, especially from the LLVM community. This month’s paper is on “LLHD”, an LLVM-like intermediate representation (“IR”) for hardware design. It’s the first work I’ve seen that describes an IR for hardware design that covers both the testbench as well as the DUT. The authors have also built a SystemVerilog frontend and a backend into LLVM which enables native code logic simulation on any platform already supported by LLVM. Nice!

Overall, this is a meaty paper with some significant contributions. For example, LLHD includes a crisp notation and associated semantics for handling concurrency and event time windows. Another contribution is a well-thought-out system for progressively refining behavioral constructs in the IR into structural/synthesizable constructs wherever possible, essentially mirroring some of the “RTL rewriting” optimizations that commercial logic synthesis and simulation tools do under the hood.

As a framework for stimulating more academic investment into logic simulation and logic synthesis, LLHD looks awesome. It can enable research into high level optimizations (IR rewrites) to improve logic synthesis PPA and logic simulation performance. As to the question of rewriting our tools at Cadence to use LLHD rather than their existing proprietary IRs, these would be massive projects with very long tails to reach maturity. Probably, any interesting innovations by the LLHD community in the future would be best commercialized simply by re-implementing them on our existing proprietary IRs. We’ll keep watching this space and do what we can to support more academic investment into any area that could potentially lead to innovations that improve synthesis PPA or logic simulation performance for our customers.

Raúl’s view

The evolution of software compilation has been remarkable in recent years, partly due to the existence of an open-source framework that includes LLVM (originally meaning Low Level Virtual Machine) as a central component, specifically its language-independent intermediate representation (IR). By contrast, the authors of the reviewed paper argue that “hardware design flow remains isolated and vendor-locked”. To address this issue, they are developing LLHD, a multi-level IR for hardware description languages. The framework also includes Moore, a compiler that translates SystemVerilog and VHDL to LLHD; LLHD-Blaze, a simulator leveraging just-in-time (JIT) code generation; and compilation of a structural form of LLHD (with some optimizations) to feed vendor-specific tools for simulation, synthesis and verification. This approach aims to eliminate redundancy in compilation and tool-specific IRs and serve as a platform for interoperability and new tool development.

The paper provides an in-depth description of LLHD, defining a multi-level IR that captures current HDLs in an SSA (static single assignment) based form compatible with modern compilers while incorporating extensions to represent digital hardware and the testbench. It demonstrates how SystemVerilog and VHDL map to LLHD. LLHD consists of three levels: behavioral, structural, and netlist, explicitly omitting layout. The paper shows transformations from behavioral LLHD to structural LLHD, encompassing typical high-level synthesis and compiler techniques such as code motion, complexity reduction of operations, arithmetic optimization, identification of flip-flops and latches, and de-sequentialization.
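
To give a feel for one of those behavioral-to-structural rewrites, here is a toy sketch of flip-flop identification (Python pseudocode over a made-up process representation; it is not LLHD's actual IR or pass implementation). An assignment guarded by a clock edge becomes an explicit register, while unguarded assignments lower to combinational logic:

    # Toy flip-flop inference pass (illustrative only, not LLHD)
    def lower_process(proc):
        """proc: {'assigns': [{'target', 'value', 'guard'}]} -> list of structural elements."""
        structural = []
        for a in proc["assigns"]:
            if a.get("guard") == ("posedge", "clk"):
                structural.append(("dff", a["target"], a["value"], "clk"))   # registered assignment
            else:
                structural.append(("comb", a["target"], a["value"]))         # combinational logic
        return structural

    behavioral = {"assigns": [
        {"target": "q", "value": "d",     "guard": ("posedge", "clk")},
        {"target": "y", "value": "a & b", "guard": None}]}
    print(lower_process(behavioral))
    # [('dff', 'q', 'd', 'clk'), ('comb', 'y', 'a & b')]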

Furthermore, the authors illustrate the integration of existing EDA tool flows by providing a simple structural Verilog equivalent. They evaluate their framework on ten small designs, with the most complex being a RISC-V core comprising 3479 lines of code, in terms of memory usage and simulation time on LLHD-Blaze.

The paper elicits mixed reactions. It is certainly rich in content. Although multi-level IRs for HW have been previously explored, for instance by the high-level synthesis community (e.g., Gajski-Kuhn, 1983, Y-chart with behavior-structure-geometry), the emphasis here is on making an implementation publicly available. In terms of results, size comparisons gain significance primarily at the layout stage (geometry), which LLHD does not address, thereby reducing the relevance of these comparisons. The simulator is up to 2.4 times faster (though 5.2 times slower for the largest example) compared to full-featured commercial simulators, a comparison which may not be representative.

Nevertheless, the fundamental argument that open-source infrastructure and common IRs have been instrumental in the advancement of software compilers remains persuasive. The question arises: can a similar impact be realized for EDA tools? Could LLHD serve as a catalyst for research and development, despite the common claim that hardware design and automation tool development involve far fewer individuals than software development? Undoubtedly, significant work remains to be done.

Also Read:

LLMs Raise Game in Assertion Gen. Innovation in Verification

High-speed PCB Design Flow

Perspectives from Cadence on Data Center Challenges and Trends



WEBINAR: PCIe 7.0? Understanding Why Now Is the Time to Transition
by Don Dingee on 05-27-2025 at 10:00 am

PCIe application interface options are the primary motivation for the PCIe 7.0 transition

PCIe is familiar to legions of PC users as a high-performance enabler for expansion slots, especially GPU-based graphics cards and M.2 SSDs. It connects higher-bandwidth networking adapters and niche applications like system expansion chassis in server environments. Each PCIe specification generation has provided a leap in bandwidth, with speeds doubling about every two years. The latest, PCIe 7.0, reaches 128 GT/sec link speeds; combined with the PHY improvements, PAM4 signaling, and improved error correction introduced in PCIe 6.0, a chip-to-chip use case is emerging, at least in moderately sized, reasonably priced systems.

A recent Synopsys webinar discusses the implications of releasing the 0.9 version of the PCIe 7.0 specification to PCI-SIG members. It’s a conversation between Madhumita Sanyal, Director of Technical Product Management, High Performance Computing IP Solutions at Synopsys, and Richard Solomon, Technical Product Manager, PCI Express Controller IP at Synopsys, who also serves as vice-president of the PCI-SIG. Solomon dives deeper into what makes PCIe 7.0 faster, how that changes the controller logic, and what that will mean for SoC designers.

Faster PCIe 7.0 is good, but it’s not the real news

Solomon starts on a self-deprecating note, saying that PCIe 7.0 is “kind of boring, and boring in a good way.” His point is that if engineers have designed PCIe 6.0-compliant links, they’re already compliant with PCIe 7.0, except now they can access the 128 GT/sec rate. “From a spec perspective, [this is] really very straightforward,” he adds.

The real challenge with PCIe 7.0 may not be in the links, but in the link managers. “It’s harder and harder for devices to satisfy [bandwidths] with sort of one agent internal, like within an SSD. You’re going to need multiple channels or multiple data movers to sustain the full data rate.” As with so many connectivity schemes, running one link fast is tempting, but designing a chip with more application interfaces spread out at lower clock speeds is less risky. His example cites moving from an unsustainable 512-bit path at 4 GHz to a dual-ported 1024-bit path at 1 GHz.
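
The datapath arithmetic behind that example is easy to check (my numbers, assuming an x16 link and ignoring encoding and protocol overhead):

    # Back-of-the-envelope internal datapath sizing for a PCIe 7.0 x16 link
    lanes, rate = 16, 128e9                     # 128 GT/s per lane
    link_bps = lanes * rate                     # ~2.05 Tb/s per direction, pre-overhead
    print(link_bps / 8 / 1e9)                   # ~256 GB/s per direction

    def datapath_bps(width_bits, clock_hz, ports=1):
        return width_bits * clock_hz * ports

    print(datapath_bps(512, 4e9) >= link_bps)            # True: 512 bits @ 4 GHz keeps up, but 4 GHz is hard
    print(datapath_bps(1024, 1e9) >= link_bps)           # False: one 1024-bit port @ 1 GHz falls short
    print(datapath_bps(1024, 1e9, ports=2) >= link_bps)  # True: dual-ported 1024-bit @ 1 GHz works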

With 1024-bit data paths, the problem becomes multiple PCIe packets arriving per clock cycle, pushing designers into the dual interface solution leveraging PCIe relaxed ordering.

Solomon then launches into an extended discussion of bifurcation and the challenges for PCIe switch designers dealing with many different links. He points out that 512 bits is a magic width fitting a CPU cache line, so moving to two cache lines per clock introduces some concern. “We’re all in this sort of tug of war – it’s not which one of these options you prefer, it’s which one you dislike less,” he muses. However, the difficulty of implementing a 4 GHz clock in current ASIC technology dominates the tradeoffs.

SoC designers get more options with PCIe 7.0

Earlier in the webinar, Solomon points out that PCIe may not be the fastest possible solution, but it’s the fastest at a reasonable price, and he implies that it also gives SoC designers flexibility in architectural choices for systems. He sees AI as a good example. PCIe probably won’t be the choice for a system with thousands of AI chips interconnected, but he believes it’s a clear choice for a system with tens of chips. The bandwidth, low latency, and lower pin count of PCIe 7.0 compared with a parallel PCI-X bus are strong arguments.

The last few minutes of the conversation discuss the idea of an open ecosystem, like commercial chips with PCIe 7.0 as their interface, versus what internal teams might do with PCIe 7.0 in proprietary ASIC designs in private topologies. “All the tools to analyze a PCI Express channel exist,” he says. “I can use the open nature of PCI Express, but I can also cheat, I can do whatever I want [in a proprietary design].”

The webinar wraps up with questions, like one from an SSD designer who is just launching PCIe 6.0 products and is worried about PCIe 7.0 and even PCIe 8.0 making it prematurely obsolete. With the spec on a two to three-year cadence, that’s not a new concern, and there’s the balance between placing a big bet, leaping to the latest IP in a new chip, or staying put with existing chips and capturing revenue sooner while the ecosystem for the new spec stabilizes.

There’s much more detail in the complete discussion, available online.

Webinar: PCIe 7.0? Understanding Why Now is the Time to Transition

Also Read:

SNUG 2025: A Watershed Moment for EDA – Part 2

Automotive Functional Safety (FuSa) Challenges

Scaling AI Infrastructure with Next-Gen Interconnects



Andes Technology: Powering the Full Spectrum – from Embedded Control to AI and Beyond
by Kalar Rajendiran on 05-27-2025 at 6:00 am

Overview of Andes Product Categories

As the computing industry seeks more flexible, scalable, and open hardware architectures, RISC-V has emerged as a compelling alternative to proprietary instruction set architectures. At the forefront of this revolution stands Andes Technology, offering a comprehensive lineup of RISC-V processor solutions that go far beyond embedded systems—reaching into the realms of AI, machine learning, high-performance computing, and functional safety.

During the recent Andes RISC-V CON, Dr. Charlie Su, President and CTO, and Marc Evans, Director of Business Development and Marketing at Andes Technology, talked about the company’s full lineup of RISC-V solutions to handle AI and beyond. The following is an integrated synthesis of their talks.

A RISC-V Product Portfolio That Goes Beyond Embedded

RISC-V is often perceived as best suited for low-power, embedded control applications. Andes is shattering that stereotype with a portfolio that spans from microcontrollers to Linux-capable AI processors. The company offers a comprehensive RISC-V computing stack that spans four key domains: high-performance compute acceleration for AI/ML, ultra-efficient embedded and real-time processing, full-featured application processing with Linux support, and advanced security and functional safety with ISO 26262-certified cores.

Its lineup covers the full computing stack: ultra-compact cores like the D23 and N225 for low-latency, ultra-low power embedded tasks; mid-tier cores in the 25/27 and 40 series for applications requiring a balance of performance and efficiency; and advanced multicore processors like the AX60 and upcoming Cuzco series, which support out-of-order execution, large caches, Linux and AI-intensive workloads.

Driving AI and Compute Acceleration with RISC-V

AI and ML are the defining workloads of the next decade. NVIDIA has integrated RISC-V into its deep learning accelerator orchestration, validating the role of RISC-V control processors in managing complex, AI-driven pipelines. Andes is ready to support these types of workloads as well. With its AndesAIRE platform, Andes enables edge and cloud inference engines through vector and matrix processing, neural network SDKs, and custom instruction support.

Using its Automated Custom Extension (ACE) toolchain, customers can create highly specialized instructions to accelerate AI functions like convolution, softmax, and other neural operations. Custom vector and matrix instructions, native support for RISC-V vector extensions (RVV), and end-to-end software stacks empower customers to build tailored AI inference engines directly into their silicon.
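
For reference, the softmax mentioned above is the kind of small, regular kernel that maps naturally onto vector hardware or a custom instruction; a plain NumPy version (illustrative only, not ACE-generated code) shows the operation being accelerated:

    import numpy as np

    def softmax(x):
        # Subtract the max for numerical stability, then normalize the exponentials
        z = x - np.max(x, axis=-1, keepdims=True)
        e = np.exp(z)
        return e / np.sum(e, axis=-1, keepdims=True)

    print(softmax(np.array([2.0, 1.0, 0.1])))   # ~[0.659, 0.242, 0.099]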

Industry giants like Meta are already deploying Andes RISC-V cores in production-scale recommendation systems, a very data and compute-intensive workload.

Real-World Deployment Across Diverse Markets

Beyond AI, Andes processors are finding adoption across a wide range of industries. EdgeQ, an innovator in 5G base station SoCs, uses Andes cores to combine signal processing, AI inference, and control logic in a single chip—a testament to the flexibility of Andes’ ISA extensions and multicore support. Cornami, focused on the cutting edge of fully homomorphic encryption (FHE), relies on Andes processors to meet the performance and latency demands of next-generation security workloads.

In automotive applications, Andes has introduced ISO 26262 ASIL-certified cores like the D23-SE and AX60-SE, supporting functional safety for mission-critical components such as powertrain and chassis controllers. These cores meet the highest safety standards (ASIL-D) and include features like dual-core lockstep and advanced memory protection mechanisms.

Such a wide range of offerings, addressing hyperscalers, telecom, industrial, security, and automotive, demonstrates Andes’ reach in delivering silicon-ready, vertically integrated RISC-V solutions.

A RISC-V Powerhouse, Not a Niche Player

Andes Technology has firmly established itself as a key enabler of the RISC-V revolution, demonstrating that the architecture is not just an open alternative, but a world-class computing platform. From wearables and industrial controllers to AI computing and hyperscale deployments, Andes provides the performance, flexibility, and scalability the future demands and is leading the charge into a new era of computing.

With one of the broadest product portfolios in the industry, a powerful suite of development tools, and validation from industry leaders across AI, telecom, and automotive, Andes has gained a 30% share of the RISC-V market.

To learn more, visit the Andes website.

Also Read:

Andes Technology: A RISC-V Powerhouse Driving Innovation in CPU IP

Andes RISC-V CON in Silicon Valley Overview

Webinar: Unlocking Next-Generation Performance for CNNs on RISC-V CPUs



Voice as a Feature: A Silent Revolution in AI-Enabled SoCs
by Jonah McLeod on 05-26-2025 at 10:00 am


When Apple introduced Siri in 2011, it was the first serious attempt to make voice interaction a mainstream user interface. Embedded into the iPhone 4S, Siri brought voice into consumers’ lives not as a standalone product, but as a built-in feature—a hands-free way to interact with an existing device. Siri set the expectation that voice could be ambient, contextual, and invisible.

But Siri never lived up to its early promise. It remained largely scripted, failed to evolve into a true conversational assistant, and was confined to Apple’s tightly controlled ecosystem. Still, it laid the groundwork for what would come next.

Amazon’s Alexa took the opposite approach: voice wasn’t just a feature—it became the product. The Echo smart speaker turned voice interaction into a consumer electronics category, leading to over 500 million devices sold. Alexa taught consumers to expect voice responsiveness in their homes—and aimed to monetize that presence through commerce.

But Alexa, too, fell short. Consumers embraced voice for utility—timers, weather, smart home control—but not for transactions. Shopping by voice lacked trust, context, and feedback. Privacy concerns and awkward user experiences further limited adoption. Despite its scale, Alexa failed to become the commerce engine Amazon hoped for.

Together, Siri and Alexa defined a decade of voice computing—and revealed its limits. Siri introduced voice as a feature. Alexa attempted to make it a business. But in the end, voice didn’t work as a product. It works best as infrastructure—quietly embedded into everything.

Voice is becoming a built-in feature across TVs, thermostats, earbuds, appliances, and automotive dashboards—with over 500 million smart speakers already installed globally and voice interfaces now expected in nearly every smart device. [1] Like the touchscreen before it, voice is becoming a default input modality—ambient, expected, and embedded. This transformation is quietly rewriting the rules of semiconductor design.

From Cloud to Edge: Why Voice Must Be Local

With over 500 million smart speakers already in homes and a voice assistant application market projected to grow from $6.3 billion in 2025 to $49.82 billion by 2033, the future of voice computing is moving from cloud to edge.[2] On-device processing for wake-word detection, keyword spotting, and natural language understanding is becoming the standard.
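
Those market figures imply a steep compound growth rate; a quick check using only the numbers quoted above:

    # Implied CAGR of the voice assistant application market, 2025 -> 2033
    start, end, years = 6.3, 49.82, 8           # $B, per the forecast cited above
    cagr = (end / start) ** (1 / years) - 1
    print(f"{cagr:.1%}")                        # roughly 29-30% per year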

This shift to local inference brings critical benefits: improved responsiveness, enhanced user privacy, and lower ongoing cloud infrastructure costs. [3] But it also creates a new dilemma for chip architects: how to bring data center-grade inference capability into a consumer SoC with a bill of materials often constrained to under $3–$10 per unit, depending on the device class. [4]

That challenge is reshaping SoC architecture. [5] These devices must now deliver real-time performance, neural inference, and ultra-low power consumption in thermal and cost envelopes more typical of smart remotes than smartphones.

A $10–$15 Billion Opportunity

While high-end smartphones can afford complex neural engines, the majority of voice-enabled products—like thermostats, remotes, and earbuds—cannot. These devices require purpose-built, cost-efficient SoCs priced in the $3–$10 range. According to Market.us, the edge AI integrated circuits (ICs) market is projected to grow from $17.3 billion in 2024 to $340.2 billion by 2034, with over 70% of that value attributed to inference tasks—including those performed by voice-optimized SoCs for wake-word detection, on-device speech recognition, and natural language processing (NLP). [6]

The SoC as the Voice Compute Backbone

To support always-on voice capabilities, SoCs must become the central compute engine inside embedded devices. These chips must continuously listen for user input while consuming only milliwatts of power. They need to execute inference tasks, such as wake-word recognition and intent parsing, in less than 100 milliseconds. They also must interface seamlessly with control logic and input/output systems, all while adhering to strict cost constraints—typically with a bill of materials under $10.

This shift reflects a broader trend toward embedding intelligence directly into the fabric of everyday products. Consumers don’t expect to buy separate “voice assistants.” They expect voice interaction to be built in and invisible—like buttons or touchscreens once were.

SoC Design for Voice as a Feature

Meeting these requirements calls for a hybrid SoC architecture. This includes a scalar processor—typically ARM or RISC-V—for managing operating system tasks and control logic. [7] A vector or matrix engine handles the heavy lifting of AI tasks such as wake-word detection and intent parsing. To ensure efficiency and predictability, the architecture incorporates a deterministic scheduling model that avoids the power and verification challenges of speculative execution.

This combination delivers advanced AI workloads efficiently—without the thermal and architectural overhead of traditional, general-purpose designs.
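
To illustrate the scheduling idea, here is a toy sketch of static, time-based scheduling (my own simplified model, not Simplex Micro's actual Axelon implementation): each operation's issue cycle is fixed ahead of time from known operation latencies, so execution order and timing are fully determined before run time, with no speculation or dynamic reordering:

    # Toy deterministic (time-based) scheduler: issue slots fixed at compile time
    LATENCY = {"load": 3, "mac": 2, "store": 1}           # assumed cycle counts

    def schedule(ops):
        """ops: list of (name, kind, deps), topologically ordered. Returns {name: issue_cycle}."""
        issue = {}
        for name, kind, deps in ops:
            ready = max((issue[d] + LATENCY[k] for d, k in deps), default=0)
            issue[name] = ready                           # fixed statically; no runtime speculation
        return issue

    program = [("x", "load", []), ("w", "load", []),
               ("p", "mac", [("x", "load"), ("w", "load")]),
               ("out", "store", [("p", "mac")])]
    print(schedule(program))   # {'x': 0, 'w': 0, 'p': 3, 'out': 5}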

In this new generation of voice-enabled devices, the vector-matrix processor is not just more valuable than the scalar unit—it is the dominant driver of both performance and power. Inference workloads such as wake-word detection, keyword spotting, noise suppression, and intent parsing now account for over 80% of total compute cycles and up to 90% of dynamic power consumption in voice-capable SoCs.

The AI engine typically occupies 2–3× the silicon area of the scalar core, yet delivers orders of magnitude higher throughput per watt. [8]

A Next-Generation SoC Interface: Axelon Integrates Scalar & Vector

Simplex Micro’s Axelon architecture embodies this next-gen approach. It provides a flexible CPU interface to integrate a scalar core (ARM or RISC-V) with a RISC-V Vector engine, while introducing a novel time-based execution model. This deterministic scheduler eliminates the complexity of speculative execution—a common technique in general-purpose CPUs that adds power and verification overhead—delivering consistent, low-latency AI performance.

ARM vs. RISC-V: The Battle for AI Acceleration

ARM continues to dominate the scalar core market and will remain a cornerstone for control logic. However, when it comes to AI acceleration, ARM’s Neon SIMD architecture is limited in vector width, scalability, and power efficiency.

In contrast, RISC-V—with its open architecture, variable-length vector support, and extensibility for custom AI instructions—offers a more scalable and energy-efficient foundation for edge inference. These advantages are driving its adoption in hybrid SoCs optimized for embedded voice.

Conclusion: Voice Is Infrastructure, Not a Product

We are witnessing a silent revolution in silicon. Voice is no longer a standalone product category—it’s infrastructure. A baseline feature, embedded directly into the interface layer of modern devices.

The companies designing the SoCs that enable this transition will define the next generation of user interaction. ARM will remain essential for control, but the competitive frontier is in AI acceleration. Here, RISC-V vector processors—especially when paired with deterministic execution models like Simplex Micro’s Axelon—are poised to lead. Quietly and efficiently, they are powering the age of voice as a feature.

References

[1] Remesh. (2022). 5 Ways Voice Recognition Technology Sways Consumer Buying Behavior. https://www.remesh.ai/resources/voice-recognition-technology-consumer-buying-behavior

[2] MarketsandMarkets. (2024). Voice Assistant Application Market Size. https://www.marketsandmarkets.com/Market-Reports/voice-assistant-application-market-1235279.html

[3] ARM. (2025). Silicon Reimagined Report. Chapter 2: AI at the Edge. https://www.arm.com/company/newsroom

[4] ARM. (2025). Silicon Reimagined Report. Chapter 2: Power-efficiency limits. https://www.arm.com/company/newsroom

[5] ARM. (2025). Silicon Reimagined Report. Chapter 1: Custom silicon trends. https://www.arm.com/company/newsroom

[6] Market.us. Global Edge AI ICs Market Size, Share, Statistics Analysis Report By Chipset. https://market.us/report/edge-ai-ics-market/

[7] ARM. (2025). Silicon Reimagined Report. Chapter 2: Vector/matrix specialization. https://www.arm.com/company/newsroom

[8] Patel, D. (2022). Apple M2 Die Shot and Architecture Analysis – Big Cost Increase for Minor Performance Gains. SemiAnalysis. https://www.semianalysis.com/p/apple-m2-die-shot-and-architecture

Also Read:

S2C: Empowering Smarter Futures with Arm-Based Solutions

SystemReady Certified: Ensuring Effortless Out-of-the-Box Arm Processor Deployments

The RISC-V and Open-Source Functional Verification Challenge



From All-in-One IP to Cervell™: How Semidynamics Reimagined AI Compute with RISC-V
by Kalar Rajendiran on 05-26-2025 at 6:00 am

Started with a look at CPU

In an era where artificial intelligence workloads are growing in scale, complexity, and diversity, chipmakers are facing increasing pressure to deliver solutions that are not only fast, but also flexible and programmable. Semidynamics recently announced Cervell™, a fully programmable Neural Processing Unit (NPU) designed to handle scalable AI compute from the edge to the datacenter. Cervell represents a fundamental shift in how AI processors are conceived and deployed. It is the culmination of the evolution of Semidynamics’ IP offerings, from modular IP components to a tightly integrated, unified architecture rooted in the open RISC-V ecosystem.

The Roots of Cervell: A Modular Foundation

Cervell’s architectural DNA can be traced to Semidynamics’ earlier innovations in customizable processor components. The company began by developing highly configurable 64-bit RISC-V CPU cores that allowed customers to tailor logic and instruction sets to their unique requirements. These cores served as the foundation for control flow and orchestration in AI and data-intensive systems.

As AI workloads evolved, Semidynamics introduced vector and tensor units to extend the performance of its RISC-V platforms. The vector unit enabled efficient parallel processing across large data sets, making it well-suited for signal processing and inference tasks. Meanwhile, the tensor unit brought native support for matrix-heavy computations, such as those central to deep learning. Importantly, both units were designed to share the same register file and memory system, reducing latency and improving integration.

These components formed the basis of what the company called its “All-in-One” IP architecture—a modular approach that gave chip designers the freedom to assemble compute units tailored to their application. However, it still required developers to integrate, manage, and orchestrate these units at the system level. Cervell changes that.

Why Semidynamics Chose to Build an Integrated NPU

As AI models became larger and more complex, the need for a more unified compute platform became clear. Traditional approaches—where CPUs handle orchestration and discrete accelerators handle AI inference—were increasingly hindered by memory bottlenecks, data movement latency, and software complexity. Fragmented architectures that required separate cores for control, vector operations, and tensor math no longer met the performance and efficiency demands of modern AI workloads.

Moreover, as customers began to prioritize programmability and long-term flexibility, it was evident that an off-the-shelf NPU with fixed functionality would no longer suffice. Semidynamics saw the opportunity to converge its modular IP blocks into a single, coherent compute architecture. Cervell is a RISC-V NPU that is not only scalable and programmable, but also capable of eliminating the need for fallback or offload operations.

A New Category of NPU: Coherent, Programmable, and Scalable

What distinguishes Cervell from traditional NPUs is its unification of compute components under one roof. Rather than treating the CPU, vector unit, and tensor engine as separate blocks requiring coordination, Cervell integrates all three within a single processing entity. Each element operates within a shared, coherent memory model, meaning data flows seamlessly between control logic, vector processing, and matrix operations without needing DMA transfers or synchronization barriers.

This integration enables Cervell to execute a full range of AI tasks without falling back to an external CPU. Tasks that traditionally caused performance bottlenecks—such as control flow in transformer models or non-linear functions in recommendation engines—are now handled within the same core. With support for up to 256 TOPS in its highest configuration, Cervell achieves datacenter-level inference performance while remaining flexible enough for low-power edge deployments.

Cervell’s Market Impact

By removing the artificial boundaries between compute types, Cervell delivers a simplified software stack and more predictable performance for AI developers. Its design challenges the status quo of traditional NPUs, which often rely on closed, fixed-function pipelines and suffer from limited configurability. In contrast, Cervell empowers companies to tailor the architecture to their algorithms, allowing for truly differentiated solutions.

The Role of RISC-V in Enabling the Cervell Vision

None of this would be possible without the open RISC-V instruction set architecture. RISC-V allows developers and chip designers to deeply customize the ISA, add proprietary instructions, and maintain compatibility with open software ecosystems. In the case of Cervell, RISC-V serves not only as a technical enabler, but as a strategic differentiator.

Unlike proprietary ISAs that limit innovation to a vendor’s roadmap, RISC-V allows Cervell’s capabilities to evolve alongside customer needs. The openness of RISC-V means companies can build processors that match their business and technical requirements without being locked into closed ecosystems. This flexibility is crucial in AI, where workloads shift quickly and the ability to adapt can be a competitive advantage.

Cervell vs. Semidynamics’ Earlier All-in-One IP

While the All-in-One IP architecture laid the groundwork for what Cervell has become, the differences are profound. Previously, customers had to select and integrate CPU, vector, and tensor components on their own, often dealing with toolchain and memory integration complexity. Cervell consolidates these elements into a pre-integrated NPU that is ready to deploy and scale.

Cervell is a holistic product that includes a coherent memory subsystem, programmable logic across all compute tiers, and a software-ready interface supporting common AI frameworks. Furthermore, Cervell introduces performance scaling configurations, from C8 to C64, ensuring the same architecture can serve everything from ultra-low-power IoT devices to multi-rack datacenter inference systems.

Summary: The Future of Scalable AI Compute

Cervell brings programmability, performance, flexibility, and scalability to AI solutions. By building on RISC-V and eliminating the barriers between CPU, vector, and tensor processing, Semidynamics has delivered a unified architecture that scales elegantly across deployment tiers.

Cervell is a trademark of Semidynamics. Learn more at the Cervell product page.

Also Read:

Andes RISC-V CON in Silicon Valley Overview

Keysom and Chipflow discuss the Future of RISC-V in Automotive: Progress, Challenges, and What’s Next

Vision-Language Models (VLM) – the next big thing in AI?



Executive Interview with Mohan Iyer – Vice President and General Manager, Semiconductor Business Unit, Thermo Fisher Scientific
by Daniel Nenni on 05-24-2025 at 10:00 am


Mohan Iyer serves as the Vice President and General Manager of the Semiconductor Business Unit at Thermo Fisher Scientific, a global leader in providing reference metrology, defect characterization, and localization equipment. These advanced systems are essential for driving innovation, accelerating time to market, and optimizing manufacturing yields in the semiconductor industry. Mohan has over 27 years of experience in the semiconductor industry specializing in semiconductor equipment and process control.

Tell us about your company?

Thermo Fisher Scientific’s mission is to enable our customers to make the world healthier, cleaner and safer. Our life sciences solutions, specialty diagnostics, analytical instruments and laboratory, pharmaceutical and clinical research services help improve patient health, increase productivity in laboratories, develop and manufacture life-changing therapies, help solve complex analytical challenges and support cleaner environments.

Through our extensive capabilities and global reach, we empower our customers to accelerate breakthroughs in scientific discovery, human health, global sustainability and safety. Today, technological advancements are powering our modern world – from new materials for cleaner energy solutions to advanced semiconductors that make way for next-generation technologies like artificial intelligence and machine learning. The recent launch of the Thermo Scientific Vulcan™ Automated Lab, a solution that will drive a new era of process development and control in semiconductor manufacturing, supports the growing global need for analytical instruments and expertise within the semiconductor manufacturing industry.

What problems are you solving?

Thermo Fisher Scientific addresses key challenges in the semiconductor industry by providing solutions across R&D, yield management, and failure analysis. In R&D, our state-of-the-art atomic-scale imaging, elemental analysis, and characterization capabilities enable the development of high-performance materials and the precise structural innovations necessary for next-generation semiconductor devices. Furthermore, we offer precision circuit-editing technologies that accelerate the prototyping of new devices, significantly reducing mask costs and enabling faster time-to-market. In yield management, our metrology solutions support in-line metrology correlations, allowing semiconductor manufacturers to resolve yield excursions. In failure analysis, our defect analysis workflows provide rapid and accurate root cause analysis of manufacturing defects or customer returns.  Our electrostatic discharge test systems ensure compliance with industry standards, safeguarding the reliability and performance of semiconductor devices. These advanced solutions help optimize the semiconductor manufacturing process, driving innovation, reducing costs, and improving product quality across the industry.

What application areas are your strongest?

Our strongest application areas are centered on defect characterization and metrology in semiconductor manufacturing using failure analysis instruments, Scanning Electron Microscopy (SEM), Focused Ion Beam (FIB), and Transmission Electron Microscopy (TEM). We specialize in our ability to ‘see the unseen’. As industry defect tolerance has shifted from parts per million to parts per billion, it’s crucial to identify defects quickly and accurately. This is especially important with AI chips, where latent defects are an increasing concern. We excel at accurately localizing defects for failure analysis and can quickly characterize these defects using chemical analysis and other electron microscopy methods. Additionally, we help control development and manufacturing processes in the fab by providing precise metrology for structures buried beneath the wafer surface.

We have decades of experience in developing automated workflows that significantly improve our customers’ productivity. Ultimately, we deliver comprehensive, atomic-scale data solutions with the highest level of automation, leveraging our cutting-edge instruments to meet the demands of the semiconductor industry.

What keeps your customers up at night?

As consumer products require more powerful computing capabilities, defect rates in chip development naturally rise with new technologies. A subtle defect that was previously in the ‘noise’ is now causing issues, and they need to find those very subtle defects. A prime example is the rapid adoption of AI across various industries, which drives the need for even more sophisticated chips. A critical defect or process issue in the fab can severely delay time to market, which is a major concern for our customers. They need these issues identified and resolved quickly to maintain competitive advantage. Additionally, as advanced chips become more expensive to produce, managing operational costs becomes even more important. Our customers need automated workflows to reduce the burden on operators and streamline processes, helping to control costs while maintaining quality and speed. By addressing these challenges, we help them stay ahead in a rapidly evolving industry.

What does the competitive landscape look like and how do you differentiate?

The competitive landscape in electron microscopy is established with other vendors providing instruments in this space.  However, we differentiate by offering advanced solutions that are increasingly critical as device structures shrink and become more complex. Our exceptional ability to precisely localize defects, reveal them for analysis, and provide atomic-scale resolution imaging is essential for meeting the industry’s evolving needs. Furthermore, what sets us apart is our unique automated workflows, which streamline analysis throughout the entire semiconductor product development cycle. By leveraging our deep hardware expertise, software integration, and AI innovations, we help our customers boost productivity and reduce time to market. Additionally, we stand out by having a dedicated semiconductor market division, focused exclusively on providing instruments and workflow solutions tailored to the industry’s needs. With decades of experience in providing specialized service and applications support, we’ve built a strong reputation for being the go-to partner for the semiconductor market. This combination of cutting-edge technology, market focus, and customer support is what truly differentiates us in the competitive landscape.

What new features/technology are you working on?

We’re continuously working on developing higher-precision instruments to characterize even smaller structures. As chip devices become more complex, our goal is to ‘see the unseen’ by advancing both hardware and specialized software capabilities. For example, in defect localization applications, instead of using traditional optical imaging we’ve integrated a high-resolution SEM into the Meridian EX system, resulting in a remarkable improvement in probing precision. Additionally, as chip footprints shrink and devices are stacked vertically, it becomes challenging to reveal structures that are microns, or even hundreds of microns, below the wafer surface. This drives us to continue our innovations in precisely revealing defects and generating metrology data for these cutting-edge chips. Another exciting focus is bridging the lab-to-fab time-to-data gap. We’re expanding automation not just within our instruments but also by integrating our automation systems with those in the lab and fab. Our Vulcan Automated Lab is a great example. This will significantly streamline processes and reduce the time it takes to get actionable data, ultimately boosting efficiency and productivity in semiconductor manufacturing.

How do customers normally engage with your company?

Our customers typically engage with us through various channels to ensure we meet their needs. One key method is through technical review meetings, where we share roadmaps and ensure we’re developing targeted solutions. We also interact with customers through seminars, conferences, and webinars. Engaging with our customers is crucial to our shared success, and we host and attend numerous events throughout the year.

Additionally, as mentioned earlier, we pride ourselves on providing strong local sales, service, and application support. Our extensive network of semiconductor-focused engineers, located worldwide, works closely with customers to address their specific challenges. We also use our global demo labs, called NanoPorts, as a way for customers to engage with us in solving cutting-edge problems and evaluating new technologies and techniques.

These interactions not only allow us to showcase our latest innovations but also give us the opportunity to learn directly from customers, ensuring we understand their current and future challenges. This two-way communication is key to our ability to provide effective, forward-thinking solutions.

Also Read:

CEO Interview with Jason Lynch of Equal1

CEO Interview with Sébastien Dauvé of CEA-Leti

CEO Interview with Sudhanshu Misra of ChEmpower Corporation


CEO Interview with Jason Lynch of Equal1

by Daniel Nenni on 05-24-2025 at 6:00 am


For more than 20 years, Jason has delivered high-impact results in executive management. Prior to Equal1, he held several executive management positions at Analog Devices, Hittite and Farren Technology. In these roles, Jason built and managed large teams spanning sales, operations and finance, and was responsible for significant revenue targets. He also drove AI for predictive maintenance, making the Internet of Things (IoT) a reality by using cutting-edge sensors and artificial intelligence to save factory owners millions of dollars in unplanned downtime.

Tell us about your company.

Equal1 is a global leader in silicon-based quantum computing. With a world-class team spanning Ireland, the US, Canada, Romania, and the Netherlands, we are focused on building the future of scalable quantum computing through our patented Quantum System-on-Chip (QSoC) technology.

Our UnityQ architecture is the world’s first hybrid quantum-classical chip, integrating all components of a quantum computer onto a single piece of silicon. This breakthrough enables cost-effective, energy-efficient quantum computing in a compact, rack-mountable form, ready for deployment in existing data center infrastructure.

Our approach radically reduces the size and cost of quantum systems, preparing them to unlock real-world applications across sectors such as pharmaceuticals and finance.

Our first commercial product, the Bell-1 Quantum Server, is now shipping to customers worldwide.

What problems are you solving?

Quantum computing today faces two major barriers – scalability and deployability. Most current systems are bulky, power-hungry and confined to lab environments, making them impractical for real-world adoption.

At Equal1 we’re solving this by delivering a fully integrated, silicon-based quantum computing platform that is compact, energy-efficient and manufacturable using standard semiconductor processes. Our UnityQ QSoC architecture eliminates the need for complex infrastructure, enabling quantum computing to be deployed in existing data centers and HPC environments.

We’re also addressing the challenge of hybrid processing by embedding quantum and classical components on a single chip. This allows for real-time quantum-classical interaction, which is a critical capability for solving complex problems in data-sensitive sectors like pharmaceuticals and finance.
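To make that hybrid idea concrete, here is a minimal, self-contained sketch of the feedback loop that real-time quantum-classical interaction enables: a classical optimizer repeatedly adjusts the parameter of a tiny simulated one-qubit circuit based on measured expectation values. This is plain Python/NumPy for illustration only, not Equal1’s UnityQ API; the circuit, cost function, and parameter names are assumptions made for the example.

```python
import numpy as np

# Pauli-Z observable and the |0> state for a single simulated qubit
Z = np.array([[1, 0], [0, -1]], dtype=complex)
ket0 = np.array([1, 0], dtype=complex)

def ry(theta: float) -> np.ndarray:
    """Single-qubit Y-rotation gate (stand-in for the 'quantum' side)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def expectation_z(theta: float) -> float:
    """'Quantum' evaluation: prepare RY(theta)|0> and measure <Z>."""
    psi = ry(theta) @ ket0
    return float(np.real(psi.conj() @ Z @ psi))

# Classical optimization loop: the classical side proposes parameters,
# the quantum side returns measurement results, and the loop repeats.
theta, lr = 0.1, 0.4
for step in range(50):
    # Parameter-shift rule: the gradient comes from two circuit evaluations
    grad = 0.5 * (expectation_z(theta + np.pi / 2) - expectation_z(theta - np.pi / 2))
    theta -= lr * grad            # classical parameter update
    cost = expectation_z(theta)   # quantum evaluation of the new parameter

print(f"theta = {theta:.3f} (expect ~pi), <Z> = {cost:.3f} (expect ~-1)")
```

The point of the sketch is the tight loop itself: every iteration needs a round trip between classical control and quantum measurement, which is why co-locating both on one chip matters for latency.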

What application areas are your strongest?

Our technology is designed for application areas where quantum-classical hybrid computing has the potential to deliver significant future impact. While commercial applications are still in early stages industry-wide, our QSoC platform is especially well-positioned to support:

  • Pharmaceuticals: simulating molecular interactions and accelerating early-stage drug discovery.
  • Finance: exploring advanced optimisation and risk modelling techniques.
  • Data centre efficiency: enabling more energy-efficient computation and reducing the environmental footprint of large-scale infrastructure.

These sectors share common characteristics of data intensity, high computational complexity and sensitivity, making them prime areas for early hybrid quantum-classical exploration. Equal1’s compact, energy-efficient systems are built to integrate easily into existing infrastructure, enabling customers to prepare for and experiment with quantum workloads today.

What keeps your customers up at night?

Our customers – along with many closely watching the evolution of quantum computing – are excited about the promise of this transformative technology, but they are also uncertain about when and how that transformation will take place. There are differing views on when quantum computing will begin to deliver real-world value and how quantum systems will fit into existing operations. Many current quantum solutions are far from enterprise-ready. For organisations in highly regulated, data-sensitive industries like finance and pharmaceuticals, the stakes are even higher.

At Equal1, we’re taking a grounded, practical approach. We see quantum not as a replacement for classical computing, but as a complement – an accelerator within high-performance computing environments. And importantly, quantum is scaling faster than classical computing ever did, bringing us closer to practical applications than many once thought possible.

What does the competitive landscape look like and how do you differentiate?

The quantum computing space is full of exciting and diverse approaches, but what makes Equal1 stand out is our focus on silicon-based quantum technology. We’re proud to be the first company to launch a silicon-based quantum server, Bell-1 – a compact, rack-mounted system designed for deployment in real-world data centers.

We’ve also recently achieved a major milestone by validating a commercial CMOS process for quantum devices. This proves that quantum computing can be built using the same mature, scalable technology behind classical semiconductors, paving the way for what we call Quantum 2.0: practical, integrated and ready to scale. While others are still working with complex, custom platforms, we’re focused on delivering quantum solutions that fit into today’s data centers and tomorrow’s high-performance computing infrastructure.

What new features/technology are you working on?

We’re focused on pushing the boundaries of what’s possible with our silicon-based architecture. Our UnityQ Quantum System-on-Chip (QSoC) processor roadmap will deliver millions of physical qubits, uniting quantum and classical components on a single chip using commercial semiconductor manufacturing processes.

To bring these next-generation capabilities to life, we’re working closely with industry and research partners who share our vision for practical, scalable quantum computing. These collaborations are critical in helping us accelerate development and ensure our technology addresses real-world needs from day one. We’re very excited for what’s to come for Equal1.

Visit our website at equal1.com.

Also Read:

CEO Interview with Sébastien Dauvé of CEA-Leti

CEO Interview with Sudhanshu Misra of ChEmpower Corporation

CEO Interview with Thar Casey of AmberSemi