CEO Interview with Dave Kelf, CEO of Breker Verification Systems
by Daniel Nenni on 05-08-2026 at 6:00 am

In the functional verification space, Breker Verification Systems stands out for its deep, long-standing expertise and its ability to solve many seemingly intractable complexity challenges, especially at the system level.

I recently talked with Dave Kelf, Breker’s CEO, who has plenty of good news to share about Breker’s growth and new product development.

What’s new with Breker?

In 2025, we saw 35% growth over 2024. That matched the 35% growth from 2023 to 2024. Our growth in 2026 should be even greater as demand for our functional verification solutions has accelerated.

Also in 2025, we added resources when we engaged an engineering and support facility in Bangladesh.

And much like previous years, we are deep into new product development with work in the AI space that includes verifying AI/GPU processors and leveraging AI in our flows. We inked some interesting partnerships that will result in some exciting new advances.

What was the most exciting high point for Breker recently?

I can point to three areas that were, and continue to be, high points for us. The first is our work in the RISC-V space. We signed several strategic partnerships, took a larger leadership role in the RISC-V International Certification program, a need that is gaining importance and urgency, and, of course, made enhancements to our RISC-V SystemVIP products.

Breker has made considerable advancements in both Arm platform verification and AI hardware GPUs. We are offering an Arm Neoverse CSS SoCReady SystemVIP as part of our product portfolio and signed a fascinating new partnership in the AI hardware GPU area.

Finally, our customer base is growing. From all accounts, Breker is now being used on more than half of all RISC-V processor core developments globally. We added several top-10 semiconductor companies to our portfolio, plus two more from the Magnificent 7. Overall, our customer base has doubled in size, and revenue crossed a major milestone for Breker.

What do you think the biggest growth area for 2026 will be, and why? How is Breker’s work addressing this growth?

Without question, it is test content for system verification, because there is an unserved market in test content generation for these large devices, especially among users of simulation and hardware-assisted verification platforms. Currently, verification engineers spend far too much time and too many resources creating homebrew solutions for system-level scenarios. By leveraging synthesis, we are able to find troublesome corner cases that are not obvious when composing manual test cases or running real workloads.

Are you incorporating AI into your products?

Yes. Breker has used AI planning algorithms for years in our synthesis technology.

Breker’s synthesis technology provides a backend for AI verification tools that completes their basic flows while adding tribal knowledge that is not readily apparent.

At this year’s DVCon US, we launched a partnership with Moores Lab AI to create the first AI-driven SoC verification flow, integrating our Trek Test Suite Synthesis with Moores Lab’s agentic AI technology. It combines our experience in test generation for complex system design scenarios with Moores Lab’s agentic AI VerifAgent product to enable automated multicore, multitool, C or TLM test generation for complex SoC scenarios from manually composed specifications. The flow uses agentic AI to read a specification and generate appropriate scenario models for test synthesis, which in turn produce combined C and SystemVerilog tests, targeting high-coverage SoC scenarios, that can be run on simulation and emulation platforms.
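To make that output concrete, here is a minimal sketch of the kind of multicore C test such a flow might emit for a shared-memory scenario. It is illustrative only, not actual Trek output: POSIX threads stand in for two cores so the sketch compiles and runs on a host, whereas real generated tests would typically run bare-metal on the SoC under simulation or emulation.

```c
/* Hypothetical sketch of a synthesized two-core scenario: core 0
 * produces a buffer, core 1 consumes and checks it. POSIX threads
 * stand in for cores so this compiles on a host (gcc -pthread). */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define BUF_WORDS 64

static volatile uint32_t shared_buf[BUF_WORDS];
static volatile int ready;            /* handshake flag between "cores" */

static void *core0_producer(void *arg) {
    (void)arg;
    for (int i = 0; i < BUF_WORDS; i++)
        shared_buf[i] = 0xA5A50000u + (uint32_t)i;  /* known pattern */
    __sync_synchronize();             /* barrier: publish data before flag */
    ready = 1;
    return NULL;
}

static void *core1_consumer(void *arg) {
    (void)arg;
    while (!ready)                    /* spin until producer signals */
        ;
    __sync_synchronize();
    for (int i = 0; i < BUF_WORDS; i++)
        if (shared_buf[i] != 0xA5A50000u + (uint32_t)i) {
            fprintf(stderr, "FAIL: word %d corrupted\n", i);
            exit(1);
        }
    return NULL;
}

int main(void) {
    pthread_t c0, c1;
    pthread_create(&c0, NULL, core0_producer, NULL);
    pthread_create(&c1, NULL, core1_consumer, NULL);
    pthread_join(c0, NULL);
    pthread_join(c1, NULL);
    puts("PASS: producer/consumer coherency scenario");
    return 0;
}
```

A synthesis engine would compose many such scenario legs concurrently across cores and randomize their interleaving, which is how it reaches the corner cases a hand-written directed test tends to miss.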

Is AI affecting the way you develop your products?

Yes. We use AI to assist backend test synthesis for AI SoC verification. AI spec interpreters enable a greater understanding of complex specifications such as RISC-V’s ISA. Our tools leverage AI to generate various code segments and provide verification “tribal knowledge” to AI-focused verification groups.

How do customers normally engage with your company?

We can be contacted through the Breker Verification Systems website, our LinkedIn page or via X. Or, send email to info@brekersystems.com. We will respond quickly to requests.

Also Read:

Breker Hosts an Energetic Panel on Spec-Driven Verification

Verifying RISC-V Platforms for Space

A Principled AI Path to Spec-Driven Verification


RISC-V: From Niche Architecture to Strategic Foundation
by Kalar Rajendiran on 05-07-2026 at 10:00 am

The RISC-V Inflection Point

At the recent RISC-V Now by Andes conference, Aion Silicon’s presentation made one thing clear: RISC-V is no longer an emerging alternative; it is rapidly becoming foundational to modern silicon design. This conviction is not theoretical, says Oliver Jones, CEO of Aion Silicon, who gave the talk. It is grounded in Aion Silicon’s direct experience delivering advanced-node silicon, spanning dozens of designs at 7nm and below, and in deep engagements across the AI, networking, and automotive sectors. Across these engagements, a consistent pattern has emerged: RISC-V is increasingly selected where flexibility, control, and differentiation matter most.

The semiconductor industry is undergoing a structural shift, where the ability to tailor compute to specific workloads is becoming as critical as raw performance. RISC-V sits squarely at the center of this transformation.

The Real Driver: AI’s Insatiable Demand for Custom Compute

A central theme of the presentation is that artificial intelligence (AI) is not just another workload but a forcing function reshaping silicon design. AI workloads demand specialization, efficiency, and adaptability, all of which align naturally with RISC-V’s extensible instruction set architecture.

In practice, this is already visible in production silicon. RISC-V cores are increasingly embedded inside AI accelerators, including hyperscale deployments, where they handle control, orchestration and specialized processing tasks. Rather than displacing incumbent CPU architectures, RISC-V is occupying critical roles that benefit from tight optimization and domain-specific behavior. The alignment is powerful: AI creates demand for customization, and RISC-V provides the architectural mechanism to deliver it.

Customization as a First-Class Design Principle

One of the most compelling arguments presented is that customization is the defining advantage of RISC-V, not cost or openness alone. The ability to extend the ISA with domain-specific instructions allows silicon designers to tailor compute precisely to workloads, whether in AI, networking, or embedded systems.

This represents a meaningful shift in design philosophy. Instead of forcing software to adapt to fixed hardware constraints, organizations can now shape hardware around their most important workloads. The resulting gains in performance-per-watt and system efficiency are increasingly difficult to achieve with general-purpose architectures alone.
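As a hedged illustration of what ISA extension means in practice, the sketch below wires a hypothetical custom instruction into C code through the GNU assembler's `.insn` directive, which can emit encodings in the RISC-V custom-0 opcode space before full compiler support exists. The instruction, its encoding, and its semantics are invented for this example.

```c
/* Hypothetical custom RISC-V instruction exposed to C code.
 * The encoding (custom-0 opcode, 0x0B) and semantics are invented
 * for illustration; a real extension would be specified, verified,
 * and supported through the compiler and libraries.
 * Build with a RISC-V toolchain, e.g. riscv64-unknown-elf-gcc -O2. */
#include <stdint.h>

/* Pretend the hardware implements a single-cycle multiply step
   used in an AI dot-product kernel. */
static inline int32_t mul_custom(int32_t a, int32_t b) {
    int32_t r;
    /* .insn r <opcode>, <funct3>, <funct7>, rd, rs1, rs2 */
    asm volatile(".insn r 0x0B, 0x0, 0x00, %0, %1, %2"
                 : "=r"(r)
                 : "r"(a), "r"(b));
    return r;
}

int32_t dot4(const int32_t x[4], const int32_t w[4]) {
    int32_t acc = 0;
    for (int i = 0; i < 4; i++)
        acc += mul_custom(x[i], w[i]);  /* custom op in the hot loop */
    return acc;
}
```

Note what the sketch leaves out: the compiler cannot schedule, vectorize, or even name this operation without intrinsics and toolchain support, which is exactly the ripple effect described next.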

That said, this flexibility introduces complexity. Custom instructions ripple through compilers, verification flows, and system integration. The presentation emphasizes that successful adoption requires discipline, with customization that is targeted, measured, and supported across the full stack.

The Rise of Heterogeneous SoCs

Another key insight is that RISC-V’s momentum is closely tied to the rise of heterogeneous system-on-chip architectures. Rather than replacing Arm or x86 CPUs, RISC-V is being deployed alongside them as a function-specific compute element.

This coexistence model reflects a broader industry evolution. Modern SoCs are no longer built around a single dominant processor; instead, they integrate multiple compute elements, each optimized for a specific role. Within this architecture, RISC-V frequently serves as the control and orchestration layer or as a targeted offload engine.

The implication is clear: RISC-V’s strength lies not in displacing existing architectures, but in complementing them—filling the growing need for specialized, adaptable compute within increasingly complex systems.

From Edge Volume to Datacenter Value

RISC-V’s adoption story began in microcontrollers and embedded systems, where its open and customizable nature provided immediate advantages. What is changing now is the direction and scale of growth. The architecture is expanding upward into edge AI, networking, and datacenter silicon.

This progression mirrors historical industry patterns. Volume at the edge drives ecosystem maturity, which in turn enables deployment in higher-value, performance-sensitive domains. As AI inference shifts from centralized cloud environments to distributed edge devices, the number of deployment opportunities grows exponentially.

The result is a virtuous cycle: broader adoption strengthens the ecosystem, which lowers barriers to entry and accelerates further adoption.

From Architecture to Silicon: The Importance of System-Level Thinking

Beyond market trends, the presentation highlights how successful RISC-V designs are executed. Aion Silicon emphasizes system-level modeling, where compute, memory, and I/O interactions are analyzed early using cycle-accurate frameworks before committing to RTL.

This approach reflects a critical reality: in modern SoCs, performance bottlenecks are rarely confined to the CPU core. They emerge in memory hierarchies, interconnect fabrics, and data movement patterns. Early modeling enables teams to identify these constraints and optimize accordingly, while changes are still cost-effective.
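As a toy illustration of why such modeling pays off, the sketch below (invented here, not Aion Silicon's framework) models two bus masters contending for a single memory port, cycle by cycle. Even this crude model exposes stalls and arbitration artifacts that core-level analysis alone would never reveal.

```c
/* Toy cycle-accurate model: two bus masters sharing one memory port.
 * Invented for illustration -- not Aion Silicon's modeling framework.
 * Each master issues a request every ISSUE_EVERY cycles; the port
 * serves one request per cycle with rotating priority, so whoever
 * loses arbitration accumulates stall cycles. */
#include <stdio.h>

#define CYCLES      10000
#define ISSUE_EVERY 2     /* both masters want the port every 2 cycles */

int main(void) {
    long stalls[2]  = {0, 0};
    int  pending[2] = {0, 0};

    for (long cyc = 0; cyc < CYCLES; cyc++) {
        /* both masters issue new requests on the same cycles */
        if (cyc % ISSUE_EVERY == 0) { pending[0]++; pending[1]++; }

        /* one shared port: serve a single request, priority rotates */
        int first = (int)(cyc % 2);
        if (pending[first])          pending[first]--;
        else if (pending[1 - first]) pending[1 - first]--;

        for (int m = 0; m < 2; m++)
            if (pending[m]) stalls[m]++;   /* still waiting = stalled */
    }
    for (int m = 0; m < 2; m++)
        printf("master %d stalled %ld of %d cycles\n", m, stalls[m], CYCLES);
    return 0;
}
```

Even with aggregate demand exactly matching port bandwidth, one master ends up stalled half the time purely because of request alignment and arbitration order, the kind of effect that lives in interconnects and memory hierarchies, not in the CPU core.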

The broader lesson is that architectural success depends on holistic system design, not just individual component optimization.

Where RISC-V Works and Where It Needs to Mature More

The presentation offers a pragmatic view of where RISC-V delivers immediate value and where challenges remain. It is particularly effective in control and orchestration roles within heterogeneous SoCs, and in scenarios where targeted vector or custom instructions can accelerate well-understood workloads.

At the same time, areas such as toolchain maturity, verification complexity, and system integration require careful management. These challenges are not unique to RISC-V, but they are amplified by its flexibility.

What emerges is a clear pattern: success with RISC-V is less about adopting the architecture itself and more about how thoughtfully it is integrated into the broader system.

Ecosystem and Collaboration: The Multiplier Effect

The role of ecosystem is another critical theme. The collaboration between Aion Silicon and Andes underscores how partnerships across IP providers, design services, and tooling vendors are essential to delivering production-ready solutions.

As the ecosystem expands, it reduces friction for adoption and increases confidence among customers. This network effect is a key driver of momentum, enabling RISC-V to scale beyond early adopters into mainstream deployment.

Summary

The overarching message is that RISC-V is evolving into the default architecture for control, customization, and specialization within heterogeneous systems. Its rise is driven by the convergence of AI workloads, the shift toward multi-architecture SoCs, and the growing importance of architectural flexibility.

For decision-makers, the implication is clear: RISC-V is becoming a strategic component of future silicon roadmaps. In that sense, RISC-V is doing more than spanning datacenter to edge. It is redefining how compute is designed, deployed, and optimized.

Learn more about Aion Silicon.

Also Read:

NoC Matters: Designing the Backbone of Next-Gen AI SoCs

The 10 Practical Steps to Model and Design a Complex SoC: Insights from Aion Silicon

Live Webinar: Considerations When Architecting Your Next SoC: NoC with Arteris and Aion Silicon


Bringing mathematical rigour to the world of hardware – a journey into Formal Verification
by Daniel Nenni on 05-07-2026 at 6:00 am

This interview presents the first steps of Robert Simpson (R.S.), a Maths graduate who found an unexpected but natural home in Formal Verification (FV) at Axiomise. Drawn by a desire to apply rigorous logic to real-world problems, he shares how abstract mathematical thinking translates into ensuring hardware correctness at the silicon level. From the excitement of spotting elusive bugs missed by traditional methods to collaborating with experts on complex designs, Robert reflects on the mindset, surprises, and rewards of working in a field that blends philosophy, engineering, and detective work, revealing how precise reasoning can have a tangible impact on the technology that shapes our world.

What initially drew you from the world of Maths into the field of FV?

R.S.: In the final few years of my degree, I thought a lot about what I wanted to do going forward with my career. The focus of my final year was understanding how proof and logic could be applied to solve a huge range of problems, and I knew that I wanted to continue to problem-solve in a mathematically rigorous way. When I found Axiomise at a careers fair, I was immediately intrigued by the need for formal proof in silicon verification and wanted to learn more.

How did you fare stepping into the job market with that fresh degree in hand?

R.S.: Starting work in the industry after finishing my degree was very exciting. After lots of studying, it was refreshing to try something new and see how the skills which I had learnt could be applied to real-world problems.

What’s one habit from your Maths degree that turned out to be unexpectedly valuable on the job?

R.S.: I think the most useful habit I learnt from my degree is to always think carefully through the consequences of any assumption or requirement. In both mathematics and FV, this is important because the precise definition of the problem space is critical to the answer to the problem, or to whether a bug can be found. In technical research, the importance of methodical accuracy is clear because of the need for critical thinking when reading papers and reaching sound conclusions; however, I was quite surprised by how useful this turned out to be for reading design specifications and constructing formal checks. The experience I gained with this during my degree certainly helped me to pick up formal quicker and make more useful contributions.

What surprised you most about applying abstract mathematical thinking to something as tangible—and consequential—as hardware correctness?

R.S.: Whilst working at Axiomise, I have been impressed by how rigorous the field of hardware correctness can get at times. Before starting, I would have guessed that in the important physical industry of hardware, formal mathematical analysis would be limited or bound by approximations, but in fact formal verification has a long history of solid logical foundations. I have really enjoyed this aspect of the hardware industry, as it combines the logical reasoning which really interests me, and knowledge of different designs to produce effective and logically sound results.

What is the most fulfilling part of working at Axiomise?

R.S.: For me, the most fulfilling part of working at Axiomise has been the ability to work alongside experts in formal verification using cutting edge techniques in order to find intricate bugs, get correctness proofs for massively complex designs or research new techniques to further develop the field of FV. Being able to bounce ideas back and forth and have innovative discussions with people with so much knowledge and experience is really rewarding, and this collaborative environment makes solving problems that would otherwise not be resolved particularly enjoyable. It’s great to be able to learn so much and build upon existing methods within an amazing team.

What’s a moment at the company where you realized: “This isn’t just theory anymore—this actually changes how the world works”?

R.S.: I think the first moment I realized the importance of formal hardware verification was when I first saw a bug that had been missed by all non-formal verification. This made it clear to me that the theoretical advantages of formal actually translate into practical results when applied to real designs.

How does a day in the life of an Axiomiser look? How do you find the collaboration with engineers and researchers?

R.S.: What happens during a day at Axiomise varies a lot depending on the state of the current project. The first thing I do is typically check whether my checks made any new progress overnight, raising new potential failures which need to be debugged. I will look carefully into the flagged situation and use my knowledge of the design, or work with a colleague, to determine if there is an actual error in the design. Once we work out the cause, the next step could be a meeting with the customer to raise the issue and try to find a fix, or returning to tweak the tests to find a different case. Alongside this, I might also be running experiments with the researchers in the R&D team, testing different methods to find new ways to uncover bugs and correctness proofs for more complex designs. Working with different teams is always exciting, as it often means we are thinking about a tricky problem, and there are always many different clever ideas and potential solutions to get stuck into.

What mindset is required to do FV?

R.S.: I think a key aspect of doing formal verification is the ability to carefully think through a problem. This comes up over and over: when ensuring that all cases of a requirement are going to be considered, understanding what possible edge cases could occur, or when thinking through safe constraints to be put on a system. This ability to be methodical and understand all possibilities is important in many areas of research and industry, but is an invaluable part of doing FV.

After eight months at Axiomise, is FV closer to philosophy, engineering, or detective work in disguise?

R.S.: I think after working at Axiomise for several months, I would say that FV is really a mix of all three. There is an aspect of detective work when trying to carefully figure out why certain behaviours are occurring and looking for the root cause, but also one of engineering the best and most accurate checks to ensure that the specification is met. I would say there is even a bit of philosophy involved when trying to understand the proof construction methods and develop the best ways to utilise their advantages to be able to create new techniques.

What makes a new joiner succeed at Axiomise? What piece of advice would you give young people interested in this field?

R.S.: I think what makes people successful at Axiomise, or in formal verification in general, is always having an interest in learning more about the really interesting elements of hardware design. Computers are so complex now, and there are so many cool ideas that go into making them as efficient as possible, that – if you are interested – there are always more interesting parts to learn about. Knowing everything is impossible, but learning as much as possible helps enormously when finding out about something new and enables a deeper understanding sooner.

If you’re keen to delve further into formal verification, Axiomise’s website is an excellent place to start—offering both resources and career opportunities. The company’s semi-annual Graduate Program provides a welcoming pathway for aspiring candidates, with applications open directly via the website or Handshake.

Robert Simpson is a Formal Verification Engineer at Axiomise, bringing his problem-solving and logical reasoning to both customer projects and innovation research aimed at improving the ability of formal methods to reach conclusive proofs. He works on the verification of digital hardware designs, using formal methods to find design bugs and produce proofs of correctness. Before Axiomise, Robert studied Mathematics at Churchill College, Cambridge, focusing on foundations, quantum computation, and topology. Whilst at Cambridge, Robert enjoyed time with societies such as board games, Ultimate Frisbee, and bridge. He also took part in a summer research project through the mathematics department, investigating whether a particular technique could be applied to a similar but subtly different problem. Robert learnt about Axiomise at a university careers fair, and its work applying mathematical approaches to the verification of cutting-edge hardware encouraged him to learn more and eventually apply for a position, where he now puts the skills and knowledge he built during his degree to effective use in a field important to our digital world.

Also Read:

Axiomise Introduces nocProve to Transform NoC Design Verification

Akeana Partners with Axiomise for Formal Verification of Its Super-Scalar RISC-V Cores

IP Surgery and the Redundant Logic Problem


The Great Divide: A Tale of Three Hardware Emulation Architectures
by Lauro Rizzatti on 05-06-2026 at 10:00 am

Hardware emulation arose out of the needs of the 1980s. By the mid-1980s, semiconductor designs had outgrown the practical limits of gate-level simulation, which delivered accuracy but at a glacial pace; silicon prototypes ran at real speed but arrived far too late. The industry needed a new instrument: a verification engine capable of executing real hardware models at meaningful speed while preserving the visibility and control required for thorough verification. Hardware emulation was born to fill that gap.

Who would have thought at that time that Moore’s law of complexity growth would be so dramatically outpaced by AI model complexity around the 2020s? Well, hindsight is 20/20, as we know. Let’s take a look at what actually happened with semiconductor demand:

Figure 1: Semiconductor Demand Acceleration (Source: Synopsys)

Through four generations of major electronic-systems eras (PC, Internet, Mobile, IoT & Cloud), one can argue that the evolution of emulation was driven by different paradigms fighting for market share, each offering a different balance of technology advantages to serve very broad market requirements. Today the market is primarily investing in AI chips, which must execute AI models of explosively growing complexity. It is no surprise that emulation architectures with a performance advantage for executing these long AI workloads are today’s winners in the market.

Let’s trace the journey of how these emulation technologies evolved, and how the commercial FPGA-based architecture came out on top for today’s AI system verification needs.

The first generation of emulators relied on vast arrays of commercial FPGAs, a radical leap forward at the time. These systems enabled pre-silicon validation of complex chips that would otherwise have required years of simulation alone. For nearly a decade, progress followed a predictable trajectory: each new generation of FPGA devices delivered greater capacity, higher performance, and the ability to map increasingly ambitious designs. Scale improved dramatically, but the underlying philosophy remained largely unchanged.

As these platforms grew, however, their limitations became impossible to ignore. Increasing logic capacity did not resolve the architectural constraints embedded in their foundations. Early FPGA-based systems carried what many engineers would later describe as their original “sins.”

The sheer number of FPGAs required to keep pace with exploding design sizes drove setup times into weeks or even months. Compilation cycles stretched into days, often delaying DUT readiness beyond project schedules and making iterative development painfully slow. Design visibility was equally constrained: internal observability depended on compiling probes into the fabric, consuming valuable resources, increasing routing congestion, and turning debug into a laborious exercise. Execution models were rigid, centered entirely on in-circuit emulation (ICE), limiting flexibility for interactive debugging. And the total cost of ownership—purchase, operation, and maintenance—placed these systems far beyond the reach of most engineering teams.

As a result, hardware emulation remained confined to the most critical verification challenges, typically late in the design cycle and within only the most advanced organizations. For many teams, it was not a day-to-day engineering platform but a scarce, high-value resource—powerful, indispensable, and perpetually in short supply.

The Seeds of a Great Divide Were Beginning to Grow

By the mid-1990s, the commercial landscape appeared stable on the surface, dominated by two primary players: Quickturn Design Systems and IKOS Systems. Yet beneath that stability, the field was undergoing a profound transformation. Designs were scaling rapidly, software stacks were growing alongside hardware complexity, and verification demands were shifting from block-level correctness to full-system behavior. The question was no longer whether emulation could scale accordingly, but how.

Out of these pressures emerged a fundamental divergence in architectural thinking. Vendors and engineering teams began reimagining what an emulator should be: not just a larger FPGA array, but a purpose-built verification instrument optimized for visibility, controllability, and system-level performance. This rethinking gave rise to three distinct hardware emulation architectures—each grounded in a different philosophy, each making different trade-offs in speed, scalability, and usability, and each shaping the trajectory of pre-silicon verification for decades to come.

The three architectural approaches that emerged came to be known as processor-based emulation, custom-FPGA-based emulation, and commercial-FPGA-based emulation. Each represented a distinct attempt to overcome the growing limitations of simulation while enabling hardware designs to be validated at meaningful speed and scale. The essential characteristics of these technologies can be understood by examining their origins, evolution, and practical trade-offs.

Processor-Based Emulation
IBM and the Emergence of the Processor-Style Approach

In the early 1980s, IBM began exploring hardware acceleration techniques to improve design verification efficiency through projects such as the Yorktown Simulation Engine (YSE) and the Engineering Verification Engine (EVE). These systems functioned as simulation accelerators—special-purpose computing platforms designed to execute hardware descriptions expressed in software languages more quickly than conventional simulators. While they delivered measurable speed improvements, they still fell short of the performance required to apply real-world stimulus to the design under test (DUT).

By the mid-1990s, IBM had refined a new architectural direction centered on arrays of simple Boolean processors. These processors operated on design data structures stored in large, shared memories and were coordinated by sophisticated scheduling mechanisms. The approach proved adaptable to full emulation workloads, offering a scalable alternative to traditional simulation. Still, IBM did not capitalize on the technology.

Quickturn and the Commercialization of Processor-Based Emulation

After nearly a decade of experience with commercial FPGA-based emulation systems, Quickturn concluded that, despite important advances, these emulators had structural limitations that proved difficult to overcome. Achieving sufficient design capacity required interconnecting hundreds of FPGAs across multiple boards, creating significant logistical and engineering challenges. Partitioning and routing designs across this distributed fabric often needed months of preparation to avoid congestion and ensure deterministic behavior. Debug visibility had to be explicitly compiled into the design, competing with routing resources and slowing development cycles. Performance also failed to scale linearly with design size, with execution speed declining as workloads grew more complex.

In search of a solution, Quickturn evaluated a custom-FPGA architecture developed by a French startup named Meta Systems. At the same time, Mentor Graphics, after abandoning earlier experimentation with emulation and selling all assets to Quickturn, pursued the same path. The resulting competition escalated into legal disputes over intellectual property, ultimately culminating in Mentor’s acquisition of Meta Systems.

Quickturn, already familiar with IBM’s processor-based work, moved decisively in that direction. Rather than commercializing the technology directly, IBM entered into an exclusive OEM agreement with Quickturn, enabling the latter to incorporate the architecture into a new generation of emulation systems.

IBM’s processor-centric architecture offered a compelling alternative. It addressed three of the most persistent constraints associated with FPGA-based systems: lengthy setup and compilation cycles, restricted debugging visibility, and performance degradation at large scale. One drawback—less visible at the time—was higher power consumption compared with FPGA-based solutions of equivalent capacity.

In 1997 Quickturn acquired the IBM technology and soon after introduced the Concurrent Broadcast Array Logic Technology (CoBALT) emulator, the first major commercial platform built on a processor-style architecture. The product achieved rapid market acceptance.

The competitive landscape continued to shift. Cadence acquired Quickturn in 1999, and the ongoing litigation between Mentor and Quickturn persisted until around 2002, when the disputes were resolved and key emulation technologies were consolidated within Cadence’s portfolio.

Cadence and the Scaling of Processor-Based Emulation

Following the acquisition, Cadence phased out Quickturn’s FPGA-based product lines and committed fully to the processor-based paradigm. This decision laid the foundation for the long evolution of the Palladium family of emulators, which became the company’s flagship platform.

Across successive generations introduced from the early 2000s onward, Palladium preserved its fundamental architectural principle: large arrays of simple processors working in concert to emulate hardware behavior at scale. With each iteration, design capacity expanded, execution performance improved, debug capabilities became more comprehensive, and compilation workflows grew faster and increasingly automated.

Two characteristics consistently defined the appeal of the platform. First, compilation times were significantly shorter than those associated with FPGA-based approaches, enabling faster turnaround during development. Second, engineers benefited from full design visibility at runtime without requiring special compilation steps, a powerful advantage for debugging and iterative verification.

Palladium also proved particularly strong in in-circuit emulation. A broad ecosystem of speed bridges enabled direct interaction with real hardware interfaces, allowing software and hardware to be validated together in realistic operating conditions.

These strengths were accompanied by structural trade-offs. Processor-based systems required substantial physical infrastructure and typically consumed more power than FPGA-based emulators of comparable capacity. Customers had to invest in costly water-cooling infrastructure. Scaling to multi-billion-gate designs often demanded large installations composed of numerous cabinets. In transaction-based acceleration scenarios, processor-based platforms also tended to operate at lower execution speeds than competing architectures optimized specifically for that use case.

Despite these constraints, processor-based emulation established itself as a foundational technology in hardware verification, offering a unique balance of scalability, visibility, and productivity that continues to shape modern emulation platforms.

Custom-FPGA-Based Emulation
Custom FPGAs: A Parallel Innovation Path

While IBM was advancing processor-based emulation in the United States, a parallel and equally important line of innovation was taking shape in Europe.

In France, Meta Systems began developing a class of programmable silicon inspired by field-programmable gate arrays but engineered specifically for emulation workloads. These devices—often referred to as custom FPGAs—were not intended for general logic prototyping or ASIC design. They were purpose-built as the computational fabric of an emulator.

Unlike commercial FPGAs, whose architectures must accommodate a broad range of applications, Meta Systems’ programmable devices were optimized for the specific requirements of hardware verification. Their architecture combined configurable logic elements with a dense, deterministic interconnect matrix tailored for predictable timing. Embedded multi-port memories enabled efficient storage of design state and stimulus data, while high-bandwidth I/O channels supported connection to external systems and software environments. The devices also incorporated built-in debug engines, including memory-based probing capabilities, and dedicated clock-generation circuitry to maintain synchronization across large, mapped designs.

This specialization yielded several tangible benefits. Compilation and setup times were dramatically reduced because the routing and configuration problem was constrained and optimized for emulation rather than general synthesis. Designers gained full visibility into the design during execution, often without the need for lengthy recompilation cycles to insert probes. And as designs grew in complexity, performance scaled more predictably because the architecture had been engineered around the structural characteristics of emulated hardware rather than generic programmability. Compared with processor-based emulators, the custom-FPGA approach delivered similar functional capabilities with lower power consumption and a more hardware-centric execution model.

Mentor Graphics and the Commercialization of the “Emulator-on-Chip”

The promise of custom-FPGA-based emulation attracted industry attention. In 1996, after outmaneuvering Quickturn, Mentor Graphics acquired Meta Systems and introduced SimExpress, the first commercial emulator built around custom programmable silicon.

SimExpress was, in many respects, a proof of concept rather than a fully competitive platform. Housed in a compact chassis roughly the size of a small wine cellar, it could map designs of fewer than 100,000 gates at a time when leading ASICs were already exceeding the million-gate threshold. Yet its architectural direction was significant. Setup was simpler, compilation times were reduced from hours to minutes, and runtime visibility into the design was far superior to what many FPGA-based systems offered. The platform demonstrated how emulation-optimized silicon, paired with advanced verification software, could form a balanced environment for pre-silicon validation.

Mentor expanded on this concept with the introduction of Celaro in 1999, a substantially larger emulator with a nominal capacity of approximately five million gates. By clustering multiple systems, engineers could scale total capacity beyond twenty million gates—an important milestone as system-on-chip (SoC) designs grew rapidly in size and complexity.

The custom-FPGA approach, however, came with trade-offs. Because these devices did not match the raw logic density of the largest commercial FPGAs, more chips were required to implement large designs. Larger arrays meant longer interconnect paths and increased signal propagation delays. As a result, execution speeds often fell below one megahertz for large configurations—adequate for verification workflows, but slower than some competing FPGA-based emulators at equivalent capacity.

IKOS, Virtual Wires, and Transaction-Based Verification

A pivotal shift occurred in 2002 when Mentor Graphics acquired IKOS Systems. This acquisition brought two complementary technologies that would shape the next generation of emulation.

The first was the Virtual Wire interconnect methodology, originally developed by Virtual Computer Corporation (VCC) and later incorporated into IKOS platforms. Virtual Wire simplified the daunting task of routing large designs across many programmable devices by abstracting physical connectivity into a software-controlled interconnect layer. Engineers could reassign signal paths without physically rewiring boards, dramatically accelerating bring-up and iteration.

The second was IKOS’s work in transaction-based verification. Rather than exchanging low-level signal toggles between the testbench and the hardware model, the methodology elevated communication to higher-level transactions—data packets, protocol events, and software interactions. This approach significantly improved verification efficiency and enabled tighter coupling between hardware and software validation.
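To see why transactions help, consider this hypothetical host-side sketch. The emu_* calls are invented stand-ins for an emulator co-model API (real flows use standards such as SCE-MI or vendor transactor layers); here they simply count host-emulator exchanges, which is the quantity that throttles acceleration.

```c
/* Hypothetical sketch contrasting signal-level and transaction-level
 * stimulus. The emu_* calls are invented placeholders for an emulator
 * co-model API; they just count host<->emulator round trips. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static long exchanges;  /* each call models one host<->emulator round trip */

static void emu_set_pin(const char *name, int value)
    { (void)name; (void)value; exchanges++; }
static void emu_send_transaction(const uint8_t *pkt, size_t len)
    { (void)pkt; (void)len; exchanges++; }

/* Signal-level: one exchange per pin per cycle. */
static void send_byte_signal_level(uint8_t b) {
    for (int bit = 7; bit >= 0; bit--)
        emu_set_pin("tx_d", (b >> bit) & 1);
}

int main(void) {
    uint8_t pkt[64];
    memset(pkt, 0xAB, sizeof pkt);

    exchanges = 0;
    for (size_t i = 0; i < sizeof pkt; i++) send_byte_signal_level(pkt[i]);
    printf("signal-level:      %ld exchanges\n", exchanges);   /* 512 */

    /* Transaction-level: one exchange per packet; a hardware transactor
       inside the emulator unrolls it into cycle-accurate pin activity. */
    exchanges = 0;
    emu_send_transaction(pkt, sizeof pkt);
    printf("transaction-level: %ld exchanges\n", exchanges);   /* 1 */
    return 0;
}
```

Running it shows 512 exchanges for a 64-byte packet driven pin by pin versus one exchange for the transaction, and it is those round trips, not raw emulator clock speed, that usually dominate acceleration throughput.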

Mentor integrated these innovations into the Veloce emulation family, first introduced in 2007 and positioned as a new generation of emulator-on-chip systems. The architecture combined custom programmable silicon, scalable interconnect, and advanced verification software into a unified hardware-assisted verification platform.

Differentiation Through Verification Methodology

Where Mentor ultimately distinguished itself was not only in hardware but in methodology. Building on IKOS’s foundations, the company introduced TestBench Xpress (TBX), widely regarded as one of the most effective implementations of transaction-level acceleration. TBX enabled software testbenches—typically written in SystemVerilog, C, or SystemC—to execute efficiently alongside emulated hardware by offloading transaction handling to the host environment.

Mentor extended this approach further with VirtuaLAB, a suite of application-specific verification environments tailored to industry protocols such as USB, Ethernet, and storage interfaces. These environments allowed teams to validate real-world workloads and software stacks earlier in the design cycle, bridging the gap between pre-silicon hardware verification and system-level validation.

Evolution of the Veloce Family

Over the following years, the Veloce platform progressed through multiple generations. Each iteration increased design capacity, improved execution performance, and enhanced analysis capabilities. New features supported low-power verification, power estimation, hybrid emulation with virtual platforms, and functional coverage analysis. The systems evolved from niche verification engines into central pillars of hardware-assisted verification strategies for large SoCs.

In 2017, Siemens acquired Mentor Graphics and incorporated the Veloce product line into its broader electronic design automation portfolio, today part of Siemens Digital Industries Software. Development continued, with the platform adapting to the demands of billion-gate designs, complex software stacks, and heterogeneous compute architectures.

Today, the latest generation of this lineage is the Veloce Strato CS, part of the Veloce CS hardware-assisted verification platform. It represents the culmination of decades of architectural evolution—from custom programmable silicon and Virtual Wire interconnects to transaction-based acceleration and enterprise-scale emulation infrastructure—designed to support the verification of modern AI-driven, software-defined systems on chip.

The FPGA Renaissance

While processor-based and custom-FPGA-based emulators were establishing themselves in the market, a parallel transformation was unfolding in programmable logic.

By the late 1990s, new FPGA generations from Xilinx and Altera began to close long-standing gaps in density, speed, and routing flexibility. Devices could now host significantly larger portions of the increasingly popular system-on-chip (SoC) designs, while improved place-and-route tools shortened iteration cycles—an essential requirement for verification teams working under relentless tape-out pressure.

Around the turn of the millennium, Xilinx introduced the Virtex family, marking a pivotal inflection point. These devices combined higher logic capacity with faster interconnects and, crucially, read-back capabilities. Engineers could inspect internal registers and memory contents at runtime without recompiling the design. The trade-off was performance: read-back operations slowed execution, but the visibility they enabled proved invaluable for debugging complex systems. For verification engineers, this represented a new balance between observability and speed, one that would shape FPGA-based emulation strategies for years to come.

The rapid progress of commercial FPGAs reignited interest in building emulators directly from off-the-shelf programmable devices. Compared with custom silicon approaches, FPGA-based systems promised faster innovation cycles and lower development costs, while benefiting from the continuous performance gains delivered by FPGA vendors. This environment set the stage for a new wave of entrepreneurial activity.

Two startups, operating on opposite sides of the Atlantic, capitalized on this opportunity, each pursuing a distinct architectural philosophy.

In Silicon Valley, Axis Systems introduced a simulation accelerator in 1999 based on a patented Reconfigurable Computing (RCC) architecture. Implemented as arrays of FPGAs, the system—marketed under the name Xcite—initially targeted acceleration of simulation workloads rather than full emulation. Within a couple of years, Xcite evolved into Xtreme, a more traditional emulation platform. One of Xtreme’s defining innovations was its “Hot-Swap” capability, which allowed engineers to move designs seamlessly between the emulator and a proprietary simulator. This approach leveraged the interactive debugging strengths of software simulation while retaining the speed advantages of hardware execution, bridging two previously distinct verification domains.

At roughly the same time in Europe, a more disruptive initiative was taking shape. Emulation Verification Engineering (EVE), founded in 2000 by four former Mentor Graphics engineers, set out to rethink FPGA-based emulation from the ground up. In 2003, the company introduced ZeBu (Zero-Bugs), an emulator implemented on a compact PC card. The first version, ZeBu-ZV, incorporated two Xilinx Virtex-II devices: one dedicated to mapping the design-under-test (DUT), and the other tasked with accelerating transaction-level execution through a newly conceived Reconfigurable Testbench (RTB) technology.

This architectural decision proved pivotal. By elevating the testbench into hardware and enabling transaction-based verification, ZeBu significantly increased throughput and reduced communication bottlenecks between the DUT and verification environment. At the same time, the system leveraged Virtex read-back features to deliver runtime visibility into internal state without recompilation—again trading execution speed for powerful debugging capabilities.

The concept demonstrated both technical viability and commercial promise. Within a year, EVE expanded the architecture into a larger chassis configurable with arrays of Virtex FPGAs. The resulting system, ZeBu-XL, marked the transition from a compact, personal emulator to a scalable, enterprise-class emulation platform. Over time, the product line evolved through successive generations, each benefiting from advances in FPGA density, clocking, and tool automation.

A major milestone arrived in 2009 at DAC with the introduction of ZeBu Server, the progenitor of a long-lived product family. Designed for scalability from a single chassis to multi-rack configurations, ZeBu Server could reach nominal capacities of one billion gates. It introduced a higher level of automation, including incremental compilation, faster place-and-route cycles, and multi-user capabilities—features that reflected the growing industrialization of verification workflows.

Equally important were its economic and operational characteristics. ZeBu Server delivered greater execution speed than competing emulators while consuming a fraction of their power. Its pricing, reportedly dropping below a penny per gate in large configurations, reset expectations for cost efficiency in hardware emulation. The platform quickly gained recognition for offering one of the lowest total costs of ownership in the industry, positioning EVE as a formidable player in the verification market.

Architecturally, ZeBu also departed from the prevailing ICE-first (In-Circuit Emulation) mindset. Early systems emphasized transaction-based verification through RTB—later renamed Flexible Testbench (FTB)—operating at higher clock speeds than the DUT to maximize bandwidth and responsiveness. ICE capabilities were introduced in later generations, but transaction-level acceleration remained a defining strength, aligning with the increasing role of embedded software and system-level validation.

Synopsys Moves into Emulation

The growing relevance of hardware emulation did not go unnoticed by major EDA vendors. Synopsys, recognizing the strategic importance of the technology as early as the mid-1990s, attempted to enter the market through the acquisition of a startup named Arkos, which had developed a processor-style emulation approach. The effort proved unsuccessful, and, within months, Synopsys divested the company and its assets, which were subsequently acquired by Quickturn.

That early setback delayed Synopsys’ direct involvement in emulation for more than a decade. During this period, the market matured, FPGA-based solutions gained credibility, and system-level verification requirements intensified, driven by the rise of complex SoCs and embedded software stacks.

The turning point came in 2012, when Synopsys acquired EVE. By then, EVE was already delivering the third generation of ZeBu Server platforms and had established a reputation for performance, scalability, and cost efficiency.

Following the acquisition, Synopsys invested heavily in advancing the ZeBu roadmap. Capacity continued to scale, performance improved, and compilation times shortened through enhanced automation and tool integration. New analysis and use modes were introduced, supporting everything from software bring-up to system validation and hybrid prototyping.

These developments cemented commercial FPGA-based emulation as a long-term pillar of the verification landscape.

Summary

What began in the mid-1980s as a pragmatic use of rapidly advancing programmable logic has evolved into a cornerstone of modern semiconductor development—scaling to multi-billion-gate designs, enabling software-defined systems, and supporting the increasingly system-centric nature of verification.

At its core, the evolution of hardware emulation is a story of architectural divergence. Three distinct approaches emerged, each shaped by the technological constraints and verification demands of its time, and each redefining what emulation platforms could deliver.

The early dependence on commercial FPGAs exposed a fundamental mismatch between available technology and the requirements of deep system verification. That realization marked a turning point, giving rise to processor-based and custom FPGA-based architectures—two powerful but fundamentally different paths that sustained emulation through a decade of explosive semiconductor growth. Their later disruption by a new generation of FPGA-driven systems did not render earlier approaches obsolete; instead, it broadened the architectural landscape, underscoring a critical truth: no single emulation architecture can optimally address every verification challenge.

Over time, the role of emulation has expanded dramatically, driven by the convergence of evolving user demands and advancing technology. In the AI era, verification has become fully system-centric, requiring the execution of massive software workloads—such as large language models—on next-generation architectures. This shift places three factors at the center of emulation effectiveness: system capacity, execution performance, and interface connectivity. Only the right balance of all three enables meaningful pre-silicon validation at scale.

Among today’s competing approaches, commercial FPGA-based platforms are best positioned to deliver across all three dimensions—offering a compelling balance of scalability, speed, and real-world interfacing that aligns with the needs of modern AI system development.

Also Read:

Synopsys and TSMC Deepen AI Design Alliance: What It Means

How to Overcome the Advanced Node Physical Verification Bottleneck

Podcast EP342: The Evolution and Impact of Physical AI with Hezi Saar


A Different Angle on Co-Simulation for Systems
by Bernard Murphy on 05-06-2026 at 6:00 am

Co-simulation, two or more simulations running concurrently in some manner, is not a new idea. I have written before about multiphysics systems able to model thermal, stress, CFD and other factors simultaneously. I just read a white paper from Siemens based on a different method, using an open standard called the Functional Mockup Interface (FMI) to connect simulators and models to co-analyze mechatronic and other systems across a range of multidiscipline analyses, going beyond what I have seen in multiphysics systems. My take is that this looks like a more systems-centric view of multi-domain simulation than chip-centric approaches offer.

About FMI

FMI is a standard created within the Modelica Association, an organization founded in Sweden in 2000, with the intention of simplifying the creation, storage, exchange and (re-)use of dynamic system models from different simulation systems for abstract-model (e.g. MATLAB), software-, and hardware-in-the-loop simulation, for cyber-physical systems, and other applications. The list of FMI members is impressive: Bosch, Dassault, Siemens, Synopsys/Ansys, Ampere, Saab, Airbus, Caterpillar, Hyundai, GM, Boeing, NVIDIA, VW, Volvo, MathWorks, Maplesoft and many more.

There are two use-cases: standalone FMUs and tool-coupling FMUs. The standalone approach allows a component provider to build an abstracted model of an IP based on simulations within a specific domain. A system builder can then use such a model in their system analysis without revealing implementation details and without needing the licenses used to build that abstracted model. A tool-coupling model provides more flexibility through FMI-defined API interfacing between simultaneously running simulators. Both methods depend on a user-defined time-step to determine communication frequency.
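For a feel of the mechanics, below is a minimal fixed-step master loop written against the FMI 2.0 C API (the standard's actual function names). It is a sketch under simplifying assumptions: a real master unzips the .fmu archive, parses modelDescription.xml, and loads these entry points from the FMU's shared library, whereas here one co-simulation FMU is assumed to be linked directly, and the GUID and value references are placeholders.

```c
/* Minimal fixed-step co-simulation master loop, FMI 2.0 C API.
 * Sketch only: one co-simulation FMU assumed linked directly;
 * GUID and value references below are placeholders. */
#include <stdio.h>
#include <stdlib.h>
#include "fmi2Functions.h"   /* header distributed with the FMI 2.0 standard */

#define VR_MOTOR_TORQUE 0u   /* placeholder value references, normally   */
#define VR_WINCH_RPM    1u   /* taken from the FMU's modelDescription.xml */

static void logger(fmi2ComponentEnvironment env, fmi2String name,
                   fmi2Status status, fmi2String category,
                   fmi2String message, ...) {
    (void)env; (void)status; (void)category;
    fprintf(stderr, "[%s] %s\n", name, message);
}

int main(void) {
    fmi2CallbackFunctions cb = { logger, calloc, free, NULL, NULL };
    fmi2Component m = fmi2Instantiate("stepper_circuit", fmi2CoSimulation,
                                      "{placeholder-guid}", "", &cb,
                                      fmi2False, fmi2False);
    if (!m) return 1;

    const fmi2Real t_end = 2.0;
    const fmi2Real h     = 1e-4;   /* communication step: the accuracy vs
                                      throughput knob the paper mentions */
    fmi2SetupExperiment(m, fmi2False, 0.0, 0.0, fmi2True, t_end);
    fmi2EnterInitializationMode(m);
    fmi2ExitInitializationMode(m);

    for (fmi2Real t = 0.0; t < t_end; t += h) {
        fmi2ValueReference vr_in = VR_MOTOR_TORQUE, vr_out = VR_WINCH_RPM;
        fmi2Real torque = 0.05, rpm = 0.0;
        fmi2SetReal(m, &vr_in, 1, &torque);   /* drive coupled input  */
        if (fmi2DoStep(m, t, h, fmi2True) != fmi2OK) break;
        fmi2GetReal(m, &vr_out, 1, &rpm);     /* sample coupled output */
    }
    fmi2Terminate(m);
    fmi2FreeInstance(m);
    return 0;
}
```

The communication step h is the tuning knob referred to above: smaller steps mean tighter coupling between the simulators but more exchanges per simulated second.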

Which approach will be most effective will depend on the application. The Siemens white paper provides examples for both use-cases.

A mechatronic application

The paper illustrates with three applications. The first of these is a mechatronic example for which the electrical side is a circuit to control a stepper motor. This circuitry is modeled in HyperLynx AMS, which is FMI compatible. The mechanical part connects the motor output to a gear reducer, then a winch, raising or lowering a weight suspended on a rope. I’m guessing this is not a typical application, more likely an artificial use case to illustrate a capability without revealing customer proprietary details in real designs. The mechanical part is modeled in Siemens Simcenter Amesim. Sensors feed back winch RPM and the air gap between the weight and the ground.

This method uses a tool-coupling FMU, here by generating an FMU model for the circuitry which can be inserted into the Amesim mechanical model. Remember in tool-coupling cases this model is an interface between the mechanical simulation and the electrical simulation, both of which will be running.

The paper illustrates the value of running this joint simulation by first observing that the rotational velocity of the gear shaft is noisy and the weight hits the ground after a few cycles. They replace the stepper motor with a stronger model and reduce the weight, after which simulations show much cleaner behavior and the weight no longer hits the ground. Here, and in the standalone method, users will need to experiment with time-step choices to find an optimal balance between accuracy and analysis throughput.

A control system application

The white paper illustrates the standalone FMU approach with a power converter example. The controller is designed through Altair Twin-Activate and MathWorks tools and exported as a standalone FMU. This is then imported into HyperLynx AMS to model the power converter electronics around the controller. HyperLynx can run simulations with this FMU model without need for co-simulation. Again, a user may need to fine-tune the time-step choice for optimum results.

Very interesting to see a method for handling multi-domain simulations outside conventional multiphysics applications. I can see why this would be popular with system builders who have preferred simulator choices in each domain yet need to be able to stitch them together for full-system analysis. You can download the whitepaper HERE.

Also Read:

Siemens U2U 3D IC Design and Verification Panel

Solving the EDA tool fragmentation crisis

Complex PCB signoff challenges


Synopsys and TSMC Deepen AI Design Alliance: What It Means
by Kalar Rajendiran on 05-05-2026 at 10:00 am

A recent announcement from Synopsys signals a meaningful escalation in the race to build next-generation AI hardware. The expanded collaboration between Synopsys and TSMC brings together silicon-proven IP, AI-driven design tools, and cutting-edge manufacturing processes in a tightly integrated effort to accelerate high-performance computing (HPC) and AI system development. More than a routine partnership update, the move reflects a broader industry transition toward ecosystem-level innovation, where success depends on how well design, IP, and fabrication technologies align from the outset.

What Was Announced

At the core of the announcement is a three-part expansion of capabilities spanning IP, design flows, and system-level enablement.

Synopsys is advancing silicon-proven interface IP validated on TSMC’s most advanced nodes, including 3nm and emerging 2nm-class processes. These include next-generation standards such as M-PHY v6.0, which is now achieving an industry-first low-power silicon bring-up on N2P, alongside tapeouts of 64G UCIe IP and 224G high-speed interconnect IP. Together, these technologies form the backbone of AI chips that must move massive volumes of data with minimal latency and power overhead, particularly in bandwidth-constrained environments.

The companies are also extending certified electronic design automation (EDA) flows with a sharper emphasis on increasingly agentic AI-driven optimization. Collaboration on run assistance within Synopsys Fusion Compiler, leveraging TSMC’s A14 process and NanoFlex Pro architecture, is aimed at improving power, performance, and area (PPA) while boosting design productivity. This signals a shift from passive AI assistance toward more active, decision-guiding systems that can materially impact how chips are designed at advanced nodes.

Beyond individual dies, the partnership continues to push into advanced packaging and system-level integration. Synopsys’ 3DIC Compiler platform is now enabling productivity improvements for TSMC’s CoWoS technology at interposer sizes reaching up to 5.5 times the reticle limit, underscoring the scale of modern multi-die designs. This is complemented by multiphysics simulation capabilities that address thermal, electrical, and optical interactions. These requirements are becoming essential as chips evolve into tightly integrated systems.

The announcement also highlights expansion into new application domains. In automotive, Synopsys is offering a UCIe IP solution compliant with ASIL B functional safety requirements on TSMC’s N5A process, marking a significant step toward enabling chiplet-based architectures in safety-critical environments. Meanwhile, advancements in M-PHY IP are targeted at next-generation mobile and storage applications, including smartphones that demand both high performance and power efficiency.

Finally, the collaboration advances AI infrastructure through co-packaged optics. Multiphysics design enablement for co-packaged optical systems, including TSMC’s COUPE design flow, spans optical path simulation, electromagnetic extraction, and system-level analysis, and is paired with 224G IP designed to support optical Ethernet and emerging interconnect standards such as UALink. Together, these capabilities directly address the growing bandwidth and energy challenges facing large-scale AI systems.

Why This Matters for AI Hardware

The significance of this partnership lies in how it tackles the core constraints of modern AI workloads. As compute performance scales, the bottlenecks have shifted toward data movement, power efficiency, and system integration. By combining high-speed IP, agentic AI-driven design tools, and advanced packaging technologies, Synopsys and TSMC are reducing the gap between design complexity and manufacturable silicon.

The introduction of agentic run assistance in EDA tools marks a particularly important inflection point. Rather than simply accelerating existing workflows, these capabilities begin to reshape them, enabling engineers to delegate increasingly complex optimization tasks to AI systems. This has the potential to significantly compress development cycles while improving overall design quality.

Equally critical is the focus on bandwidth. Technologies such as 224G interconnects and co-packaged optics are emerging as key enablers for scaling AI infrastructure, where moving data efficiently is often more challenging than processing it. By integrating these capabilities into both IP and design flows, the partnership addresses one of the most pressing limitations in next-generation AI systems.

The expansion into automotive and mobile markets further underscores the breadth of this strategy. It signals that advanced-node, multi-die, and chiplet-based designs are no longer confined to hyperscale data centers but are beginning to permeate safety-critical and consumer applications as well.

Market And Industry Implications

The expanded alliance reinforces Synopsys’s position as a central player in AI silicon enablement while strengthening TSMC’s ecosystem around its most advanced process nodes. For chip designers, tighter integration between EDA tools and foundry technologies can translate into faster time-to-market and reduced development risk, particularly when targeting cutting-edge nodes.

At the same time, the partnership reflects a broader industry dynamic in which design tools and manufacturing processes are becoming increasingly interdependent. As flows become more deeply optimized and certified for specific nodes, the cost and complexity of switching ecosystems rise. This creates a form of strategic lock-in that benefits tightly aligned partners while raising barriers for competitors.

The Bigger Picture

Taken together, the announcement illustrates a shift in how semiconductor innovation is defined in the AI era. Progress is no longer driven solely by transistor scaling but by the ability to coordinate across multiple layers of the technology stack, from design software and reusable IP to packaging and system integration.

The Synopsys–TSMC collaboration points to a future where chips are conceived not as isolated components but as parts of larger, highly integrated systems spanning data centers, vehicles, and mobile devices. In this landscape, competitive advantage will increasingly depend on how effectively companies can bring together tools, technologies, and partners to deliver complete, optimized solutions.

As AI continues to push the limits of performance and complexity, partnerships like this are likely to define the pace of innovation. The companies that succeed will be those that can bridge the gap between design intent and real-world deployment, turning increasingly sophisticated ideas into scalable, manufacturable systems.

You can access the entire press announcement here.

Also Read:

How to Overcome the Advanced Node Physical Verification Bottleneck

Podcast EP342: The Evolution and Impact of Physical AI with Hezi Saar

WEBINAR: Beyond Moore’s Law and The Future of Semiconductor Manufacturing Intelligence


Siemens U2U 3D IC Design and Verification Panel

Siemens U2U 3D IC Design and Verification Panel
by Daniel Nenni on 05-05-2026 at 6:00 am

IMG 1201
Kalar Rajendiran, Javier dela Cruz, Subi Kengeri, Satish Surana, Jeff Cain

Given the success of the event in Silicon Valley last week, I would expect the Siemens U2U event in Munich to be even bigger. In my experience this has been the best user-driven event in 2026 with the deepest customer content. EDA has always been a customer-driven industry, and it is good to see us recognize that from time to time. Kalar was the moderator on this panel, so I was in the front row taking notes.

The semiconductor industry is entering a pivotal phase as it transitions from traditional 2D ICs to 3D ICs and chiplet-based architectures. This shift represents a fundamental evolution in how chips are designed, manufactured, and deployed. Rather than relying solely on shrinking transistors, engineers are now stacking and integrating multiple dies into a single package, enabling higher performance, better power efficiency, and greater system flexibility. While this approach unlocks significant advantages, it also introduces a new set of challenges that must be addressed for widespread adoption.

At its core, 3D integration allows different functional components, such as logic, memory, and accelerators, to be combined in a modular fashion. This enables scalable architectures and significantly improves bandwidth by reducing the distance data must travel between components. It also allows designers to mix and match technologies from different process nodes, optimizing each function independently. As a result, 3D ICs are becoming essential for applications like artificial intelligence, high-performance computing, and data centers, where performance and efficiency are critical.

However, the move to 3D scaling is far from straightforward. One of the most significant challenges is the increased complexity across the entire development lifecycle. In traditional chip design, many issues could be addressed at the component level. In contrast, 3D ICs require a system-level perspective, where interactions between dies, packaging, and the overall system must be carefully managed. Thermal and power considerations, in particular, have become major concerns. As power density increases, heat dissipation becomes more difficult, and inefficient power delivery can lead to both performance limitations and reliability issues.

Another critical challenge is supply chain and manufacturing capacity. The semiconductor industry has experienced rapid growth, with demand accelerating at an unprecedented rate. While this growth is positive, it has also exposed limitations in infrastructure. Building new fabrication facilities and expanding cleanroom capacity takes years, and advanced packaging processes are becoming increasingly complex. Technologies such as hybrid bonding, high-bandwidth memory integration, and large interposers require sophisticated equipment and processes that are not yet widely available. These constraints can create bottlenecks, particularly for smaller companies trying to enter the market.

To navigate these challenges, industry experts emphasize the importance of early architectural planning. In the past, packaging was often treated as a secondary consideration, addressed after the core chip design was complete. This approach is no longer viable. Advanced packaging must now be considered at the very beginning of the design process, alongside system architecture and functionality. Decisions about packaging technology, supply chain partners, and manufacturing processes must be made early to avoid costly redesigns and delays. Designing for manufacturability and yield is especially important, as even small inefficiencies can significantly impact production capacity and cost.

Interoperability and standardization are also key factors in enabling the growth of the 3D IC ecosystem. Efforts to develop open standards for die-to-die communication aim to make it easier to integrate components from different vendors. While these standards can reduce barriers to entry and promote innovation, they are not a complete solution. In practice, achieving seamless interoperability is challenging due to differences in design choices, protocols, and performance requirements. As a result, many high-performance systems still rely on customized interfaces to achieve optimal results, while standards play a more prominent role in mid-range and emerging applications.

Looking ahead, innovation in materials, cooling, and power delivery will be essential to overcoming current limitations. New substrate materials, such as glass, offer improved mechanical stability and finer feature resolution compared to traditional organic substrates. Advanced cooling techniques, including near-chip cooling and novel heat dissipation methods, are being explored to manage increasing thermal loads. Similarly, improvements in power delivery, such as backside power distribution and integrated voltage regulation, are critical for supporting the high current densities required by modern systems.

Another promising area of development is the use of advanced modeling and simulation tools. Multiphysics simulation, which accounts for electrical, thermal, and mechanical interactions, is becoming increasingly important in 3D IC design. By incorporating these analyses early in the design process, engineers can identify potential issues and optimize system performance before manufacturing. Digital twin technology, which creates a virtual representation of the entire system and manufacturing process, is expected to play a major role in improving design accuracy and reducing time to market.

Bottom line: The transition to 3D ICs and chiplet architectures marks a significant turning point in semiconductor innovation. While the benefits in performance, efficiency, and flexibility are substantial, the challenges are equally complex. Success in this new era requires a holistic approach that integrates design, manufacturing, and system considerations from the outset. By investing in early planning, embracing new technologies, and fostering collaboration across the ecosystem, the industry can overcome these challenges and unlock the full potential of 3D integration.

MUNICH, GERMANY | MAY 12, 2026 User2User Europe

Also Read:

Solving the EDA tool fragmentation crisis

Exploring the Hidden Complexity of Modern Power Electronics Design – A Siemens White Paper

Siemens Wins Best in Show Award at Chiplet Summit and Targets Broad 3D IC Design Enablement


Connecting the Dots: Why RISC-V System Design Is Entering a New Era

Connecting the Dots: Why RISC-V System Design Is Entering a New Era
by Kalar Rajendiran on 05-04-2026 at 10:00 am

Andes x Arteris Pre Verified and Silicon Proven SoC Integration

At the recent RISC-V Now event hosted by Andes, the discussion underscored the fact that RISC-V is no longer just about instruction set architecture advantages or customizable cores. The real focus has moved up the stack to system-level design. This is where connectivity, integration, and security define whether an innovation can scale.

This shift reflects a broader reality: modern SoCs are no longer simple, monolithic designs. They are complex, heterogeneous systems that must seamlessly integrate multiple compute domains, memory hierarchies, and specialized accelerators. In this environment, the success of RISC-V depends not only on openness, but on how effectively that openness can be orchestrated into a cohesive and efficient system.

Guillaume Boillet, Vice President of Strategic Marketing at Arteris, framed his talk around a central thesis: that RISC-V’s long-term success will hinge not just on open architectures, but on mastering integration, embedding security at the hardware level, adopting true systems thinking, and leveraging deep ecosystem collaboration.

The Hidden Bottleneck: Data Movement and System Complexity

One of the most compelling insights from the presentation is that compute is no longer the dominant constraint in system performance. Instead, data movement has emerged as the primary bottleneck. A significant portion of system energy is consumed simply moving and storing data, especially in GPU-class SoCs.

This has profound implications. As workloads like artificial intelligence (AI) and real-time analytics continue to grow, the efficiency of the interconnect fabric becomes just as important as the performance of the compute engines themselves. The architecture must be designed to minimize latency, optimize bandwidth, and reduce power consumption associated with moving data across increasingly complex systems.
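
A back-of-envelope calculation makes the point concrete. Using illustrative, order-of-magnitude energy figures often cited from Mark Horowitz’s ISSCC 2014 survey (45nm-era numbers; exact values vary by node), fetching an operand from off-chip DRAM costs hundreds of times more energy than operating on it:

    # Back-of-envelope illustration of why data movement dominates energy.
    # Figures are illustrative order-of-magnitude values (45 nm era, after
    # Horowitz, ISSCC 2014); exact numbers vary by process node.
    E_FLOP_PJ = 4.0      # ~pJ for one 32-bit floating-point operation
    E_DRAM_PJ = 1300.0   # ~pJ to fetch one 32-bit word from off-chip DRAM

    # A kernel performing one FLOP per word fetched from DRAM:
    ratio = E_DRAM_PJ / E_FLOP_PJ
    print(f"Fetching the operand costs ~{ratio:.0f}x the compute itself")
    # The memory path and interconnect, not the ALU, set the power budget.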

At the same time, the rise of chiplets and multi-die architectures introduces new layers of design complexity. What was once contained within a single piece of silicon must now operate across multiple interconnected dies, each potentially optimized for different functions. This transforms connectivity from a supporting role into a central architectural pillar.

RISC-V’s Promise Meets Integration Reality

RISC-V’s flexibility is one of its greatest strengths, but it also introduces a unique challenge: integration. The ability to mix and match IP from different sources creates enormous opportunity, yet it also leads to fragmentation if not managed carefully.

Modern SoCs often incorporate a wide array of IP blocks, each using different communication protocols. Bringing these together into a unified system requires a robust and adaptable interconnect strategy. Without it, the very modularity that makes RISC-V attractive can become a source of inefficiency and risk.

This is where Network-on-Chip (NoC) technologies play a crucial role. By providing a scalable and configurable communication backbone, they enable designers to integrate diverse components while maintaining performance and efficiency. The interconnect effectively becomes the glue that holds the system together, ensuring that all parts can communicate reliably despite their differences.
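
To make the communication-backbone idea concrete, the toy model below routes traffic between IP blocks on a 2D mesh using deterministic XY routing, one of the simplest NoC routing schemes. Commercial NoC IP such as Arteris’ products is far more sophisticated; the block names and coordinates here are hypothetical.

    # Toy model of deterministic XY routing on a 2D mesh NoC: route first
    # along X, then along Y. This only illustrates the backbone concept.
    def xy_route(src, dst):
        """Return the list of (x, y) hops from src to dst on a mesh."""
        x, y = src
        path = [src]
        while x != dst[0]:                 # X-first dimension-ordered routing
            x += 1 if dst[0] > x else -1
            path.append((x, y))
        while y != dst[1]:                 # then resolve the Y dimension
            y += 1 if dst[1] > y else -1
            path.append((x, y))
        return path

    # Hypothetical placement of heterogeneous IP blocks on the mesh:
    blocks = {'cpu': (0, 0), 'npu': (3, 2), 'ddr_ctrl': (1, 3)}
    hops = xy_route(blocks['cpu'], blocks['npu'])
    print(f"cpu -> npu: {len(hops) - 1} hops via {hops}")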

Automotive and AI: Stress Testing the Architecture

The growing demands of automotive and AI applications highlight just how critical system-level design has become. In automotive systems, especially those supporting advanced driver assistance and autonomous capabilities, architectures must handle a mix of workloads with different safety and performance requirements. Some functions demand strict determinism and compliance with safety standards, while others require high-throughput data processing.

These systems are also evolving toward chiplet-based implementations, further increasing the importance of reliable and scalable interconnects. The ability to manage both coherent and non-coherent data flows across such architectures is essential for ensuring system integrity and performance.

AI workloads present a different but equally demanding challenge. As AI becomes more pervasive, the need for specialized accelerators continues to grow. Integrating these accelerators efficiently into the broader system requires careful orchestration of data movement and memory access. Without a well-designed interconnect, the benefits of these accelerators can be significantly diminished.

The Overlooked Risk: Hardware Security

Another critical theme is the increasing importance of hardware security. Historically, security efforts have focused on software and network layers, but recent vulnerabilities have demonstrated that hardware itself can be a significant point of exposure.

The number of reported hardware vulnerabilities has been rising, reflecting a growing awareness of this issue. As systems become more complex and interconnected, the potential attack surface expands, making it essential to address security at the hardware level from the outset.

This requires new approaches to design and verification, including the ability to identify and mitigate vulnerabilities early in the development process. Hardware security is a foundational requirement for modern SoCs, not an afterthought.

Ecosystem Collaboration as a Force Multiplier

The collaboration between Andes and Arteris illustrates the importance of ecosystem-level solutions in addressing these challenges. By pre-validating the interoperability between processor cores and interconnect technologies, they reduce integration risk and accelerate development timelines.

This kind of partnership reflects a broader trend in the industry toward platform-based design. Instead of building systems from scratch, companies are increasingly relying on pre-validated components that can be assembled into complete solutions. This approach not only improves efficiency but also increases confidence in achieving first-pass silicon success.

From Components to Systems Thinking

The overarching message: the industry must move from a component-centric mindset to a system-centric one. Designing a successful RISC-V-based SoC today requires a holistic understanding of how all parts of the system interact.

It is no longer sufficient to optimize individual components in isolation. Designers must consider how data flows across the system, how different subsystems communicate, and how security is enforced at every level. This shift in perspective is essential for managing the complexity of modern designs.

Summary

RISC-V is well positioned to thrive in this new environment, but its success will depend on more than just its open architecture. It will require robust solutions for connectivity, integration, and security, aspects that become increasingly critical as systems grow in complexity.

The future of RISC-V will be defined not just by its flexibility, but by its ability to deliver complete, scalable, and secure systems. Those who embrace this systems-level approach will be best equipped to lead the next wave of semiconductor innovation.

To learn more, visit Arteris.com

Also Read:

Scalable Network-on-Chip Enables a Modular Chiplet Platform

Renesas Scalable Automotive SoC Design Using Arteris NoC

NXP Expands Arteris NoC Deployment to Scale Edge AI Architectures


Rethinking ECAD IT Infrastructure: From Fragmentation to an Engineering Platform

Rethinking ECAD IT Infrastructure: From Fragmentation to an Engineering Platform
by Kalar Rajendiran on 05-04-2026 at 6:00 am

The semiconductor industry is entering a new phase of complexity. Advanced nodes, heterogeneous integration, and AI-driven design workflows are placing unprecedented demands on engineering teams. While much of the focus remains on tools and methodologies, an equally critical constraint is emerging beneath the surface: infrastructure.

Every IC design company today must make a foundational decision:

How to design, build, and operate the increasingly AI-driven ECAD IT infrastructure required to deliver chips efficiently. Historically, the answer has been to build and manage it internally. But that model is becoming increasingly difficult to sustain.

The Growing Infrastructure Burden

Modern semiconductor workflows depend on a combination of:

  • Distributed compute environments
  • Multiple EDA toolchains from different vendors
  • GPU-intensive AI/ML workloads
  • Strict data security and compliance requirements

Yet the infrastructure supporting these workflows is often fragmented. Teams must stitch together cloud resources, on-prem systems, license servers, and workflow orchestration tools without compromising on uptime and performance.

Cloud providers offer scalable compute, storage, and networking, and EDA vendors provide powerful design tools. But a unified, production-ready, vendor-agnostic environment for semiconductor engineering is hard to come by, leaving a gap that engineering teams must fill themselves.

The Limits of the Build-It-Yourself Model

To bridge this gap, companies invest heavily in:

  • CAD engineering teams
  • DevOps and cloud specialists
  • Custom scripts and automation frameworks

While this approach provides flexibility, it introduces significant overhead. Infrastructure must be continuously maintained, updated, and debugged. Misconfigurations, workflow failures, and resource inefficiencies are common.

More importantly, this effort does not directly contribute to product differentiation. Engineering teams end up spending time maintaining infrastructure rather than advancing design. As AI-driven workflows accelerate the growth of design complexity, this burden only increases.

A Shift Toward Engineering Infrastructure Platforms

An alternative approach is beginning to gain traction: the engineering infrastructure platform.

Rather than assembling infrastructure components manually, teams can deploy a platform that provides:

  • Pre-integrated environments
  • Vendor-neutral tool support
  • Automated orchestration
  • Built-in observability and security

This model abstracts the complexity of infrastructure while preserving flexibility. Tuple Technologies’ Stratos platform is one example of this approach, designed specifically for semiconductor workflows.

Below is a quote from Vamshi Kothur, CEO of Tuple Technologies.

“The Stratos Platform was built to solve the most critical bottleneck in semiconductor innovation: the infrastructure gap. By enabling rapid environment provisioning in minutes rather than weeks and offering a truly vendor-neutral architecture, we empower design teams to scale AI and HPC workloads seamlessly across any self-hosted, cloud or hybrid environment. Our mission at Tuple Tech is to provide a continuous, AI-driven automation layer that eliminates vendor lock-in and manual remediation, allowing engineers to focus on what matters most—accelerating the path to tapeout.”

What Changes with a Platform Approach

Stratos is built on Infrastructure-as-Code (IaC) principles and enables teams to deploy and operate environments across on-premises, hybrid, and cloud configurations; a brief sketch of the general IaC pattern follows the capability list below.

Key capabilities include:

  • Rapid environment provisioning — in minutes rather than weeks
  • Support for multiple EDA vendors without lock-in
  • Integration of AI and HPC workloads into existing flows
  • Continuous monitoring, alerting, and automated remediation
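
As a rough illustration of the IaC pattern behind such platforms, the self-contained sketch below declares an environment as data and reconciles it idempotently, so re-applying an unchanged spec is a no-op. This is a generic sketch of the principle only; Stratos’ actual implementation is not public, and all names here are hypothetical.

    # Minimal, self-contained sketch of the IaC idea: environments are
    # declared as data, and an idempotent "apply" reconciles actual state
    # with the declaration. All names are hypothetical.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class EnvSpec:
        name: str
        compute_nodes: int
        eda_tools: tuple        # e.g., tools from multiple vendors
        gpu: bool = False

    deployed = {}               # stands in for real cloud/on-prem state

    def apply(spec: EnvSpec) -> None:
        """Idempotently reconcile deployed state with the declared spec."""
        current = deployed.get(spec.name)
        if current == spec:
            print(f"{spec.name}: up to date, nothing to do")
            return
        action = "updating" if current else "provisioning"
        print(f"{spec.name}: {action} {spec.compute_nodes} nodes, "
              f"tools={spec.eda_tools}, gpu={spec.gpu}")
        deployed[spec.name] = spec   # in reality: API calls to the platform

    sim_env = EnvSpec('signoff', compute_nodes=64,
                      eda_tools=('vendorA_pnr', 'vendorB_sim'), gpu=True)
    apply(sim_env)   # first run provisions the environment
    apply(sim_env)   # second run is a no-op: the core IaC property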

The impact is a change in how teams work: engineers shift from managing infrastructure to focusing on design execution.

CAD teams no longer need to manually manage licenses, debug workflows, or coordinate across fragmented environments. Instead, they operate within a unified platform where infrastructure is automated and observable.

Quantifiable Impact

The benefits of this approach are measurable, as reported by Tuple Technologies:

  • Up to 70% reduction in infrastructure development time
  • Provisioning in minutes instead of weeks
  • Average 38% reduction in GPU and compute costs through multi-cloud optimization
  • 43% reduction in cloud computing costs in Pegasus sign-off workloads (customer case study)

In addition:

  • License utilization improves through real-time analytics
  • Downtime is minimized through proactive incident management
  • Disaster recovery is accelerated using IaC-based reconstruction

These improvements directly translate into faster design cycles and lower operational costs.

Key Differentiators

Several characteristics distinguish an engineering infrastructure platform from traditional approaches:

Vendor Neutrality: Teams can combine tools from multiple vendors without workflow fragmentation.

Multi-Cloud Optimization: Workloads are dynamically executed where compute resources are most efficient.

Operational Intelligence: Failures are detected and resolved automatically through integrated monitoring and escalation.

Built-In Security: Continuous compliance and threat detection are embedded into the platform.

Domain-Specific Support: Real-time support from engineers familiar with semiconductor workflows ensures rapid resolution of issues.

Why This Matters Now

As semiconductor design becomes increasingly data- and compute-intensive, infrastructure is no longer a secondary concern. It is a critical enabler of engineering productivity.

Companies that continue to rely on fragmented, manually managed environments risk:

  • Slower time-to-market
  • Higher compute and operational costs
  • Reduced engineering efficiency

By contrast, those adopting a platform approach gain:

  • Faster iteration cycles
  • Greater flexibility across tools and environments
  • Improved reliability and scalability

Summary

The semiconductor industry has long focused on advancing tools and methodologies. The next frontier is infrastructure. Moving from fragmented systems to a unified engineering platform represents a fundamental shift, one that can significantly impact productivity, cost, and competitiveness.

Learn more at https://www.tupletechnologies.net/

Also Read:

Tuple Technologies at the 2025 Design Automation Conference #62DAC

CEO Interview with Geoffrey Rodgers of Chameleon Semiconductor

Bronco AI Webinar: Full-Chip SoC Debug in 15 Minutes


CEO Interview with Geoffrey Rodgers of Chameleon Semiconductor

CEO Interview with Geoffrey Rodgers of Chameleon Semiconductor
by Daniel Nenni on 05-03-2026 at 2:00 pm

image (9)

Geoffrey Rodgers spent most of his career at the intersection of semiconductor technology and go-to-market execution, with a focus on scaling businesses and bringing complex solutions to market. He previously led the Analog Go-To-Market motion at Synopsys following the acquisition of Analog Design Automation and held leadership roles at PDF Solutions.

In addition to his semiconductor background, Geoff spent time in enterprise B2B SaaS, where he developed a strong foundation in modern go-to-market models, including segmentation, pipeline development, and building repeatable growth frameworks.

At Chameleon, he is focused on translating technical innovation into scalable, real-world impact in an industry undergoing a fundamental shift toward more flexible, secure, and adaptive silicon architectures.

Tell us about your company.

Modern silicon is still designed as if the future can be fully predicted at tapeout. In reality, standards, threats, and system requirements continue to evolve long after deployment.

Chameleon Semiconductor is focused on solving that gap. We provide embedded FPGA (eFPGA) via user-defined soft IP that brings post-silicon hardware-based programmability into ASICs and SoCs, allowing customers to modify, extend, and future-proof their designs without a respin.

We believe the industry is moving toward a design-for-change model, where flexibility is architected into the system from the start.

Our fabric is delivered as synthesizable RTL, not a hard macro, enabling customers to define exact requirements for their application while maintaining portability across foundries and process nodes. This provides both technical flexibility and supply chain freedom, while also allowing customers to retain tighter control over their IP and where it is manufactured.

Chameleon was founded by a team with deep experience across semiconductor IP, EDA, and system design.

Co-founder, Ken Mai, is a Professor of Electrical and Computer Engineering at Carnegie Mellon University and a recognized leader in digital system design. He has built a prominent research program focused on high-performance and energy-efficient architectures and maintains deep ties into the aerospace, defense, and intelligence communities.

Additionally, Robert Bielby, a recognized authority in FPGA architecture and programmable logic, has joined the team. He has spent decades advancing reconfigurable computing, contributed to high-performance FPGA architectures, holds numerous patents, and has brought complex programmable fabrics from concept through production deployment.

What problems are you solving?

The semiconductor industry is facing a mismatch between design rigidity and real-world change.

Design cycles are long and expensive, but standards, security threats, and system requirements continue to evolve after deployment. Once silicon is taped out, adapting is costly and often requires a respin.

At the same time, customers are facing increasing pressure around IP security and trusted manufacturing. Protecting critical functionality while maintaining flexibility in where and how chips are built has become a growing concern.

We address three core issues.

First, lack of post-silicon flexibility. We enable customers to modify functionality at speed after deployment, versus more traditional software or microprocessor-based approaches, extending product life while maintaining performance.

Second, the cost of getting it wrong. A respin can cost millions and delay programs by months. Our fabric acts as an architectural safety net against that uncertainty.

Third, system fragmentation. As chiplet-based and heterogeneous architectures scale, integrating mismatched protocols and evolving interfaces becomes more complex. We provide a programmable layer that can adapt at those boundaries.

We are enabling a shift from fixed-function silicon to systems that can evolve over time while maintaining control over critical IP.

What application areas are your strongest?

We focus on markets where change is constant and the cost of inflexibility is high.

Aerospace and defense was our initial focus and remains a core market. These systems have long lifecycles, operate in harsh environments, and must respond to evolving threats. They also require strict control over IP, supply chain integrity, and trusted manufacturing environments. Our approach enables customers to maintain control of sensitive functionality while supporting deployment in trusted or onshore fabrication flows.

Our support for radiation-hardened design approaches, combined with the ability to update functionality in the field, aligns well with these requirements.

At the same time, we are expanding into commercial markets where architectural flexibility is becoming critical. Chiplet-based designs are a prime example. As systems are assembled from heterogeneous die, interoperability challenges increase. Our fabric enables protocol adaptation and long-term flexibility at chiplet interfaces.

Security-driven applications are another major driver, particularly around post-quantum cryptography. As standards evolve, maintaining compliance requires hardware-based programmability to support changes without sacrificing performance.

We are also seeing interest in AI-related applications, where rapidly evolving neural network architectures can outpace fixed-function silicon approaches.

Any system where the future cannot be fully predicted at design time, or where control of IP and manufacturing is critical, is a strong fit for our technology.

What keeps your customers up at night?

The common theme is uncertainty.

Customers are being asked to make long-term design decisions in an environment that is changing rapidly. They worry about locking into standards, security algorithms, and architectures that may not hold.

At the same time, supply chain, IP security, and geopolitical concerns are increasing pressure around foundry choice and IP sourcing.

The cost of being wrong is rising, while predictability is declining.

We give customers a way to manage that uncertainty by building flexibility and control directly into their silicon.

We also enable them to respond to new market opportunities that may emerge after tapeout or deployment.

What does the competitive landscape look like and how do you differentiate?

There are several established players in the eFPGA space, each providing variations on embedded programmable fabric.

Our view is that the competitive dynamic is less about the fabric itself and more about how customers address change over the life of a system.

The fabric must meet the power, performance, and area requirements of modern SoCs. That is table stakes.

Traditional approaches, particularly hard macro implementations, are optimized for integration efficiency at design time but remain fixed once deployed. As system requirements evolve, that rigidity becomes a limitation.

We take a different approach. Our soft IP architecture is designed around portability and adaptability, allowing customers to deploy across multiple foundries and process nodes while retaining the ability to update functionality over time. This is increasingly important in a world where both requirements and supply chains are in flux.

We are also the only US-based provider of soft embedded FPGA IP, which is highly relevant in security-sensitive domains such as aerospace, defense, and cryptography. In these environments, control over IP, provenance, and access to trusted manufacturing flows are critical considerations.

Beyond the fabric itself, we focus on system-level challenges such as chiplet interoperability, cryptographic agility, and long lifecycle adaptability.

Our view is that the value lies in enabling customers to build systems that can evolve without sacrificing performance, security, or control.

Our tool flow is based on open-source technologies, which helps avoid the long-term risks associated with proprietary tool lock-in.

What new features or technology are you working on?

Our roadmap is focused on making embedded programmability more practical and impactful at the system level.

We are advancing integration with emerging chiplet standards to enable seamless protocol adaptation across heterogeneous die.

We are also investing in tooling and flows that simplify deployment and make it easier for customers to update functionality over time.

On the security side, we are expanding support for cryptographic agility, with a focus on enabling migration to post-quantum algorithms without requiring new silicon.

We continue to improve performance and density to broaden the range of applications where embedded programmability is viable.

How do customers normally engage with your company?

Engagement typically starts when customers recognize a gap between what they can confidently design today and what they may need to support over the life of the product.

In many cases, this is driven by uncertainty around evolving standards, security requirements, or system architectures. We help them reframe the problem from committing to a fixed design to architecting for change.

From there, we work closely with system and SoC teams to identify where programmability provides the most leverage, often at control points, interfaces, or security functions.

We provide evaluation IP and support integration using standard RTL-based methodologies, with a focus on minimizing disruption to existing design flows while enabling long-term flexibility.

As programs progress, we support optimization, deployment, and ongoing updates, particularly in areas such as cryptographic agility and protocol adaptation.

Over time, the relationship tends to expand. Once customers adopt a design-for-change mindset, they begin to apply it more broadly across their portfolio.

Contact Chameleon Semiconductor

Also Read:

CEO Interview with Xianxin Guo of Lumai

CEO Interview with Johan Wadenholt Vrethem of Voxo

CEO Interview with Dr. Hardik Kabaria of Vinci