Akeana Partners with Axiomise for Formal Verification of Its Super-Scalar RISC-V Cores
by Daniel Nenni on 02-26-2026 at 8:00 am

Akeana Inc. announced a key milestone in the development of its advanced RISC-V technology: a successful partnership with Axiomise Limited to formally verify its super-scalar test chip, Alpine. The collaboration highlights the growing importance of formal verification in ensuring correctness, performance, and efficiency in next-generation semiconductor designs.

Alpine is a 4nm silicon and software development board that integrates high-performance, out-of-order RISC-V cores. As semiconductor process nodes continue to shrink and architectural complexity increases, ensuring functional correctness before tape-out has become both more challenging and more critical. Super-scalar, out-of-order cores, designed to execute multiple instructions per clock cycle, introduce intricate control logic, speculative execution paths, and numerous corner cases that are difficult to fully validate using traditional simulation techniques alone.

To address these challenges, Akeana turned to Axiomise for its deep expertise in formal verification. Unlike simulation-based approaches, which rely on test vectors and probabilistic coverage, formal verification applies mathematical proof techniques to exhaustively analyze all reachable states of a design. This guarantees that specific properties hold true under every possible condition, eliminating entire classes of latent bugs that could otherwise escape detection.
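
To make the contrast concrete, here is a minimal sketch of the idea, not Axiomise's tooling: a toy state machine explored exhaustively by breadth-first search, versus a short random simulation that can miss the same bug. Real formal tools operate on RTL and use SAT/BDD engines, but the exhaustive-versus-probabilistic distinction is the same.

```python
# Illustrative sketch only: exhaustive reachable-state checking on a toy
# 2-bit counter with a deliberately buggy wrap, contrasted with random
# simulation. Real formal tools work on RTL with SAT/BDD engines.
from collections import deque
import random

def next_state(state, enable):
    # Buggy design: the counter is meant to stay in 0..2 but can reach 3.
    return (state + 1) % 4 if enable else state

def property_holds(state):
    return state <= 2                                # the intended invariant

def formal_check(initial=0):
    """Breadth-first search over every reachable state: either an exhaustive
    proof or a concrete counterexample trace."""
    seen, queue = {initial}, deque([(initial, [initial])])
    while queue:
        state, trace = queue.popleft()
        if not property_holds(state):
            return False, trace                      # counterexample found
        for enable in (0, 1):                        # every possible input
            nxt = next_state(state, enable)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [nxt]))
    return True, None                                # property proven

def random_simulation(cycles=3, seed=0):
    """Probabilistic coverage: a short run can easily miss the bad state."""
    random.seed(seed)
    state = 0
    for _ in range(cycles):
        state = next_state(state, random.randint(0, 1))
        if not property_holds(state):
            return "bug found"
    return "no bug seen"

print(formal_check())        # (False, [0, 1, 2, 3]) -- exhaustive, always finds it
print(random_simulation())   # result depends on the seed; short runs often miss it
```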

According to Nitin Rajmohan, Co-founder of Akeana, the results of the engagement exceeded expectations. Within just a few months, Axiomise’s team not only identified functional issues but also uncovered potential redundant logic in the design—findings that were not anticipated at the outset. These insights provided Akeana with opportunities to further optimize its RTL before tape-out, reducing risk and improving overall design quality. The experience reinforced the long-term value of formal verification within Akeana’s broader development methodology.

Axiomise employs a structured methodology that combines expert consulting with proprietary applications such as formalISA®, footprint®, and floatrix®, powered by its CoreProve® framework. These tools are designed to achieve full proof convergence using commercial EDA platforms, enabling end-to-end formal verification sign-off. By integrating advanced automation with domain-specific expertise, Axiomise delivers mathematically rigorous results across functional correctness, performance constraints, and area optimization.

For a complex 4nm design like Alpine, this approach provides significant advantages. At advanced process nodes, the cost of a silicon re-spin can be enormous, not only financially but also in terms of market timing and competitive positioning. Formal verification reduces this risk by ensuring that corner cases, particularly those involving concurrency, memory ordering, branch prediction, and pipeline hazards, are thoroughly analyzed before fabrication.

Beyond functional correctness, the partnership also addressed PPA (Power, Performance, Area) considerations. While verification is traditionally associated with functional validation, formal methods can also reveal inefficiencies in control logic or data paths that affect power consumption and silicon footprint. By identifying redundant or suboptimal structures early, Akeana was able to make informed design refinements that support both performance targets and area constraints.

Dr. Ashish Darbari, CEO of Axiomise, emphasized the broader industry significance of the project. As RISC-V adoption accelerates across markets—including mobile, automotive, data centers, and cloud computing—the demand for high-performance, reliable cores continues to grow. Formal verification provides the exhaustive guarantees required for these mission-critical applications, where undetected design flaws can have far-reaching consequences.

The tape-out of Alpine represents a meaningful milestone not only for Akeana but also for the expanding RISC-V ecosystem. It demonstrates that open-standard architectures can meet the stringent quality and performance expectations traditionally associated with proprietary designs. By incorporating formal verification at a sign-off level, Akeana underscores its commitment to delivering robust, production-ready IP to its customers.

Headquartered in Santa Clara, California, Akeana is backed by prominent investors including Kleiner Perkins, Mayfield Fund, and Fidelity Ventures. The company focuses on configurable RISC-V-based compute, interconnect, and AI accelerator IP solutions. Axiomise, based in the UK, has built a reputation over the past eight years for advancing formal verification adoption through consulting, training, and custom software solutions.

Bottom line: The two companies have demonstrated how collaboration between IP innovators and formal verification specialists can accelerate development while maintaining uncompromising quality standards. As semiconductor designs grow increasingly complex, partnerships like this are likely to become essential in ensuring that next-generation silicon achieves both performance leadership and mathematical correctness from day one.

CONTACT AKEANA

CONTACT AXIOMISE

Also Read:

An AI-Native Architecture That Eliminates GPU Inefficiencies

An AI-Native Architecture That Eliminates GPU Inefficiencies
by Lauro Rizzatti on 02-26-2026 at 6:00 am

A recent analysis highlighted by MIT Technology Review puts the energy cost of generative AI into stark perspective. Generating a simple text response from Llama 3.1-405B—a model with 405 billion parameters, the adjustable “knobs” that enable prediction—requires on average 3,353 joules, nearly 1 watt-hour (Wh). Once cooling and supporting infrastructure are factored in, that figure effectively doubles to about 6,706 joules (~2 Wh) per response.

The picture becomes even more striking with video. The same study found that producing just five seconds of low-resolution video at 16 frames per second with the open-source CogVideoX model consumed approximately 3.4 million joules, nearly 1 kilowatt-hour (kWh), as measured via CodeCarbon. To put that into perspective, this is roughly the amount of electricity a typical household appliance uses in an hour.

To put this in context at scale, public estimates suggest that by mid-2025, platforms such as ChatGPT were handling over 2.5 billion queries per day. Even conservatively extrapolated, generative AI systems were consuming energy on the order of gigawatt-hours daily, a level of consumption that rivals industrial operations.
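
The arithmetic behind that extrapolation is easy to check. Using the per-response figure quoted above (an estimate, not a measurement) and the public query-volume estimate:

```python
# Back-of-the-envelope check of the figures quoted above (all estimates).
joules_per_text_response = 6_706                    # incl. cooling/infrastructure overhead
wh_per_response = joules_per_text_response / 3_600  # 1 Wh = 3,600 J
queries_per_day = 2.5e9                             # mid-2025 public estimate

daily_energy_gwh = wh_per_response * queries_per_day / 1e9  # Wh -> GWh
print(f"{wh_per_response:.2f} Wh per response")             # ~1.86 Wh
print(f"{daily_energy_gwh:.1f} GWh per day")                # ~4.7 GWh per day
```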

This raises two urgent questions:
  • Why does AI inference consume so much energy?
  • More importantly, can processor architecture be redesigned to dramatically reduce this cost?

The answer lies not only in model size, but in the silicon beneath it. AI processor architecture is no longer just a performance concern; it is a defining factor in the energy efficiency, scalability, and sustainability of artificial intelligence itself.

GPGPU: The Right Architecture for the Wrong Workload

GPGPUs are built around a micro-level execution model based on Single Instruction, Multiple Threads (SIMT). In this model, performance is achieved by launching thousands of tiny threads, each operating on small pieces of data. Developers are expected to carefully coordinate these threads so that, together, they complete a larger computation.

This approach emerged from computer graphics, where workloads are highly irregular and branching behavior is common. SIMT excels in that environment because it allows thousands of threads to hide latency and tolerate divergence. However, when applied to artificial intelligence workloads the mismatch becomes apparent. AI computations are highly structured, repetitive, and mathematically regular, yet SIMT forces them to be expressed through an abstraction designed for far more chaotic workloads.

As a result, a significant fraction of execution time in SIMT-based systems is not spent performing useful mathematical work. Instead, it is consumed by what can be thought of as management overhead. The hardware and software stack must constantly schedule threads, synchronize execution, handle divergence within warps, and coordinate memory accesses across deep hierarchies. As models grow larger and latency constraints tighten—particularly in real-time or interactive inference scenarios—this overhead begins to dominate overall performance.

VSORA: Redefining the Rules of the Game in AI Processors

This is the context in which VSORA enters the picture. With more than a decade of experience designing advanced digital signal processing architectures, VSORA approaches AI computation from a different starting point. Its background lies in deeply pipelined processors with rich instruction sets capable of executing complex operations in a single clock cycle. Rather than adapting an existing GPU model, VSORA leveraged this expertise to design a processor architecture specifically tailored for large language model inference, both in the cloud and at the edge. The goal was not incremental improvement, but a clean break from the inefficiencies inherent in GPGPU-based designs.

VSORA MPU: A Structural Shift in How AI Gets Computed

At the heart of this new approach sits the VSORA Matrix Processing Unit, or MPU. To appreciate how it differs, consider what happens when the basic unit of computation changes. In SIMT systems, threads are the atomic unit, and everything—from memory layout to scheduling—is organized around them. VSORA discards this assumption entirely. Instead of threads, the MPU treats tensors—multidimensional arrays representing matrices and vectors—as the fundamental unit of work.

In practical terms, this means that instructions operate on entire tensors at once. The programmer describes a mathematical operation, such as a matrix multiplication or transformation, without specifying how the work should be divided among thousands of execution contexts. The hardware itself is responsible for decomposing the operation, distributing it across compute resources, and executing it efficiently. This shift moves complexity out of software and into silicon, where it can be handled deterministically and at far lower cost.
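
The difference in programming model can be pictured with a deliberately simplified sketch. The NumPy code below is purely illustrative and is not VSORA's toolchain or API; it only contrasts who owns the decomposition of a matrix multiply.

```python
# Illustrative contrast of programming models, not any vendor's API.
import numpy as np

A = np.random.rand(256, 256).astype(np.float32)
B = np.random.rand(256, 256).astype(np.float32)

# SIMT-style mindset: the programmer decides how the work is split into many
# small execution contexts (here, one "thread block" per output tile).
def simt_style_matmul(A, B, tile=32):
    C = np.zeros((A.shape[0], B.shape[1]), dtype=A.dtype)
    for i in range(0, A.shape[0], tile):              # grid of output tiles
        for j in range(0, B.shape[1], tile):
            for k in range(0, A.shape[1], tile):      # explicit tiling over K
                C[i:i+tile, j:j+tile] += A[i:i+tile, k:k+tile] @ B[k:k+tile, j:j+tile]
    return C

# Tensor-level mindset: describe the whole operation; decomposition,
# distribution, and scheduling are the hardware's problem.
def tensor_style_matmul(A, B):
    return A @ B

assert np.allclose(simt_style_matmul(A, B), tensor_style_matmul(A, B),
                   rtol=1e-3, atol=1e-3)
```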

For developers, this tensor-centric abstraction simplifies both programming and reasoning about performance. There is no need to manually manage threads, worry about warp divergence, or tune kernel launch parameters. Because execution management is internalized by the hardware, performance becomes more predictable, and developers can focus on correctness and algorithmic structure rather than orchestration.

Massive Register File and Tightly Coupled Memory

One of the most visible consequences of this architectural philosophy appears in the MPU’s memory design. Traditional processors rely heavily on multi-level cache hierarchies that attempt to predict which data will be needed next. While caches work well in many general-purpose scenarios, they are fundamentally probabilistic. When predictions fail, cache misses introduce long and unpredictable delays, which are especially problematic for real-time inference.

Figure 1: VSORA architecture replaces the traditional multi-level memory hierarchy with a unified, massive flat register file to minimize data movement latency.

VSORA replaces this uncertainty with a large, explicitly managed memory structure. The MPU includes a massive, software-visible register file implemented as several megabytes of tightly coupled memory, or TCM. This memory sits physically close to the compute engines and behaves like a flat, deterministic scratchpad rather than a cache. Its capacity is sufficient to hold entire weight matrices and intermediate activations on-chip, allowing the system to operate without relying on speculative caching behavior. See Figure 1.

By designing around tensor-level locality and provisioning enough on-chip storage to support it, the MPU ensures consistent access latency. As long as the working set fits within the TCM, memory access times remain uniform and predictable. This eliminates the performance cliffs that often occur when cache hierarchies are overwhelmed, a common issue in large neural networks.
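
As a rough illustration of that "working set fits in the TCM" condition, consider the sizing sketch below. The 8 MB capacity and the layer shapes are hypothetical assumptions for the example, not VSORA specifications.

```python
# Hypothetical sizing sketch; TCM capacity and layer shapes are assumptions.
def layer_working_set_bytes(in_features, out_features, batch=1, bytes_per_elem=2):
    weights     = in_features * out_features * bytes_per_elem          # weight matrix
    activations = batch * (in_features + out_features) * bytes_per_elem
    return weights + activations

TCM_BYTES = 8 * 1024 * 1024        # assume 8 MB of tightly coupled memory

for in_f, out_f in [(1024, 1024), (2048, 2048), (4096, 4096)]:
    ws = layer_working_set_bytes(in_f, out_f)
    verdict = "fits: uniform latency" if ws <= TCM_BYTES else "spills: must be partitioned or streamed"
    print(f"{in_f}x{out_f} layer: {ws / 1e6:5.1f} MB -> {verdict}")
```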

Continuous Pipelining and Deterministic Throughput

Once data is resident in the TCM, the MPU leverages highly efficient prefetching techniques to minimize latency. Instead of treating AI workloads as a series of discrete kernel launches, VSORA views them as sustained computational flows. An intuitive way to think about this is as an assembly line: once the pipeline is filled, new results emerge at a steady rate every cycle.

This pipelining operates at multiple levels. At the micro-architectural level, data streams continuously into compute units without stalls. At the instruction level, preparation and execution overlap so that hardware resources remain fully utilized. The architecture also allows multiple MPUs to be chained together, enabling data to flow directly from one unit to the next without detouring through external memory. After an initial warm-up period, throughput stabilizes and becomes largely independent of the complexity of individual operations.

Automated Data Layout and Reduced Software Burden

Another area where the MPU reduces developer burden is data layout. On many accelerators, achieving high performance requires manually rearranging data in memory to match hardware-specific access patterns. This process is error-prone, time-consuming, and often ties software to a specific architecture.

VSORA intentionally removes this responsibility from the programmer by introducing a memory access abstraction. The MPU hardware automatically handles alignment, padding, swizzling, and internal data reordering needed to sustain peak bandwidth. Developers work with tensors as abstract mathematical objects, while the hardware transparently performs the low-level transformations required for efficient execution. This approach not only improves productivity but also reduces performance fragility caused by subtle layout mismatches.

These architectural choices make the VSORA MPU particularly well suited for inference workloads, where latency and predictability matter more than raw peak throughput. Unlike GPUs, which often require large batch sizes to amortize overheads and reach high utilization, the MPU remains efficient even with batch size one. This is critical for real-time applications such as robotics, autonomous systems, and interactive AI, where waiting to accumulate large batches is not an option.

Dataflow Execution Model

In conventional multi-core and multi-accelerator systems, scaling often introduces diminishing returns due to synchronization overhead and shared memory contention. Additional compute resources increase coordination costs, reducing effective throughput.

Instead of treating each processing unit as an independent island, the VSORA architecture connects multiple MPUs into a single, deeply pipelined dataflow graph. The output of one MPU becomes the direct input of the next, enabling true zero-copy execution at the hardware level.

Each MPU maintains its own TCM, allowing large models to be partitioned cleanly across units. Data moves directly between register files rather than through external memory interfaces, which is especially advantageous for the hot data paths common in modern neural networks. As models scale, throughput remains flat and predictable as long as active tensors fit within the available TCM.

Simplified Scaling and System-Level Efficiency

From a system-level perspective, this results in an architecture that scales without imposing additional complexity on developers. Instead of implementing intricate tiling strategies, synchronization mechanisms, and scheduling logic, programmers define tensor flows and dependencies. The hardware autonomously manages execution, handshaking, and scheduling, ensuring consistent performance even under tight latency constraints.

This makes the VSORA architecture especially conducive to high-pressure environments such as cloud inference platforms, edge deployments, and autonomous systems, where strict latency budgets leave no room for scheduling inefficiencies or unpredictable stalls.

Conclusion

By eliminating kernel launch overhead and dismantling the traditional memory wall between layers, the VSORA Matrix Processing Unit redefines AI efficiency at its core. It delivers near-peak hardware utilization even at batch size one—something conventional accelerators simply cannot achieve. Performance is no longer dependent on artificial batching to mask architectural inefficiencies.

This makes the architecture uniquely suited for interactive and real-time AI, where milliseconds determine safety, usability, and user experience. From real-time autonomy to fluid conversational systems, VSORA prioritizes determinism, latency consistency, efficiency, architectural simplicity, and cost effectiveness over brute-force parallelism.

Equally transformative is the ease of adoption. There is no new programming model, no proprietary language, no disruptive toolchain shift. Developers continue using familiar frameworks such as TensorFlow, PyTorch, or ONNX—without rewriting models or retraining teams. Transitioning to VSORA requires no paradigm change, only performance gains.

In short, the VSORA MPU does not just accelerate AI workloads—it removes the structural bottlenecks that have defined them.

CONTACT VSORA

Also Read:

VSORA Board Chair Sandra Rivera on Solutions for AI Inference and LLM Processing

Silicon Valley, à la Française

Inference Acceleration from the Ground Up

 


Caspia Technologies Unveils A Breakthrough in RTL Security Verification Paving the Way for Agentic Silicon Security
by Daniel Nenni on 02-25-2026 at 10:00 am

In a significant advancement for the semiconductor industry, Caspia Technologies announced the broad availability of CODAx V2026.1, its flagship RTL security analyzer. The new release strengthens early-stage hardware security verification and positions the company to deliver fully agentic workflows that automate vulnerability detection, triage, and remediation across the entire chip design lifecycle.

CODAx functions as a security-aware static auditing solution specifically engineered for early RTL code in IP and SoC designs. It scans for over 150 insecure coding practices that can introduce vulnerabilities, offering immediate, actionable correction suggestions to designers. What sets CODAx apart is its foundation in public vulnerability databases including CWE, CVE, and Trust-Hub, which collectively document more than 1,000 known hardware security weaknesses. Caspia employs generative AI techniques to systematically map these weaknesses to detectable RTL coding patterns, enabling proactive identification without requiring deep security expertise from every design engineer.
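
Conceptually, this class of static security linting looks for structural patterns in the RTL text that correspond to documented weaknesses. The toy example below gives a flavor of pattern-based scanning; the two rules and the RTL snippet are invented for illustration and are not CODAx rules or its implementation.

```python
# Toy illustration of CWE-style pattern matching on RTL text.
# Both rules are invented for this example; they are not CODAx rules.
import re

RULES = [
    ("debug signal gates access to a security asset",
     re.compile(r"assign\s+\w*(key|secret)\w*\s*=.*\bdebug\w*", re.I)),
    ("lock register deasserted out of reset",
     re.compile(r"\block\w*\s*<=\s*1'b0\s*;", re.I)),
]

rtl = """\
module crypto_core(input clk, input rst_n, input debug_en, output [127:0] key_out);
  reg lock;
  reg [127:0] key_reg;
  always @(posedge clk or negedge rst_n)
    if (!rst_n) lock <= 1'b0;
  assign key_out = debug_en ? key_reg : 128'h0;
endmodule
"""

for lineno, line in enumerate(rtl.splitlines(), 1):
    for description, pattern in RULES:
        if pattern.search(line):
            print(f"line {lineno}: possible weakness - {description}")
```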

The V2026.1 release introduces deeper, hierarchy-spanning security checks that detect weaknesses propagating up and across design modules—a critical capability for complex modern SoCs. Caspia validated the update through rigorous stress testing on more than 10,000 intentionally vulnerable designs, confirming its robustness and accuracy. In a real-world demonstration, CODAx analyzed a popular open-source root-of-trust design comprising over 400 design files, approximately 3 million gates, and 500,000 lines of RTL code. The entire process took roughly 45 minutes and uncovered multiple previously undetected security weaknesses.

Industry adoption has been swift. Major chip and system companies worldwide are deploying CODAx across diverse applications, including automotive, data center, communications, storage, multimedia, precision analog, and embedded computing. Caspia has collaborated closely with leading EDA suppliers to ensure seamless integration into existing design flows, minimizing disruption while maximizing security gains. The tool’s CI/CD compatibility further supports continuous security checking, delivering what the company describes as 10x efficiency improvements, reducing vulnerability discovery and remediation from days or weeks to minutes.

To accelerate its vision of agentic security platforms, Caspia has strengthened its leadership team with the appointment of Stuart Audley as Vice President and General Manager of Product Management. Audley brings decades of specialized experience in cryptographic hardware and security IP development for top defense contractors and premier semiconductor firms. He previously led advanced security platform initiatives for FPGAs and ASICs at The Athena Group, Inc. and Mercury Systems.

“We are expanding our security verification footprint to include both advanced tools and enablement of agentic workflows,” said Rick Hegberg, CEO of Caspia Technologies. “I am delighted to add someone with Stuart’s experience and background to the team. This will ensure we can focus on delivering cutting-edge capabilities and AI-driven security automation.”

Audley echoed the strategic shift: “Caspia is evolving from a provider of point security verification tools to an agentic platform supplier where AI orchestrates comprehensive hardware security workflows. The elements of our plan include unifying all our tools with AI-assisted workflows that span the entire hardware security lifecycle: analyzing RTL, identifying vulnerabilities, and verifying the results. Traditional design flows remain fully supported, but we are creating a new category for agentic-enabled hardware security verification.”

Caspia will showcase its latest technology, including CODAx V2026.1 and previews of its agentic roadmap, at DVCon 2026 in booth 702. The event runs March 2-5 at the Santa Clara Hyatt Regency in Santa Clara, California. Attendees can register at the official DVCon website.

Founded in 2020 and headquartered in Gainesville, Florida, Caspia Technologies is a privately held innovator in AI-enabled chip and system security. The company blends advanced tools and intelligent agents with existing design flows to deliver expert-level security capabilities to all engineering teams. Drawing on deep expertise in chip design, fabrication, test, verification, and hardware security threats, Caspia aims to secure the future of silicon through proactive, scalable solutions.

With the semiconductor industry facing escalating hardware security challenges—from side-channel attacks to supply-chain vulnerabilities—CODAx represents a timely and powerful response. By empowering designers to address issues at the earliest RTL stage, Caspia is not only reducing risk but fundamentally transforming how secure chips are developed. As the company transitions toward a full agentic platform, the industry can expect even greater automation and assurance in the years ahead.

Contact Caspia

Also Read:

2026 Outlook with Richard Hegberg of Caspia Technologies

A Six-Minute Journey to Secure Chip Design with Caspia

Caspia Focuses Security Requirements at DAC


Designing the Future: AI-Driven Multi-Die Innovation in the Era of Agentic Engineering
by Daniel Nenni on 02-25-2026 at 8:00 am

At the 2026 Chiplet Summit, Synopsys presented a bold vision for the future of semiconductor innovation: AI-driven multi-die design powered by agentic intelligence. As the semiconductor industry shifts rapidly toward chiplet-based architectures and 3D stacking, the complexity of design, verification, and system integration has increased dramatically. Traditional methodologies, while powerful, are no longer sufficient to keep pace with market demands for higher performance, lower power, faster time to market, and greater reliability. AI is now emerging as the transformative catalyst that re-engineers the entire multi-die design workflow.

Industry momentum behind chiplets and advanced packaging technologies is accelerating. A significant percentage of design teams are already implementing or planning multi-die architectures in their next-generation products. The global semiconductor packaging market is projected to grow substantially through 2033, fueled by 3D stacking, 2.5D interposers, embedded silicon bridges, and fan-out wafer-level packaging. This growth reflects a fundamental shift: system-level performance gains are increasingly achieved through heterogeneous integration rather than traditional monolithic scaling.

At the same time, artificial intelligence models themselves are driving unprecedented hardware requirements. The evolution from early neural networks like AlexNet to transformer-based architectures such as BERT and large-scale systems like GPT-4 demonstrates exponential growth in parameter counts and computational demand. These AI workloads require advanced multi-die systems capable of delivering massive bandwidth, low latency, and optimized power efficiency. Hardware innovation is therefore both driven by AI and enabled by AI, a powerful feedback loop shaping the next decade of semiconductor design.

However, multi-die design introduces immense multidimensional complexity. Engineers must simultaneously optimize system partitioning, die-to-die connectivity, packaging, power delivery, thermal behavior, signal integrity, verification coverage, and software modeling. Achieving optimal results traditionally requires repeated cycles of tuning tool options, analyzing outputs, and rerunning flows, an iterative search across a vast solution space. Expert engineers rely on years of experience to navigate these trade-offs efficiently.

AI transforms this process by enabling engineering teams to perform like experts at scale. Machine learning models correlate tool settings with performance outcomes, learn the impact of design choices, and guide optimization decisions intelligently. For example, AI-driven die-to-die routing techniques have demonstrated dramatic improvements in runtime and signal integrity. Frequency-domain metrics can accelerate optimization, while time-domain eye-diagram analysis provides higher fidelity insights. By combining these intelligently, AI achieves faster convergence with improved quality of results.

Verification also benefits significantly from AI assistance. Traditional constrained-random approaches often require thousands of test seeds to reach target coverage. AI-assisted verification expands coverage while reducing the number of required tests, achieving faster time-to-results and improved quality-of-results (QoR). This reduction in redundant test iterations directly shortens development cycles and lowers compute costs.

Beyond digital implementation and verification, AI is reimagining multiphysics analysis. Machine learning-enabled solvers accelerate simulation across electrical, thermal, and mechanical domains. Reduced-order models and digital twins enable rapid lifecycle exploration, while generative AI systems assist with scripting, constraint generation, and workflow optimization. These capabilities create predictive, fast, and high-fidelity design environments that were previously unattainable.

One of the most compelling advancements presented is the concept of agentic AI: AI systems that go beyond assistance to autonomous orchestration. Instead of merely suggesting optimizations, agentic AI can plan, act, and make decisions within defined objectives. This evolution moves from simple task execution (L1 assistance) toward full workflow autonomy (L5 decision-making). Engineers increasingly collaborate with AI “co-workers” capable of autonomously partitioning multi-die systems, optimizing power networks, resolving signal integrity violations, and even improving RTL for power and performance.

Industry leaders recognize this shift. My favorite semiconductor CEO Jensen Huang of Nvidia has publicly stated that AI employees will become commonplace, envisioning a future where organizations deploy large numbers of AI-driven engineering agents. This perspective underscores the growing confidence that autonomous AI systems will fundamentally augment, and in some cases transform, engineering workflows.

The productivity gains reported by leading CPU, GPU, memory, and hyperscale infrastructure providers reinforce this momentum. Tasks that previously required days can now be completed in hours, or even minutes, through AI assistance. Such improvements are not incremental; they represent a structural change in how semiconductor innovation progresses.

Bottom line: AI-driven multi-die design is not merely about automation; it is about amplification. By combining human expertise with autonomous AI agents, the industry can break through existing performance barriers, reduce time to market, and manage escalating system complexity. As advanced packaging technologies continue to grow and AI workloads expand, agentic AI will become indispensable in shaping the future of semiconductor engineering.

CONTACT SYNOPSYS

Also Read:

Smarter IC Layout Parasitic Analysis

Accelerating Static ESD Simulation for Full-Chip and Multi-Die Designs with Synopsys PathFinder-SC

2026 Outlook with Abhijeet Chakraborty VP, R&D Engineering at Synopsys


An Agentic Formal Verifier. Innovation in Verification
by Bernard Murphy on 02-25-2026 at 6:00 am

In a break from our academic-centric picks, here we look at an agentic verification flow developed within a semiconductor company. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and lecturer at Stanford, EE292A) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Saarthi: The First AI Formal Verification Engineer. The authors are from Infineon. The paper was posted on arXiv in March 2025 and was ranked as the second-best paper at DVCon 2025.

Nice paper on using agentic methods for hands-free formal verification. The goal is to explore how much can be achieved with a semi/fully autonomous agentic formal flow: understand design specifications, create a verification plan, allocate tasks to several AI verification engineers, and communicate with a formal verification tool to prove properties. Then analyze counterexamples, assess formal coverage, and improve by adding missing properties.

Credible effort.

Paul’s view

This paper from Infineon outlines a simple agentic workflow for formally verifying RTL. Commercial verification agents have evolved quite a bit since it was published a year ago, but it still makes for a nice read. The paper begins by walking the reader through prompt engineering concepts like feedback, refinement, chain-of-thought, as well as sampling-and-voting as a method to mitigate hallucinations. The authors make their workflow available on three different open-source frameworks – CrewAI, AutoGen, and LangGraph. The workflow creates a list of properties to prove in plain English (verification plan), then creates and refines SystemVerilog Assertions (SVAs) for each property, and then runs a commercial model checking tool (Jasper from Cadence) on those SVAs. Each step in the workflow is iterative with multi-shot LLM calls.
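
In outline, the loop Paul describes looks something like the sketch below. The two stubs stand in for an LLM call and a model-checker run; they are hypothetical placeholders to show the control flow, not the paper's framework code.

```python
# Sketch of the plan -> generate -> prove -> refine loop described above.
import random
from dataclasses import dataclass

@dataclass
class CheckResult:
    status: str      # "proven" or "counterexample"
    feedback: str    # tool output fed back to the LLM on failure

def llm_generate(prompt: str) -> str:
    """Placeholder for a multi-shot LLM call (CrewAI/AutoGen/LangGraph in the paper)."""
    return f"<assertion generated from: {prompt[:40]}...>"

def run_model_checker(sva: str) -> CheckResult:
    """Placeholder for a formal-tool run; randomly 'proves' to exercise the loop."""
    if random.random() < 0.5:
        return CheckResult("proven", "")
    return CheckResult("counterexample", "trace: req asserted, gnt never seen")

def agentic_formal_flow(verification_plan, max_iterations=5):
    results = {}
    for prop in verification_plan:                 # plain-English properties
        sva = llm_generate(f"Write a SystemVerilog Assertion for: {prop}")
        outcome = run_model_checker(sva)
        for _ in range(max_iterations):
            if outcome.status == "proven":
                break
            # Reflection step: feed the counterexample back and refine the SVA.
            sva = llm_generate(f"Refine: {sva}\nTool feedback: {outcome.feedback}")
            outcome = run_model_checker(sva)
        results[prop] = outcome.status
    return results

print(agentic_formal_flow(["every request is eventually granted",
                           "the FIFO never overflows"]))
```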

The authors benchmark their agent on a range of simple testcases, typically with around 10 properties generated per testcase. All the testcases are considered golden RTL (bug-free). The authors benchmark code coverage for the properties generated and how many of these pass in formal. Even though the circuits are fairly simple, both proof rates and coverage average below 50%. Also, in real-world situations the RTL is not known to be golden, and one of the biggest challenges for agentic workflows is to determine if a proof failure is due to a bad property or a real bug in the design. That said, one year is a long time in AI, and where last year was a breakthrough year for agentic software coding, our view at Cadence is that this year looks set to be a breakthrough year for agentic RTL design and verification. I’m looking forward to several exciting new papers on the topic at upcoming conferences!

Raúl’s view

Launched in early 2024 by Cognition AI, Devin is an autonomous AI agent designed to handle end-to-end software development tasks. Devin created a strong initial buzz; after the dust settled, it reportedly resolved about 14% of real-world GitHub issues on SWE-bench unassisted (vastly outperforming previous AI models). This month’s blog is about Saarthi, “a similar fully autonomous AI formal verification engineer”. Saarthi uses multi-agent collaboration and tight integration with formal tools, enabling systematic property generation, proof and coverage analysis aligned with industrial verification practice.

Saarthi leverages recent work in multi-agent LLM frameworks, applying workflows such as planning, reflection, tool use, and human-in-the-loop to formal hardware verification. It integrates LLMs, a framework (in today’s LLM literature “the control, coordination, and tooling layer around one or more LLMs”) and a tool (an external capability, formal verification) to build “the first AI formal verification engineer”. Human-in-the-loop intervention remains essential to avoid infinite loops and resolve ambiguous or incorrect properties.

Saarthi is evaluated across a broad spectrum of RTL designs ranging from simple counters and FSMs to more complex structures such as FIFOs, pipelined adders, and an RV32I core, using three different LLMs. The good news is that end-to-end autonomous formal verification is achievable in a meaningful fraction of cases; success rates hover around 40–50%. Coverage metrics are consistently stronger than assertion proof rates, which is not surprising, as the first few assertions tend to cover a disproportionately large fraction of the reachable state space and the number of properties is small (10-20). Model choice matters significantly: GPT-4o emerges as the most robust and consistent, while Llama3-70B underperforms across nearly all metrics. Agentic workflows can mitigate, but not eliminate, underlying model weaknesses. The results support “AI-assisted verification under human supervision” more than “hands-off AI verification”. From a practical perspective, this is still valuable; Saarthi is a credible systems-level demonstration of agentic AI applied to formal verification.

While they don’t fully achieve a completely autonomous engineering system, the authors convincingly demonstrate that modern LLMs, in a carefully designed agentic workflow with human-in-the-loop control, can handle substantial parts of an industrial formal verification flow. At the same time, the results make clear that today’s LLMs remain probabilistic assistants rather than deterministic engines — success rates vary, retries are frequent, and human intervention remains necessary for successful outcomes. The paper is an important datapoint in the progress of AI-assisted EDA. As for the bold claim that “Artificial General Intelligence (AGI) by 2027 is strikingly plausible, and we believe that Saarthi could be pivotal in reaching that milestone within the hardware design verification domain”, only time will tell.

Also Read:

Agentic EDA Panel Review Suggests Promise and Near-Term Guidance

TSMC and Cadence Strengthen Partnership to Enable Next-Generation AI and HPC Silicon

2025 Retrospective. Innovation in Verification


Reimagining Compute in the Age of Dispersed Intelligence
by Jonah McLeod on 02-24-2026 at 10:00 am

At the 2025 RISC-V Summit, amid debates over cloud scaling and AI cost, DeepComputing CEO Yuning Liang offered a radical view: the future of intelligence isn’t in the cloud at all — it’s already in your pocket. His lunchtime conversation began with iPhones and ended with the death of the operating system. In between, he sketched a vision of computing that dissolves centralized infrastructure into personal devices, reshaping everything from hardware to human interaction.

Why Do You Need the Cloud?

Yuning waves his phone and laughs. “People keep saying, ‘We need the cloud for AI.’ Why? You’ve already got the compute right here. The only thing missing is space to store your models.” The cloud is no longer the default; it’s just a backup. In Yuning’s framing, bigger storage tiers aren’t luxury upgrades — they’re AI capacity ratings.

He taps on his iPhone’s storage line: “See? 64 gigabytes isn’t just storage — it’s intelligence. You want Whisper for voice, a 7B LLM for reasoning, a vision model for your camera? Fine. They all live here.” Thanks to aggressive model quantization — including FP4 and FP8 formats — today’s smartphones can host multiple intelligent agents. Storage becomes the limit on your on-device intelligence.
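
The storage arithmetic behind that claim is simple to check. The parameter counts below are rough, commonly cited ballpark figures for illustration rather than measurements of specific model releases.

```python
# Rough on-device sizing; parameter counts and bit-widths are ballpark figures.
def model_gigabytes(params_billions, bits_per_weight):
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

agents = {
    "Whisper-class speech model":  (1.5, 8),   # ~1.5B params at 8-bit
    "7B LLM for reasoning":        (7.0, 4),   # FP4/INT4 quantized
    "Vision model for the camera": (0.6, 8),
}

total = 0.0
for name, (params, bits) in agents.items():
    gb = model_gigabytes(params, bits)
    total += gb
    print(f"{name:28s} ~{gb:4.1f} GB")
print(f"{'Total':28s} ~{total:4.1f} GB of a 64 GB phone")
```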

William Gibson imagined this in 1988. In Mona Lisa Overdrive, Kumiko carried a pocket-sized Maas-Neotek biochip that projected an AI companion named Colin — visible only to her, contextually aware, able to access local systems and provide guidance. Thirty-seven years later, Yuning is describing essentially the same device. Gibson conjured it decades before we had the silicon to build it.

Today, Yuning argues, correctness is perceptual, not mathematical. If your LLM gets one token wrong out of a million, no one notices. He shakes his head. “In the 20th century, if Intel was off by 0.001 in floating point, it was a crisis. Divide-by-zero? Immediate trap, big exception. That was engineering pride.”

“In RISC-V, you don’t even need to take the divide-by-zero exception,” he says. “Just flag it. Keep running. Why waste cycles pretending to be perfect when the output is statistical anyway?” This mindset shift explains the rise of ultra-low-precision formats like FP4. What matters isn’t IEEE-754 perfection — it’s the human experience.

“The user interface is a relic,” Yuning says. “You won’t open apps in ten years. You’ll forget what a mouse was. You’ll just tell your glasses what you want.” He describes a lightweight AI OS — not an operating system in the old sense, but a runtime that manages models, sensors, and context. One orchestrator that knows who you are, what you’re doing, and loads the right model at the right moment. No menus. No icons. Just interaction.

When the topic turns to silicon, Yuning grows animated. “Everyone’s building SoCs like they’re building cities — giant, expensive, unmanageable,” he says. “I just want to build the brain.” To Yuning, the ideal AI processor is scalar, vector, and matrix compute wrapped around fast SRAM — maybe GDDR6. Everything else (I/O, radios, sensors) is the body. Let the OEMs build the hands and eyes. His job is to supply the cortex.

“The chiplet model fits perfectly,” he says. “Each part is an organ. You snap them together.” This modular vision aligns with the Open Chiplet Architecture, which promotes interoperable compute, memory, and I/O units. Yuning never names OCA directly, but the alignment is unmistakable. “Just make sure the interfaces fit.”

“Everyone’s obsessed with 1024-bit vectors and out-of-order cores,” Yuning says, voice rising an octave. “They’re expensive, they’re hot, and they don’t scale. Why do that?”

Instead, he champions deterministic simplicity. “If you can get the same user-perceived performance with half the power and half the cost, you win. Don’t chase benchmark vanity. Build something that feels fast.”

Here, he echoes a rising sentiment within the RISC-V community — seen in startups like Simplex Micro — that favors deterministic scheduling over speculative execution. Elegance, Yuning suggests, is the new performance metric.

How can a startup survive against Nvidia and Apple? “Do everything with ten times less — people, money, time,” he says. “That’s the rule.” Big companies, he argues, have thousands of engineers managing thousands more. Startups survive by constraint and speed. “We can do in a week what they do in a quarter — if we stay hungry.”

And forget the cloud. “Cloud is not for startups,” Yuning says. “It’s complex and slow. Don’t envy Cerebras or Groq — unless you want to raise money and become another big corporate like OpenAI or Anthropic.” He laughs. “Once you have hundreds of people, thousands of people? You lose the speed. You get all the bad odds — wasting money, wasting time, wasting opportunities. Like government.”

The key isn’t to outspend giants. It’s to out-focus them.

Yuning’s closing prediction: “You’ll wear your computer,” he says. “Glasses, earbuds, maybe a pendant. They’ll talk to each other through local models. No screen, no keyboard. The model is the interface.” The core, he insists, will be a small deterministic brain — not a GPU farm.

The Evaporating Cloud

“Fast, private, local. That’s the future. The cloud becomes the teacher; your device becomes the student. Once it learns, the cloud becomes an artifact.” When that happens, Yuning says, computing will finally be personal again.

Also Read:

Two Open RISC-V Projects Chart Divergent Paths to High Performance

The Foundry Model Is Morphing — Again

The AI PC: A New Category Poised to Reignite the PC Market

 


Siemens to Deliver Industry-Leading PCB Test Engineering Solutions
by Daniel Nenni on 02-24-2026 at 8:00 am

Siemens has strengthened its position in EDA and manufacturing by acquiring ASTER Technologies, a specialist in test and reliability solutions for printed circuit boards. The acquisition represents a strategic step in Siemens’ broader vision to deliver a fully integrated, end-to-end digital thread for electronics design, verification, manufacturing, and lifecycle management.

ASTER Technologies is widely recognized for its expertise in design-for-test (DFT), design-for-manufacturing (DFM), and design-for-reliability (DFR) solutions. Its flagship tools enable engineers to validate PCB designs early in the development cycle, reducing costly errors, improving yield, and accelerating time to market. By bringing ASTER’s capabilities into the Siemens portfolio, the company aims to offer customers a more comprehensive and tightly connected approach to PCB test engineering.

The electronics industry is facing increasing complexity driven by higher component densities, advanced packaging, faster signal speeds, and stringent reliability requirements. Traditional siloed workflows, where design, test, and manufacturing considerations are addressed separately, are no longer sufficient. Siemens’ acquisition of ASTER directly addresses this challenge by embedding test intelligence earlier in the design phase, helping engineers identify potential faults before a board is ever manufactured.

From a technology perspective, ASTER’s solutions complement Siemens’ existing EDA offerings, particularly in PCB design, simulation, and manufacturing preparation. ASTER’s test coverage analysis, boundary-scan expertise, and fault modeling capabilities add a critical layer of validation that enhances Siemens’ digital twin strategy for electronics. When combined with Siemens’ simulation, automation, and data analytics strengths, customers gain deeper insight into how their PCB designs will behave in real-world manufacturing and operational environments.

For customers, the benefits are tangible. Integrating ASTER’s test engineering tools into Siemens’ ecosystem enables earlier detection of design issues, improved test coverage, and optimized test strategies. This leads to lower development costs, fewer design re-spins, and higher product quality. Manufacturers also benefit from smoother transitions from design to production, as test requirements are aligned with manufacturing realities from the outset.

The acquisition also reinforces Siemens’ commitment to supporting industries where reliability and quality are mission-critical, such as automotive, aerospace, industrial automation, medical devices, and telecommunications. In these sectors, PCB failures can have serious safety, financial, and reputational consequences. By strengthening PCB test engineering within its portfolio, Siemens helps customers meet stringent regulatory standards while maintaining innovation speed.

From a strategic standpoint, the move reflects a broader trend in the EDA market toward consolidation and platform-based solutions. Customers increasingly prefer integrated toolchains over fragmented point solutions, seeking consistency, data continuity, and scalability across the product lifecycle. Siemens’ acquisition of ASTER positions the company to meet these expectations, offering a unified environment where design, verification, test, and manufacturing data flow seamlessly.

For ASTER Technologies, becoming part of Siemens provides access to a global customer base, expanded R&D resources, and the ability to scale its innovations more rapidly. The combination allows ASTER’s specialized knowledge to be applied across a wider range of industries and use cases, amplifying its impact within the electronics ecosystem.

Bottom Line: Siemens’ acquisition of ASTER Technologies is more than a portfolio expansion; it’s a strategic investment in the future of PCB test engineering. By combining ASTER’s deep test expertise with Siemens’ comprehensive digital industry capabilities, the company is well positioned to deliver industry-leading solutions that improve quality, reduce risk, and accelerate innovation across the electronics value chain.

Also Read:

Siemens and NVIDIA Expand Partnership to Build the Industrial AI Operating System

Automotive Digital Twins Out of The Box and Real Time with PAVE360

Addressing Silent Data Corruption (SDC) with In-System Embedded Deterministic Testing


Agentic EDA Panel Review Suggests Promise and Near-Term Guidance
by Bernard Murphy on 02-24-2026 at 6:00 am

NetApp recently hosted a webinar on Agentic AI as the future for EDA and implications for infrastructure. Good list of panelists including Mahesh Turaga (VP Cadence Cloud) with an intro preso on infrastructure and agentic AI at Cadence, then our own Dan Nenni (Mr. SemiWiki) moderating, Khaled Heloue (Fellow AMD, CAD/Methodology/AI), Rob Knoth (Sr Group Director Strategy and New Ventures, Cadence) and Janhavi Giri (NetApp Industry Lead, formerly at Intel). Excellent panel with of course views to the agentic future but also grounded guidance on getting into and progressing through adoption of AI and agentic methods. Since this is a long webinar, I won’t dwell on vision, rather offer my takeaways on near-term observations.

Infrastructure implications

You’d have to be living under the proverbial rock not to be aware that hyperscalers and hopeful hyperscalers are investing huge amounts – hundreds of billions of dollars – in building mega-datacenters. What you might not know (but shouldn’t be surprising) is that 60% of that investment is going into technology development – our industry. What an opportunity for the systems and semiconductor ecosystem!

Mahesh highlighted some of the infrastructure challenges in these datacenters. Racks that once ran at 10-15kW are now climbing to 100-120kW and by 2027 we may see 1-2MW per rack. Already direct-to-chip liquid cooling is unavoidable, and at higher power levels we will have to switch to immersion cooling. Further, AI-centric dataflow within and between racks now demands microseconds of latency where previously we were OK with milliseconds.

Fast changes in infrastructure with big implications, especially support for design and operations (e.g. scheduling old hardware replacement with new hardware). GPU design cycles run 12-18 months, and hardware must be amortized over 5-6 years, so refresh updates must be carefully planned. Cadence with their Reality Digital Twin works very closely with NVIDIA in helping enterprises design and maintain their datacenters against these and other (thermal, cooling, etc) objectives.

NetApp also plays an important role in infrastructure through its management of storage and cloud operations. A large enterprise will have design data scattered around the world: US, Europe, India, Asia. And they also want to take full advantage of flexibility in compute/AI options: on-prem, cloud and hybrid configurations. Especially in agentic systems, learning from patterns in distributed data could imply a lot of complexity and unacceptable performance overhead.

Managing complexity and performance effectively will depend in part on agentic architecture, also on sufficient agentic-aware support from storage and cloud infrastructure. NetApp provides this through an end-to-end data pipeline to find needed data across hybrid multi-clouds, ensure it is current by updating as sources change, provide data governance and security throughout the data lifecycle, and provide support for data transformation as needed by AI apps. All MCP capable (standard for agent communication) and integrated with container orchestration platforms such as Kubernetes.

Panel discussion takeaways

Dan kicked off with a great question: what are the most time-consuming and repetitive tasks that agentic AI could help automate? I like this because it goes to the heart of silly mass media fear mongering while shining a light on the real benefit to engineers (no, engineers won’t be replaced by AI, yes, they will get more time to focus on high-value tasks).

The daily routine of today’s engineer is consumed by low productivity friction tasks: learning how to run unfamiliar flows (especially for junior engineers), scripting through extension languages like Tcl, Skill, Python, figuring out what to do next when something crashes or they aren’t meeting a PPA goal, assembling reports on current progress for the next design review. Necessary but pedestrian work consuming significant time that could be better spent in creatively moving a design goal forward. Task-centric agents together with RAG-based lookup can minimize friction and help junior engineers spin up more quickly. Agentic methods can take this further by automating pedestrian tasks.

We’re still in the early stages of that journey, though in some applications further advanced than others. Driving EDA tools through natural language could be a big advance (I am sure that the next generation of engineers looking back at how we direct tools today will be stunned by the primitive scripting methods we now use). Agentic methods for repetitive analyses would be another obvious win: run analyses for a set of cases together with sweeps across certain parameters and boil down the results, returning the top 3 options worth investigating more closely. Methods that are easy to trust because you are effectively using agentic as a skilled intern capable of learning under your guidance. You can still monitor, check their work, but you don’t have to do the grunt work yourself.
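
That "skilled intern" pattern is essentially sweep, summarize, and rank. The sketch below shows the shape of such a task; the analysis function and its made-up power/slack model are invented stand-ins for launching real tool runs.

```python
# Sketch of an agentic "sweep and boil down" task; run_analysis() is a stand-in
# for launching a real implementation/timing run with the given parameters.
import itertools, random

def run_analysis(clock_period_ns, vt_mix, utilization):
    """Stand-in for a tool run; returns invented (power_mw, slack_ps) numbers."""
    random.seed(f"{clock_period_ns}-{vt_mix}-{utilization}")   # repeatable noise
    power = 100 * utilization + (30 if vt_mix == "lvt-heavy" else 10) + random.uniform(-5, 5)
    slack = (clock_period_ns - 0.9) * 1000 - (80 if vt_mix == "hvt-heavy" else 20)
    return power, slack

sweep = itertools.product([0.8, 0.9, 1.0],                     # clock period (ns)
                          ["hvt-heavy", "balanced", "lvt-heavy"],
                          [0.6, 0.7, 0.8])                     # target utilization

passing = []
for period, vt, util in sweep:
    power, slack = run_analysis(period, vt, util)
    if slack >= 0:                                             # keep only passing runs
        passing.append((power, slack, period, vt, util))

# Boil the sweep down to the three lowest-power configurations that meet timing.
for power, slack, period, vt, util in sorted(passing)[:3]:
    print(f"{period} ns, {vt}, util {util}: {power:.1f} mW, slack {slack:.0f} ps")
```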

Some of this is already happening, in verification, in floorplanning, in implementation optimization and multiphysics analysis. Still essentially productivity optimization around specific analyses but you could imagine going further. Khaled suggested aiming for fully automating design implementation on simple tiles.

What about risk management? There was consensus that new objectives must be phased in carefully with human supervision. I believe investment here will need major focus on methods to build trust – confidence scoring and “show your work” reports for example – also metrics to monitor how effectively apps are improving productivity and/or QoR. We don’t want to replace pedestrian work in regular flows with pedestrian work in wrestling agentic systems.

Another good question probed how best to train agentic systems. For me this question touches on multiple topics: the architecture of an agentic system in semiconductor/systems design, who should own which pieces, considering the need to protect company secrets but also who has the most expertise to train certain agents. Does the nature of these systems promote a trend to an ecosystem of agents and agentic suppliers? For now the discussion suggests fairly localized agents augmenting existing solutions, complemented by RAGs as support/lookup mechanisms to answer “how do I…” questions. Not a bad starting point, though maybe we can do a little better by embedding some of that RAG data inside agents.

Lots of food for thought. You can watch the webinar HERE.

Also Read:

Cloud-Accelerated EDA Development

WEBINAR: Is Agentic AI the Future of EDA?

Agentic AI and the EDA Revolution: Why Data Mobility, Security, and Availability Matter More Than Ever


Hardware is the Center of the Universe (Again)
by Lauro Rizzatti on 02-23-2026 at 10:00 am

The 40-Year Evolution of Hardware-Assisted Verification — From In-Circuit Emulation to AI-Era Full-Stack Validation

For more than a decade, Hardware-Assisted Verification (HAV) platforms have been the centerpiece of the verification toolbox. Today, no serious semiconductor program reaches tapeout without emulation or FPGA prototyping playing a central role. HAV has become so deeply embedded in the development flow that it is easy to assume it has always been this way.

But it wasn’t.

With a bit of slack around the exact starting point, this year marks roughly the 40th anniversary of hardware-assisted verification, broadly inclusive of both hardware emulation and FPGA prototyping. Over four decades, HAV has evolved from a niche approach into the indispensable backbone of modern silicon development. Its history is also, in many ways, a mirror of the semiconductor industry itself: as chips grew more complex, as software took over, and as AI began reshaping everything, HAV repeatedly reinvented its purpose.

Long before HAV became a recognized category, there were precursors. IBM, for example, experimented with hardware acceleration through systems such as the Yorktown Simulation Engine and later the Engineering Verification Engine. These machines were essentially simulation accelerators—special-purpose computers designed to run hardware models faster than traditional software simulators. They represented an important step forward, but they remained fundamentally tied to the simulation paradigm. They improved speed, yet not nearly enough to apply real-world scenarios to a design-under-test.

HAV platforms belong to a different class of engines. Early emulators relied on reprogrammable hardware, typically arrays of FPGAs, configured to mimic the behavior of a design-under-test (DUT). For the first time, engineers could interact with a representation of the chip before tapeout at execution speeds approaching real life. Companies such as PiE Design Systems, Quickturn, IKOS, and Zycad pioneered this new verification approach, laying the foundation for what would become a defining pillar of semiconductor development.

The evolution of HAV can be understood across three broad eras: the early years, when hardware complexity first forced emulation into existence; the middle period, when software became dominant and HAV moved into virtual environments; and the maturity era, where AI workloads have driven hardware prominently to the center of architectural innovation.

The Early Years: Hardware Complexity Forces the Rise of Emulation

In the early 1980s, semiconductor designs were defined almost entirely by hardware. Embedded software, if present at all, played a minor role. The industry was driven by pioneers in processors and graphics chips, pushing toward what seemed then an astonishing milestone: one million gates. Verification relied overwhelmingly on gate-level simulation, which served as the universal standard of the time.

As designs grew larger, however, simulators encountered an unavoidable performance wall. Limited host memory forced designs to be swapped to disk, simulation runtimes ballooned, test vector volumes exploded, and the sheer computational load to achieve acceptable fault coverage became unmanageable. Full-system validation before tape-out was becoming increasingly impractical, if not outright impossible. The industry needed something faster, something closer to silicon.

Hardware-assisted verification emerged as a response to this crisis. Early HAV platforms were deployed primarily in In-Circuit Emulation (ICE) mode. In ICE configurations, the emulator was physically cabled into a live target system, allowing engineers to exercise the DUT in a realistic environment with real peripherals. This was revolutionary. Designers could validate chips with real workloads, achieving a level of realism that synthetic simulation vectors simply could not match. For the first time, verification began to resemble how the chip would behave in the field.

The promise was enormous, but the reality was often painful. Early emulators were complex to set up, finicky to operate, and prone to frequent failures. Long setup times regularly pushed verification readiness past management deadlines, and reliability issues rooted in cabling and hardware dependencies led to frequent downtime. Mean-time-between-failure (MTBF) could be measured in hours rather than weeks or months, forcing verification teams to spend more effort debugging the emulator itself than debugging the DUT. Despite these frustrations, the trajectory was clear: verification could no longer remain purely software-based.

The Middle Ages: Software Eats the World and HAV Becomes the Verification Backbone

The design landscape changed profoundly in the decades that followed, as functionality began migrating steadily from hardware into software. This shift was famously captured in Marc Andreessen’s 2011 observation that “software is eating the world.” His prediction proved remarkably accurate across nearly every industry touched by computing. SoCs became software-defined platforms, where intelligence increasingly lived in firmware, operating systems, drivers, and application stacks. Hardware was no longer the entire product; it was the substrate on which the software ran.

This transition transformed verification. Static test patterns were no longer capable of exercising the full complexity of modern designs. Instead, engineers turned to software-driven stimulus, using high-level testbenches to validate behavior across broad functional domains. Hardware verification languages and more abstract methodologies emerged to support this new reality.

HAV engines, once used mainly for real-time ICE validation, adapted by becoming the execution engines behind these software-centric environments. The industry formalized the interaction between software testbenches and hardware-mapped DUTs through transaction-based verification, standardized through Accellera's SCE-MI (Standard Co-Emulation Modeling Interface). Rather than toggling signals cycle-by-cycle, engineers could interact with the DUT through high-level transactions, dramatically improving performance and productivity.
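
To make the contrast concrete, here is a minimal, purely conceptual Python sketch of the idea, not SCE-MI or any vendor API; the EmulatedDut class and write_transaction helper are invented names. The testbench reasons in bus-write transactions, and a simple transactor expands each one into the per-cycle pin activity an emulator-side model would actually drive.

    # Conceptual sketch only: a hypothetical transactor that turns one
    # high-level "write" transaction into per-cycle pin assignments.
    # Names (EmulatedDut, drive_cycle, write_transaction) are illustrative,
    # not any real SCE-MI or vendor API.

    class EmulatedDut:
        """Stand-in for a DUT mapped into an emulator; records pin activity."""
        def __init__(self):
            self.trace = []

        def drive_cycle(self, **pins):
            # One entry per emulated clock cycle.
            self.trace.append(dict(pins))


    def write_transaction(dut, addr, data):
        """Transaction-level view: the testbench asks for one bus write,
        and the transactor expands it into cycle-level activity."""
        dut.drive_cycle(valid=1, addr=addr, wdata=data, wstrb=0xF)
        dut.drive_cycle(valid=0, addr=0, wdata=0, wstrb=0)


    dut = EmulatedDut()
    # The testbench thinks in transactions, not clock edges:
    for offset in range(4):
        write_transaction(dut, addr=0x1000 + 4 * offset, data=offset)

    print(f"{len(dut.trace)} emulated cycles generated from 4 transactions")

The point of the sketch is the division of labor: the testbench stays at the transaction level, while the cycle-accurate detail is generated close to the emulated design, which is what makes the approach so much faster than signal-level co-simulation.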

This shift also removed many of the practical limitations of earlier generations. Virtualized environments reduced reliance on physical connections and eliminated the need for speed adapters required in ICE-based setups. Verification IP replaced hardware interfaces, enabling scalable, fully digital verification ecosystems.

As the industry embraced “shift-left,” HAV platforms emerged as its most powerful enabler. They allowed engineers to bring up software much earlier in the design process, starting with bare-metal initialization and extending through drivers and operating systems. Verification was no longer limited to functional correctness in isolation; it now encompassed the behavior of the complete hardware/software system, long before silicon existed.

By this stage, HAV had become much more than a way to make verification faster. It was increasingly the bridge between hardware and software development, allowing teams to work in parallel rather than in sequence, and establishing hardware-assisted verification as the backbone of modern system validation.

The AI Age: AI Restores Hardware Centrality and HAV Goes Full-Stack

From the mid-2010s onward, the industry entered a new era, driven by the explosive rise of artificial intelligence. The narrative evolved beyond software eating the world into something more radical: software consuming the hardware. In the AI age, software does not simply run on hardware; rather, it increasingly dictates what hardware must become. The demands of modern AI models are so extreme that they are reshaping processor architectures from the ground up.

Generative AI exposed the limits of traditional, general-purpose architectures. The enormous data movement requirements and computational intensity of modern models overwhelmed CPUs and strained even highly optimized systems. To keep pace, the industry embraced specialized architectures, including GPUs, FPGAs, and purpose-built AI accelerators designed around massive parallelism and tensor computation.

These developments dramatically increased the scale and complexity of SoC designs. AI-era chips routinely contain billions of gates, heterogeneous compute clusters, and sophisticated networks-on-chip (NoCs) to move data efficiently between processing elements. See Figure 1.

Figure 1: High-level block diagram of an AI processor.

In this context, HAV has taken on a fundamentally expanded role.

Verification is no longer confined to functional correctness in sub-billion-gate designs; it must now scale to multi-billion-gate systems and address far more than the logic alone, or even the software stack. Today’s HAV platforms are increasingly used to evaluate power and thermal behavior, analyze performance, validate safety and security requirements, capture full system-level interactions, and run realistic workloads that reflect how the design will operate in the real world.

At the same time, ICE-style operation matured into a critical engineering capability rather than a legacy mode. This is especially true in at-speed FPGA-based prototyping, where real physical interfaces must be validated at speed before committing a design to manufacturing. By allowing the design-under-test to interact with actual PHY hardware early in the cycle, ICE enables teams to uncover integration, timing, and signal integrity issues that purely virtual environments cannot expose. In doing so, it strengthens both hardware and software confidence long before tape-out.

Just as importantly, AI hardware cannot be separated from its software ecosystem. Compilers, runtimes, libraries, kernels, and deployment frameworks are no longer afterthoughts; they are inseparable from whether the hardware succeeds or fails. HAV platforms are therefore used to run real AI workloads well before silicon exists, ensuring that hardware and software evolve together rather than sequentially. The feedback loop between architecture and execution is increasingly closed before tape-out, not after first silicon.

In this sense, verification in the AI age has become truly full-stack. HAV is no longer just a tool for validation; it is now the environment where hardware/software co-design converges, enabling what might be called a new paradigm: software-driven tape-out.

Conclusion: HAV as the Engine of Hardware–Software Convergence

After four decades of evolution, the semiconductor industry has come full circle. Hardware complexity initially drove the rise of emulation as a practical necessity. The software era that followed expanded HAV’s role, connecting it to virtual environments, richer software stacks, and broader system-level workflows. Today, the AI revolution is once again reshaping the landscape, restoring hardware to the center of innovation and demanding unprecedented levels of specialization, efficiency, and scale.

What has changed most fundamentally, however, is the meaning of that centrality. Hardware is no longer designed first and programmed later. The defining trend of this decade is software-defined design, where architecture is shaped as much by compilers, runtimes, and workloads as by transistors, interconnects, and logic structures. The boundary between hardware and software has blurred into a single, tightly coupled engineering problem.

HAV platforms sit precisely at this intersection. They can no longer be viewed as tools for checking correctness in isolation. Instead, they have become essential environments for validating architectural intent—where hardware meets real software, where performance assumptions are tested under realistic workloads, and where system-level trade-offs are exposed while meaningful design changes are still possible. In the era of software-driven tape-out, HAV is the mechanism that closes the loop before silicon exists.

Hardware is once again the center of the universe, not because software has become less important, but because software has become so important that it now defines the hardware itself.

The metrics of success have shifted accordingly. Choosing an HAV platform is no longer simply a matter of how many gates can be verified for functional correctness. It is now about whether the full spectrum of software-driven use cases can be executed, analyzed, and optimized before tape-out. In this new era, hardware is not just the foundation beneath software innovation; it has become one of its primary engines.

Also Read:

Smarter IC Layout Parasitic Analysis

Accelerating Static ESD Simulation for Full-Chip and Multi-Die Designs with Synopsys PathFinder-SC

2026 Outlook with Abhijeet Chakraborty VP, R&D Engineering at Synopsys



Smarter ECOs: Inside Easy-Logic’s ASIC Optimization Engine

Smarter ECOs: Inside Easy-Logic’s ASIC Optimization Engine
by Daniel Nenni on 02-23-2026 at 8:00 am

Easy Logic EDA

Easy-Logic Technology Ltd. is a specialized Electronic Design Automation (EDA) company focused on solving one of the most complex and time-sensitive challenges in semiconductor design: functional Engineering Change Orders (ECOs). Founded in 2014 and headquartered in Hong Kong, the company has built its reputation around advanced logic optimization algorithms that help ASIC and SoC design teams implement late-stage design changes quickly and safely.

In modern chip development, errors or specification changes often surface late in the design cycle, sometimes after synthesis, place-and-route, or even physical layout. At that stage, making changes manually can be extremely risky and costly. A single modification may ripple through large portions of the logic, potentially affecting timing, testability, or power consumption. Traditionally, engineering teams would rely on manual patching or partial redesign, which can extend schedules and increase the chance of introducing new bugs.

Easy-Logic’s core focus is automating this ECO process. Its flagship solution, commonly referred to as EasylogicECO, provides functional ECO automation that generates optimized logic patches directly from specification changes or RTL updates. Instead of requiring a full re-synthesis or broad structural modifications, the tool computes minimal patch logic that satisfies the new functionality while preserving as much of the existing netlist as possible. This significantly reduces disruption to timing closure and layout integrity.
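
As a toy illustration of this idea only, and not Easy-Logic's actual algorithm, the Python sketch below searches a handful of candidate patch functions at one internal node and keeps the first one that makes the patched design match the new specification for every input pattern. The signal names and candidates are invented, and real ECO tools operate on gate-level netlists with formal engines rather than truth-table enumeration.

    from itertools import product

    # Toy illustration of the functional-ECO idea: the existing logic computes
    # old_design(), the updated spec is new_spec(), and we look for a small
    # patch function at one candidate internal node that restores equivalence.
    # All names are invented; this is a conceptual sketch, not a tool flow.

    def old_design(a, b, c, patch=None):
        n1 = a & b                    # candidate rewiring point
        if patch is not None:
            n1 = patch(a, b, c)       # the ECO patch overrides this node
        return n1 | c                 # downstream logic stays untouched

    def new_spec(a, b, c):
        return (a ^ b) | c            # the late specification change

    def equivalent(f, g):
        # Exhaustive check over all input patterns (only practical for toys).
        return all(f(a, b, c) == g(a, b, c) for a, b, c in product((0, 1), repeat=3))

    # Check a few candidate patch functions and keep the first one that makes
    # the patched design match the new spec everywhere.
    candidates = [
        ("a ^ b",  lambda a, b, c: a ^ b),
        ("a | b",  lambda a, b, c: a | b),
        ("a & ~b", lambda a, b, c: a & (1 - b)),
    ]
    for name, patch in candidates:
        patched = lambda a, b, c, p=patch: old_design(a, b, c, patch=p)
        if equivalent(patched, new_spec):
            print(f"minimal patch at node n1: {name}")
            break

The sketch captures the essential trade: only the patched node changes, the rest of the netlist, and therefore most of the timing and placement work already invested in it, is left alone.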

One of the key strengths of Easy-Logic’s approach lies in its algorithmic foundation. The company was founded by researchers and engineers with strong academic backgrounds in logic synthesis, formal methods, and verification. By combining formal equivalence techniques with optimization strategies, the tool can identify the smallest set of logic changes needed to meet new design requirements. This minimizes area overhead and maintains design stability, critical factors in advanced process nodes where margins are tight.

Another important area of capability is post-layout ECO support. In late tape-out stages, designers often prefer “metal-only” ECOs, where changes are implemented by modifying metal layers without altering lower layers of the silicon. This approach reduces manufacturing costs and avoids restarting expensive mask processes. Easy-Logic’s solutions are designed to support these constraints, enabling functional updates while preserving physical design structures. This makes the tool particularly valuable for high-volume ASIC programs with strict deadlines.

Scan chain repair and DFT preservation are also integrated into the ECO flow. Functional modifications can disrupt scan connectivity, which is essential for manufacturing test coverage. Easy-Logic addresses this by automatically repairing or maintaining scan chains after logic patches are applied. This ensures that testability remains intact without requiring separate manual correction steps.
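
A purely conceptual sketch of the re-stitching idea follows; the instance names are invented, and production flows also weigh physical placement, multiple chains, lockup latches, and test-mode constraints. It only shows the core bookkeeping: after an ECO removes or adds flops, the scan-in/scan-out links are rebuilt so the chain remains unbroken.

    # Illustrative sketch of scan-chain re-stitching after a functional ECO.
    # Instance and pin names are invented; real flows consider placement,
    # multiple chains, and test-mode constraints in addition to connectivity.

    def restitch_scan_chain(original_order, removed, added):
        """Return a new scan order and the SI/SO connections to rewire."""
        chain = [flop for flop in original_order if flop not in removed]
        chain += list(added)                        # naive: append new flops at the end
        connections = [(f"{src}/SO", f"{dst}/SI")   # each flop feeds the next one's scan-in
                       for src, dst in zip(chain, chain[1:])]
        return chain, connections

    order, links = restitch_scan_chain(
        original_order=["u_ff0", "u_ff1", "u_ff2", "u_ff3"],
        removed={"u_ff2"},
        added=["u_ff_patch"],
    )
    print(order)   # ['u_ff0', 'u_ff1', 'u_ff3', 'u_ff_patch']
    print(links)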

Within the broader EDA industry, Easy-Logic occupies a focused niche. The global EDA market is dominated by large vendors offering end-to-end tool suites covering synthesis, verification, simulation, and physical implementation. Rather than competing directly across all categories, Easy-Logic concentrates on the functional ECO segment. This specialization allows it to deliver deep technical solutions in an area that is critical but often underserved by broader platforms.

The company’s tools are used by semiconductor firms developing complex ASICs for consumer electronics, networking, artificial intelligence accelerators, and industrial applications. As chip complexity increases, with billions of transistors integrated at advanced nodes, the likelihood of late-stage design changes grows. ECO automation therefore becomes increasingly important for meeting aggressive time-to-market targets.

Beyond time savings, Easy-Logic’s solutions contribute to risk reduction. Late design changes are inherently dangerous because they can unintentionally impact previously verified functionality. Automated formal verification embedded within the ECO flow helps ensure that only the intended modifications are introduced. This reduces the probability of silicon re-spins, which can cost millions of dollars and months of delay.

Bottom line: Easy-Logic Technology Ltd. represents a highly specialized player within the semiconductor software ecosystem. By focusing on functional ECO automation, it addresses a bottleneck that directly affects schedule, cost, and design stability. As semiconductor projects continue to grow in complexity and competitive pressure increases, the ability to implement safe, minimal, and efficient late-stage logic changes will remain a critical advantage, and that is precisely the domain in which Easy-Logic has built its expertise.

Contact EASY Logic