
Podcast EP327: Third Quarter 2025 Electronic Design Market Data Report Overview and More with Dr. Walden Rhines
by Daniel Nenni on 01-16-2026 at 10:00 am

Daniel is joined by Wally Rhines, CEO of Silvaco, to discuss the Electronic Design Market Data report that was just released. Wally is the industry coordinator for the EDA data collection program called EDMD. SEMI and the Electronic System Design Alliance collect data from almost all of the electronic design automation companies in the world and compile it by product category and by the region where the sales occurred. It is the most reliable data available for the EDA industry and provides insight into which design tools and IP are in highest demand around the world.

Dan explores the results of the current report in detail with Wally, who explains that it documents another good quarter for EDA, with 8.8% overall growth compared to last year. Total revenue was $5.6B for the quarter, putting EDA solidly above a $20B annual run rate. Dan and Wally explore the details of the report and discuss worldwide trends in EDA, IP, and services across various regions. Some of the insights are surprising. Worldwide EDA employment is also discussed; it grew 17.3% compared to last year, to approximately 73,000 employees.

Dan also discusses Wally’s recent decision to join Silvaco as CEO. Wally offers some excellent insights into what drove that decision and what the future looks like.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Verification Futures with Bronco AI Agents for DV Debug
by Daniel Nenni on 01-16-2026 at 6:00 am


Verification has become the dominant bottleneck in modern chip design. As much as 70% of the overall design cycle is now spent on verification, a figure driven upward by increasing design complexity, compressed schedules, and a chronic shortage of design verification (DV) engineering bandwidth. Modern chips generate thousands of tests per night, producing massive volumes of logs and waveforms. Within this flood of data, engineers must find the rare, chip-killing bug hidden among hundreds of failures. Verification today is fundamentally a large-scale data analysis problem, repeated daily under intense time pressure.

Traditional approaches struggle to scale with this reality. Human engineers are exceptionally strong at deep, creative reasoning about a single complex failure, but they cannot efficiently process thousands of datasets simultaneously. Classical machine learning techniques, while powerful in narrow contexts, face severe limitations in DV. They often fail to generalize across architectures such as CPUs, GPUs, NoCs, or memory subsystems. Training data is difficult to collect due to IP sensitivity, labeling requires expert engineers, and constant design evolution creates distribution shifts between chip versions. These constraints limit the long-term impact of conventional ML in verification.

Bronco AI agents for DV represent a step change. Instead of relying on narrow models trained for specific tasks, agent-based systems leverage large reasoning models combined with tool use, memory, and decision-making loops. These agents generalize more effectively because they are trained on internet-scale code and problem-solving data rather than proprietary design specifics. They can be steered through natural language, allowing DV engineers to guide investigations intuitively. Crucially, agents learn from metadata and patterns rather than memorizing raw data, reducing overfitting and mitigating IP and security concerns by selectively handling and discarding context.

In DV workflows, Bronco AI agents operate much like a highly scalable junior-to-senior engineer hybrid. When a simulation fails, the agent autonomously decides how to investigate, executes standard DV actions such as log parsing and waveform inspection, and iterates until it identifies a likely root cause. If the issue exceeds its confidence threshold, the agent escalates with a well-formed ticket for a human engineer. This approach allows routine debug work to be handled automatically while preserving human expertise for the hardest problems.
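To make that loop concrete, here is a minimal Python sketch of the triage pattern described above: investigate, accumulate evidence, and either report a likely root cause or escalate. Everything here (the Failure record, parse_log, inspect_waveform, the confidence arithmetic) is a hypothetical stand-in for illustration, not Bronco's implementation.

```python
import random
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.8  # escalate to a human below this
MAX_STEPS = 10              # bound the investigation loop

@dataclass
class Failure:
    test: str
    signatures: list  # error signatures pulled from the failing log

def parse_log(failure):
    """Toy stand-in for log parsing: surface one error signature."""
    return {"signature": random.choice(failure.signatures)}

def inspect_waveform(failure, signature):
    """Toy stand-in for waveform inspection around a signature."""
    return {"cause": f"{signature} traced to upstream driver",
            "strength": random.uniform(0.2, 0.5)}

def triage(failure):
    confidence, cause = 0.0, "unknown"
    for _ in range(MAX_STEPS):
        clue = parse_log(failure)                          # standard DV action
        finding = inspect_waveform(failure, clue["signature"])  # another action
        confidence += finding["strength"]                  # accumulate evidence
        cause = finding["cause"]
        if confidence >= CONFIDENCE_THRESHOLD:
            return {"status": "root-caused", "cause": cause}
    # Still below threshold after the budget: escalate with a ticket.
    return {"status": "escalated",
            "ticket": {"test": failure.test, "best_guess": cause}}

print(triage(Failure("uvm_axi_burst_test", ["rd_data_mismatch", "resp_timeout"])))
```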

The impact of this agentic approach is measurable. In real subsystem-level UVM test failures on next-generation ASICs, Bronco AI agents were able to index new regressions within minutes, adapt to unfamiliar error signatures, and build an understanding of designs containing hundreds of thousands of lines of RTL.

In one case, an agent analyzed over 100,000 lines of logs and approximately 20 GB of waveform data to identify a deeply nested root cause in less than 10 minutes, work that a DV lead estimated would have taken hours, or days for a less experienced engineer.

AI agents also fundamentally change how waveform debug is performed. Traditional waveform analysis forces engineers to scroll through laggy GUIs, manually correlating thousands of signals across time windows and following one hypothesis at a time. Agents, by contrast, can examine many signals, hierarchies, and failure modes simultaneously. They can correlate errors across CPU, memory controllers, fabrics, and accelerators, classify failures, and recognize recurring patterns across regressions.

Perhaps most importantly, these systems improve over time. By learning from past failures, tickets, and human feedback, AI agents build reusable debug playbooks, discover efficient shortcuts, and develop generalized intuition—such as recognizing which issue types tend to appear in certain subsystems. This continuous learning enables faster time-to-value without custom AI training and allows seamless integration into existing EDA flows.

Bottom line: AI agents deliver value in verification not by replacing human insight, but by amplifying it through scale, speed, and learning. As verification complexity continues to grow, agentic AI offers a practical path to closing the verification gap.

Contact Bronco for a Demo

Also Read:

Superhuman AI for Design Verification, Delivered at Scale

AI RTL Generation versus AI RTL Verification

Scaling Debug Wisdom with Bronco AI


AI Bubble?
by Bill Jewell on 01-15-2026 at 12:00 pm


The currently strong semiconductor market is being driven by AI applications. A McKinsey survey showed 88% of businesses used AI in 2025, compared to just 55% in 2023. According to Inc.com, 306 of the S&P 500 companies mentioned AI in their third quarter 2025 earnings conference calls, up from only 53 companies three years earlier in third quarter 2022.

The WSTS December 2025 forecast called for 22% semiconductor market growth in 2025 and 26% in 2026, following 20% growth in 2024. This growth has been driven by the memory and logic categories. Memory is forecast to grow 28% in 2025 and 39% in 2026. Logic is predicted to grow 37% in 2025 and 32% in 2026. Excluding the memory and logic categories, the remainder of the semiconductor market declined 3% in 2024 and is projected to grow only 6% in 2025. The memory companies have all cited AI as their major growth driver in the last two years. Nvidia, the largest AI semiconductor company, grew its revenue 114% in 2024 and is guiding for 63% growth in 2025. Most Nvidia AI semiconductors are included in the logic category.

How long will the AI boom last? What will happen to the semiconductor market? We can look at previous bubbles in the semiconductor market for clues.

PC Bubble

The introduction of the IBM PC in 1981 led to a boom in the PC market. With IBM setting a standard for PC hardware and software, businesses felt safe investing in PCs. PC unit shipments grew 85% in 1983 and 29% in 1984. The PC boom led to a boom in the semiconductor market, especially for memory and Intel microprocessors. However, the PC boom came to an abrupt halt in 1985, as PC unit shipments fell 11%. The weakness in 1985 was due to several factors. Clones of IBM PCs from Compaq, Dell and HP disrupted the market. U.S. GDP growth slowed from 7% in 1984 to 4% in 1985.

The PC bust in 1985 led to a 17% decline in the semiconductor market. Memory fell 38% and Intel revenues dropped 16%. The decline was short-lived. In 1986 PC unit shipments grew 22% and semiconductors grew 23%. Memory and Intel revenues returned to strong growth in 1987.

Internet Bubble

Internet use began to explode in the 1990s as major businesses established links and the World Wide Web created standards and enabled browsing. The number of Internet users roughly doubled each year in 1995 and 1996. Users grew around 50% a year from 1997 through 2000. In 2001, growth in Internet users slowed to 21%. The rapid growth of the Internet led to the creation of numerous dot-com companies fueled by venture capital. By early 2000, rising interest rates and the lack of profitability of most dot-com startups led to a slump in investment. The NASDAQ-100 index, which was heavily weighted with dot-coms, dropped 78% from March 2000 to October 2002.

The collapse of many dot-com companies left telecommunications companies with overcapacity in Internet infrastructure. Cisco, the largest provider of Internet infrastructure hardware, saw revenue change from 50%-plus growth in 1999 and 2000 to a 23% decline in 2001. In 2001, the semiconductor market dropped 32% with memory down 49%. The semiconductor and memory markets returned to double-digit growth in 2003.

AI Bubble?

The chart below shows the change in the semiconductor market during the PC bubble (blue line), the Internet bubble (red line) and the current AI period (green line). In the PC and Internet bubbles, the semiconductor market had two years of strong growth in the 19% to 46% range followed by major declines when the bubbles burst. In the current cycle, semiconductor growth was 20% in 2024 and is projected at 22% in 2025 and 26% in 2026.

The question is not if the AI bubble will burst, but when. Basically, all major new technologies go through a period of strong growth in the first few years. Many new companies emerge to try to take advantage of the new technology, largely driven by investments from venture capital funds. Eventually, the growth of the new technology slows or declines. Investment funds then begin to dry up. The revenues of hardware companies enabling the new technology fall, leading to semiconductor market declines. History suggests a possible bursting of the AI bubble in the next year or two.

The bubbles are not the end of the new technologies, but an adjustment. Certainly, PCs and the Internet are major economic drivers which have transformed the way of life for both business and consumers. AI also promises a major transformation. How smoothly the transformation is implemented remains to be seen.

Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry – manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Bill Jewell
Semiconductor Intelligence, LLC
billjewell@sc-iq.com

Also Read:

Semiconductors Up Over 20% in 2025

U.S. Electronics Production Growing

Semiconductor Equipment Spending Healthy


There is more to prototyping than just FPGA: See how S2C accelerates SoC Bring-Up with a high-productivity toolchain
by Daniel Nenni on 01-15-2026 at 10:00 am


System-on-Chip designs continue to grow in scale and interface diversity, placing greater demands on prototype capacity, interconnect planning, and bring-up efficiency. These challenges arise not only in large multi-FPGA programs but also in smaller designs implemented on a single device or a small FPGA cluster. In all cases, teams must build a representative verification environment, manage logic operating at different rates, and isolate functional issues with minimal iteration time.

S2C’s FPGA solution addresses these needs through a structured prototyping ecosystem that combines automation software, implementation flows, system IP, and hardware expansion options to support efficient and predictable SoC bring-up across a wide range of design scales.

Building a Scalable System-Level Prototyping Methodology

A predictable and efficient bring-up process depends on tight coordination between software automation and hardware infrastructure. S2C’s PlayerPro™ CT software supports both automatic and guided partitioning, including interconnect planning for designs that span multiple FPGAs. Timing-driven and congestion-aware algorithms help improve partition quality and stability. For designs that do not require partitioning, PlayerPro CT also enhances gated-clock conversion and memory mapping, improving overall implementation robustness.

The RTL Compile Flow (RCF) further streamlines implementation by reducing memory footprint, improving iteration turnaround, and maintaining RTL-level visibility for downstream debug. These capabilities are valuable not only for large multi-FPGA designs, but also for projects that ultimately fit into a single FPGA yet still require controlled timing convergence and manageable compile cycles during early architectural exploration.

Clock-domain and rate matching are common requirements when integrating subsystems with different clock frequencies or operating characteristics. In practical SoC bring-up, many IP blocks—such as memory controllers, external interfaces, or third-party subsystems—are often unable to operate at their final target frequencies during early prototyping stages.

S2C addresses this challenge by providing Memory Models and Speed Adapters that decouple functional validation from frequency constraints. These mechanisms allow subsystems to run at reduced or independent rates while preserving correct transaction ordering, protocol behavior, and system-level interactions.
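A minimal sketch of the rate-matching idea, assuming nothing about S2C's actual implementation: a FIFO bridges a fast producer domain and a slower consumer domain, and back-pressure stalls the producer when the FIFO fills, so transactions arrive later but never out of order.

```python
from collections import deque

def simulate(producer_period=1, consumer_period=3, depth=4, n_txn=8):
    """Fast producer -> FIFO -> slow consumer. Back-pressure stalls the
    producer when the FIFO is full; transaction order is always preserved."""
    fifo, sent, received, cycle = deque(), 0, [], 0
    while len(received) < n_txn:
        # Producer side: issue on its own clock unless the FIFO is full.
        if cycle % producer_period == 0 and sent < n_txn and len(fifo) < depth:
            fifo.append(sent)
            sent += 1
        # Consumer side: pop on its slower clock.
        if cycle % consumer_period == 0 and fifo:
            received.append(fifo.popleft())
        cycle += 1
    assert received == list(range(n_txn))   # ordering preserved
    print(f"{n_txn} transactions delivered in order over {cycle} cycles")

simulate()
```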

A representative system environment also depends on access to the appropriate peripheral interfaces without extensive custom hardware development. S2C offers a broad portfolio of daughter cards covering high-speed connectivity, memory, storage, display, and general-purpose interfaces. PCIe EP/RC, Mini-SAS, USB PHY, and SFP+/QSFP+ modules support high-bandwidth links; DDR4, LPDDR4, eMMC, and Flash modules enable memory subsystem evaluation; HDMI, DisplayPort, and MIPI D-PHY daughter cards support video and imaging use cases. GPIO headers, JTAG modules, and SerDes extensions enable signal probing and low-speed peripheral access. Together, these hardware options help teams reproduce system-level conditions that closely reflect the target deployment environment.

System-Level Debug Visibility

Debug is a critical part of prototype validation, and S2C provides mechanisms that deliver visibility at multiple levels of the system.

At the I/O level, engineers can validate basic functionality using push buttons, DIP switches, GPIOs, and UART interfaces. PlayerPro also enables virtual access to these controls, supporting remote operation and simplifying early functional checks.

For bus-level visibility, S2C offers ProtoBridge, which uses a PCIe connection to provide high-throughput transaction access suitable for software-driven stimulus generation and data movement. NTBus provides an alternative lower-bandwidth access path over embedded Ethernet.

Signal-level visibility is supported through probe insertion and waveform capture. MDM Pro enables concurrent capture of up to 16K signals across as many as eight FPGAs, with deep trace storage and support for both IP-mode and compile-time configurations—often without requiring a full recompile.

Conclusion

With a structured prototyping ecosystem and a comprehensive debug infrastructure, S2C’s Prodigy prototyping solution provides a stable foundation for building, scaling, and validating FPGA-based prototypes. Whether used for single-FPGA bring-up or large multi-board configurations, S2C enables teams to create representative verification environments, balance subsystem operation, and efficiently isolate functional issues throughout the SoC development cycle.

Contact S2C

Also Read:

S2C, MachineWare, and Andes Introduce RISC-V Co-Emulation Solution to Accelerate Chip Development

FPGA Prototyping in Practice: Addressing Peripheral Connectivity Challenges

S2C Advances RISC-V Ecosystem, Accelerating Innovation at 2025 Summit China


Last Call: Why Your Real‑World Lessons Belong in DAC 2026’s Engineering Track
by Admin on 01-15-2026 at 6:00 am


By Frank Schirrmeister, Synopsys
Disclaimer: This article is written in my role as Engineering Track Chair for DAC 2026

If you’ve ever walked out of DAC with a handful of practical ideas you could put to work as soon as you got back to the office, you already know the value of the Engineering Track. It’s where practitioners talk to practitioners – front‑end to back‑end, IP to systems & software – about what actually shipped, what almost derailed, and what really worked.

With DAC 2026 heading to Long Beach this year (July 26–29), the submission window gives you one more week to get your story into the program: the Engineering Track Call for Contributions closes on January 20, 2026.

DAC’s Engineering Track = The Workbench of the Conference

Think of the Engineering Track as the place where the industry checks claims against data. The four pillars are well‑known:

Front‑End Design, Back‑End Design, IP, and Systems & Software.

The bar for acceptance is pragmatic impact reviewed by your industry peers: integration experience, deployment lessons, measurable outcomes. This is curated by a large Technical Program Committee – we are about 150 reviewers strong this year – precisely to keep sessions practical, concise, and dense with takeaways.

For 2026, DAC is doubling down on that ethos. The official call emphasizes practical insights and real‑world experiences across the design ecosystem, and you have until January 20th to submit your experiences.

What’s Hot – and Why Your Experience Matters Now

First, Agentic AI is moving into mainstream workflows. At DAC62, the buzz wasn’t just “AI in EDA,” it was agentic AI – multi‑agent systems reasoning over flows, data, and constraints. The Microsoft Monday keynote spotlighted reasoning agents, and live demonstrations showed agents collaborating autonomously on parts of a chip design flow.

The consensus: agents can tame complexity, but humans remain firmly in the loop for sign‑off and safety. If you’ve piloted LLMs or agents in verification, synthesis exploration, debug, or collateral generation, the community wants the gritty details—wins, misses, cost curves, and guardrails.

Zooming in, last year’s Engineering Track special session “AI‑enabled EDA” assembled startup and industry voices (ChipStack→Cadence, Silimate, Rise, ChipAgents, VerifAI, Bronco AI, VerifAIX).

The through‑line? Move beyond script helpers to agent‑driven reasoning across the silicon lifecycle. If you’ve tried that jump (even partially), your evidence—coverage movement, regression stability, debug time, data hygiene—will land well for the DAC 2026 Engineering Track.

Second, Chiplets and multi‑die aren’t hypothetical anymore. From the Chiplet Pavilion to tool vendor panels, DAC62 made clear: heterogeneous packaging, 3D‑IC, and chiplets are scaling fast – and the bottlenecks are integration discipline and automation.

The industry’s discussions around “SoC Cockpit” automation, dedicated chiplet content, and panel takeaways all point to flows that must span system spec to signoff with tighter feedback loops.

What has your team learned about interposer modeling, package‑aware verification, DFT for multi‑die, or HBM timing closure under realistic workloads? Bring the receipts.

Third, Software‑defined systems reshape verification and bring‑up. Automotive and aerospace teams at DAC62 described accelerating pre‑silicon software bring‑up using emulation, virtual models, and scenario‑based validation – because lifetime validation and OTA‑driven features demand it.

If you’ve connected emulation/prototyping to CI/CD, instrumented performance/power at scale, or fused virtual platforms with lab rigs, those “how we wired it” details are gold for peers as part of the Engineering Track in 2026.

Finally, the AI imperative touches the whole stack. Arm’s SKYTalk at DAC62 framed AI as a systems problem: technology leadership, a complete systems approach, and a robust ecosystem. That systems lens maps perfectly to the Engineering Track: cross‑discipline integration stories beat single‑point heroics every time.

If you navigated cross‑org collaboration (IP vendors, foundry, cloud, toolchains) to make an AI workload viable, that’s exactly the content the Engineering Track is built to surface.

What a Strong Engineering Track Submission Looks Like

SemiWiki readers gravitate to specifics: numbers, tooling, and lessons learned. So do DAC’s reviewers. Across the four categories, consider framing your six-page abstract submission using this structure:

  • Problem & context. One page: node, die count, workload class, volume/latency/latency‑jitter, safety/security constraints. If applicable, cite DAC alignment – AI agents, chiplets, SW‑defined systems.
  • Approach & toolchain. Call out the stack – simulation + emulation, prototyping, virtual platforms, physical signoff, package/thermal analysis, LLM/agent frameworks, data backplane. Reviewers will assess architectural clarity.
  • Evidence & metrics. Be precise: % timing slack improvement post‑package‑aware optimization; coverage delta via agent‑generated tests; regression throughput speedups (x‑fold) after moving to hardware‑assisted verification; power/perf shifts under production workloads. The various DAC62 recaps made clear: people are quantifying their results.
  • Pitfalls & playbook. What failed first? Data cleanliness, model fidelity, agent drift, spec/versioning? Turn your scar tissue into a 3–5 step “do this first” list.

Remember, DAC 2026 is Chips to Systems end‑to‑end.

The conference is explicitly inviting contributions that span disciplines, not just point optimizations.

Special Sessions: Curate the Cross‑Cutting Conversations

Beyond standard talks and posters, Engineering Track Special Sessions can stitch together themes like Agentic AI across the flow, From executable spec to multi‑die signoff, or Software‑defined validation at scale. The 2025 panel on AI‑enabled EDA startups drew strong interest because it packed diverse, hands‑on perspectives. If you can convene 3–5 voices (user + tool + ecosystem), you’ll help frame where practice is heading in 2026.

Final Nudge: Your Story Will Help Someone Ship

The best DAC Engineering Track talks I’ve seen aren’t victory laps; they’re honest accounts where a team wrestled with modern complexity – agents in the loop, multi‑die realities, software‑defined validation – and found a pattern worth sharing.

DAC62 proved the appetite is huge: agentic AI is advancing quickly, chiplets are mainstreaming, and the community is aligning on systems‑level thinking.

Now it’s your turn to add to that collective playbook.

Submit by January 20 and bring your hard‑won lessons to the people who will put them to work. Start here for the Engineering Track Submissions, and here for the Engineering Track Special Sessions.

See you in Long Beach!


Also Read:

CES 2026 and all things Cycling

Podcast EP326: How PhotonDelta is Advancing the Use of Photonic Chip Technology with Jorn Smeets

Webinar: Why AI-Assisted Security Verification For Chip Design is So Important


2026 Outlook with Randy Caplan of Silicon Creations
by Daniel Nenni on 01-14-2026 at 10:00 am


Randy Caplan is co-founder and CEO of Silicon Creations, and a lifelong technology enthusiast. For almost two decades, he has helped grow Silicon Creations into a leading mixed-signal semiconductor IP company with 500+ customers spanning almost every major market segment. He has driven the development of key technologies for over 700 unique SerDes and PLL IP products across all mainstream manufacturing nodes, from 350nm down to 2nm. He is well known for his promotion of an organic growth model for business and has helped guide Silicon Creations to an average annual growth rate of 25% for the past decade, with nearly 100% employee retention. Prior to Silicon Creations he was a design engineer for PLLs and SerDes at Agilent Technologies, Virtual Silicon, and MOSAID.

Tell us a little bit about yourself and your company.

Silicon Creations is a mixed-signal IP provider specializing in low-risk, high-performance clocking solutions (PLLs), high-speed data interfaces (SerDes), and accurate temperature and voltage sensors. We offer designs in every foundry (TSMC, Samsung, GF, Intel, Rapidus, UMC, SMIC, and more…) and every process node, from below 2nm up to 180nm. We were founded in 2006, are ISO 9001 certified, and after 19 years still have close to 100% employee retention.

What was the most exciting high point of 2025 for your company?

Passing 14 million TSMC wafers shipped using our IP. Also, we passed our 1,000th production license of our Fractional SoC PLL IP.

We developed, taped out, and tested a portfolio of 2nm TSMC IP including multiple PLLs, free-running oscillators, low-noise IOs, and temperature sensor IP. We also developed select 2nm PLLs with multiple other foundries including Samsung, Intel, and Rapidus.

What was the biggest challenge your company faced in 2025?

Simultaneously developing IP in the leading process nodes across multiple foundries (TSMC, Samsung, Intel Foundry, Rapidus). This required new design techniques to support core-device-only requirements in GAA (gate-all-around) nanosheet/nanowire processes.

How is your company’s work addressing this biggest challenge?

Chip development costs in advanced nodes have gone up exponentially. We provide a substantial portfolio of low-risk, proven, foundational IP which helps enable our customers to get their designs right the first time. Our advanced design and verification flow enables fast iteration and re-verification in processes where PDKs are frequently changing. This ensures we’re not the bottleneck in our customers’ development schedule.

What do you think the biggest growth area for 2026 will be, and why?

We’re seeing growth in many parallel market segments including AI accelerator chips, consumer electronics (AR/VR/mobile), and automotive. Customer tapeouts in advanced nodes (5nm and below) are picking up. We’re even seeing active development in sub-2nm nodes. Due to long chip development schedules and fab times in advanced nodes, our IP sales tend to be a strong leading indicator of chip sales one to two years in the future. We don’t have first-hand knowledge of the end market forces, but we can infer from our IP sales which semiconductor market segments are seeing increased investment now.

How is your company’s work addressing this growth?

We use an advanced IP design flow, leveraging the latest EDA flows from Synopsys, Cadence, Siemens, Silvaco, and others. This helps to reduce IP development time, improve simulation-to-silicon correlation, and ensure our customers have the foundational blocks they need to build their high-performance chips.

What conferences did you attend in 2025 and how was the traffic?

All TSMC shows, ICCAD (China), DAC, and many other foundry and EDA events (Samsung, GF, Intel, Siemens U2U, CadenceLive). Traffic was especially strong at TSMC and ICCAD, and we had a good turn-out at our booth at the other events as well.

Will you attend conferences in 2026? Same or more?

Silicon Creations attended over 30 trade shows / conferences in 2025 and expects to attend a similar number in 2026.

How do customers engage with your company?

We already work with 18 of the top 20 TSMC customers, and over 550 companies overall. For new inquiries, please send an email to sales@siliconcr.com, or come to our booth at any trade show.

Also Read:

Silicon Creations Company Update 2025

Silicon Creations at the 2025 Design Automation Conference #62DAC

Silicon Creations Presents Architectures and IP for SoC Clocking


Where is Quantum Error Correction Headed Next?
by Bernard Murphy on 01-14-2026 at 6:00 am


I have written earlier in this series that quantum error correction (QEC), a concept parallel to ECC in classical computing, is a gating factor for production quantum computing (QC). Errors in QC accumulate much faster than in classical systems, requiring QEC methods that can fix errors fast enough to permit production applications. I have read of leaders in the field using FPGAs or GPUs to support QEC, which to me sounded intriguing but also difficult to scale for several reasons. Qubits can’t exist in the classical regime, seeming to imply that they must be collapsed prior to transfer, which would destroy carefully constructed superpositions and entanglement. Communication to an external chip (and back again after calculation) must travel through bulky and constrained channels with significant latencies, particularly in superconducting QC. On the return trip into the QC, the corrected qubit would need to be reconstructed. Would all this added latency undermine performance expectations for a production algorithm?

What’s really happening in QEC today

It took quite a bit of digging, including an excursion into ChatGPT (surprisingly helpful in response to a very technical question), to figure out what is really happening: a more refined partitioning of what gets communicated off-chip than I had understood, and an acknowledgement that FPGA and GPU methods are widely used but temporary expedients, allowing QC builders to research and refine QEC techniques but not expected to survive as part of production fault-tolerant systems.

First, the primary qubits on chip aren’t measured. The additional qubits used for QEC can be measured, and that data can be communicated to an off-chip device. That device figures out what corrections must be made per qubit and communicates them back to the QC, where quantum circuitry takes over again to apply the corrections through a mechanism that does not break coherence.
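As a toy illustration of that partitioning, consider the textbook three-qubit bit-flip repetition code, sketched below in purely classical Python: only the two parity (syndrome) measurements cross into the classical domain, the decoder picks a per-qubit correction from a lookup table, and the data qubits themselves are never read. This is a minimal teaching example, not any vendor's decoder.

```python
import random

# Syndrome lookup for the 3-qubit bit-flip repetition code:
# s01 compares qubits 0 and 1, s12 compares qubits 1 and 2.
DECODE = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def measure_syndromes(data):
    """Ancilla measurements: parity checks only; data qubits untouched."""
    return (data[0] ^ data[1], data[1] ^ data[2])

def correction_cycle(data, p_flip=0.2):
    # Noise: each (classically simulated) qubit may bit-flip.
    noisy = [q ^ (random.random() < p_flip) for q in data]
    # Only the two syndrome bits leave the "quantum" side...
    syndrome = measure_syndromes(noisy)
    # ...the classical co-processor decides the per-qubit correction...
    target = DECODE[syndrome]
    # ...and the correction is applied back on the quantum side.
    if target is not None:
        noisy[target] ^= 1
    return noisy

logical_zero = [0, 0, 0]
ok = sum(correction_cycle(logical_zero) == logical_zero for _ in range(10_000))
# Any single flip is corrected; only multi-flip events (rarer) survive.
print(f"logical state preserved in {ok / 10_000:.1%} of cycles")
```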

Second, latencies in this path can be significant. High noise rates require frequent correction, but correction frequency is limited by those latencies. Equally, this limits the number of qubits that can be managed, since syndrome data for every managed qubit must travel out to the coprocessor and back within each correction cycle. This is why, useful though FPGAs and GPUs are as QEC co-processors today, they are not seen as long-term solutions for production QEC algorithms.

From prototypes to production QEC support

All prototyping systems eventually evolve to ASICs unless prototype performance is adequate and volumes are not expected to be high. Since QC vendors aim for high performance and (eventually) high qubit counts, they too are planning ASICs for QEC. But these must sit very close to the QC core to minimize communication overhead, which puts them in or very close to the deep cooling cavity for superconducting QCs. IBM plans a QEC core built with cryogenic CMOS, I’m guessing on the bottom layer in their 3D-stack architecture: qubits on the top layer, resonators on the middle layer, and QEC on the layer under that.

This is a very nice technology advance. I don’t know where other QC vendors and technologies are in this race, but I have to believe IBM, already a dominant player in the QC market, is aiming to further widen its lead.

Usual caveat that this is a fast-evolving market and promises aren’t yet proven deliverables. Still, keep a close eye on IBM!

Also Read:

2026 Outlook with Nilesh Kamdar of Keysight EDA

Verifying RISC-V Platforms for Space

2026 Outlook with Paul Neil of Mach42


2026 Outlook with Nilesh Kamdar of Keysight EDA
by Daniel Nenni on 01-13-2026 at 10:00 am


Tell us a little bit about yourself and your company.

I’m Nilesh Kamdar, General Manager of the Keysight EDA business unit. Keysight is an S&P 500 company that provides design, emulation, and test solutions to help engineers develop and deploy faster with less risk. On the EDA side, we focus on RFMW, high-speed digital, systems, power and photonic design challenges. These are the problems that keep semiconductor and system designers up at night: multi-physics simulation, signal integrity, power consumption, and making sure complex designs work.

What was the most exciting high point of 2025 for your company?

Acquiring Synopsys’ Optical Solutions Group and Ansys’ PowerArtist were high points. Rather than just buying market share, these additions bring innovative optical design capabilities (CODE V, LightTools, RSoft) and leading RTL power analysis (PowerArtist) to our portfolio, addressing the multi-physics imperative. We’re bringing in decades of expertise in photonics, optics, and power analysis, enabling Keysight to deliver multi-domain system design in an open, vendor-agnostic ecosystem. As thermal and power constraints tighten, these capabilities are imperative.

What was the biggest challenge your company faced in 2025?

Helping our customers on their AI journey and ensuring that the technology is carefully and securely deployed. This is often at odds with some in the C-suite who view AI as the answer to every problem and assume that substantial cost savings are inevitable.

How is your company’s work addressing this challenge?

To help our customers, we’re building AI features to solve problems in design workflows – accelerating verification, prioritizing corner-case testing, and reducing manual iterations. Part of this is educating on where AI delivers value and where human expertise remains essential. At Keysight, we believe AI is about augmentation.

What do you think the biggest growth area for 2026 will be, and why?

Multi-physics simulation. As designs push against thermal and power limits, particularly in data centers, the ability to simultaneously analyze electrical, thermal, and mechanical properties has become essential. Every milliwatt matters when data centers consume billions of watts of energy. Tools that optimize across domains will be critical for next-generation designs.

How is your company’s work addressing this growth?

We’re continuing to develop our multi-physics capabilities to improve co-design and co-verification. This includes advancing photonics integration, enhancing thermal analysis capabilities, and ensuring that designers can understand trade-offs across electrical performance, thermal management, and manufacturing constraints within workflows.

Are you incorporating AI into your products?

Keysight has been at the forefront of integrating AI. From an EDA perspective, we’re focused on integrating capabilities that augment productivity in verification and design workflows. Our goal is to harness AI to help engineers work faster while utilizing their unique expertise to design complex semiconductors.

Is AI affecting the way you develop your products?

We’re using AI to accelerate our own development cycles and improve product quality. More importantly, we’re deploying AI judiciously to solve real problems our customers face, such as workflow bottlenecks.

What conferences did you attend in 2025 and how was the traffic?

We attended various industry events spanning DesignCon, DAC (Design Automation Conference), IMS (International Microwave Symposium), DVCon India, ECOC, and European Microwave Week. Traffic was robust across all of these. Conversations focused on detailed implementation discussions around AI-enhanced tools, multi-physics solutions, and chiplet design, where the technical challenges are most acute.

Will you participate in conferences in 2026? Same or more than 2025?

Events remain an important part of our strategy to connect and I don’t see that changing anytime soon. As design complexity increases, face-to-face technical discussions are invaluable!

How do customers normally engage with your company?

There are multiple ways we engage with customers, including direct sales, technical support teams, and field application engineers who work closely with design teams. We also connect through training programs, webinars, and technical content. Personally, I spend a lot of time on the road and engage regularly with customers to understand their perspective. Keysight is solving specific design challenges with our customers, not just selling licenses.

Also Read:

From Silos to Systems, From Data to Insight: Keysight’s Upcoming Webinar on EDA Data Transformation

An Insight into Building Quantum Computers

Podcast EP317: A Broad Overview of Design Data Management with Keysight’s Pedro Pires

Video EP11: Meeting the Challenges of Superconducting Quantum System Design with Mohamed Hassan


Verifying RISC-V Platforms for Space
by Bernard Murphy on 01-13-2026 at 6:00 am


Space applications are booming, prompted by rapidly declining launch costs now attainable through commercial competition. Thanks to ventures like SpaceX, the cost to put a satellite into low earth orbit (LEO) has dropped from $20k/kg to $2k/kg today and is expected to drop further to $200/kg or lower. Plummeting costs drive new opportunities including widely accessible SATCOM (satellite communication), offering phone and broadband support through large constellations of satellites from Starlink and Amazon Leo (previously Amazon Kuiper). China is also on this bandwagon, building multiple constellations of their own. Standardization through the 3GPP (cellular) consortium is accelerating and will enable competitive interoperability, already partially supported in 5G-Advanced and expected to be fully supported in 6G. This isn’t just for emergency calls but also for regular calls direct to your smartphone. Standards-based phone communication, broadband and IoT support, across 85%+ of the earth’s surface where there are no terrestrial base stations? This is an exciting time for space-ready systems.

Equally space-based services will continue to grow in importance for defense, weather and climate monitoring, disaster support, and many other applications. Across all these deployments, expect to see growth in similar directions to those we see in terrestrial applications: AI, high performance servers, security, and much more. A new space-based economy is blossoming and will demand electronics able to function reliably for many years in this harsh environment.

Space-ready is not just about rad-hard

Radiation, both solar and deep-space, makes space a particularly hostile environment for electronics. Protection against electron cascades triggered by high-energy cosmic rays demands special radiation-hardened (rad-hard) processes, logic redundancy, and ECC: all the tricks we now see in high-safety automotive systems, but much more demanding in space, which lacks the shielding our atmosphere provides.

Rad-hard is important but alone it is not enough. Beyond remotely triggered options, electronics in space can’t be serviced cost-effectively. If something stops working and a reset or reboot won’t fix the problem, the satellite is dead. This puts much higher emphasis on bullet-proof verification while the system is still in design. Good enough for a phone, even for a car, isn’t good enough for a satellite.

Comprehensive verification against a spec

Complex specifications present some unique challenges in this respect. According to Dave Kelf (CEO, Breker) the RISC-V International ISA spec runs to around 1400 pages, all carefully considered and agreed. Now you must verify behavior not just for what you added to the ISA but also across interactions with other features, driven by as many positive and corner case use-cases as you can construct.

It is not difficult to generate comprehensive unit test cases for rad-hardening features such as ECC or redundancy. Testing against other spec features one at a time, say cache coherence management, may not be too bad either, but it is difficult to do comprehensively. Where testing becomes hard is in cross-verifying all relevant use cases against each other. Real systems run many objectives simultaneously, stalling as needed to deal with traffic contention in bus fabrics. Stalls, latency, and congestion are where difficult bugs lurk: satellite-dooming bugs. Getting to high confidence here requires system-level test definition, with the ability to run multiple use cases simultaneously.

Breker’s approach to testing starts with system-level test models abstracted from implementation details, allowing you to easily combine VIPs defined at the same level. Breker themselves offer several VIPs, including RISC-V-related tests, together with tests checking coherency compliance, security and other areas. You can easily add your own models following the PSS standard or using standard C++, allowing for randomization especially around test models for your ISA extensions. Running these models together, generating interwoven and high demand traffic loads, will probe every dark corner of your system behavior.
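As a rough sketch of that idea, and assuming nothing about Breker's actual PSS tooling, the Python below composes two independent toy scenarios as generators and randomly interleaves their transactions, so that separate use cases contend for the same fabric in a single run.

```python
import random

def coherency_scenario(n):
    """Toy use case: alternating writes/reads to a shared cache line."""
    for i in range(n):
        yield ("coherency", "WR" if i % 2 == 0 else "RD", 0x1000)

def dma_scenario(n):
    """Toy use case: a streaming burst into a buffer region."""
    for i in range(n):
        yield ("dma", "WR", 0x8000 + 64 * i)

def interleave(scenarios, seed=42):
    """Randomly interleave live scenarios into one traffic stream, so
    independent use cases contend for the fabric simultaneously."""
    rng = random.Random(seed)
    live = list(scenarios)
    while live:
        gen = rng.choice(live)
        try:
            yield next(gen)
        except StopIteration:
            live.remove(gen)   # this scenario has finished

for txn in interleave([coherency_scenario(4), dma_scenario(4)]):
    print(txn)
```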

What if you missed spec corners when building your test models?

The purpose of verification is to test compliance of your implementation with the spec. As a description of what compliance should mean, the RISC-V spec may be one of the more debated and refined specifications in our industry. But there is still a fallible, human step in mapping that document to a complete implementation of intent as represented in a test specification.

As linear documents, even the best and most reviewed of specs are an awkward way to capture a web of interconnected feature dependencies. This is not an academic concern. Dave shared a real challenge in interpreting RISC-V requirements around fence instructions. Quick reminder: these are instructions inserted in assembly code to control accesses to shared memory between multiple cores, which must be correctly ordered to avoid races. Standard coherence protocols can handle many but not all such cases. Special cases are typically application specific, such as a shared memory location used as a semaphore. Core A updates the semaphore, indicating it is safe for core B to perform some other action. If core B reads the semaphore before core A’s update is visible, it may incorrectly assume it is OK to proceed. Adding fence instructions enforces the required ordering between the writes and the reads.
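A minimal model of why this matters, with no claim to full RISC-V semantics: in the sketch below, core A's two stores may drain to shared memory out of program order unless a fence pins the payload write before the flag write, and core B observes memory between the two drains.

```python
import random

def run_once(use_fence, rng):
    mem = {"data": 0, "flag": 0}
    stores = [("data", 42), ("flag", 1)]   # core A's program order
    if not use_fence:
        rng.shuffle(stores)                # weak ordering may swap the drains
    # Core B reads right after the first store becomes globally visible.
    addr, val = stores[0]
    mem[addr] = val
    saw_stale = (mem["flag"] == 1 and mem["data"] != 42)
    addr, val = stores[1]                  # the second store drains later
    mem[addr] = val
    return saw_stale

def race_rate(use_fence, trials=10_000):
    rng = random.Random(0)
    return sum(run_once(use_fence, rng) for _ in range(trials)) / trials

print(f"stale reads without fence: {race_rate(False):.1%}")  # about half the runs
print(f"stale reads with fence:    {race_rate(True):.1%}")   # never
```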

Coherency bugs can be very challenging to catch, sometimes only appearing after billions of cycles in production. Not the kind of problem you want to see in a deployed satellite. The Breker guys spotted a RISC-V spec challenge that could lead to such a bug (leading one now loyal Breker customer to a redesign). The spec is completely accurate, detailing fence behaviors early in the document. But in a later unrelated part of the document, there is a mention of a fence behavior which is also important to understand, yet is easy to miss if your reading is limited to the earlier section.

Specs are living documents and it is probably unrealistic to expect that this kind of problem will never appear again. A safer approach would be to use AI to tease out such traps and, more generally, the web of relationships through a spec. I have talked earlier about Breker’s approach to AI, based on NLP rather than LLMs. Dave tells me this is still in development, but they have already applied it to detect these distributed fence references in the spec, which it has done very successfully. To me this looks like an essential second step to closing the understanding loop on specs: first make sure the spec is oracle-worthy (not a problem for RISC-V), second make sure that you understand all relationship webs throughout the spec when translating it into test models.

Very interesting. You can learn more in this press release on the Breker and FrontGrade Gaisler collaboration.

Also Read:

A Principled AI Path to Spec-Driven Verification

RISC-V Virtualization and the Complexity of MMUs

How Breker is Helping to Solve the RISC-V Certification Problem


2026 Outlook with Paul Neil of Mach42
by Daniel Nenni on 01-12-2026 at 10:00 am

Paul's headshot

Tell us a little bit about yourself and your company

I’m Paul, Chief Operating Officer at Mach42. As COO, I am responsible for the business growth of Mach42, as well as driving customer success. My previous roles included VP of Product at Axelera AI, Graphcore and XMOS. I hold a PhD in Electrical Engineering and an MBA in Technology Management.

Mach42 is delivering a modern solution to accelerate analog and mixed-signal verification, leveraging advanced machine learning and AI to simplify, automate, and speed up complex verification tasks. Our proprietary neural network technology enables the creation of high-accuracy surrogate models from minimal data, dramatically reducing development and computational costs. These models can be automatically exported in Verilog-A, System Verilog, and C/C++ formats, enabling seamless integration with industry-standard simulators.
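As a generic illustration of the surrogate-model idea, and emphatically not Mach42's proprietary technology, the sketch below fits a small neural network to samples of a made-up nonlinear "simulation" response and then queries it at unseen operating points; slow_simulation and the parameter ranges are purely hypothetical stand-ins for an expensive SPICE run.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def slow_simulation(vdd, temp):
    """Hypothetical stand-in for an expensive SPICE run: some nonlinear
    response (e.g., a gain figure) vs. supply voltage and temperature."""
    return np.sin(3 * vdd) * np.exp(-((temp - 27) / 60) ** 2)

# A modest training set, as if gathered from a few hundred simulator runs.
X = rng.uniform([0.6, -40.0], [1.2, 125.0], size=(300, 2))
y = slow_simulation(X[:, 0], X[:, 1])

# Scale inputs, then fit a small MLP surrogate.
surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0),
).fit(X, y)

# Query the surrogate at unseen corners: near-instant vs. a full simulation.
corners = np.array([[0.72, 85.0], [1.08, -40.0]])
for (vdd, temp), pred in zip(corners, surrogate.predict(corners)):
    truth = slow_simulation(vdd, temp)
    print(f"vdd={vdd:.2f} temp={temp:6.1f}  surrogate={pred:+.3f}  sim={truth:+.3f}")
```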

What was the most exciting high point of 2025 for your company?

One of the standout high points in 2025 was successfully unveiling a breakthrough AI-powered solution for analog circuit analysis. Our Discovery Platform was enhanced to dramatically improve validation of design performance across varying spec conditions, supporting near-real-time analysis to rapidly detect out-of-spec violations.

Another major highlight was being named a finalist in four prestigious awards this year. This includes the Design Tool and Development Software Product of the Year at the Elektra Awards, the Innovation Award at the OXBA Awards and both AI Innovation of the Year and Innovative Tech Company of the Year at the Thames Valley Tech and Innovation Awards.

What was the biggest challenge your company faced in 2025?

Our main challenge was striking the right balance between long-term product innovation and near-term customer deliverables. We’ve addressed this by embedding domain-specific analog intelligence into our neural network technology, enabling near–real-time verification while remaining fully compatible with existing EDA workflows.

As a result, Mach42 can automatically generate accurate surrogate models in Verilog-A format that run on standard SPICE-class simulators, significantly reducing verification time without compromising accuracy.

How is your company’s work addressing this challenge?

Mach42 addresses this challenge by combining physics-aware neural network models with deep analog design expertise to deliver immediate, production-ready value. Our platform integrates directly into existing EDA workflows, allowing engineers to achieve near–real-time verification speeds without changing how they design or verify circuits.

By automatically generating accurate surrogate models in Verilog-A, Mach42 dramatically reduces simulation time while preserving SPICE-level fidelity, enabling faster design iteration and earlier identification of corner-case issues.

Additionally, our advisory board and dedicated technical team help shape a credible long-term roadmap. Industry recognition and programs such as Cadence’s Connections Program further reinforce trust in our technology.

What do you think the biggest growth area for 2026 will be, and why?

Looking to 2026, AI-driven verification and simulation in semiconductor design is poised for significant growth. As chips become increasingly complex and time-to-market pressures intensify, engineers will demand solutions that accelerate verification while maintaining the highest levels of accuracy. Tools that combine speed, scalability, and predictive insights will become essential to meeting these challenges.

How is your company’s work addressing this growth?

Mach42’s Discovery Platform directly addresses this growth by leveraging machine-learning–driven emulation to rapidly predict design outcomes and accelerate design space exploration, identifying out-of-spec conditions early. It also streamlines IP reuse by validating performance across varied specifications and integrates seamlessly with existing simulators and flows, making adoption straightforward for design teams. These capabilities position Mach42 as a key enabler for next-generation semiconductor design.

Are you incorporating AI into your products? / Is AI affecting the way you develop your products?

Absolutely. AI is at the heart of our platform, powering proprietary algorithms that accelerate traditionally slow simulation and verification tasks while preserving accuracy, delivering orders-of-magnitude speedups. It’s not just a feature—AI shapes both the product and how we develop it. Our models learn from past simulation data, adapt to complex analog design challenges, and continuously improve through real-world feedback, enhancing both performance and reliability.

How do customers normally engage with your company?

Customers typically start with a focused engagement to identify their key areas of interest. After this initial phase, they move to an annual subscription, which provides full access to the Mach42 Discovery Platform, R&D support, and ongoing technical assistance.

Additional comments?

Mach42’s rapid progress in 2025—from major product milestones to industry recognition—highlights the rising demand for AI-first solutions in complex engineering. Positioned at the intersection of AI, simulation, and semiconductor design, we are uniquely equipped to shape the future of chip development.

Contact Mach42

Also Read:

Video EP12: How Mach42 is Changing Analog Verification with Antun Domic

Video EP10: An Overview of Mach42’s AI Platform with Brett Larder

An Important Advance in Analog Verification

CEO Interview: Bijan Kiani of Mach42