
Quantum Computers: Are We There Yet?

by Bernard Murphy on 01-06-2026 at 6:00 am


R&D for any fundamentally new technology takes time, especially for hardware; over 10 years passed from the first transistor to the first (very small) integrated circuit. The engineering behind quantum computers is arguably even more challenging than for electronic circuits, at least from today’s perspective. We shouldn’t be surprised that even now there are more challenges to overcome. As usual, to keep this a quick read I will look at just two: the T-gate, essential to demonstrating quantum advantage over classical computing, and quantum error correction (QEC) with fault-tolerant architectures.

T-gates

Introductions to quantum programming start with a simple set of gates called the Clifford group: H, S, and C-NOT gates. This group is important in many aspects of quantum programming and has the added advantage that algorithms built from it can be simulated classically about as efficiently as they run on quantum hardware. However, the group is not universal: it cannot support general calculations, especially the algorithms that would be exponentially complex to compute on a classical system.

T-gates are a popular option to extend the set. Skimming over the details, gates in the Clifford group can modify a qubit through only a very limited set of possibilities. A T-gate extends the range and can be thought of as the square root of an S gate (applying T twice is equivalent to applying S once). Combined in sequences with Clifford gates, this enables any arbitrary operation on a qubit to be approximated to any desired accuracy. (Might seem surprising, but the math goes all the way back to Euler.)
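These gate relations are easy to check directly on the 2x2 unitary matrices; a minimal sketch using numpy (the matrix conventions are the standard textbook ones):

```python
import numpy as np

# Standard single-qubit gate matrices
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard
S = np.array([[1, 0], [0, 1j]])                       # phase gate
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])   # T (pi/8) gate
Z = np.array([[1, 0], [0, -1]])                       # Pauli-Z

# T is the square root of S: applying T twice equals one S
assert np.allclose(T @ T, S)
# and S in turn squares to Pauli-Z
assert np.allclose(S @ S, Z)
# H is its own inverse
assert np.allclose(H @ H, np.eye(2))
```

The T-gate rotates the phase of the |1> component by 45 degrees, half the 90 degrees of S, which is exactly the "square root" relationship described above.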

That’s the theory. How can a T-gate be implemented on a real quantum computer (QC)? Looking back at a couple of my earlier blogs on QC implementation (this and this), you’ll remember that gates are implemented by pulsing qubits with microwave, laser, or other electromagnetic sources. Naturally these mechanisms have imperfections, including finite frequency spread in pulses and imperfect focusing optics for lasers.

Add ambient noise on top of these imperfections and you can see that any intended gate operation will be performed with less than perfect fidelity. Clifford group states are in some sense maximally separated, so perhaps a little less prone to errors, but a T-gate operation, as the square root of a Clifford gate, will typically be more sensitive.

The QC experts have an answer to this problem. In one example, they pre-build one or more sets of purified T-gates, distilled from multiple noisy T-gates. In this approach, the number of T-gates your target algorithm will need must be determined in advance, along with the level of accuracy you must meet.
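To get a feel for why distillation helps, the widely cited 15-to-1 protocol consumes 15 noisy T-states to produce one cleaner state, suppressing the error cubically at leading order. A toy calculation (the 35·p³ formula is the standard leading-order result; the specific numbers are illustrative only):

```python
# Leading-order error suppression of the 15-to-1 magic-state
# distillation protocol: p_out ≈ 35 * p_in**3. Each round consumes
# 15 noisy T-states to produce one cleaner state.
def distill(p_in: float, rounds: int = 1) -> float:
    """Output error probability after repeated 15-to-1 rounds."""
    p = p_in
    for _ in range(rounds):
        p = 35 * p ** 3
    return p

# 1% noisy inputs drop to ~3.5e-5 after one round and ~1.5e-12
# after two, at a cost of 15 (then 15 * 15 = 225) raw states
# per purified output.
print(distill(0.01, rounds=1))
print(distill(0.01, rounds=2))
```

The cubic suppression is why a fixed, pre-determined budget of purified T-gates can be prepared ahead of time to whatever accuracy the target algorithm demands.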

I expect the mechanism to purify T-gates will be captured in a library, though given the constraints on available qubit counts and coherence times in today’s production systems, any bloat will further limit the allowable size and duration of your algorithm. Adding this new gate also adds complexity to verifying an algorithm on a classical computer, since classical simulation cost grows exponentially with the number of T-gates. Recent research has shown it is possible to simulate relatively shallow circuits with T-gates, so there is still hope for debugging noisy quantum circuits.
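The "exponential in T-count" point is worth a quick illustration. Brute-force statevector simulation scales with qubit count, while stabilizer-rank methods scale with the number of non-Clifford (T) gates instead; a rough comparison (the exponent base here is an illustrative assumption, not a measured constant):

```python
# Rough cost comparison for classically simulating quantum circuits.
# Brute-force statevector simulation scales as O(2**n) in qubit
# count n; stabilizer-rank methods scale roughly as O(2**(beta * t))
# in T-count t, independent of how many Clifford gates appear.
# beta = 0.5 is an illustrative value, not a precise constant.
def statevector_cost(n_qubits: int) -> float:
    return 2.0 ** n_qubits

def t_count_cost(t_gates: int, beta: float = 0.5) -> float:
    return 2.0 ** (beta * t_gates)

# A 50-qubit circuit with only 40 T-gates: tractable by T-count,
# hopeless by brute force.
print(t_count_cost(40))      # ~1e6 cost units
print(statevector_cost(50))  # ~1.1e15 cost units
```

This is why "relatively shallow" above really means "relatively few T-gates": the Clifford portion of the circuit is essentially free to simulate.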

QEC and fault-tolerant computing

I touched on QEC in my blog on physical implementation of quantum computers; I’ll expand a bit more here. The basic concept in QEC is the same as in classical error correction: “copy” onto redundant qubits, run an operation, detect single-qubit errors through majority voting, and correct them (while working hard to make the probability of 2 or more errors very small). An operation starts with a qubit, adds redundancy, performs the operation, detects and corrects errors, and from that reconstructs a corrected post-operation qubit.
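The classical analogue of this redundancy is the three-bit repetition code, worth sketching because the quantum codes generalize it; a toy version in Python:

```python
from collections import Counter

def encode(bit: int) -> list[int]:
    """Add redundancy: copy the logical bit onto three physical bits."""
    return [bit] * 3

def decode(bits: list[int]) -> int:
    """Majority vote: recovers the logical bit if at most one flipped."""
    return Counter(bits).most_common(1)[0][0]

word = encode(1)
word[0] ^= 1               # a single bit-flip error sneaks in
assert decode(word) == 1   # majority voting corrects it

# Two simultaneous flips defeat the code, hence the emphasis on
# keeping the probability of 2+ errors very small.
```

The quantum version cannot literally copy the bit, as the next paragraph explains, but the majority-vote logic is the same in spirit.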

There are complications.

Simply copying real qubits is impossible (the no-cloning theorem forbids it); instead the logical state is entangled with the redundant qubits. After an operation, the method detects incorrect entangled states and uses that result to correct errors. However, qubit errors aren’t just bit-flips; they can also be phase flips (e.g. |0>+|1> flips to |0>-|1>), requiring more circuitry to detect and correct. Gates outside the Clifford group (like the T-gate) are more complicated to protect. And of course the error detection and correction circuitry will contribute further errors of its own.
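The phase-flip example is easy to see numerically: the Pauli-Z error leaves the computational basis states alone but flips |+> to |->, so bit-flip checks alone never notice it. A sketch with numpy:

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]])              # Pauli-Z: the phase-flip error
ket0  = np.array([1.0, 0.0])                 # |0>
plus  = np.array([1.0,  1.0]) / np.sqrt(2)   # |0> + |1>
minus = np.array([1.0, -1.0]) / np.sqrt(2)   # |0> - |1>

# Z leaves |0> untouched, so a bit-flip check alone never sees it...
assert np.allclose(Z @ ket0, ket0)
# ...yet it flips |+> into |->, corrupting superposition states
assert np.allclose(Z @ plus, minus)
```

This is why quantum codes need separate stabilizer checks for bit-flip (X) and phase-flip (Z) errors, roughly doubling the detection circuitry.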

If this sounds difficult, you are not wrong. This is a race between error creation and error detection and correction. Some estimates held that ~1000 physical qubits would be required per logical qubit, clearly not scalable. Very active research is going into this area, both in higher-reliability qubits and in fault-tolerant computing: computing accurately on faulty hardware.
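Where does a number like ~1000 physical qubits come from? For surface codes, a commonly quoted rule of thumb gives the logical error rate as roughly (p/p_th)^((d+1)/2) at code distance d, with on the order of 2d² physical qubits per logical qubit. A back-of-envelope sketch (the threshold value is an illustrative assumption and the prefactor is dropped):

```python
# Back-of-envelope surface-code overhead. Commonly quoted scaling:
# logical error rate p_L ~ (p / p_th) ** ((d + 1) // 2) at code
# distance d, with ~2 * d**2 physical qubits per logical qubit.
# Threshold p_th ~ 1e-2 is illustrative; real thresholds vary.
def logical_error_rate(p: float, d: int, p_th: float = 1e-2) -> float:
    return (p / p_th) ** ((d + 1) // 2)

def physical_qubits(d: int) -> int:
    return 2 * d * d

# With physical error rate 1e-3, pushing the logical rate down
# toward ~1e-11 needs distance ~21, i.e. nearly 900 physical
# qubits: the ballpark behind "~1000 per logical qubit".
for d in (3, 11, 21):
    print(d, physical_qubits(d), logical_error_rate(1e-3, d))
```

The exponential suppression with distance is the good news; the quadratic growth in qubit count is the scalability problem the newer codes attack.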

I can’t find data comparing superconducting versus ion-trap qubit reliabilities. Before considering external noise, I feel that ion-trap reliabilities should be intrinsically high simply thanks to the physics of a single ion, whereas manufactured superconducting qubits will have inevitable tolerance errors.

IBM recently published breakthrough results on reducing noise in quantum memory (a set of qubits), though this does not touch on gate operations. Here is a nicely readable version, including some background on the evolution of error-correcting codes from Hamming codes to surface codes and now their latest “gross code”. For gate operations IBM points to next steps in the readable link immediately above; I couldn’t find reported results. In my (possibly faulty 😀) reading, this suggests greatly reduced overhead for error correction, partly through the choice of error-correcting codes and partly through pre-distillation of more error-prone operations such as T-gates (see above).

More background

There is a great video tutorial series from Artur Ekert (professor of quantum physics at Oxford) that I mentioned in an earlier blog. This is a deep dive into quantum information science, not for the timid or anyone looking for a quick summary. But I found it illuminated many points for me, especially in viewing any quantum computation as essentially a very elaborate quantum interference experiment. If you are familiar with regular wave interference (drop 2 stones in a pond and watch how the ripples interact) and/or the double-slit experiment this will make complete sense.

Also Read:

Simulating Quantum Computers. Innovation in Verification

Quantum Advantage is About the Algorithm, not the Computer

Quantum Computing Technologies and Challenges


Chips&Media and Visionary.ai Unveil the World’s First AI-Based Full Image Signal Processor, Redefining the Future of Image Quality

by Daniel Nenni on 01-05-2026 at 12:00 pm



In a groundbreaking announcement that is poised to transform digital imaging, Chips&Media, a leading provider of video codec and image processing hardware IP, has partnered with Visionary.ai, an innovative startup specializing in AI-driven vision technology, to unveil the world’s first fully AI-based Image Signal Processor (ISP). This revolutionary solution marks a significant shift from traditional hardware-fixed ISPs to a dynamic, software-powered system that leverages artificial intelligence for unparalleled image quality and adaptability.

Traditional Image Signal Processors have long been the backbone of digital cameras, smartphones, security systems, and automotive sensors. These hardware-based pipelines handle raw sensor data, performing tasks like demosaicing, noise reduction, color correction, and exposure adjustment. However, they are rigid, power-hungry, and limited in handling challenging conditions such as low light, high dynamic range (HDR), or fog. Visionary.ai has pioneered a software-based ISP that replaces conventional algorithms with AI models, achieving superior noise reduction. Results in low light show an over-75% increase in detection, a 91% reduction in false positives, and real-time performance that rivals or exceeds non-real-time alternatives.

By integrating Visionary.ai’s AI ISP technology with Chips&Media’s expertise in efficient hardware IP for image signal processing and computer vision, the collaboration delivers the first complete AI-native full ISP pipeline. This hybrid approach combines the flexibility of software with optimized hardware acceleration, enabling devices to process raw images with machine learning at every stage. The result? Dramatically sharper, more accurate colors, better low-light performance, and enhanced detail preservation, all while reducing power consumption and allowing over-the-air updates to improve performance post-deployment.

This innovation redefines image quality across industries. In consumer electronics, smartphones and action cameras will capture vibrant, noise-free videos in near-darkness. For automotive applications, ADAS and autonomous vehicles gain more reliable object detection in adverse weather. Security and surveillance systems benefit from clearer identification, while drones, medical imaging, robotics, and IoT devices see boosted accuracy in machine vision tasks. The AI ISP can be tuned for specific needs, emphasizing greens in agriculture or reds in medical diagnostics, offering customization unmatched by fixed hardware.

What sets this apart as the “world’s first” is its end-to-end AI integration: unlike partial AI enhancements from competitors like Ambarella or Sony, this is a comprehensive replacement of the traditional ISP pipeline with predictive, learning-based processing. Running efficiently on edge devices, it addresses the “garbage-in, garbage-out” problem in AI vision systems by providing cleaner input data from the start.

As AI becomes a standard feature in imaging, this partnership signals a new era where cameras evolve intelligently over time. Chips&Media and Visionary.ai are not just improving image quality, they are future-proofing vision technology, empowering devices to see the world more like the human eye, but better. This breakthrough promises to accelerate advancements in edge AI, making high-performance imaging accessible and affordable worldwide.

Contact Chips&Media

Also Read:

2026 Outlook with William Wang of ChipAgents.ai

2026 Outlook With Mahesh Tirupattur of Analog Bits

Silicon Catalyst: Searching for the Next Great Start-up


2026 Outlook with William Wang of ChipAgents.ai

by Daniel Nenni on 01-05-2026 at 10:00 am


William Wang is a world-leading expert in artificial intelligence, specializing in generative AI and large language models. As the Founder, CEO, and Chairman of Alpha Design AI, he brings a wealth of experience from academia and industry, having previously shipped Amazon Q at Amazon AWS Bedrock.

A Mellichamp Chair Professor of AI at UCSB, William has been recognized with prestigious honors such as the IEEE Laplace Award, BCS Karen Spärck Jones Award, NSF CAREER Award, IEEE AI’s 10 to Watch, and the DARPA Young Faculty Award.

William has published more than 250 articles at premier AI venues and is widely cited in top academic conferences and media outlets such as VentureBeat, Wired, Fortune, and SemiWiki.com. William is at the forefront of AI innovation, driving the next generation of AI-powered semiconductor design and verification.

Tell us a little bit about yourself and your company.

I’m William Wang, founder and CEO of ChipAgents.ai. I’m also a professor of AI at UC Santa Barbara. ChipAgents builds an agentic AI platform for semiconductor design and verification, helping RTL, DV, and CAD teams accelerate spec-to-silicon workflows using AI agents that integrate directly into existing EDA environments.

What was the most exciting high point of 2025 for your company?

In 2025, we achieved large-scale production deployments with multiple tier-1 semiconductor companies and saw rapid expansion from pilot teams to hundreds of engineers at a single customer. The scale of real usage, measured in billions of tokens and daily active engineers, strongly validated product-market fit.

What was the biggest challenge your company faced in 2025?

The biggest challenge was scaling from early adopters to enterprise-wide deployments while meeting strict security, infrastructure, and workflow integration requirements across very different semiconductor organizations.

How is your company’s work addressing this challenge?

We invested heavily in enterprise readiness, including different deployment environments, fine-grained access control, auditability, and deep integration with customers’ existing RTL, DV, and CAD toolchains, without forcing workflow changes.

What do you think the biggest growth area for 2026 will be, and why?

The biggest growth area in 2026 will be AI-native verification, debug, and system-level reasoning. As designs grow more complex, verification productivity, not raw RTL coding, is becoming the primary bottleneck.

How is your company’s work addressing this growth?

ChipAgents focuses on multi-agent reasoning for root-cause analysis, coverage closure, testbench generation, and design-verification co-optimization. Our agents operate across code, waveforms, logs, and specs, not just text.

What conferences did you attend in 2025 and how was the traffic?

We attended major industry events including DAC, DVCon, and several private semiconductor and EDA executive summits. Traffic and engagement were very strong, with a noticeable increase in hands-on technical discussions rather than exploratory conversations. It’s great to see the industry embracing our new AI agents platform to be a force multiplier for their tapeout projects.

Will you participate in conferences going forward? Same or more?

Yes. We plan to participate in more conferences, with a stronger focus on targeted executive meetings, closed-door technical sessions, and customer-led use-case discussions rather than broad booth marketing.

How do customers normally engage with your company?

Customers typically start with a technical evaluation or pilot, followed by expansion into production teams. Engagement is highly collaborative, with close interaction between our engineering team and customer CAD, RTL, and DV groups.

Are you incorporating AI into your products?

Yes. AI is core to our product. ChipAgents is built around multi-agent systems designed specifically for semiconductor design and verification workflows.

Is AI affecting the way you develop your products?

Absolutely. We use AI internally for rapid prototyping, testing, and workflow optimization, and customer feedback from real production usage directly shapes how our agents evolve.

Additional comments?

AI in semiconductors is shifting from experimentation to mission-critical infrastructure. The winners will be platforms that integrate deeply, respect existing workflows, and deliver measurable productivity gains.

CONTACT ChipAgents

Also Read:

I Have Seen the Future with ChipAgents Autonomous Root Cause Analysis

AI RTL Generation versus AI RTL Verification

AI-Powered Waveform Debugging: Revolutionizing Semiconductor Verification


2026 Outlook With Mahesh Tirupattur of Analog Bits

by Daniel Nenni on 01-05-2026 at 6:00 am


Tell us a little bit about yourself and your company. 

As CEO of Analog Bits, I am quite excited to be at the helm of a company that plays a critical role in managing power both efficiently and in an intelligent manner.  The sustainability challenges resulting from the explosive growth in AI demand this. I joined the company over 22 years ago and became CEO in 2024. I have seen very strong growth for the company during 2025.

So, you asked about Analog Bits. Well, we are the industry’s leading supplier of mixed signal IP for intelligent energy and power management in a broad range of semiconductor products. We have a track record spanning over 30 years delivering silicon-proven IPs on the latest process nodes. We have an established reputation for easy and reliable integration into advanced SoCs through all the major silicon foundries, and Analog Bits has an outstanding heritage of first-time working IP that lowers risk.  Our leading low-power integrated clocking, sensors, and interconnect IP are pervasive in virtually all of today’s semiconductors, resulting in billions of our IP cores fabricated in customer silicon, from 0.35 micron to 2nm processes.

What was the most exciting high point of 2025 for your company? 

Demonstrating first working silicon of several of our IPs on test chips using TSMC N2P and demonstrating N3 IPs on our customers’ products. Also demonstrating power observability and delivering power savings with our power sensors and LDO IPs. We presented five joint TSMC OIP papers with customers such as ARM, Cerebras, NXP, Socionext and Semidrive on three continents. This work demonstrated successful use cases of our IPs in AI, data center and automotive applications. Furthermore, we received the Analog IP Partner of the Year Award from TSMC.

What was the biggest challenge your company faced in 2025? 

The biggest challenge was to keep up with the fast pace of demand from the industry because of AI driving growth. We had to make sure we built not only the best engineering products, but also achieve the scale needed to support market demands. We had to build teams that were scalable and able to adapt quickly to changing needs and requirements to deliver solutions with the highest quality. Another challenge we addressed was to enhance financial discipline and structure to help the IPO process of our parent company.

How is your company’s work addressing this challenge?  

In addition to our engineering team continuing to grow their skills on advanced node designs, we also stayed close to our customers to ensure we understood their needs. So, our customer-facing team not only participated in many industry events to present papers and showcase our solutions but also spent a lot of time directly with customers. This meant spending a lot of time on planes to ensure first-hand that our IP solutions were meeting their needs.

What do you think the biggest growth area for 2026 will be, and why?

Clearly AI is the buzzword of the moment, but that is a very broad term. For Analog Bits, it translates to huge growth opportunities in areas like data centers and automotive. Today, we see that helping our customers manage power will also optimize performance. These qualities go hand in hand. We see the biggest growth opportunity is helping our customers to gain greater insights into power and managing power for the best performance.

How is your company’s work addressing this growth? 

A key benefit of our technology is something we call the Intelligent Power Architecture. This means providing a way to manage the power within an SoC in a smart way, monitoring power on-chip so that you can optimize power utilization, while not compromising on performance. High accuracy temperature sensors and power glitch sensors are key IPs that can detect abnormalities in SoC designs. And our LDOs, regulators and PLLs help balance performance and power.

Is AI affecting the way you develop your products?

Of course, we are no different than anyone else in this industry in that AI is certainly going to change the way we develop products. For example, Analog Bits collaborates with large system companies designing AI SoCs that face a myriad of energy-efficiency challenges. This allows us to develop smart mixed-signal IPs to solve these application needs.

What conferences did you attend in 2025 and how was the traffic?

We attended several conferences around the world addressing many parts of the value chain, from design (e.g., DAC) to manufacturing (e.g., TSMC, GF and Samsung events globally). These days, most of these conferences are not about traffic but about engaging with customers and partners around the world so that we can be highly responsive to their rapidly changing needs.

Will you participate in conferences in 2026? Same or more as 2025?

Indeed, this will be an important part of our strategy for customer and partner engagement in 2026. We will continue as we did in 2025 but also review and explore what makes sense to do more of.

How do customers normally engage with your company?

Our customers are truly global. They reach out to us through events we attend, our website and our sales channels. We license over 200 IPs each year for 90+ customer engagements. It is exciting to see more system companies license IPs from us directly even though they work with large ASIC houses. This is a good indication that we are solving problems that have system-level importance.

Additional comments? 

The analog IPs we are developing are no longer an end of the line purchase decision. The IPs we develop deliver architectural-level impact and we are excited to build upon our success of 2025 and look forward to even more success in 2026.

Also Read:

Podcast EP322: A Wide-Ranging and Colorful Conversation with Mahesh Tirupattur

Analog Bits Steps into the Spotlight at TSMC OIP

Analog Bits Steals the Show with Working IP on TSMC 3nm and 2nm and a New Design Strategy


Silicon Catalyst: Searching for the Next Great Start-up

by Daniel Nenni on 01-04-2026 at 2:00 pm


Silicon Catalyst has emerged as a distinctive force in the global start-up ecosystem, positioning itself not merely as an accelerator, but as a launchpad for deep-technology innovation. Focused primarily on semiconductor and hardware-based start-ups, Silicon Catalyst addresses a long-standing gap in the venture landscape: while software companies can scale quickly with limited capital, hardware and silicon ventures often require years of development, significant funding, and specialized expertise. In its search for the next great start-up, Silicon Catalyst blends technical rigor, industry partnerships, and long-term vision to nurture companies capable of reshaping entire industries.

At the heart of Silicon Catalyst’s mission is the recognition that silicon remains foundational to modern technology. From artificial intelligence and autonomous vehicles to healthcare devices and clean energy systems, advances in semiconductors drive progress across sectors. Yet, despite their importance, early-stage silicon start-ups face daunting barriers: high fabrication costs, long design cycles, and limited access to manufacturing resources. Silicon Catalyst seeks to remove these obstacles by offering selected start-ups unparalleled access to tools, mentorship, and industry networks that would otherwise be out of reach.

The search for the next great start-up begins with a strong emphasis on technical differentiation. Silicon Catalyst looks for founding teams with deep domain expertise and novel approaches to chip design, materials, or system architecture. Incremental improvements are not enough; the organization prioritizes ideas that promise step-change performance, cost efficiency, or energy savings. Whether it is a breakthrough in photonic computing, advanced sensors, or specialized AI accelerators, the goal is to identify technologies with the potential to become industry standards rather than niche solutions.

Equally important is the quality of the founding team. Silicon Catalyst understands that even the most promising technology can fail without capable leadership. Successful applicants typically combine technical excellence with entrepreneurial resilience. Founders must demonstrate the ability to learn quickly, adapt to market feedback, and navigate the complex relationships inherent in the semiconductor supply chain. The accelerator’s mentors, many of whom are seasoned executives, engineers, and investors, play a crucial role in shaping these teams, helping them avoid common pitfalls and make informed strategic decisions.

What sets Silicon Catalyst apart is its ecosystem-driven model. Instead of relying solely on cash investments, it offers start-ups access to an extensive network of partners, including semiconductor companies, EDA tool providers, foundries, and packaging firms. This in-kind support dramatically reduces development costs and accelerates time to market. For start-ups, this can mean the difference between an idea that remains on paper and a product that reaches customers. For Silicon Catalyst, it ensures that the start-ups it supports are grounded in real-world feasibility, not just theoretical promise.

The search for the next great start-up is also shaped by long-term thinking. Silicon Catalyst recognizes that hardware innovation does not conform to the rapid timelines typical of software ventures. As a result, it encourages patience, from both founders and investors, while maintaining rigorous milestones and accountability. This balanced approach allows start-ups to tackle ambitious problems without being forced into premature commercialization.

Bottom line: Silicon Catalyst’s pursuit of the next great start-up is about more than financial returns. It is about advancing the technological infrastructure that underpins modern society. By empowering silicon-focused entrepreneurs, the organization helps ensure continued innovation in areas critical to economic growth, national competitiveness, and global sustainability. In doing so, Silicon Catalyst is not just searching for the next success story—it is actively shaping the future of technology itself.

Submit Application Here

Also Read:

Revitalizing Semiconductor StartUps

Silicon Catalyst on the Road to $1 Trillion Industry

The 2025 Semi Industry Forum: On the Road to a $1 Trillion Industry


CEO Interview with Artem Golubev of testRigor

by Daniel Nenni on 01-04-2026 at 12:00 pm


Artem Golubev is the founder and CEO of testRigor, a company revolutionizing software test automation through AI-powered plain English testing. With a mission to eliminate the massive maintenance overhead that plagues traditional testing tools while simultaneously improving test coverage, testRigor has enabled hundreds of companies, including Netflix, Splunk, Business Wire, and Koch Industries, to build test automation 15X faster and spend 200X less time maintaining it. I recently had the opportunity to sit down with Artem to discuss testRigor’s innovative approach to solving one of the software industry’s most persistent challenges.

Tell us about your company.

For almost a decade, I’ve been on a mission to reduce the human effort spent on testing AND improve test coverage at the same time. We built testRigor to solve the problems that prevent people from achieving 100% test automation.

The fundamental issue is that many QA teams make the mistake of using tools like Selenium simply because they’re free and open-source. However, Selenium introduces a maintenance overhead so large that it prevents companies from building more automation; they end up drowning in test maintenance. The problem is that with Selenium, people test how engineers built things yesterday rather than how the application works from an end-user’s perspective.

testRigor was born to empower ANYONE to use free-flowing, plain English to build test automation. You don’t need to know how to code. Moreover, there is almost no maintenance since the specifications are in plain English, purely from the end-user’s perspective. Testing can be as easy as writing: “find and select a Kindle” and “add it to the cart.” Our AI-driven platform translates these high-level instructions into specific test steps automatically.

We strongly believe in our mission that technology can do MUCH more for testing compared to what is available on the market today. Our goal is to allow our customers to have as valuable a test suite as possible with as little effort as possible.

What problems are you solving?

The software testing industry faces three critical problems that we’re addressing head-on.

First is the maintenance nightmare. Traditional script-based automation tools require constant updates whenever there are changes to the application’s HTML structure, IDs, or XPath selectors. Companies spend 90-99% of their automation time on maintenance rather than creating new tests. This is simply unsustainable. With testRigor, because tests are written in plain English from the end-user’s perspective, they withstand changes in implementation details. When you write “click on the Submit button,” it doesn’t matter if developers change the button’s ID or CSS class; the test continues to work.

Second, there’s a massive skills gap. Traditional automation requires specialized programming knowledge, which means companies either need to hire expensive automation engineers or watch their manual QA teams sit on the sidelines. This creates a bottleneck where only a small percentage of test cases ever get automated. testRigor democratizes test automation by allowing product managers, manual testers, and business analysts to create and understand automated tests using plain English.

Third is the coverage problem. Most companies struggle to get beyond 30-40% test automation coverage because the maintenance burden becomes overwhelming. We’ve seen Fortune 1000 companies go from 34% automation coverage to 91% in under 9 months using testRigor with only their manual QA team. That’s the kind of transformation we enable.

What application areas are your strongest in?

testRigor excels in several key areas that make us the only end-to-end test automation tool companies need.

Our strongest area is cross-system end-to-end testing. We can build tests spanning web, mobile (both native and hybrid for iOS and Android), desktop applications, APIs, Salesforce, ServiceNow, Microsoft Dynamics, SAP, and any other third-party systems, all in one simple test. This is unique in the market. For instance, you can create a test that starts on a web application, sends an email, verifies the email content, clicks a link in that email, continues the flow on a mobile app, and validates API responses all in plain English within a single test case.

We’re particularly strong with form-based UIs and functionality that has predictable input/output. We excel at test cases that require multiple users to interact with the same flow, whether via email, SMS, or instant messages. Our platform is ideal for applications with constantly changing code and HTML IDs, where traditional automation fails spectacularly.

What keeps your customers up at night?

Our customers lose sleep over several critical concerns, and we’ve designed testRigor to address each one.

The biggest worry is production bugs reaching customers. Companies know they should have comprehensive test coverage, but with traditional tools, building and maintaining that coverage requires resources they don’t have. They’re caught in a terrible dilemma: invest heavily in test automation that will consume endless maintenance time, or ship with inadequate testing and risk customer-impacting bugs. With testRigor, they can finally achieve comprehensive coverage without the maintenance penalty.

Release velocity is another major concern. In today’s competitive market, companies need to ship features fast, but they’re terrified of breaking existing functionality. Traditional regression testing is a bottleneck; either you wait days for manual testing, or you skip tests and cross your fingers. Our customers use testRigor to run comprehensive regression suites in parallel, getting results in minutes instead of hours or days. This enables true continuous deployment.

Technical debt in testing is a silent killer. Many companies have invested years in Selenium-based automation, only to find themselves with brittle test suites that break with every release. They’re spending more time fixing tests than finding bugs. Migration seems daunting, but companies like Enumerate saved $180,000 by switching to testRigor, avoiding Selenium setup costs and the need to hire specialized automation engineers.

What new features/technology are you working on?

We’re continuously pushing the boundaries of what’s possible with AI-powered test automation, and we have several exciting developments in the pipeline.

Our generative AI capabilities are evolving rapidly. We recently launched features that allow users to generate entire test cases by simply providing an app description. The AI analyzes the application and creates comprehensive test scenarios automatically. We’re enhancing this to handle increasingly complex applications and edge cases. Users can also generate tests from their existing manual test documentation, just paste in their test steps, and testRigor converts them into executable automation.

We’re expanding our AI-powered testing capabilities specifically for LLM-based applications. As more companies build chatbots, virtual assistants, and other AI-driven features, they need ways to validate these systems. We’re developing advanced natural language validation that can assess whether AI responses are appropriate, detect bias or sensitive information leakage, and validate sentiment across diverse inputs.

Self-healing test capabilities are being enhanced. Our AI already automatically adapts tests when UI elements move or change, but we’re making this even more intelligent. The system will soon provide predictive insights about which tests might be affected by upcoming code changes, allowing teams to proactively review and adjust tests before running them.

How do customers normally engage with your company?

We’ve designed multiple pathways for customers to experience testRigor’s value, because we understand that different organizations have different needs and evaluation processes.

Many customers start with our forever-free public account. This allows a single user to explore testRigor with unlimited test cases, though tests are publicly visible. It’s perfect for individuals who want to experiment with the technology and see the plain English approach in action. The free tier includes a limited feature set and community support, but it’s enough to understand testRigor’s core value proposition.

For teams ready to get serious, we offer private tier plans starting at $99/month for a single user with 1,000 test cases. This provides access to our extended feature set and customer support. Our mid-tier plans, starting at $900/month, offer unlimited users and test cases – ideal for growing teams that need to scale their automation efforts. Enterprise plans provide custom pricing with additional features like SSO, SLA guarantees, and on-premise deployment options.

The typical engagement starts with a personalized demo. A testRigor specialist walks prospects through the platform, often using the customer’s own application to demonstrate real value. We can usually show meaningful results within the first session – writing actual tests in plain English that immediately work against their system. This hands-on approach resonates with teams who are tired of lengthy evaluation processes with traditional tools.

Contact testRigor.

Also Read:

CEO Interview with Gopi Sirineni of Axiado

CEO Interview with Masha Petrova of Nullspace

CEO Interview with Eelko Brinkhoff of PhotonDelta


Podcast EP325: How KIOXIA is Revolutionizing NAND FLASH Memory

by Daniel Nenni on 01-02-2026 at 10:00 am

Daniel is joined by Doug Wong, senior member of the technical staff at KIOXIA America, where he has contributed to the advancement of memory technologies since 1993. He began his career with KIOXIA in the company’s Memory Division (then part of Toshiba America) and has since focused on a broad range of memory solutions, including PSRAM, SRAM, MROM, EPROMs, NOR flash, and NAND flash. Doug has been an active contributor to industry standards as a member of the JEDEC JC42.4 committee since 1996.

Dan explores with Doug the broad range of innovations KIOXIA has made in the development of 3D NAND flash memory. They discuss the technology innovations used to produce 3D memories from multiple wafers, along with the work KIOXIA has done with JEDEC to advance the technology for the industry. Doug also describes the impact these advances have had on many growth industries, including AI and automotive, as well as the newer generations of the technology that are emerging and the live demonstrations presented at several major shows.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Revolutionizing Hardware Design Debugging with Time Travel Technology

by Daniel Nenni on 01-02-2026 at 6:00 am

DVCon Europe 2025 Undo.io

In the semiconductor industry, High-Level Synthesis (HLS) and SystemC have become essential tools, allowing engineers to model complex hardware designs using familiar C/C++ constructs. Yet, despite the widespread adoption of these languages, debugging workflows in hardware development lag far behind those in software engineering. Traditional methods rely heavily on print statements, logs, waveform viewers, and iterative trial and error, often leading to frustration when bugs appear intermittently or in third-party libraries. This is where time travel debugging changes everything.

Time travel debugging, as pioneered by tools like Undo, introduces a powerful paradigm: record, replay, and resolve. Instead of repeatedly rerunning a failing simulation in hopes of reproducing a bug, engineers record the entire execution of a Linux process from the process level down to individual CPU instructions. This recording captures every deterministic and nondeterministic event, including system calls, I/O, timing functions, and multithreaded interactions. Once a crash or failure occurs, the tool automatically stops recording, preserving the exact state at the point of failure.

The magic happens during replay. Engineers load the portable recording into the debugger and navigate freely forwards and backwards in time. If a crash is at the end of the recording, simply jump there and step backward from symptom to root cause. Traditional forward-only debuggers like GDB force users to restart runs repeatedly, but time travel eliminates guesswork. Commands mirror GDB’s familiar syntax with reverse counterparts: reverse-step, reverse-next, reverse-finish, and reverse-continue. A particularly powerful feature is “last,” which instantly jumps to the exact moment a variable or memory location was last modified—ideal for tracking memory corruption or race conditions.
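
Concretely, a short stretch of such a session might look like the following. This is a hypothetical transcript for illustration; only the command names come from the description above, while the prompt, variable name, and comments are invented:

```text
(udb) continue               # replay forward to the recorded crash
(udb) last corrupted_value   # jump to where this memory was last modified
(udb) reverse-finish         # unwind backward out of the writing function
(udb) reverse-next           # step back one line at the call site
(udb) reverse-continue       # or run backward to an earlier breakpoint
```

The workflow mirrors ordinary GDB usage, just with time running in both directions.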

In a live demonstration involving a SystemC testbench with multiple libraries, a subtle off-by-one error caused a failure: code intended to read the bit at index zero of a string instead accessed the bit at index one, yielding garbage output. Using the recording, an AI assistant (Claude) interfaced with Undo via a custom library, autonomously navigated backward, set bookmarks, executed reverse commands, and pinpointed the exact faulty array access in minutes, without any manual intervention.

This approach shines in complex scenarios common to hardware modeling:
  • Race conditions: Multithreaded SystemC simulations often exhibit nondeterministic behavior. The “last” command, combined with reverse-continue, reveals which thread overwrote shared memory and when, exposing missing locks without recompilation.
  • Deadlocks: Recordings capture all thread states, allowing engineers to trace blocking calls across time.
  • Intermittent failures: By integrating recording into regression pipelines, failing tests automatically generate recordings only when assertions fail, ensuring reproducible evidence is ready the next morning.

Undo also addresses hardware engineers’ needs with a waveform viewer that generates standard waveforms from recordings. Clicking any signal jumps directly to the corresponding source code line in the debugger, bridging the gap between high-level C++ models and low-level signal behavior.

Performance overhead is minimal for computational code, often near full speed, though I/O-heavy or highly nondeterministic workloads incur some slowdown due to logging external inputs. The tool requires Linux with modern Intel/AMD processors but needs no code changes, debug builds, or instrumentation.

Compared to alternatives like the open-source rr project (great for academic use) or Microsoft’s Time Travel Debugging in Visual Studio, Undo offers production-grade reliability, multithreading support, and seamless integration with modern EDA workflows.

Engineers report that debugging complex SystemC models traditionally takes at least a day, often involving consultations with library vendors or code owners. Time travel debugging reduces this by 4x or more, democratizing debugging: junior engineers can trace issues in unfamiliar codebases by simply following data flow backward. This accelerates verification, improves coverage, shortens time-to-market, and preserves team sanity.

Bottom line: In an industry racing toward ever-more-complex designs, adopting time travel debugging isn’t just an upgrade, it’s a necessity. Tools like Undo bring software’s most powerful debugging techniques to hardware, empowering engineers to resolve bugs faster, more reliably, and with less frustration.

Contact Undo

Also Read:

Taming Concurrency: A New Era of Debugging Multithreaded Code

Video EP7: The impact of Undo’s Time Travel Debugging with Greg Law

CEO Interview with Dr Greg Law of Undo


Addressing Silent Data Corruption (SDC) with In-System Embedded Deterministic Testing

by Daniel Nenni on 01-01-2026 at 10:00 am

Siemens Broadcom TSMC OIP2025 SemiWiki

Silent Data Corruption (SDC) represents a critical challenge in modern semiconductor design, particularly in high-performance computing environments like AI data centers. As highlighted in a collaborative presentation by Broadcom Inc. and Siemens EDA at the 2025 TSMC OIP event, SDC occurs when hardware defects cause erroneous computations without triggering detectable errors, leading to subtle yet devastating failures. In one customer experiment involving a 54-day training run on 16,384 GPUs, 419 unexpected interruptions were reported, with 6 attributed directly to SDC. Though rare, accounting for about 1.4% of the failures, these incidents can disrupt mission-critical operations, such as AI model training, where reliability is paramount.

The presentation underscores the industry-wide nature of SDC, driven by shrinking process nodes and increasing chip complexity. Defects that evade manufacturing tests may manifest in-field due to aging, voltage fluctuations, or thermal stress. Traditional testing methods fall short here, as they require device removal for diagnostics, which is impractical in deployed systems. To combat this, the teams advocate for in-system testing capabilities that allow periodic checks without downtime. Running ATPG patterns directly in the field detects latent defects that could precipitate SDC, ensuring system integrity. For AI applications, this means integrating test suites that can be executed routinely, preventing costly interruptions. Moreover, new patterns tailored to SDC can be deployed remotely, extending device lifespan without physical intervention.

Siemens’ In-System Test (IST) solution emerges as a key enabler. Built on the Streaming Scan Network (SSN), IST interfaces with embedded deterministic test (EDT) structures to deliver ATPG patterns efficiently. The IST controller drives the SSN’s parallel interface, supporting high-bandwidth data transfer via protocols like APB or AXI. In Broadcom’s implementation, IST was adapted for an EDT-based design with a Streaming Scan Host at the chip level. The controller resides at the top level, loading patterns into local SRAM via an on-chip CPU. Block-level EDT patterns, originally for production testing, are retargeted to IST inputs, allowing selective testing of targeted blocks while maintaining functional operation elsewhere.

Implementation brought several design challenges to the fore. Functional isolation is paramount: “functional” blocks (e.g., CPU subsystems) must remain active to load and execute IST operations, while “targeted” blocks switch to scan mode for testing. This requires isolating scan inputs to prevent interference. All functional block inputs that could disrupt IST, such as interrupts or AXI signals, must be held in a “quiet” state. Outputs from targeted blocks, which toggle during capture, are gated to avoid propagating noise. Broadcom addressed this by inserting isolation blocks and enabling Test Data Registers for control.

Clock splitting posed another hurdle. Broadcom’s methodology places On-Chip Clock controllers (OCC) at the chip top due to custom clocking. Functional blocks need free-running clocks, but targeted ones require OCC activation for scan shifts. Solutions included branching pre-OCC clocks for functional paths or adding secondary OCCs for targeted branches, ensuring synchronized yet independent clock domains.

Verification and Static Timing Analysis added complexity. Typically, STA modes separate functional and Design-for-Test (DFT) paths, but IST demands a hybrid “merged” mode where some blocks are functional and others in DFT. The Siemens tool provides verification collaterals like transaction files, C code, and SystemVerilog tasks for Design Verification (DV) environments. Testing occurs on post-DFT netlists, incorporating boot sequences, which extends runtime. Close collaboration between DV and DFT teams was essential for deliverables and debugging handshakes.

Results from the APB-based IST implementation demonstrate feasibility. With a 32-bit wide subordinate interface and SSN data bus, hardware overhead was modest: the IST Controller (ISTC) added 200 flops and 5,000 normalized combinational logic units, while the Streaming Scan Host (SSH) contributed 1,000 flops and 30,000 units. Five intest modes were run for 2,500 patterns, using 2 MB of on-chip SRAM (about 0.5 million 32-bit words). Pattern storage ranged from 165,000 to 260,000 words per mode, with 22 to 35 patterns per mode. Overall, ~1.9 million 32-bit words were managed, with 4 loads per mode, showcasing efficient compression and bandwidth utilization.

Bottom line: The collaboration between Broadcom and Siemens highlights IST’s role in mitigating SDC through in-field testing. Despite challenges in isolation, clocking, and verification, the solution was successfully implemented and verified in DFT and DV setups. Future efforts will extend to AXI-based IST, promising broader adoption. This approach not only enhances reliability in AI and hyperscale environments but also reduces field failures, underscoring the value of embedded deterministic testing in next-generation silicon.

Also Read:

Podcast EP323: How to Address the Challenges of 3DIC Design with John Ferguson

3D ESD verification: Tackling new challenges in advanced IC design

Signal Integrity Verification Using SPICE and IBIS-AMI


TSMC’s 6th ESG AWARD Receives over 5,800 Proposals, Igniting Sustainability Passion

by Daniel Nenni on 01-01-2026 at 6:00 am

TSMC ESG Award Ceremony 2025

Taiwan Semiconductor Manufacturing Company has once again demonstrated its leadership in corporate sustainability with the successful conclusion of its 6th ESG AWARD, which attracted more than 5,800 proposals from employees across the organization. The overwhelming response reflects not only TSMC’s strong internal engagement but also the growing momentum of environmental, social, and governance (ESG) values within the global semiconductor industry.

Launched as a platform to encourage employee participation in sustainable innovation, the ESG AWARD has become one of TSMC’s most influential internal initiatives. The sixth edition recorded a significant increase in submissions compared to previous years, highlighting how sustainability has evolved from a corporate objective into a shared mission embraced by employees at all levels. Proposals covered a wide range of topics, including energy efficiency, carbon reduction, water resource management, waste minimization, supply chain responsibility, workplace well-being, and community engagement.

TSMC emphasized that the award is not merely a competition, but a catalyst for turning ideas into action. Many past award-winning proposals have been successfully implemented across fabs and offices, delivering measurable environmental and social benefits. These include innovations in energy-saving manufacturing processes, circular economy practices for materials reuse, and digital solutions to enhance operational transparency and governance. By empowering employees to contribute ideas directly linked to real-world impact, TSMC reinforces a culture where sustainability is embedded into daily operations.

The strong participation in the 6th ESG AWARD also reflects the broader pressures and responsibilities facing semiconductor manufacturers today. As demand for advanced chips grows alongside global digital transformation, the industry’s environmental footprint has come under increasing scrutiny. High energy consumption, water usage, and complex supply chains pose challenges that require both technological innovation and organizational commitment. TSMC’s approach demonstrates how internal engagement can play a crucial role in addressing these challenges proactively.

According to TSMC, proposals submitted this year showed greater maturity and cross-functional collaboration than in previous editions. Many teams combined technical expertise with ESG thinking, proposing solutions that balance productivity, cost efficiency, and sustainability. This shift suggests that ESG considerations are no longer treated as separate from core business goals, but rather as integral to long-term competitiveness and resilience.

The award process includes rigorous evaluation criteria, focusing on innovation, feasibility, scalability, and alignment with TSMC’s sustainability strategy. Selected proposals receive recognition and resources to support further development and implementation. This mechanism not only motivates employees but also accelerates the company’s progress toward its ESG targets, including net-zero ambitions and responsible supply chain management.

Beyond internal impact, the ESG AWARD sends a strong signal to stakeholders, including customers, investors, and partners. It highlights TSMC’s commitment to transparency, accountability, and continuous improvement in ESG performance. In an era where ESG metrics increasingly influence investment decisions and customer trust, such initiatives strengthen TSMC’s reputation as a responsible industry leader.

The enthusiasm generated by the 6th ESG AWARD underscores a key lesson for global corporations: sustainability thrives when employees are empowered to participate meaningfully.

Bottom Line: By transforming ESG from a top-down directive into a bottom-up movement, TSMC has ignited a passion that extends beyond awards and recognition. As the company looks ahead, the ideas and energy unleashed by this year’s record-breaking participation are expected to play a vital role in shaping a more sustainable future for both TSMC and the semiconductor industry as a whole.

Also Read:

TSMC based 3D Chips: Socionext Achieves Two Successful Tape-Outs in Just Seven Months!

Why TSMC is Known as the Trusted Foundry

TSMC’s Customized Technical Documentation Platform Enhances Customer Experience