2026 Outlook with Howard Pakosh of TekStart
by Daniel Nenni on 01-07-2026 at 10:00 am


Tell us a little bit about yourself and your company.
I founded TekStart Group in Ontario, Canada, in 1998 with a very clear objective: to help innovators turn breakthrough technology concepts into real, market-ready products. Over the past 25-plus years, we have worked across the full lifecycle of technology development, from early concept and architecture through funding, commercialization, and exit.

To date, TekStart has helped develop, fund, and successfully exit more than 120 companies. That experience has given us a deep appreciation for what it really takes to bring complex technologies to market, especially in semiconductors and systems where timelines are long and execution risk is high.

Today, I am most excited about the transformation underway as AI inferencing moves out of centralized data centers and into edge devices. We are seeing a fundamental shift in how intelligence is deployed, and that shift creates both technical and economic opportunities that simply did not exist a few years ago.

What was the most exciting high point of 2025 for your company?

Without question, the most exciting milestone for us in 2025 has been reaching the final stages of tapeout for our latest semiconductor product, Cognitum. This has been a five-year journey, and like most deep-tech efforts, it has involved plenty of unexpected turns along the way.

Bringing a new AI-focused chip to market is never straightforward. There were moments where progress felt incremental and others where challenges stacked up quickly. Still, seeing Cognitum reach this level of maturity has been extremely rewarding. If I ever do write a book, this program would certainly deserve a few chapters.

As we approach launch, the excitement comes not just from completing the silicon but from seeing how customers are already thinking about deploying it to solve real problems at the edge.

What was the biggest challenge your company faced in 2025?

The honest answer is that the biggest challenge in 2025 was everything at once. If there was a category of delay or disruption you could imagine, we likely encountered it.

Market uncertainty created constant pressure on planning and forecasting. We worked closely with customers whose own roadmaps were being affected by factors well beyond their control. On top of that, new tariffs introduced additional complexity around cost structures, supply chains, and contract negotiations.

Each issue on its own would have been manageable. Experiencing them simultaneously required continuous adjustment, clear communication, and a willingness to revisit assumptions more often than usual.

How is your company’s work addressing this challenge?

What I am most proud of is the resilience our team demonstrated throughout the year. There were moments when obstacles felt genuinely insurmountable, yet the team stayed focused on execution and problem-solving rather than distraction.

We concentrated on what we could control: engineering discipline, customer transparency, and forward momentum. That mindset allowed us to navigate uncertainty while continuing to move Cognitum toward final tapeout.
In semiconductors, persistence matters. Staying aligned as a team and maintaining confidence in the long-term vision is often the difference between programs that stall and programs that succeed.

What do you think the biggest growth area for 2026 will be, and why?

I firmly believe that Edge AI will be one of the most important growth areas in 2026. For several years, the industry’s focus has been heavily weighted toward cloud-based large language models and massive data center build-outs. While those investments are necessary, they do not fully address the needs of real-world autonomous systems.

There is a growing gap between what cloud LLMs are optimized for and what edge systems actually require. Latency, bandwidth, cost, power consumption, and data privacy all become critical constraints outside the data center.
In 2026, we will see greater emphasis on pushing intelligence closer to where data is generated, enabling faster, more predictable, and more economical decision-making at the edge.

How is your company’s work addressing this growth?

The demand for edge reasoning is expanding rapidly across robotics, industrial automation, infrastructure, IoT, and enterprise systems, where privacy and determinism are critical. These applications increasingly require local decision-making, yet cannot tolerate the latency, recurring costs, or lack of transparency associated with cloud-based inference.

Cognitum is designed specifically to address this gap. It enables a new class of low-cost devices capable of autonomous reasoning directly on the chip. By moving inference to the edge, customers can significantly reduce or eliminate recurring cloud inference costs.

This changes the economic model from per-token or usage-based billing to a fixed hardware cost, making large-scale deployments practical in scenarios that were previously uneconomical or operationally constrained.
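To make the billing shift concrete, here is a minimal break-even sketch. All figures (token volume, per-token price, hardware cost) are hypothetical placeholders for illustration, not TekStart or cloud-vendor pricing.

```python
# Illustrative break-even model: usage-based cloud inference vs. a one-time
# edge hardware cost. All figures are hypothetical, for intuition only.

def cloud_cost(tokens_per_day: float, days: int, price_per_mtok: float) -> float:
    """Cumulative cloud cost under per-million-token billing."""
    return tokens_per_day / 1e6 * price_per_mtok * days

def edge_cost(unit_price: float) -> float:
    """Fixed hardware cost; marginal cost of local inference assumed ~0."""
    return unit_price

def break_even_days(tokens_per_day: float, price_per_mtok: float,
                    unit_price: float) -> float:
    """Days until cumulative cloud billing equals the edge device price."""
    daily = tokens_per_day / 1e6 * price_per_mtok
    return unit_price / daily

# Example: 2M tokens/day at $0.50 per 1M tokens vs. a $150 edge device
print(break_even_days(2e6, 0.50, 150.0))  # 150.0 days
```

At fleet scale the per-device comparison multiplies across every unit, which is why a fixed hardware cost can unlock deployments that per-token billing rules out.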

What conferences did you attend in 2025, and how was the traffic?

We sponsored and attended the AI Infra Summit in September 2025, and the difference compared to the prior year was striking. Attendance was significantly higher, and the level of engagement was much stronger.
We saw more informed conversations, more qualified prospects, and a clearer understanding of why edge AI infrastructure matters. Being featured in a success story at the event was also valuable, as it allowed us to share our journey and lessons learned with a broader audience.

Will you participate in conferences in 2026? Same or more than 2025?

Conferences will continue to play an important role in our marketing and business development strategy. While digital engagement is effective, there is still no substitute for in-person conversations when discussing complex technologies.

We will kick off 2026 at CES and expect to maintain, if not increase, our level of conference participation throughout the year. The quality of interaction we see at these events continues to justify the investment.

How do customers normally engage with your company?

Our business is highly specialized, and we intentionally focus on a narrow, well-defined customer segment. We have served this market for many years and have built a reputation as a trusted partner rather than a transactional vendor.
That trust works to our advantage as we introduce new products. Being a known entity means customers are already familiar with our approach and capabilities. As a result, maintaining awareness, sharing meaningful updates, and staying engaged with key stakeholders remains one of the most effective ways we work with our customers.

Additional comments?

The world has become a more complex and challenging place, both personally and professionally. Technology continues to advance at an accelerating pace, and it can be difficult to keep up with everything that is changing.

That makes it even more important for technology providers to be thoughtful and deliberate about what they build and how those technologies are brought to market. The decisions made over the next few years are likely to have an impact that extends far beyond the near term.

If we approach this moment with care and responsibility, the opportunities ahead are significant. How we move forward matters.

Also Read:

CEO Interview with Rabin Sugumar of Akeana

Acceleration of Complex RISC-V Processor Verification Using Test Generation Integrated with Hardware Emulation

Quantum Computers: Are We There Yet?


Automotive Digital Twins Out of The Box and Real Time with PAVE360
by Bernard Murphy on 01-07-2026 at 6:00 am


Digital twins are amazing technology, virtual representations mirroring a real physical system. Twin virtual models span software, electrical/electronic and mechanical subsystems, closing the loop with feedback from real physical counterparts. The virtual model calibrates against real sensing feedback gathered in production and prototype testing spanning braking, cornering, road surfaces, weather and traffic conditions, and other factors, all with high fidelity. Digital twins greatly accelerate development, debug, and quality control for the advanced car systems we expect today, now even more essential as these systems become more complex and more safety critical.

David Fritz (VP of Hybrid Virtual and Physical AI Systems at Siemens Digital Industries) told me that, for all their benefits, building a twin by integrating multiple parts from multiple suppliers used to take large teams years. An immense effort, out of reach for most enterprises except the largest OEMs and Tier1s, and too slow even for them to adapt in fast-moving markets. To deliver on this promise in real time, Siemens have developed and announced at CES 2026 their “PAVE360 Digital Twin Blueprint”, a fully integrated platform providing a jumpstart not only for large OEMs and Tier1s but also for other innovators in the automotive ecosystem.

Real-time simulation for realistic workloads

Earlier generations of digital twins apparently would run a couple of orders of magnitude slower than real time. Software developers no longer consider that level of performance sufficient for their needs, given that today they must run millions of lines of code together with realistic AI workloads.

Siemens, working with Arm and AWS, announced a new approach last year based on Arm Zena CSS/cloud-native support. This can deliver near real-time performance for those target workloads, within the full scope of the digital twin. That’s a very big deal because developing and exercising ADAS, autonomous driving (AD) and infotainment (IVI) features in the twin requires that level of performance.

Equally important, SDV Digital Twin Blueprint provides a ready-to-deploy reference platform including virtual reference hardware and software stacks (AI functionality also) for ADAS, AD, and IVI. Running on day one, accessible for cloud-based collaboration. As needed you can swap in your own software stacks, replacing corresponding reference stacks. The platform builds on IPs such as Arm Zena CSS and provides mechanisms to connect to real vehicle hardware.

I asked (and I’m sure you wondered also) how AI models run inside this platform, whether cloud-based or on-prem. David said that if GPUs are available, the system will take advantage of them. If not, it will leverage Zena cores, appearing everywhere these days in cloud servers. David can’t share more details on this topic but I’m looking forward to learning more about Arm’s directions in server AI since they have already announced hardware acceleration options for AI in mobile.

Extending opportunity to the larger ecosystem

David said that they are seeing interesting adoption from organizations outside the traditional supply chain, such as engineering service suppliers. These are often OEM-sponsored ventures with trusted, specialized expertise in advanced capabilities that OEMs are targeting for 2030. Such ventures modify PAVE360 Automotive Blueprint with their own stack, to demonstrate to an OEM the value their solution can offer. Better yet, this can drop straight into the OEM’s own development platform if they also use PAVE360.

As one example, Silicon Auto is part of a joint venture with Stellantis and Foxconn, focused particularly on ADAS systems. They used PAVE360 Automotive Blueprint to model the silicon they wanted to build, putting it in the context of a whole vehicle. Silicon Auto systems are now planned to support Stellantis, Foxconn, and other customers.

Another example is SAIC (Shanghai Automotive Industry Corporation), which David calls “the Volkswagen of China”. SAICEC (EC for Engineering Corporation) is another joint venture providing design, system, and applications services to SAIC and to the larger automotive industry in China. According to David, they are using PAVE360 in some interesting ways: some no doubt like the earlier example, others exploring PAVE360 as a cloud-hosted certification service for OEMs.

Partnerships beyond these include AWS, AMD, Microsoft, Wipro, and Cognizant. I would not be surprised to hear of OEM, Tier1, cloud service provider, and other partnerships in the near future.

CES 2026 demo and availability

This story hinges heavily on real time twin performance in the cloud. As proof that this capability is real, Siemens are demoing it at CES 2026, featuring a Volkswagen ID.Buzz in their booth connected to the Internet through Wi-Fi, with the brains of the car running in the cloud and controlling the car. You can (through the car) tell it where you want to go and watch it (virtually) navigate there. You can tell it to turn on the AC while the car is virtually en route. From command, to cloud, back to the car in real time. Pretty impressive.

SDV Digital Twin Blueprint will be available February 2026. You can learn more about Siemens digital twin technology HERE.

Also Read:

Addressing Silent Data Corruption (SDC) with In-System Embedded Deterministic Testing

Podcast EP323: How to Address the Challenges of 3DIC Design with John Ferguson

3D ESD verification: Tackling new challenges in advanced IC design


Acceleration of Complex RISC-V Processor Verification Using Test Generation Integrated with Hardware Emulation
by Daniel Nenni on 01-06-2026 at 8:00 am

Verification Futures Conference 2025 Austin (USA)

The rapid evolution of RISC-V processors has introduced unprecedented verification challenges. Modern high-end RISC-V cores now incorporate complex features such as vector and hypervisor extensions, virtual memory systems, multi-level caches, advanced interrupt architectures, and multi-hart out-of-order execution. While these capabilities enable powerful and flexible processor designs, they also dramatically increase verification complexity. To ensure correctness and quality, verification methodologies must evolve to handle massive state spaces, long-running workloads, and extreme performance demands.

One of the most striking realities of contemporary RISC-V verification is the sheer scale of execution required. Verifying a typical high-end RISC-V core can demand on the order of 10¹⁵ cycles—far beyond the reach of traditional simulation alone. Effective verification therefore requires three essential components: stimulus derived from deep understanding of the RISC-V specification, complex and lengthy test programs that drive the design into meaningful microarchitectural states, and a fast execution platform capable of achieving verification closure within practical timelines.
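Some quick arithmetic shows why 10¹⁵ cycles rules out simulation alone. The throughput figures below are assumed orders of magnitude for illustration, not vendor benchmarks.

```python
# Wall-clock time for 10**15 verification cycles at assumed throughputs.
# The cycles-per-second figures are illustrative orders of magnitude only,
# not measured simulator, emulator, or silicon performance.

CYCLES = 10**15
SECONDS_PER_YEAR = 365 * 24 * 3600

def years_to_run(cycles: int, cycles_per_second: float) -> float:
    return cycles / cycles_per_second / SECONDS_PER_YEAR

platforms = {"simulation": 1e3, "emulation": 1e6, "silicon": 1e9}
for name, hz in platforms.items():
    print(f"{name:>10}: {years_to_run(CYCLES, hz):,.1f} years")
```

Even at emulation speeds, a single sequential stream would take decades, which is why parallel test streams across many platforms are needed to reach closure in practical timelines.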

The XiangShan RISC-V core exemplifies these challenges. Supporting RV64 with extensive extensions such as RVV 1.0, advanced memory management, large cache hierarchies with ECC, multi-level TLBs, and AIA-compliant interrupt handling, the design represents a realistic, high-complexity target. Verifying such a design requires stress-testing interactions across caches, memory ordering, interrupts, and multi-hart execution—scenarios that cannot be adequately covered by short, isolated tests.

To address these needs, Synopsys introduces STING, a bare-metal test generator purpose-built for RISC-V processor verification. STING employs a software-driven methodology that integrates multiple test generation approaches, including random stimulus, directed tests, workloads, and real-world scenarios. It generates both self-checking and pure stimulus tests that are portable across simulation, emulation, FPGA prototypes, and silicon. With comprehensive support for 32-bit and 64-bit RISC-V specifications, privilege modes, memory protection, virtualization, and multi-hart configurations, STING provides a scalable and reusable verification foundation. Its extensive library of over 100,000 test fragments enables rapid construction of complex test programs tailored to specific microarchitectural goals.

However, generating effective stimulus is only half the solution. Advanced RISC-V features such as cache coherency, memory ordering, atomicity, and synchronization demand long-running workloads to expose subtle corner cases. Scenarios involving true and false sharing, cache evictions, conflicting traffic, and fence ordering require sustained execution under varied conditions. For multi-processor platforms, repeating the same test sequence across different scheduling interleavings is essential to achieve thorough coverage. These demands make fast execution platforms indispensable.

Hardware-assisted verification (HAV) provides the necessary performance boost. By synthesizing the design under test and running it on emulation or prototyping platforms such as Synopsys ZeBu or HAPS, verification teams can execute tests at speeds orders of magnitude faster than simulation. In this approach, STING generates self-checking tests that embed reference model results directly into the executable. Tests are generated in parallel and streamed continuously into the hardware platform, ensuring that execution units remain fully utilized.

The streaming methodology is a key innovation in this solution. By avoiding repeated hardware re-initialization and redundant configuration cycles, and by enabling concurrent test generation and execution, streaming dramatically improves throughput. Results demonstrate performance improvements of up to 6000× per test when moving from simulation to emulation, making large-scale regression feasible for complex RISC-V designs.
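The streaming idea can be sketched as a producer/consumer pipeline: test generation overlaps execution, and the platform initializes once rather than once per test. This is a conceptual stand-in, not the Synopsys ZeBu or STING API.

```python
# Conceptual sketch of streaming: tests are generated concurrently and fed
# to the execution platform without re-initialization between tests.
# The "platform" below is a stub, not a real emulator interface.
import queue
import threading

def generate_tests(stream: queue.Queue, n_tests: int) -> None:
    """Producer: generation overlaps execution instead of preceding it."""
    for i in range(n_tests):
        stream.put(f"self_checking_test_{i}")
    stream.put(None)  # sentinel marks end of stream

def run_platform(stream: queue.Queue) -> list:
    """Consumer: initialize hardware once, then run tests back-to-back."""
    results = []
    while (test := stream.get()) is not None:
        results.append((test, "PASS"))  # embedded self-checks report status
    return results

stream = queue.Queue(maxsize=4)  # bounded: generator stays just ahead
producer = threading.Thread(target=generate_tests, args=(stream, 8))
producer.start()
results = run_platform(stream)
producer.join()
print(f"{len(results)} tests executed without re-initialization")
```

The bounded queue captures the key property: the execution side never idles waiting for a full batch of tests to be generated up front.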

Debugging failures discovered in high-speed regressions presents its own challenges. Because failures may depend on accumulated microarchitectural state across multiple tests, simple re-execution may not reproduce the issue. The recommended strategy involves replaying sequences of streaming-enabled tests to reconstruct the failing conditions. Hardware/software debug using tools such as Verdi enables synchronized analysis of CPU traces and waveforms, allowing engineers to step through execution while correlating software behavior with hardware signals.


Bottom line: Accelerating complex RISC-V processor verification requires a tightly integrated strategy combining intelligent test generation, hardware-assisted execution, and advanced debug methodologies. By uniting STING with high-performance emulation platforms, verification teams can achieve comprehensive stimulus coverage, unprecedented execution speed, and effective debug—making verification closure achievable even for today’s most sophisticated RISC-V processors.

Also Read:

TSMC based 3D Chips: Socionext Achieves Two Successful Tape-Outs in Just Seven Months!

CISCO ASIC Success with Synopsys SLM IPs

How PCIe Multistream Architecture Enables AI Connectivity at 64 GT/s and 128 GT/s


Quantum Computers: Are We There Yet?
by Bernard Murphy on 01-06-2026 at 6:00 am


R&D for any fundamentally new technology takes time, especially for hardware; over 10 years passed from the first transistor to the first (very small) integrated circuit. The engineering behind quantum computers is arguably even more challenging than for electronic circuits, at least from today’s perspective. We shouldn’t be surprised that even now there are more challenges to overcome. As usual, for a quick read I will look at just a couple, a gate essential to demonstrating quantum advantage over classical computing, and quantum error correction (QEC) / fault-tolerant architectures.

T-gates

Introductions to quantum programming start with a simple set of gates called the Clifford group: H, S, and C-NOT gates. This group is important in many aspects of quantum programming and has the added advantage that circuits built from these gates can be simulated efficiently on classical computers (the Gottesman-Knill theorem). However, the group is not complete for supporting general calculations, especially for algorithms that would be exponentially complex to compute on a classical system.

T-gates are a popular option to extend the set. Skimming over the details, gates in the Clifford group can modify a qubit through a very limited set of possibilities. A T-gate extends the range and can be thought of as the square root of an S gate (equivalent to S if applied twice). When combined in a sequence with Clifford gates this enables any arbitrary operation on a qubit to some desired accuracy. (Might seem surprising but the math goes all the way back to Euler.)
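The square-root relationship is easy to check numerically. In the computational basis, S = diag(1, i) and T = diag(1, e^{iπ/4}), so applying T twice gives S. The small script below verifies this in plain Python; no quantum library is assumed.

```python
# Verify numerically that the T gate is a square root of the S gate:
# S = diag(1, i), T = diag(1, e^{i*pi/4}), and T applied twice equals S.
import cmath

S = [[1, 0], [0, 1j]]
T = [[1, 0], [0, cmath.exp(1j * cmath.pi / 4)]]

def matmul2(a, b):
    """Product of two 2x2 complex matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

T2 = matmul2(T, T)
assert all(abs(T2[i][j] - S[i][j]) < 1e-12 for i in range(2) for j in range(2))
print("T applied twice equals S")
```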

That’s the theory. How can a T-gate be implemented on a real quantum computer (QC)? Looking back at a couple of my earlier blogs on QC implementation (this and this), you’ll remember that gates are implemented by pulsing qubits with microwave, laser or other EM options. Naturally these mechanisms have imperfections, including finite frequency spread in pulses and imperfect focusing optics for lasers.

Add ambient noise on top of these imperfections and you can see that any intended gate operation will be performed with less than perfect fidelity. Clifford group states are in some sense maximally separated so perhaps a little less prone to errors, but a T-gate operation, as a square root of a Clifford gate, will typically be more sensitive.

The QC experts have an answer to this need. In one approach, known as magic state distillation, they pre-build one or more sets of purified T-gates, distilled from multiple noisy T-gates. With this approach, when considering your target algorithm, the number of T-gates you will need must be pre-determined, along with the level of accuracy you must meet.

I expect the mechanism to purify T-gates will be captured in a library, though given constraints on available qubit counts and coherence times in production systems today, any bloat will further limit the allowable size and duration of your algorithm. Adding this new gate also adds complexity to verifying an algorithm on a classical computer, since complexity on a classical system grows exponentially with the number of T-gates. Recent research has shown it is possible to simulate relatively shallow circuits with T-gates, so there is still hope for debugging noisy quantum circuits.

QEC and fault-tolerant computing

I touched on QEC in my blog on physical implementation of quantum computers. I’ll expand a bit more here. The basic concept in QEC is the same as in classical error correction – “copy” onto redundant qubits, run an operation, detect single qubit errors through majority voting and correct any errors (also work hard to make the probability of 2 or more errors very small). An operation starts with a qubit, adds redundancy, performs the operation, detects and corrects errors, from that reconstructing a corrected post-operation qubit.
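As a classical analogy only (real QEC cannot copy states and must also catch phase flips), a 3-bit repetition code shows the detect-and-correct shape:

```python
# Classical analogy: a 3-bit repetition code with majority-vote correction.
# Real QEC entangles with ancilla qubits instead of copying (no-cloning)
# and must also handle phase flips, but the detect/correct shape is similar.

def encode(bit: int) -> list:
    return [bit, bit, bit]        # add redundancy (legal classically)

def flip(code: list, i: int) -> list:
    corrupted = list(code)
    corrupted[i] ^= 1             # inject a single bit-flip error
    return corrupted

def decode(code: list) -> int:
    return int(sum(code) >= 2)    # majority vote corrects any single flip

# Every single-bit error on either logical value is corrected
for bit in (0, 1):
    for i in range(3):
        assert decode(flip(encode(bit), i)) == bit
print("all single-bit errors corrected")
```

As in the quantum case, two or more simultaneous errors defeat the vote, which is why the probability of multi-qubit errors must be driven very low.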

There are complications.

Real qubits cannot simply be copied (the no-cloning theorem); instead they are entangled with the redundant qubits. After an operation, the method detects incorrect entangled states and uses that result to correct errors. However, qubit error possibilities aren’t just bit-flips; they can also be phase flips (e.g. |0>+|1> flips to |0>-|1>), requiring more circuitry to detect and correct. Gates outside the Clifford group (like the T-gate) are more complicated to protect. Of course, error detection and correction circuitry will contribute further errors.

If this sounds difficult, you are not wrong. This is a race between error creation and error detection and correction. Some views held that ~1000 physical qubits would be required per logical qubit, clearly not scalable. Very active research is going into this area, in higher reliability qubits and in fault-tolerant computing, computing with accuracy using faulty systems.

I can’t find data on superconducting versus ion trap qubit reliabilities. Before considering external noise, I feel that ion trap reliabilities should be intrinsically high simply thanks to the physics of a single ion, whereas manufactured superconducting qubits will have inevitable tolerance errors.

IBM recently published breakthrough results on reducing noise in memory (a set of qubits), though this does not touch on gate operations. Here is a nicely readable version, including some background on the evolution of error-correcting codes from Hamming to surface codes and now their latest “gross code”. For gate operations, IBM points to next steps in the readable link immediately above; I couldn’t find reported results. In my (possibly faulty 😀) reading this suggests greatly reduced overhead for error correction, partly through choice of error-correcting codes and partly through pre-distillation of more error-prone operations such as T-gates (see above).

More background

There is a great video tutorial series from Artur Ekert (professor of quantum physics at Oxford) that I mentioned in an earlier blog. This is a deep dive into quantum information science, not for the timid or anyone looking for a quick summary. But I found it illuminated many points for me, especially in viewing any quantum computation as essentially a very elaborate quantum interference experiment. If you are familiar with regular wave interference (drop 2 stones in a pond and watch how the ripples interact) and/or the double-slit experiment this will make complete sense.

Also Read:

Simulating Quantum Computers. Innovation in Verification

Quantum Advantage is About the Algorithm, not the Computer

Quantum Computing Technologies and Challenges


Chips&Media and Visionary.ai Unveil the World’s First AI-Based Full Image Signal Processor, Redefining the Future of Image Quality
by Daniel Nenni on 01-05-2026 at 12:00 pm

In a groundbreaking announcement that is poised to transform digital imaging, Chips&Media, a leading provider of video codec and image processing hardware IP, has partnered with Visionary.ai, an innovative startup specializing in AI-driven vision technology, to unveil the world’s first fully AI-based Image Signal Processor (ISP). This revolutionary solution marks a significant shift from traditional hardware-fixed ISPs to a dynamic, software-powered system that leverages artificial intelligence for unparalleled image quality and adaptability.

Traditional Image Signal Processors have long been the backbone of digital cameras, smartphones, security systems, and automotive sensors. These hardware-based pipelines handle raw sensor data, performing tasks like demosaicing, noise reduction, color correction, and exposure adjustment. However, they are rigid, power-hungry, and limited in handling challenging conditions such as low light, high dynamic range (HDR), or fog. Visionary.ai has pioneered a software-based ISP that replaces conventional algorithms with AI models, achieving superior noise reduction. Results in low light show over 75% increase in detection, 91% reduction in false positives, and real-time performance that rivals or exceeds non-real-time alternatives.

By integrating Visionary.ai’s AI ISP technology with Chips&Media’s expertise in efficient hardware IP for image signal processing and computer vision, the collaboration delivers the first complete AI-native full ISP pipeline. This hybrid approach combines the flexibility of software with optimized hardware acceleration, enabling devices to process raw images with machine learning at every stage. The result? Dramatically sharper, more accurate colors, better low-light performance, and enhanced detail preservation, all while reducing power consumption and allowing over-the-air updates to improve performance post-deployment.

This innovation redefines image quality across industries. In consumer electronics, smartphones and action cameras will capture vibrant, noise-free videos in near-darkness. For automotive applications, ADAS and autonomous vehicles gain more reliable object detection in adverse weather. Security and surveillance systems benefit from clearer identification, while drones, medical imaging, robotics, and IoT devices see boosted accuracy in machine vision tasks. The AI ISP can be tuned for specific needs, emphasizing greens in agriculture or reds in medical diagnostics, offering customization unmatched by fixed hardware.

What sets this apart as the “world’s first” is its end-to-end AI integration: unlike partial AI enhancements from competitors like Ambarella or Sony, this is a comprehensive replacement of the traditional ISP pipeline with predictive, learning-based processing. Running efficiently on edge devices, it addresses the “garbage-in, garbage-out” problem in AI vision systems by providing cleaner input data from the start.

As AI becomes a standard feature in imaging, this partnership signals a new era where cameras evolve intelligently over time. Chips&Media and Visionary.ai are not just improving image quality, they are future-proofing vision technology, empowering devices to see the world more like the human eye, but better. This breakthrough promises to accelerate advancements in edge AI, making high-performance imaging accessible and affordable worldwide.

Contact Chips&Media

Also Read:

2026 Outlook with William Wang of ChipAgents.ai

2026 Outlook With Mahesh Tirupattur of Analog Bits

Silicon Catalyst: Searching for the Next Great Start-up


2026 Outlook with William Wang of ChipAgents.ai
by Daniel Nenni on 01-05-2026 at 10:00 am


William Wang is a world-leading expert in artificial intelligence, specializing in generative AI and large language models. As the Founder, CEO, and Chairman of Alpha Design AI, he brings a wealth of experience from academia and industry, having previously shipped Amazon Q at Amazon AWS Bedrock.

A Mellichamp Chair Professor of AI at UCSB, William has been recognized with prestigious honors such as the IEEE Laplace Award, BCS Karen Spärck Jones Award, NSF CAREER Award, IEEE AI’s 10 to Watch, and the DARPA Young Faculty Award.

William has published more than 250 articles at premier AI venues and is widely cited in top academic conferences and media outlets such as VentureBeat, Wired, Fortune, and SemiWiki.com. William is at the forefront of AI innovation, driving the next generation of AI-powered semiconductor design and verification.

Tell us a little bit about yourself and your company.

I’m William Wang, founder and CEO of ChipAgents.ai. I’m also a professor of AI at UC Santa Barbara. ChipAgents builds an agentic AI platform for semiconductor design and verification, helping RTL, DV, and CAD teams accelerate spec-to-silicon workflows using AI agents that integrate directly into existing EDA environments.

What was the most exciting high point of 2025 for your company?

In 2025, we achieved large-scale production deployments with multiple tier-1 semiconductor companies and saw rapid expansion from pilot teams to hundreds of engineers at a single customer. The scale of real usage, measured in billions of tokens and daily active engineers, strongly validated product-market fit.

What was the biggest challenge your company faced in 2025?

The biggest challenge was scaling from early adopters to enterprise-wide deployments while meeting strict security, infrastructure, and workflow integration requirements across very different semiconductor organizations.

How is your company’s work addressing this challenge?

We invested heavily in enterprise readiness, including different deployment environments, fine-grained access control, auditability, and deep integration with customers’ existing RTL, DV, and CAD toolchains, without forcing workflow changes.

What do you think the biggest growth area for 2026 will be, and why?

The biggest growth area in 2026 will be AI-native verification, debug, and system-level reasoning. As designs grow more complex, verification productivity, not raw RTL coding, is becoming the primary bottleneck.

How is your company’s work addressing this growth?

ChipAgents focuses on multi-agent reasoning for root-cause analysis, coverage closure, testbench generation, and design-verification co-optimization. Our agents operate across code, waveforms, logs, and specs, not just text.

What conferences did you attend in 2025 and how was the traffic?

We attended major industry events including DAC, DVCon, and several private semiconductor and EDA executive summits. Traffic and engagement were very strong, with a noticeable increase in hands-on technical discussions rather than exploratory conversations. It’s great to see the industry embracing our new AI agents platform as a force multiplier for their tapeout projects.

Will you participate in conferences going forward? Same or more?

Yes. We plan to participate in more conferences, with a stronger focus on targeted executive meetings, closed-door technical sessions, and customer-led use-case discussions rather than broad booth marketing.

How do customers normally engage with your company?

Customers typically start with a technical evaluation or pilot, followed by expansion into production teams. Engagement is highly collaborative, with close interaction between our engineering team and customer CAD, RTL, and DV groups.

Are you incorporating AI into your products?

Yes. AI is core to our product. ChipAgents is built around multi-agent systems designed specifically for semiconductor design and verification workflows.

Is AI affecting the way you develop your products?

Absolutely. We use AI internally for rapid prototyping, testing, and workflow optimization, and customer feedback from real production usage directly shapes how our agents evolve.

Additional comments?

AI in semiconductors is shifting from experimentation to mission-critical infrastructure. The winners will be platforms that integrate deeply, respect existing workflows, and deliver measurable productivity gains.

CONTACT ChipAgents

Also Read:

I Have Seen the Future with ChipAgents Autonomous Root Cause Analysis

AI RTL Generation versus AI RTL Verification

AI-Powered Waveform Debugging: Revolutionizing Semiconductor Verification


2026 Outlook With Mahesh Tirupattur of Analog Bits

by Daniel Nenni on 01-05-2026 at 6:00 am

Mahesh Tirupattur Analog Bits

Tell us a little bit about yourself and your company. 

As CEO of Analog Bits, I am quite excited to be at the helm of a company that plays a critical role in managing power both efficiently and in an intelligent manner.  The sustainability challenges resulting from the explosive growth in AI demand this. I joined the company over 22 years ago and became CEO in 2024. I have seen very strong growth for the company during 2025.

So, you asked about Analog Bits. Well, we are the industry’s leading supplier of mixed-signal IP for intelligent energy and power management in a broad range of semiconductor products. We have a track record spanning over 30 years of delivering silicon-proven IPs on the latest process nodes. We have an established reputation for easy and reliable integration into advanced SoCs through all the major silicon foundries, and Analog Bits has an outstanding heritage of first-time-working IP that lowers risk. Our leading low-power integrated clocking, sensor, and interconnect IPs are pervasive in virtually all of today’s semiconductors, resulting in billions of our IP cores fabricated in customer silicon, from 0.35-micron to 2nm processes.

What was the most exciting high point of 2025 for your company? 

Demonstrating first working silicon of several of our IPs on test chips using TSMC N2P, and demonstrating N3 IPs in our customers’ products. We also demonstrated power observability and delivered power savings with our power sensor and LDO IPs. We presented five joint TSMC OIP papers with customers such as ARM, Cerebras, NXP, Socionext, and Semidrive on three continents. This work demonstrated successful use cases of our IPs in AI, data center, and automotive applications. Furthermore, we received the Analog IP Partner of the Year Award from TSMC.

What was the biggest challenge your company faced in 2025? 

The biggest challenge was keeping up with the fast pace of demand from the industry as AI drives growth. We had to make sure we not only built the best engineering products but also achieved the scale needed to support market demand. We had to build teams that were scalable and able to adapt quickly to changing needs and requirements, delivering solutions with the highest quality. Another challenge we addressed was enhancing financial discipline and structure to support the IPO process of our parent company.

How is your company’s work addressing this challenge?  

In addition to our engineering team continuing to grow their skills on advanced-node designs, we also stayed close to our customers to ensure we understood their needs. So, our customer-facing team not only participated in many industry events to present papers and showcase our solutions but also spent a lot of time directly with customers. This meant spending a lot of time on planes to ensure first-hand that our IP solutions were meeting their needs.

What do you think the biggest growth area for 2026 will be, and why?

Clearly AI is the buzzword of the moment, but that is a very broad term. For Analog Bits, it translates to huge growth opportunities in areas like data centers and automotive. Today, we see that helping our customers manage power also optimizes performance; these qualities go hand in hand. We see the biggest growth opportunity as helping our customers gain greater insights into power and manage it for the best performance.

How is your company’s work addressing this growth? 

A key benefit of our technology is something we call the Intelligent Power Architecture. This means providing a way to manage the power within an SoC in a smart way, monitoring power on-chip so that you can optimize power utilization, while not compromising on performance. High accuracy temperature sensors and power glitch sensors are key IPs that can detect abnormalities in SoC designs. And our LDOs, regulators and PLLs help balance performance and power.

Is AI affecting the way you develop your products?

Of course, we are no different from anyone else in this industry in that AI is certainly going to change the way we develop products. For example, Analog Bits collaborates with large system companies designing AI SoCs that face a myriad of energy-efficiency challenges. This allows us to develop smart mixed-signal IPs that solve these application needs.

What conferences did you attend in 2025 and how was the traffic?

We attended several conferences around the world addressing many parts of the value chain, from design (e.g., DAC) to manufacturing (e.g., TSMC, GF and Samsung events globally). These days, most of these conferences are not about traffic but about engaging with customers and partners around the world so that we can be highly responsive to their rapidly changing needs.

Will you participate in conferences in 2026? Same or more as 2025?

Indeed, this will be an important part of our strategy for customer and partner engagement in 2026. We will continue as we did in 2025 but also review and explore what makes sense to do more of.

How do customers normally engage with your company?

Our customers are truly global. They reach out to us through events we attend, our website and our sales channels. We license over 200 IPs each year for 90+ customer engagements. It is exciting to see more system companies license IPs from us directly even though they work with large ASIC houses. This is a good indication that we are solving problems that have system-level importance.

Additional comments? 

The analog IPs we are developing are no longer an end-of-the-line purchase decision. The IPs we develop deliver architectural-level impact, and we are excited to build upon our success in 2025 and look forward to even more in 2026.

Also Read:

Podcast EP322: A Wide-Ranging and Colorful Conversation with Mahesh Tirupattur

Analog Bits Steps into the Spotlight at TSMC OIP

Analog Bits Steals the Show with Working IP on TSMC 3nm and 2nm and a New Design Strategy


Silicon Catalyst: Searching for the Next Great Start-up

by Daniel Nenni on 01-04-2026 at 2:00 pm


Silicon Catalyst has emerged as a distinctive force in the global start-up ecosystem, positioning itself not merely as an accelerator, but as a launchpad for deep-technology innovation. Focused primarily on semiconductor and hardware-based start-ups, Silicon Catalyst addresses a long-standing gap in the venture landscape: while software companies can scale quickly with limited capital, hardware and silicon ventures often require years of development, significant funding, and specialized expertise. In its search for the next great start-up, Silicon Catalyst blends technical rigor, industry partnerships, and long-term vision to nurture companies capable of reshaping entire industries.

At the heart of Silicon Catalyst’s mission is the recognition that silicon remains foundational to modern technology. From artificial intelligence and autonomous vehicles to healthcare devices and clean energy systems, advances in semiconductors drive progress across sectors. Yet, despite their importance, early-stage silicon start-ups face daunting barriers, high fabrication costs, long design cycles, and limited access to manufacturing resources. Silicon Catalyst seeks to remove these obstacles by offering selected start-ups unparalleled access to tools, mentorship, and industry networks that would otherwise be out of reach.

The search for the next great start-up begins with a strong emphasis on technical differentiation. Silicon Catalyst looks for founding teams with deep domain expertise and novel approaches to chip design, materials, or system architecture. Incremental improvements are not enough; the organization prioritizes ideas that promise step-change performance, cost efficiency, or energy savings. Whether it is a breakthrough in photonic computing, advanced sensors, or specialized AI accelerators, the goal is to identify technologies with the potential to become industry standards rather than niche solutions.

Equally important is the quality of the founding team. Silicon Catalyst understands that even the most promising technology can fail without capable leadership. Successful applicants typically combine technical excellence with entrepreneurial resilience. Founders must demonstrate the ability to learn quickly, adapt to market feedback, and navigate the complex relationships inherent in the semiconductor supply chain. The accelerator’s mentors, many of whom are seasoned executives, engineers, and investors, play a crucial role in shaping these teams, helping them avoid common pitfalls and make informed strategic decisions.

What sets Silicon Catalyst apart is its ecosystem-driven model. Instead of relying solely on cash investments, it offers start-ups access to an extensive network of partners, including semiconductor companies, EDA tool providers, foundries, and packaging firms. This in-kind support dramatically reduces development costs and accelerates time to market. For start-ups, this can mean the difference between an idea that remains on paper and a product that reaches customers. For Silicon Catalyst, it ensures that the start-ups it supports are grounded in real-world feasibility, not just theoretical promise.

The search for the next great start-up is also shaped by long-term thinking. Silicon Catalyst recognizes that hardware innovation does not conform to the rapid timelines typical of software ventures. As a result, it encourages patience, from both founders and investors, while maintaining rigorous milestones and accountability. This balanced approach allows start-ups to tackle ambitious problems without being forced into premature commercialization.

Bottom line: Silicon Catalyst’s pursuit of the next great start-up is about more than financial returns. It is about advancing the technological infrastructure that underpins modern society. By empowering silicon-focused entrepreneurs, the organization helps ensure continued innovation in areas critical to economic growth, national competitiveness, and global sustainability. In doing so, Silicon Catalyst is not just searching for the next success story—it is actively shaping the future of technology itself.

Submit Application Here

Also Read:

Revitalizing Semiconductor StartUps

Silicon Catalyst on the Road to $1 Trillion Industry

The 2025 Semi Industry Forum: On the Road to a $1 Trillion Industry


CEO Interview with Artem Golubev of testRigor

by Daniel Nenni on 01-04-2026 at 12:00 pm

Testrigor Artem

Artem Golubev is the founder and CEO of testRigor, a company revolutionizing software test automation through AI-powered plain English testing. With a mission to eliminate the massive maintenance overhead that plagues traditional testing tools while simultaneously improving test coverage, testRigor has enabled hundreds of companies, including Netflix, Splunk, Business Wire, and Koch Industries, to build test automation 15X faster and spend 200X less time maintaining it. I recently had the opportunity to sit down with Artem to discuss testRigor’s innovative approach to solving one of the software industry’s most persistent challenges.

Tell us about your company.

For almost a decade, I’ve been on a mission to reduce the amount of human effort spent on testing AND improve test coverage at the same time. We built testRigor to solve the problems that prevent people from achieving 100% test automation.

The fundamental issue is that many QA teams make the mistake of using tools like Selenium simply because they’re free and open source. However, Selenium introduces a maintenance overhead so large that it prevents companies from building more automation because they’re drowning in test maintenance. With Selenium, people test how engineers built things yesterday rather than how the application works from an end-user’s perspective.

testRigor was born to empower ANYONE to use free-flowing, plain English to build test automation. You don’t need to know how to code. Moreover, there is almost no maintenance since the specifications are in plain English, purely from the end-user’s perspective. Testing can be as easy as writing: “find and select a Kindle” and “add it to the cart.” Our AI-driven platform translates these high-level instructions into specific test steps automatically.

We strongly believe in our mission that technology can do MUCH more for testing compared to what is available on the market today. Our goal is to allow our customers to have as valuable a test suite as possible with as little effort as possible.

What problems are you solving?

The software testing industry faces three critical problems that we’re addressing head-on.

First is the maintenance nightmare. Traditional script-based automation tools require constant updates whenever there are changes to the application’s HTML structure, IDs, or XPath selectors. Companies spend 90-99% of their automation time on maintenance rather than creating new tests. This is simply unsustainable. With testRigor, because tests are written in plain English from the end-user’s perspective, they withstand changes in implementation details. When you write “click on the Submit button,” it doesn’t matter if developers change the button’s ID or CSS class; the test continues to work.
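The resilience described here comes from matching on what the user sees rather than on implementation details. The toy sketch below illustrates that idea only; it is not testRigor’s actual engine, and all names in it (`find_by_label`, the page dictionaries) are hypothetical.

```python
# Toy illustration: locate an element by its visible label rather than
# by brittle implementation details such as IDs or CSS classes.
# Hypothetical names throughout -- this is not testRigor's real engine.

def find_by_label(elements, label):
    """Return the first element whose visible text matches `label`,
    ignoring case and surrounding whitespace."""
    wanted = label.strip().lower()
    for el in elements:
        if el.get("text", "").strip().lower() == wanted:
            return el
    return None

# Yesterday's build: the button had id="btn-42".
page_v1 = [{"tag": "button", "id": "btn-42", "text": "Submit"}]
# Today's build: developers renamed the id; the label is unchanged.
page_v2 = [{"tag": "button", "id": "submit-main", "text": " Submit "}]

# A test keyed to the label keeps working across both builds,
# whereas a selector pinned to "btn-42" would break on page_v2.
assert find_by_label(page_v1, "Submit")["id"] == "btn-42"
assert find_by_label(page_v2, "Submit")["id"] == "submit-main"
```

The same design choice shows up in modern test frameworks that favor text- or role-based locators over XPath, for exactly the reason described above.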

Second, there’s a massive skills gap. Traditional automation requires specialized programming knowledge, which means companies either need to hire expensive automation engineers or watch their manual QA teams sit on the sidelines. This creates a bottleneck where only a small percentage of test cases ever get automated. testRigor democratizes test automation by allowing product managers, manual testers, and business analysts to create and understand automated tests using plain English.

Third is the coverage problem. Most companies struggle to get beyond 30-40% test automation coverage because the maintenance burden becomes overwhelming. We’ve seen Fortune 1000 companies go from 34% automation coverage to 91% in under 9 months using testRigor with only their manual QA team. That’s the kind of transformation we enable.

What application areas are your strongest in?

testRigor excels in several key areas that make us the only end-to-end test automation tool companies need.

Our strongest area is cross-system end-to-end testing. We can build tests spanning web, mobile (both native and hybrid for iOS and Android), desktop applications, APIs, Salesforce, ServiceNow, Microsoft Dynamics, SAP, and any other third-party systems, all in one simple test. This is unique in the market. For instance, you can create a test that starts on a web application, sends an email, verifies the email content, clicks a link in that email, continues the flow on a mobile app, and validates API responses all in plain English within a single test case.

We’re particularly strong with form-based UIs and functionality that has predictable input/output. We excel at test cases that require multiple users to interact with the same flow, whether via email, SMS, or instant messages. Our platform is ideal for applications with constantly changing code and HTML IDs, where traditional automation fails spectacularly.

What keeps your customers up at night?

Our customers lose sleep over several critical concerns, and we’ve designed testRigor to address each one.

The biggest worry is production bugs reaching customers. Companies know they should have comprehensive test coverage, but with traditional tools, building and maintaining that coverage requires resources they don’t have. They’re caught in a terrible dilemma: invest heavily in test automation that will consume endless maintenance time, or ship with inadequate testing and risk customer-impacting bugs. With testRigor, they can finally achieve comprehensive coverage without the maintenance penalty.

Release velocity is another major concern. In today’s competitive market, companies need to ship features fast, but they’re terrified of breaking existing functionality. Traditional regression testing is a bottleneck; either you wait days for manual testing, or you skip tests and cross your fingers. Our customers use testRigor to run comprehensive regression suites in parallel, getting results in minutes instead of hours or days. This enables true continuous deployment.

Technical debt in testing is a silent killer. Many companies have invested years in Selenium-based automation, only to find themselves with brittle test suites that break with every release. They’re spending more time fixing tests than finding bugs. Migration seems daunting, but companies like Enumerate saved $180,000 by switching to testRigor, avoiding the Selenium setup costs and not needing to hire specialized automation engineers.

What new features/technology are you working on?

We’re continuously pushing the boundaries of what’s possible with AI-powered test automation, and we have several exciting developments in the pipeline.

Our generative AI capabilities are evolving rapidly. We recently launched features that allow users to generate entire test cases by simply providing an app description. The AI analyzes the application and creates comprehensive test scenarios automatically. We’re enhancing this to handle increasingly complex applications and edge cases. Users can also generate tests from their existing manual test documentation, just paste in their test steps, and testRigor converts them into executable automation.

We’re expanding our AI-powered testing capabilities specifically for LLM-based applications. As more companies build chatbots, virtual assistants, and other AI-driven features, they need ways to validate these systems. We’re developing advanced natural language validation that can assess whether AI responses are appropriate, detect bias or sensitive information leakage, and validate sentiment across diverse inputs.

Self-healing test capabilities are being enhanced. Our AI already automatically adapts tests when UI elements move or change, but we’re making this even more intelligent. The system will soon provide predictive insights about which tests might be affected by upcoming code changes, allowing teams to proactively review and adjust tests before running them.

How do customers normally engage with your company?

We’ve designed multiple pathways for customers to experience testRigor’s value, because we understand that different organizations have different needs and evaluation processes.

Many customers start with our forever-free public account. This allows a single user to explore testRigor with unlimited test cases, though tests are publicly visible. It’s perfect for individuals who want to experiment with the technology and see the plain English approach in action. The free tier includes a limited feature set and community support, but it’s enough to understand testRigor’s core value proposition.

For teams ready to get serious, we offer private tier plans starting at $99/month for a single user with 1,000 test cases. This provides access to our extended feature set and customer support. Our mid-tier plans, starting at $900 per month, offer unlimited users and test cases – ideal for growing teams that need to scale their automation efforts. Enterprise plans provide custom pricing with additional features like SSO, SLA guarantees, and on-premise deployment options.

The typical engagement starts with a personalized demo. A testRigor specialist walks prospects through the platform, often using the customer’s own application to demonstrate real value. We can usually show meaningful results within the first session – writing actual tests in plain English that immediately work against their system. This hands-on approach resonates with teams who are tired of lengthy evaluation processes with traditional tools.

Contact testRigor.

Also Read:

CEO Interview with Gopi Sirineni of Axiado

CEO Interview with Masha Petrova of Nullspace

CEO Interview with Eelko Brinkhoff of PhotonDelta


Podcast EP325: How KIOXIA is Revolutionizing NAND FLASH Memory

by Daniel Nenni on 01-02-2026 at 10:00 am

Daniel is joined by Doug Wong, senior member of the technical staff at KIOXIA America, where he has contributed to the advancement of memory technologies since 1993. He began his career with KIOXIA in the company’s Memory Division (then part of Toshiba America) and has since focused on a broad range of memory solutions, including PSRAM, SRAM, MROM, EPROMs, NOR flash, and NAND flash. Doug has been an active contributor to industry standards as a member of the JEDEC JC42.4 committee since 1996.

Dan explores with Doug the broad range of innovations KIOXIA has made in the development of 3D NAND flash memory. The technology innovations that produce 3D memories using multiple wafers are discussed, along with the work KIOXIA has done with JEDEC to advance the technology for the industry. Doug also describes the impact these advances have had on many growth industries, including AI and automotive, as well as the newer generations of the technology that are emerging and the live demonstrations presented at several major shows.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.