
2026 Outlook with Nilesh Kamdar of Keysight EDA
by Daniel Nenni on 01-13-2026 at 10:00 am


Tell us a little bit about yourself and your company.
I’m Nilesh Kamdar, General Manager of the Keysight EDA business unit. Keysight is an S&P 500 company that provides design, emulation, and test solutions to help engineers develop and deploy faster with less risk. On the EDA side, we focus on RFMW, high-speed digital, systems, power and photonic design challenges. These are the problems that keep semiconductor and system designers up at night: multi-physics simulation, signal integrity, power consumption, and making sure complex designs work.

What was the most exciting high point of 2025 for your company?

Acquiring Synopsys’ Optical Solutions Group and Ansys’ PowerArtist were high points. Rather than just buying market share, these additions bring innovative optical design capabilities (CODE V, LightTools, RSoft) and leading RTL power analysis (PowerArtist) to our portfolio, addressing the multi-physics imperative. We’re bringing in decades of expertise in photonics, optics, and power analysis, enabling Keysight to deliver multi-domain system design in an open, vendor-agnostic ecosystem. As thermal and power constraints tighten, these capabilities are imperative.

What was the biggest challenge your company faced in 2025?

Helping our customers on their AI journey and ensuring that the technology is carefully and securely deployed. This is often at odds with some in the C-suite who view AI as the answer to every problem and assume substantial cost savings are inevitable.

How is your company’s work addressing this challenge?

To help our customers, we’re building AI features to solve problems in design workflows – accelerating verification, prioritizing corner-case testing, and reducing manual iterations. Part of this is educating on where AI delivers value and where human expertise remains essential. At Keysight, we believe AI is about augmentation.

What do you think the biggest growth area for 2026 will be, and why?

Multi-physics simulation. As designs push against thermal and power limits, particularly in data centers, the ability to simultaneously analyze electrical, thermal, and mechanical properties has become essential. Every milliwatt matters when data centers consume billions of watts of energy. Tools that optimize across domains will be critical for next-generation designs.

How is your company’s work addressing this growth?

We’re continuing to develop our multi-physics capabilities to improve co-design and co-verification. This includes advancing photonics integration, enhancing thermal analysis capabilities, and ensuring that designers can understand trade-offs across electrical performance, thermal management, and manufacturing constraints within workflows.

Are you incorporating AI into your products?

Keysight has been at the forefront of integrating AI. From an EDA perspective, we’re focused on integrating capabilities that augment productivity in verification and design workflows. Our goal is to harness AI to help engineers work faster while utilizing their unique expertise to design complex semiconductors.

Is AI affecting the way you develop your products?

We’re using AI to accelerate our own development cycles and improve product quality. More importantly, we’re deploying AI judiciously to solve real problems our customers face, such as workflow bottlenecks.

What conferences did you attend in 2025 and how was the traffic?

We attended various industry events spanning DesignCon, DAC (Design Automation Conference), IMS (International Microwave Symposium), DVCon India, ECOC, and European Microwave Week. Traffic was robust across all of these. Conversations focused on detailed implementation discussions around AI-enhanced tools, multi-physics solutions, and chiplet design, where the technical challenges are most acute.

Will you participate in conferences in 2026? Same or more than in 2025?

Events remain an important part of our strategy to connect and I don’t see that changing anytime soon. As design complexity increases, face-to-face technical discussions are invaluable!

How do customers normally engage with your company?

There are multiple ways we engage with customers, including direct sales, technical support teams, and field application engineers who work closely with design teams. We also connect through training programs, webinars, and technical content. Personally, I spend a lot of time on the road and engage regularly with customers to understand their perspective. Keysight is solving specific design challenges with our customers, not just selling licenses.

Also Read:

From Silos to Systems, From Data to Insight: Keysight’s Upcoming Webinar on EDA Data Transformation

An Insight into Building Quantum Computers

Podcast EP317: A Broad Overview of Design Data Management with Keysight’s Pedro Pires

Video EP11: Meeting the Challenges of Superconducting Quantum System Design with Mohamed Hassan


Verifying RISC-V Platforms for Space
by Bernard Murphy on 01-13-2026 at 6:00 am

User making a call through a satellite

Space applications are booming, prompted by rapidly declining launch costs now attainable through commercial competition. Thanks to ventures like SpaceX, the cost to put a satellite into low earth orbit (LEO) has dropped from $20k/kg to $2k/kg today and is expected to drop further to $200/kg or lower. Plummeting costs drive new opportunities including widely accessible SATCOM (satellite communication), offering phone and broadband support through large constellations of satellites from Starlink and Amazon Leo (previously Amazon Kuiper). China is also on this bandwagon, building multiple constellations of their own. Standardization through the 3GPP (cellular) consortium is accelerating and will enable competitive interoperability, already partially supported in 5G-Advanced and expected to be fully supported in 6G. This isn’t just for emergency calls but also for regular calls direct to your smartphone. Standards-based phone communication, broadband and IoT support, across 85%+ of the earth’s surface where there are no terrestrial base stations? This is an exciting time for space-ready systems.

Equally, space-based services will continue to grow in importance for defense, weather and climate monitoring, disaster support, and many other applications. Across all these deployments, expect to see growth in directions similar to those we see in terrestrial applications: AI, high-performance servers, security, and much more. A new space-based economy is blossoming and will demand electronics able to function reliably for many years in this harsh environment.

Space-ready is not just about rad-hard

Radiation, both solar and from deep space, makes space a particularly hostile environment for electronics. Protection against electron cascades triggered by high-energy cosmic rays demands special radiation-hardened (rad-hard) processes, logic redundancy, and ECC: all the tricks we now see in high-safety automotive systems, but far more demanding in space, which lacks the shielding our atmosphere provides.

Rad-hard is important but alone it is not enough. Beyond remotely triggered options, electronics in space can’t be serviced cost-effectively. If something stops working and a reset or reboot won’t fix the problem, the satellite is dead. This puts much higher emphasis on bullet-proof verification while the system is still in design. Good enough for a phone, even for a car, isn’t good enough for a satellite.

Comprehensive verification against a spec

Complex specifications present some unique challenges in this respect. According to Dave Kelf (CEO, Breker), the RISC-V International ISA spec runs to around 1,400 pages, all carefully considered and agreed. Now you must verify behavior not just for what you added to the ISA but also across interactions with other features, driven by as many positive and corner-case use cases as you can construct.

It is not difficult to generate comprehensive unit test cases for rad-hardening features such as ECC or redundancy. Testing other spec features individually, one at a time (say, cache coherence management), may not be too bad either, though covering each comprehensively is harder. Where testing becomes truly hard is in cross-verifying all relevant use cases against each other. Real systems run many objectives simultaneously, stalling as needed to deal with traffic contention in bus fabrics. Stalls, latency, and congestion are where difficult bugs lurk: satellite-dooming bugs. Getting to high confidence here requires system-level test definition, with the ability to run multiple use cases simultaneously.

Breker’s approach to testing starts with system-level test models abstracted from implementation details, allowing you to easily combine VIPs defined at the same level. Breker themselves offer several VIPs, including RISC-V-related tests, together with tests checking coherency compliance, security and other areas. You can easily add your own models following the PSS standard or using standard C++, allowing for randomization especially around test models for your ISA extensions. Running these models together, generating interwoven and high demand traffic loads, will probe every dark corner of your system behavior.

What if you missed spec corners when building your test models?

The purpose of verification is to test compliance of your implementation with the spec. As a description of what compliance should mean, the RISC-V spec may be one of the more debated and refined specifications in our industry. But there is still a fallible, human step in mapping that document to a complete implementation of intent as represented in a test specification.

As linear documents, even the best and most reviewed of specs are an awkward way to capture a web of interconnected feature dependencies. This is not an academic concern. Dave shared a real challenge in interpreting RISC-V requirements around fence instructions. A quick reminder: these are instructions inserted in assembly code to control accesses to memory shared between multiple cores, which must be correctly ordered to avoid races. Standard coherence protocols can handle many but not all such cases. Special cases are typically application specific, such as a shared memory location used as a semaphore. Core A updates the semaphore, indicating it is safe for core B to perform some other action. If core B reads the semaphore before core A has updated it, it may incorrectly assume it is OK to proceed. Adding fence instructions enforces the ordering needed to avoid races between the writes and reads.

Coherency bugs can be very challenging to catch, sometimes only appearing after billions of cycles in production. Not the kind of problem you want to see in a deployed satellite. The Breker guys spotted a RISC-V spec challenge that could lead to such a bug (leading one now loyal Breker customer to a redesign). The spec is completely accurate, detailing fence behaviors early in the document. But in a later unrelated part of the document, there is a mention of a fence behavior which is also important to understand, yet is easy to miss if your reading is limited to the earlier section.

Specs are living documents, and it is probably unrealistic to expect this kind of problem to never appear again. A safer approach would be to use AI to tease out such traps and, more generally, the web of relationships through a spec. I have talked earlier about Breker’s approach to AI, based on NLP rather than LLMs. Dave tells me this is still in development, but they have already applied it to detect these distributed fence references in the spec, which it has done very successfully. To me this looks like an essential second step to closing the understanding loop in specs. First, make sure the spec is oracle-worthy (not a problem for RISC-V); second, make sure that you understand all relationship webs throughout the spec when translating into test models.

Very interesting. You can learn more in this press release on the Breker and FrontGrade Gaisler collaboration.

Also Read:

A Principled AI Path to Spec-Driven Verification

RISC-V Virtualization and the Complexity of MMUs

How Breker is Helping to Solve the RISC-V Certification Problem


2026 Outlook with Paul Neil of Mach42
by Daniel Nenni on 01-12-2026 at 10:00 am

Paul's headshot

Tell us a little bit about yourself and your company

I’m Paul, Chief Operating Officer at Mach42. As COO, I am responsible for the business growth of Mach42, as well as driving customer success. My previous roles included VP of Product at Axelera AI, Graphcore and XMOS. I hold a PhD in Electrical Engineering and an MBA in Technology Management.

Mach42 is delivering a modern solution to accelerate analog and mixed-signal verification, leveraging advanced machine learning and AI to simplify, automate, and speed up complex verification tasks. Our proprietary neural network technology enables the creation of high-accuracy surrogate models from minimal data, dramatically reducing development and computational costs. These models can be automatically exported in Verilog-A, System Verilog, and C/C++ formats, enabling seamless integration with industry-standard simulators.

What was the most exciting high point of 2025 for your company?

One of the standout high points in 2025 was successfully unveiling a breakthrough AI-powered solution for analog circuit analysis. Our Discovery Platform was enhanced to dramatically improve validation of design performance across varying spec conditions, supporting near-realtime analysis to rapidly detect out-of-spec violations.

Another major highlight was being named a finalist in four prestigious awards this year. This includes the Design Tool and Development Software Product of the Year at the Elektra Awards, the Innovation Award at the OXBA Awards and both AI Innovation of the Year and Innovative Tech Company of the Year at the Thames Valley Tech and Innovation Awards.

What was the biggest challenge your company faced in 2025?

Our main challenge was striking the right balance between long-term product innovation and near-term customer deliverables. We’ve addressed this by embedding domain-specific analog intelligence into our neural network technology, enabling near–real-time verification while remaining fully compatible with existing EDA workflows.

As a result, Mach42 can automatically generate accurate surrogate models in Verilog-A format that run on standard SPICE-class simulators, significantly reducing verification time without compromising accuracy.

How is your company’s work addressing this challenge?

Mach42 addresses this challenge by combining physics-aware neural network models with deep analog design expertise to deliver immediate, production-ready value. Our platform integrates directly into existing EDA workflows, allowing engineers to achieve near–real-time verification speeds without changing how they design or verify circuits.

By automatically generating accurate surrogate models in Verilog-A, Mach42 dramatically reduces simulation time while preserving SPICE-level fidelity, enabling faster design iteration and earlier identification of corner-case issues.

Additionally, our advisory board and dedicated technical team help shape a credible long-term roadmap. Industry recognition and programs such as Cadence’s Connections Program further reinforce trust in our technology.

What do you think the biggest growth area for 2026 will be, and why?

Looking to 2026, AI-driven verification and simulation in semiconductor design is poised for significant growth. As chips become increasingly complex and time-to-market pressures intensify, engineers will demand solutions that accelerate verification while maintaining the highest levels of accuracy. Tools that combine speed, scalability, and predictive insights will become essential to meeting these challenges.

How is your company’s work addressing this growth?

Mach42’s Discovery Platform directly addresses this growth by leveraging machine-learning–driven emulation to rapidly predict design outcomes and accelerate design space exploration, identifying out-of-spec conditions early. It also streamlines IP reuse by validating performance across varied specifications and integrates seamlessly with existing simulators and flows, making adoption straightforward for design teams. These capabilities position Mach42 as a key enabler for next-generation semiconductor design.

Are you incorporating AI into your products? / Is AI affecting the way you develop your products?

Absolutely. AI is at the heart of our platform, powering proprietary algorithms that accelerate traditionally slow simulation and verification tasks while preserving accuracy, delivering orders-of-magnitude speedups. It’s not just a feature—AI shapes both the product and how we develop it. Our models learn from past simulation data, adapt to complex analog design challenges, and continuously improve through real-world feedback, enhancing both performance and reliability.

How do customers normally engage with your company?

Customers typically start with a focused engagement to identify their key areas of interest. After this initial phase, they move to an annual subscription, which provides full access to the Mach42 Discovery Platform, R&D support, and ongoing technical assistance.

Additional comments?

Mach42’s rapid progress in 2025—from major product milestones to industry recognition—highlights the rising demand for AI-first solutions in complex engineering. Positioned at the intersection of AI, simulation, and semiconductor design, we are uniquely equipped to shape the future of chip development.

Contact Mach42

Also Read:

Video EP12: How Mach42 is Changing Analog Verification with Antun Domic

Video EP10: An Overview of Mach42’s AI Platform with Brett Larder

An Important Advance in Analog Verification

CEO Interview: Bijan Kiani of Mach42


Siemens and NVIDIA Expand Partnership to Build the Industrial AI Operating System
by Daniel Nenni on 01-12-2026 at 6:00 am

CES 2026 Jensen Huang founder and CEO of NVIDIA Roland Busch President and CEO of Siemens AG

At CES in Las Vegas, Siemens and NVIDIA announced a major expansion of their long-standing collaboration, aiming to create what they term the “Industrial AI Operating System.” This ambitious initiative seeks to embed artificial intelligence deeply across the entire industrial value chain—from design and engineering to manufacturing, operations, and supply chains—transforming how physical systems are conceived, built, and managed in the real world.

The partnership builds on previous efforts, including integrations between Siemens’ Xcelerator platform and NVIDIA’s Omniverse for photorealistic digital twins. Now, the companies are fusing NVIDIA’s expertise in accelerated computing, generative AI, and simulation with Siemens’ industrial software, automation, and domain knowledge to develop AI-native workflows that turn passive digital twins into active, intelligent systems.

NVIDIA will contribute its AI infrastructure, simulation libraries, models, frameworks like Omniverse and CUDA-X, and blueprints for scalable deployment. Siemens, in turn, is committing hundreds of industrial AI experts along with its leading hardware and software portfolio. As Roland Busch, President and CEO of Siemens AG, stated, “Together, we are building the Industrial AI operating system—redefining how the physical world is designed, built, and run—to scale AI and create real-world impact.”

Jensen Huang, NVIDIA’s founder and CEO, emphasized the revolutionary potential: “Generative AI and accelerated computing have ignited a new industrial revolution, transforming digital twins from passive simulations into the active intelligence of the physical world.” The collaboration closes the loop between virtual simulation and physical execution, allowing industries to model complex systems virtually, optimize in real time, and automate seamlessly.

A key focus is creating fully AI-driven, adaptive manufacturing sites. The blueprint begins in 2026 with Siemens’ Electronics Factory in Erlangen, Germany, serving as the world’s first such facility. Powered by an “AI Brain”—combining software-defined automation, industrial operations software, and NVIDIA Omniverse libraries—these factories will continuously analyze digital twins, test improvements virtually, and apply validated changes directly to the shop floor. This promises reduced commissioning times, higher productivity, lower risks, and more sustainable operations.

The partnership extends to semiconductor design, where Siemens will GPU-accelerate its electronic design automation tools using NVIDIA’s PhysicsNeMo and CUDA-X, targeting 2x to 10x speedups in verification and layout processes. Additionally, the companies are developing blueprints for next-generation AI factories that optimize power, cooling, and automation for high-density computing.

To prove scalability, Siemens and NVIDIA will first implement these technologies in their own operations before rolling them out to customers. Early adopters evaluating the capabilities include Foxconn, HD Hyundai, KION Group, and PepsiCo.

This expanded alliance positions Siemens and NVIDIA at the forefront of the industrial AI revolution, accelerating innovation while addressing challenges like energy efficiency and resilience in global infrastructure. By making AI accessible and impactful at industrial scale, the Industrial AI Operating System could usher in a new era of smarter, more adaptive manufacturing, bridging the digital and physical worlds like never before.

Also Read:

Automotive Digital Twins Out of The Box and Real Time with PAVE360

Addressing Silent Data Corruption (SDC) with In-System Embedded Deterministic Testing

3D ESD verification: Tackling new challenges in advanced IC design


CES 2026 and all things Cycling
by Daniel Payne on 01-11-2026 at 2:00 pm

segway

I just completed the annual Rapha 500 Challenge on Strava by cycling 869 km in eight days, so it’s time to give you my annual recap of CES 2026 and all things cycling. Similar to previous years, the big push again in 2026 is e-bikes and even e-motos. The AI acronym was everywhere too, in product names and announcements, as physical AI features abound in new products. Oh, and EDA and IP vendors like Synopsys, Siemens, Cadence, MIPS, Ceva, RISC-V, T2M-IP and Dolphin Semiconductor were also present at CES this year.

E-bikes

Segway

Their original two-wheel self-balancing device is hardly mentioned anymore, in favor of two new e-bikes and one e-moto.

Xaber 300, Myon, Muxi

The Xaber 300 is an electric dirt bike with full suspension, the Myon is a Class 3 commuter e-bike capable of a 28 mph top speed, while the Muxi is a more compact Class 2 commuter e-bike with a top speed of 20 mph.

Yadea

This Chinese company showed two new e-bikes, Fatboy for fat tire lovers, and Flo for urban commuters.

Fatboy, Flo

BGL Bike

They offer a range of six e-bikes: City, Fat, Folding, Mountain, Road, Trike.

BGL Bike: Electric Fat Bike

Kamingo

How about taking your existing bike and converting it into an e-bike? Kamingo has such a conversion kit that assembles in minutes, driving the rear wheel of your bike. They also won a 2026 CES Innovation Award for this product.

Kamingo

Bosch

This German company provides the e-bike drivetrain to 100 bike brands around the world, and they showed the Family Next e-bike, using all of Bosch’s eBike system.

Bosch eBike System

C-Star

This year they showcased several new e-bikes for off-road use.

Alucard: CS-M4A

Hyper Bicycles

This vendor has both traditional bike and e-bikes, even e-bikes for kids and several models for off-road use.

Hyper Bicycles: 16in e-balance for kids

Macfox

Three e-bikes were on display for urban cyclists that prefer fat tires for a comfy ride.

Macfox

Leoguar Bikes

Hailing from Texas, this vendor provides four e-bike styles: Fat tire, Cruiser, Mountain and Folding.

Leoguar Bikes

Mimbob

Another Chinese supplier with a wide range of e-motos for off-road use.

Mimbob

Heybike

Multiple e-bike models were shown this week: Venus – cruiser, Helio F – foldable, Mars 3.0 – fat tire, Ranger 3.0 Pro – suspension, Polaris – touring, Omega – long distance, Villain – dirt bike.

Heybike: Polaris

Radar

I’ve started using the Garmin Varia radar on my road bike and it alerts me to approaching traffic from behind by beeping and then showing how close it is on my Garmin bike computer display. Several companies have entered the bike radar business.

Segway

For only $99 you get a bare-bones radar to fit on the back of your Segway bike.

Segway Rearview Radar

Seeru

A 2026 honoree in mobile devices, the Seeru (the name stands for See Rear for U) can be attached to a bicycle or a powered wheelchair to report approaching vehicles.

Seeru

Miscellaneous

Bosch added a new e-bike security feature so that a stolen e-bike gets marked in the eBike Flow app, alerting dealers and authorities that it has been stolen.

Bosch e-bike security

EVs have used regenerative braking for several years now and a company called Hello Space showed off a Mag Drive system that charges your e-bike battery while the bike is moving, extending the battery range. I’d love to know how this works on an e-bike, because in my Tesla the regenerative braking slows, then stops my EV.

Hello Space: Mag Drive

Livall returned to CES with their PikaBoost e-bike converter and AI Visual Smart Taillight products.

PikaBoost
AI Smart Light

Hypershell

Tony Stark became Iron Man after donning a robotic suit, so Hypershell has the X Ultra Exoskeleton to deliver more power for cyclists.

Hypershell

BleeqUp brought their AI-powered glasses called Ranger to CES, and they have video recording, intercom, and AI sports assistance. I like the idea of keeping my eyes on the road while cycling combined with these features.

BleeqUp: Ranger

Speediance showcased their smart trainer, VeloNix, with adjustable controls to fit your body shape and a screen with metrics to keep you informed of your workout progress.

Speediance: VeloNix

Daniel’s Gear

Outdoor riding:

  • 2022 Cervelo R5 road bike, Zipp 404 wheels, SRAM AXS Force components
  • Garmin 1040 bike computer
  • Garmin Dual heart rate monitor
  • Specialized S-Phyre cycling shoes
  • Garmin Varia radar
  • SpeedPlay pedals
  • Continental Grand Prix 5000S TR, tubeless tires

Indoor riding:

  • 2016 Cervelo R3 road bike, Zipp 404 wheels, SRAM Red components
  • Tacx Neo 2T smart trainer
  • Wahoo KICKR Desk
  • Zwift cycling app
  • Mac mini M4, wireless keyboard, wireless trackpad
  • Raycon earbuds for Discord app

Summary

The top road bike brands shun CES, like Trek, Specialized, Giant, Canyon, Orbea, Cannondale, Cervelo, Scott, Pinarello, Bianchi, Colnago and Factor. Many of these bike brands do offer e-bikes, but CES is just not a focus show for them.

What I saw virtually at CES this year is a continued strong presence of e-bikes from non-traditional bike companies, most of them recently founded, so it’s a crowded market for sure. E-bike sales were over 1 million in 2022, and 63% of all new bikes sold from 2019 to 2023 were e-bikes. US e-bike sales in 2024 were 1.7 million units.

Troubling for me, when I ride my road bike around the Portland, Oregon area, is the distinct trend of seeing many e-bike riders without helmets; protecting your head with a helmet is a must for safety. I wouldn’t be alive without a helmet after my 2018 bike crash. E-motos are involved in more crashes than e-bikes, but both categories should have increased safety measures like helmets and the use of hand signals.

Also Read:

Nvidia Overcoming the Challenges of Blending Hardware Verification Expertise with AI and ML

Acceleration of Complex RISC-V Processor Verification Using Test Generation Integrated with Hardware Emulation

Revolutionizing Hardware Design Debugging with Time Travel Technology


Podcast EP326: How PhotonDelta is Advancing the Use of Photonic Chip Technology with Jorn Smeets

Podcast EP326: How PhotonDelta is Advancing the Use of Photonic Chip Technology with Jorn Smeets
by Daniel Nenni on 01-09-2026 at 10:00 am

Daniel is joined by Jorn Smeets, Managing Director for North America at PhotonDelta, an industry accelerator for photonic chip technology. Based in Silicon Valley, his mission is to advance the photonic chip industry by fostering collaboration between European and North American entities.

Dan explores the focus of PhotonDelta with Jorn, who describes the organization’s broad charter to support an end-to-end value chain for photonic chips that designs, develops, and manufactures innovative solutions to contribute to a better world. Jorn explains some of the impressive work PhotonDelta has done in collaboration with the worldwide supply chain to enhance the use of photonic chip technology.

Jorn also discusses the upcoming PIC Summit USA event. This event started last year as a small gathering of key players to explore how to better collaborate to expand the impact of photonic chip technology. He explains that this year the event has been expanded to include more topics and more participation from a world-class group of speakers and organizations.

The event will be held in Sunnyvale, CA on January 19, right before the SPIE Photonics West Exhibition in San Francisco. Attendance at PIC Summit USA is by invitation only. You can learn more about the event and request an invitation here.


Webinar: Why AI-Assisted Security Verification For Chip Design is So Important
by Mike Gianfagna on 01-09-2026 at 6:00 am


It is well-known that AI is everywhere, and the incredible power of this new technology is enabled by highly complex, purpose-built silicon. But there is a silent enemy of this substantial, world-changing progress. Something that has the power to steal a bright future from all of us. The hardware root of trust for those advanced custom chips is at the epicenter of the story. Simply put, AI advances have made the hardware root of trust vulnerable to attack. You can see the stories in news headlines. And it’s getting worse.

Thankfully, there are companies focused on this problem. Caspia Technologies is a bright spot among those companies. It is developing an AI platform of tools, a methodology and training to fortify chip design practices against this threat to secure future innovation. The company presented an important webinar on this topic recently. If you’re involved in advanced chip design, you need to see this webinar. A replay link is coming. Let’s first look at some details about why AI-assisted security verification for chip design is so important.

Who’s Presenting

The webinar contains three parts – an overview of the problem and Caspia’s solution, a live demonstration of how to find and fix security flaws in real chip designs and an interactive Q&A session with the webinar audience. There are two well-qualified members of the Caspia team presenting:

Beau Bakken first provides an overview of security risks all design teams face today. He then describes an effective strategy to minimize these risks and illustrates how it works. Beau is VP of Products at Caspia. He works on the definition of new products and the associated go to market strategies. Beau has been with Caspia for over five years. Before that, he spent time at the National Science Foundation.

Dr. Paul Calzada then presents a live demonstration of CODAx, Caspia’s security-aware static verification solution. You will see the analysis of a real design and the identification of security weaknesses. Paul is an R&D Application Engineer at Caspia. He works with customers to ensure effective deployment of Caspia’s solutions. Paul holds a PhD in Computer Engineering from the University of Florida.

What Is Covered

Beau begins the webinar with some eye-opening information regarding the growing vulnerability of the hardware root of trust and its associated firmware and microcode. He shares alarming trends regarding the growth of hardware-focused attacks and presents some real examples of the problem taken from news headlines. 

Beau then explains how AI is making it easier to attack the same hardware that is accelerating AI workloads. He points out that AI is both the problem and the solution to this dilemma. He explains that hardware is NOT patchable; a security flaw in shipped silicon can cost billions of dollars to recall and repair. Security flaws are simply no longer an option.

Beau then describes the architecture of Caspia’s secure-by-design approach to this important issue, explaining how Caspia’s tools integrate easily into existing design flows, find security flaws, and help remove them early in the design process, before a disaster occurs in the field.

Since AI is causing the problem, the solution must also use AI to see what’s coming and remove the threats. Beau also describes Caspia’s generative and agentic technology that makes every designer a security expert.

Paul then demonstrates how to find and fix security flaws in a real open-source design. He uses Caspia’s CODAx static security verification tool to do this. You learn the depth of security checks that CODAx performs so subtle security weaknesses can be found and fixed early.

Watch the Webinar Replay Now!

If you are designing advanced chips that will be part of AI workload acceleration, this is a must-see event. Watch the replay now; you’ll be glad you did. Here is a link to the replay. And that’s how you can find out why AI-assisted security verification for chip design is so important.

Also Read:

A Six-Minute Journey to Secure Chip Design with Caspia

Large Language Models: A New Frontier for SoC Security on DACtv

Caspia Focuses Security Requirements at DAC


CEO Interview with Scott Bibaud of Atomera

CEO Interview with Scott Bibaud of Atomera
by Daniel Nenni on 01-08-2026 at 4:00 pm

Atomera Scott Bibaud headshot

Scott Bibaud has served as President, Chief Executive Officer, and a director since October 2015. Mr. Bibaud has been active in the semiconductor industry for over 25 years. He has successfully built a number of businesses that grew to generate over $1 billion in revenue at some of the world’s largest semiconductor companies. Most recently he was Senior Vice President and General Manager of Altera’s Communications and Broadcast Division. Prior to that he was Executive Vice President and General Manager of the Mobile Platforms Group at Broadcom.

Tell us about your company.

Atomera Incorporated is a semiconductor materials and technology licensing company focused on deploying its proprietary, silicon-proven technology into the semiconductor industry. Our mission is to extend the life and performance of today’s semiconductor technologies through innovation at the materials level. Our Mears Silicon Technology™ (MST®), a proprietary material/film technology, enhances transistor performance, power efficiency, and scalability, helping chipmakers achieve next-gen results utilizing their existing manufacturing infrastructure.

As AI accelerates demand for more powerful and efficient systems, advancements in materials are becoming the catalyst that makes those gains possible, enabling continued breakthroughs in power, performance, area, and cost (PPAC). Atomera is currently working with several of the world’s top semiconductor producers and playing a hands-on role in areas like advanced logic (GAA), DRAM, power, and wireless/radio frequency (RF), underscoring the industry’s growing recognition of MST’s potential benefits.

What problems are you solving?

Atomera is working to solve key industry bottlenecks, from performance and power to yield and cost, through MST.

For decades, Moore’s Law, which accurately predicted that the number of transistors on a chip would double roughly every two years, kept the industry moving forward. That steady pace of innovation is now being tested. As chips shrink to 3nm and beyond, FinFETs can no longer deliver the needed performance and efficiency, so manufacturers are switching to gate-all-around (GAA) technology. GAA isn’t perfect, however: while it addresses electrostatic challenges, it also introduces new ones. This is where Atomera’s advanced materials come into play.

This reality is driving a growing consensus across the industry that scaling alone is not enough. In fact, a recent survey found that 94% of respondents believe simply shrinking nodes will no longer be sufficient. Instead, 99% of those polled pointed to innovation in new materials and technologies as essential to unlocking the PPAC improvements needed to support AI workloads.

It turns out that materials such as MST can continue to improve transistor channel behavior by increasing electron mobility, lowering leakage, and reducing variability, enabling better performance or lower power even as the industry moves on to the next node. These breakthroughs in advanced materials are empowering the industry to achieve higher performance with reduced space and energy requirements.

What application areas are your strongest?

One of MST’s most strategic advantages is its ability to be integrated with minimal equipment changes, allowing manufacturers to extract new levels of performance and efficiency from existing manufacturing processes, achieving more without the need for costly new tools.

In power electronics, MST helps push past today’s material and architectural limits by enabling devices to handle higher voltages and currents more efficiently through reduced on-resistance and improved breakdown voltage. This means MST can offer customers a path to better PPAC without major design changes or expensive process overhauls.

RF products are facing increasing signal performance issues with the transition to 5G. Low-noise amplifiers (LNAs) and advanced RF switching technologies are the unsung heroes ensuring signal strength, clarity, and battery life. Innovations like MST can improve signal performance and extend RF-SOI platforms, delivering higher gain, lower noise, and faster switching. For a mobile phone user, this translates to clearer signals, higher bandwidth, and longer battery life.

MST also has applications for memory integration. By reducing variability and leakage in memory transistors, MST enhances stability and density in DRAM and SRAM devices. And in the GAA arena, Atomera’s technology can be used to optimize performance in at least four different areas of the transistor.

Meanwhile, in GaN-on-silicon structures, MST enables improved yield for high-performance RF and power devices.

What keeps your customers up at night?

The gap between the needs of AI workloads and the capabilities of today’s silicon-based infrastructure is one of the largest industry challenges right now. The same survey of more than 200 semiconductor engineers, materials scientists, and technical industry leaders in the United States found that 76% of decision-makers believe data centers will fall short of meeting soaring AI and high-performance computing demands. To keep shrinking and improving chips for massive AI systems, engineers are turning to advanced materials as the lever for PPAC gains and to keep progress on track.

To meet the performance and efficiency demands of the AI market, designers are turning to the most advanced semiconductor processes using GAA transistors. Today, the first wafers with this architecture are entering production, but the power, performance, and yield still have significant room for improvement. Our customers are looking for any compelling solution to help achieve those goals, and Atomera’s technology is one key piece of the puzzle.

What does the competitive landscape look like and how do you differentiate?

It is widely understood in the industry that it takes a new material roughly 18 years from first concept to volume production in the semiconductor supply chain. Atomera’s material technology has successfully navigated this journey. This level of investment and persistence in an independent company is rare, and while there are other providers of advanced materials, we feel that most are complementary, or additive, to our technology, rather than competitors. Many of our largest customers have R&D teams internally who are trying to solve the same problems that we are addressing, and Atomera’s technology provides a very well-developed tool they can leverage, making MST uniquely positioned in the market. It takes a full ecosystem of solution providers to bring the most advanced nodes to market, and Atomera is happy to play our part.

What new features/technology are you working on?

Atomera is continually working to understand the challenges faced by companies across the industries we serve and refining our film and integration techniques to deliver compelling solutions to customers. For example, companies making RF front ends for cellular applications are confronted with increasing demands as phones use more bandwidth and communicate over more and more frequency bands. RF components are significant consumers of battery power in mobile phones, and these new frequency bands require radios to scan an even wider range than before, putting pressure on designers to lower the power of their LNAs. Recently, we determined that MST can be very effective in helping them meet this goal while simultaneously improving their RF switch efficiency. We are working on solutions like these in several different markets for a variety of customers.

Another area of growing focus is compound semiconductors. Our work in GaN shows how MST can be leveraged to bring physical improvements to a material that translate well into electrical advantages. We have exploratory projects underway in other compound semiconductors, which we expect will start maturing in the near term as well.

How do customers normally engage with your company?

Today, Atomera and MST are widely recognized across the semiconductor industry, and we collaborate with a broad range of leading companies. Oftentimes, after Atomera validates how MST can be used to help a customer with a known problem or to achieve device improvements, we will meet with the team to show the supporting data. Next, our teams will conduct Technology Computer-Aided Design (TCAD) simulations to understand how MST can be used in their fabs and then wafer demos are run where MST is deposited on their wafers and other tests are conducted. If all goes well, we will license our technology to them, install it on one of their production tools, and go into a period of tuning our film and their integration method to maximize performance. Atomera’s TCAD modeling, epi deposition, and integration engineering teams provide support the whole way, helping our customers transition to mass production. At that stage, our business model is to take a small royalty on every wafer they manufacture while partnering with them to optimize the next process they’re working on.

Contact Atomera

Also Read:

CEO Interview with Masha Petrova of Nullspace

CEO Interview with Eelko Brinkhoff of PhotonDelta

CEO Interview with Pere Llimós Muntal of Skycore Semiconductors


2026 Outlook with Volker Politz of Semidynamics

2026 Outlook with Volker Politz of Semidynamics
by Daniel Nenni on 01-08-2026 at 10:00 am

Volker Politz Semidynamics

Tell us a little bit about yourself and your company.
I am the Chief Sales Officer for Semidynamics and I lead the global sales team and drive the overall sales process.

Semidynamics was founded in 2016 as a design service company with a focus on RISC-V. This was so successful that the CEO decided to pivot the company toward selling its own IP, and it has been licensing IP since 2019.

We provide 64-bit RISC-V processor IP, which is complemented by our leading-edge vector unit and tensor unit extensions. We have combined these technologies to form our All-In-One AI IP, which provides a much better way forward for AI projects as it is future-proof, easy to program, and easy for us to shape into the exact hardware needed for a project. In addition, it incorporates our Gazzillion Misses technology for advanced data handling to ensure that the processor is never idle waiting for data. When it comes to handling large amounts of data, we have the fastest, best-in-class solution for big data applications.

In 2025, that positioning sharpened even further: AI is the center of gravity for us, and Cervell (our all-in-one RISC-V NPU) is the clearest expression of that focus.

What was the most exciting high point of 2025 for your company?

The highlight of 2025 was turning our all-in-one story into a complete, developer-ready stack: launching Cervell in May, and then following up with the Inferencing Tools in October to accelerate deployment from trained model to running product.

In parallel, we kept removing friction from AI software enablement—especially through ONNX Runtime integration—because adoption depends on how fast teams can get to first results.

What was the biggest challenge your company faced in 2025?

The overall economic weakness hit big and small companies alike, delaying spending, cutting budgets, and forcing a rethink of projects as they tried to adapt to the fast-moving AI landscape and the overall global trade picture.

The AI market is extremely noisy, and serious customers are more disciplined than ever about evaluation, differentiation, and long-term software support before they commit. That raises the bar for any IP vendor, even when interest is strong.

How is your company’s work addressing this challenge?

We liaise closely with our customers to tailor our offering to their precise needs. In addition, we encourage them to engage early with us to avoid gaps in the product plans later on.

We also meet the “prove it fast” expectation by making AI deployment straightforward (ONNX enablement and higher-level inferencing tooling), so customers can validate value quickly and de-risk the decision.

What do you think the biggest growth area for 2026 will be, and why?

‘Anything AI’ is still driving a lot of new products – especially generative AI and large language models – because it makes possible a whole new set of features that drive innovation.

We expect increasing demand for AI in segments such as data center appliances, vision processing (for example, security cameras), mobile base stations, and software-defined vehicles, and we are ideally positioned with our All-In-One AI IP to be the solution of choice.

I think 2026 is going to be a big year for RISC-V itself: you can see major industry players deepening their commitment, and consolidation through M&A is reshaping the landscape, making the remaining independent specialists more visible and important.

From a European perspective, that matters: Semidynamics is one of the few remaining independent RISC-V IP vendors in Europe with a clear AI-first product strategy.

How is your company’s work addressing this growth?

Our All-In-One AI processing element is based on highly configurable IP blocks that enable us to customize configurations on demand. Through dedicated engagement with partners, we can also extend the IP with unique instructions and combine it with a customer’s own circuits.

We also have a software support strategy for AI based on ONNX, which makes dedicated compilers obsolete and enables customers to download a model in ONNX format and run it out of the box. This helps them move quickly to a final product, as software and hardware can be developed in parallel.

And we’re extending that practicality with the Inferencing Tools layer on top of our ONNX Runtime Execution Provider for Cervell, so moving from model to deployment takes less specialist effort.

What conferences did you attend in 2025 and how was the traffic?

We attended various RISC-V.org events as well as dedicated events such as ICCAD in China, Computex in Taiwan, Embedded World in Germany, and the AI Infra Summit in the USA.

The RISC-V Summits were especially important in 2025. For example, the RISC-V Summit Europe in Paris was a strong focal point for the EU ecosystem.

Will you participate in conferences in 2026? Same or more as 2025?

We aim to attend some new conferences to spread the word that our RISC-V IP can meet the processor needs of new projects, as well as some of the events we have previously attended. There is a huge wave of RISC-V adoption as a viable, exciting alternative to the two processor incumbents, and we are surfing that wave.

I expect the same or slightly more in 2026 than 2025, because the interest level is rising and we now have a broader product + tools story to take to customers.

How do customers normally engage with your company?

Customers can engage with our sales force or via contacts on our website and other sites where we post adverts. Once established, we have dedicated resources to facilitate the evaluation process and subsequent product selection and purchase.

Are you incorporating AI into your products?

Yes—AI is the core of our product direction. Cervell is designed as a scalable, all-in-one RISC-V NPU for AI workloads, and the software layer (ONNX enablement and inferencing tools) is explicitly about making AI deployment easier.

Is AI affecting the way you develop your products?

Absolutely. AI is changing the requirements faster than ever—especially around model portability, deployment speed, and workload diversity—so we design for programmability and customization first, and we invest heavily in a software path that keeps pace (ONNX Runtime integration, and tooling that shortens the route from trained model to running application).

Additional comments?

One final observation: between the accelerating RISC-V ecosystem support (including big-name commitments) and the consolidation happening through acquisitions, customers are looking hard at who can still offer true differentiation. Our answer is simple: flexible and scalable IP plus a pragmatic AI software path that helps teams get to working silicon faster and with less risk.

Contact SemiDynamics

Also Read:

Semidynamics Inferencing Tools: Revolutionizing AI Deployment on Cervell NPU

From All-in-One IP to Cervell™: How Semidynamics Reimagined AI Compute with RISC-V

Vision-Language Models (VLM) – the next big thing in AI?


Nvidia Overcoming the Challenges of Blending Hardware Verification Expertise with AI and ML

Nvidia Overcoming the Challenges of Blending Hardware Verification Expertise with AI and ML
by Daniel Nenni on 01-08-2026 at 6:00 am

Nvidia Overcoming the Challenges of Blending Hardware Verification Expertise with AI and Machine Learning

Verification Futures Conference 2025 Austin (USA). Keynote Challenge Paper – Sohil Sri Mani Yeshwanth Grandhi – NVIDIA Corporation

Hardware verification has always been one of the most demanding phases of system design, but today it faces an unprecedented crisis. As hardware systems grow exponentially in complexity, verification resources (time, compute, and human expertise) scale far more slowly. This widening gap has resulted in endless regression cycles, overwhelming debug workloads, and a persistent mismatch between coverage metrics and real-world correctness. In this environment, AI and ML offer powerful new tools, but only if applied with realism, discipline, and a clear strategy.

The core challenge in modern verification is not merely the size of designs, but the volume of data they generate. Simulation logs can reach hundreds of megabytes, regression suites can contain tens of thousands of tests, and subtle bugs may hide behind layers of seemingly unrelated failures. Traditional approaches—manual triage, static thresholds, and brute-force regressions—are increasingly inefficient. Verification engineers often find themselves “drowning in complexity,” spending more time managing data than extracting insight from it.

AI promises a way forward by enabling prediction, optimization, and insight at a scale humans cannot achieve alone. Machine learning models can detect patterns across historical data while LLMs can interpret and summarize unstructured text such as logs, specifications, and bug reports. However, the adoption of AI in verification is frequently hindered by common pitfalls. “Magic Wand” thinking leads teams to expect instant results without sufficient data preparation. Others apply the wrong tool to the problem, such as using an LLM where a simple statistical model would be more reliable. Finally, poor-quality or inconsistent data can undermine even the most sophisticated AI system.

To avoid these traps, a practical framework is needed to decide when to use ML versus LLMs. Traditional machine learning excels at structured, numerical data: test results, performance metrics, coverage statistics, and historical trends. It is well suited for tasks like predictive test selection, performance regression detection, and bug triage classification. LLMs, by contrast, shine when dealing with unstructured text. They can parse massive log files, summarize failure causes, correlate error messages across modules, and even generate documentation or coverage models from natural-language specifications. Understanding these complementary strengths is key to building an effective hybrid strategy.
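To make the ML side of that framework concrete, here is a minimal, hypothetical sketch of predictive test selection: each test is scored by a recency-weighted failure rate computed from structured pass/fail history, and the regression run spends its budget on the highest-risk tests first. The test names and failure records are invented for illustration, not taken from any real flow.

```python
# Hypothetical sketch of predictive test selection from structured history.
# Score = recency-weighted failure rate; newer runs get linearly more weight.

def select_tests(history, budget):
    """history: {test_name: [failed? per past run, newest last]}.

    Returns the `budget` highest-risk test names.
    """
    scores = {}
    for name, results in history.items():
        weights = range(1, len(results) + 1)  # newest run has highest weight
        total = sum(weights)
        fails = sum(w for w, failed in zip(weights, results) if failed)
        scores[name] = fails / total if total else 0.0
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:budget]

# Illustrative history: True means the test failed in that run.
history = {
    "test_cache_coherency": [False, True, True],   # failing recently
    "test_reset_sequence":  [True, False, False],  # failed long ago
    "test_boot_rom":        [False, False, False], # never fails
}
print(select_tests(history, budget=2))
# -> ['test_cache_coherency', 'test_reset_sequence']
```

A real deployment would fold in more features (code churn near the tested logic, coverage overlap), but even this simple statistical model illustrates why an LLM is the wrong tool for this structured-data task.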

Real-world case studies illustrate this distinction clearly. In compiler verification, for example, a code change may pass all functional tests yet introduce a subtle 2% performance regression on a critical benchmark. Legacy approaches based on static thresholds often fail to catch such issues reliably. A modern ML-based solution uses time-series anomaly detection, learning normal performance behavior over time and flagging deviations with much higher sensitivity and confidence. This approach reduces false positives while catching regressions early, before they reach customers.
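As an illustration of the idea (not the actual production implementation), a rolling z-score over recent benchmark history can flag a 2% drop that a fixed percentage threshold would miss. The benchmark values and thresholds below are invented for the example.

```python
# Hypothetical sketch: flag performance regressions with a rolling z-score
# learned from recent history, instead of a static threshold.
from statistics import mean, stdev

def detect_regression(history, new_value, window=10, z_threshold=3.0):
    """Return True if new_value is an anomalous drop versus recent history.

    history: past benchmark scores (higher = better), oldest first.
    """
    recent = history[-window:]
    if len(recent) < 3:
        return False  # not enough data to model normal behavior
    mu, sigma = mean(recent), stdev(recent)
    if sigma == 0:
        sigma = 1e-9  # guard against a perfectly flat history
    z = (new_value - mu) / sigma
    return z < -z_threshold  # only performance drops count as regressions

# A stable benchmark with ~0.1% run-to-run noise: a 2% drop is a huge
# z-score here, even though a naive "flag drops over 5%" rule misses it.
history = [100.0, 100.1, 99.9, 100.0, 100.1, 99.9, 100.0, 100.1, 99.9, 100.0]
print(detect_regression(history, 98.0))    # 2% regression -> True
print(detect_regression(history, 100.05))  # normal noise  -> False
```

The key design choice is that sensitivity adapts to each benchmark's own noise level, which is what lets the detector catch small regressions on stable benchmarks without drowning noisy ones in false positives.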

Similarly, intelligent log analysis with LLMs addresses one of verification’s most painful bottlenecks: debugging. When a complex simulation fails and produces a 100MB log file with interleaved messages from dozens of modules, manual inspection becomes impractical. LLMs can ingest these logs, identify the most relevant error sequences, summarize likely root causes, and even suggest next debugging steps. Rather than replacing the engineer, the model acts as a force multiplier, accelerating understanding and decision-making.
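One hedged sketch of the preprocessing step: before an LLM ever sees a 100MB log, a simple pass can extract only the error and warning lines and group them per module, shrinking the input dramatically. The log format assumed here ("[module] LEVEL: message") is purely illustrative.

```python
# Hypothetical sketch: condense a huge simulation log before LLM analysis
# by keeping only error/warning lines, grouped per module.
import re
from collections import defaultdict

LINE_RE = re.compile(
    r"\[(?P<module>\w+)\]\s+(?P<level>ERROR|FATAL|WARN)\b:?\s*(?P<msg>.*)"
)

def triage(log_text, max_per_module=5):
    """Group error/warning messages by module, keeping only the first few."""
    by_module = defaultdict(list)
    for line in log_text.splitlines():
        m = LINE_RE.search(line)
        if m and len(by_module[m["module"]]) < max_per_module:
            by_module[m["module"]].append(f'{m["level"]}: {m["msg"]}')
    return dict(by_module)

log = """\
[axi_bridge] INFO: transaction started
[axi_bridge] ERROR: response timeout on channel B
[dma_ctrl] WARN: descriptor ring nearly full
[axi_bridge] ERROR: response timeout on channel B
[scoreboard] FATAL: data mismatch at addr 0x4000
"""
summary = triage(log)
print(summary["scoreboard"])  # ['FATAL: data mismatch at addr 0x4000']
```

The resulting per-module digest is small enough to fit in an LLM prompt, leaving the model to do what it is good at: correlating the surviving messages and suggesting a likely root cause.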

Building a successful AI-driven verification strategy requires thoughtful execution. Teams should start small by targeting a specific, high-impact problem rather than attempting a full-scale transformation. AI should augment human expertise, not replace it, keeping engineers firmly in the loop for validation and judgment. A solid data foundation—clean, labeled, and consistent—is essential, as AI systems are only as good as the data they learn from.

Bottom line: The verification crisis is fundamentally a data problem, and AI provides a powerful new toolbox to address it. By being strategic, choosing the right tools, and focusing on augmentation rather than automation, verification teams can regain control over complexity. The path forward does not require perfection—only a willingness to start now and evolve incrementally.

Verification Futures Conference

Also Read:

Assertion-First Hardware Design and Formal Verification Services

PDF Solutions’ AI-Driven Collaboration & Smarter Decisions

Reimagining Architectural Exploration in the Age of AI