
Siemens and NVIDIA Expand Partnership to Build the Industrial AI Operating System

by Daniel Nenni on 01-12-2026 at 6:00 am

CES 2026: Jensen Huang, founder and CEO of NVIDIA, and Roland Busch, President and CEO of Siemens AG

At CES in Las Vegas, Siemens and NVIDIA announced a major expansion of their long-standing collaboration, aiming to create what they term the “Industrial AI Operating System.” This ambitious initiative seeks to embed artificial intelligence deeply across the entire industrial value chain—from design and engineering to manufacturing, operations, and supply chains—transforming how physical systems are conceived, built, and managed in the real world.

The partnership builds on previous efforts, including integrations between Siemens’ Xcelerator platform and NVIDIA’s Omniverse for photorealistic digital twins. Now, the companies are fusing NVIDIA’s expertise in accelerated computing, generative AI, and simulation with Siemens’ industrial software, automation, and domain knowledge to develop AI-native workflows that turn passive digital twins into active, intelligent systems.

NVIDIA will contribute its AI infrastructure, simulation libraries, models, frameworks like Omniverse and CUDA-X, and blueprints for scalable deployment. Siemens, in turn, is committing hundreds of industrial AI experts along with its leading hardware and software portfolio. As Roland Busch, President and CEO of Siemens AG, stated, “Together, we are building the Industrial AI operating system—redefining how the physical world is designed, built, and run—to scale AI and create real-world impact.”

Jensen Huang, NVIDIA’s founder and CEO, emphasized the revolutionary potential: “Generative AI and accelerated computing have ignited a new industrial revolution, transforming digital twins from passive simulations into the active intelligence of the physical world.” The collaboration closes the loop between virtual simulation and physical execution, allowing industries to model complex systems virtually, optimize in real time, and automate seamlessly.

A key focus is creating fully AI-driven, adaptive manufacturing sites. The blueprint begins in 2026 with Siemens’ Electronics Factory in Erlangen, Germany, serving as the world’s first such facility. Powered by an “AI Brain”—combining software-defined automation, industrial operations software, and NVIDIA Omniverse libraries—these factories will continuously analyze digital twins, test improvements virtually, and apply validated changes directly to the shop floor. This promises reduced commissioning times, higher productivity, lower risks, and more sustainable operations.

The partnership extends to semiconductor design, where Siemens will GPU-accelerate its electronic design automation tools using NVIDIA’s PhysicsNeMo and CUDA-X, targeting 2x to 10x speedups in verification and layout processes. Additionally, the companies are developing blueprints for next-generation AI factories that optimize power, cooling, and automation for high-density computing.

To prove scalability, Siemens and NVIDIA will first implement these technologies in their own operations before rolling them out to customers. Early adopters evaluating the capabilities include Foxconn, HD Hyundai, KION Group, and PepsiCo.

This expanded alliance positions Siemens and NVIDIA at the forefront of the industrial AI revolution, accelerating innovation while addressing challenges like energy efficiency and resilience in global infrastructure. By making AI accessible and impactful at industrial scale, the Industrial AI Operating System could usher in a new era of smarter, more adaptive manufacturing, bridging the digital and physical worlds like never before.

Also Read:

Automotive Digital Twins Out of The Box and Real Time with PAVE360

Addressing Silent Data Corruption (SDC) with In-System Embedded Deterministic Testing

3D ESD verification: Tackling new challenges in advanced IC design


CES 2026 and all things Cycling

by Daniel Payne on 01-11-2026 at 2:00 pm

Segway

I just completed the annual Rapha 500 Challenge on Strava by cycling 869 km in eight days, so it’s time for my annual recap of CES 2026 and all things cycling. As in previous years, the big push in 2026 is e-bikes, and now e-motos as well. The AI acronym was everywhere too, in product names and announcements, as physical AI features abound in new products. Oh, and EDA and IP vendors such as Synopsys, Siemens, Cadence, MIPS, Ceva, RISC-V, T2M-IP and Dolphin Semiconductor were also present at CES this year.

E-bikes

Segway

Their original two-wheel self-balancing device is hardly mentioned anymore, in favor of two new e-bikes and one e-moto.

Xaber 300, Myon, Muxi

The Xaber 300 is an electric dirt bike with full suspension, the Myon is a Class 3 commuter e-bike capable of a 28 mph top speed, while the Muxi is a more compact Class 2 commuter e-bike with a top speed of 20 mph.

Yadea

This Chinese company showed two new e-bikes, Fatboy for fat tire lovers, and Flo for urban commuters.

Fatboy, Flo

BGL Bike

They offer a range of six e-bikes: City, Fat, Folding, Mountain, Road, Trike.

BGL Bike: Electric Fat Bike

Kamingo

How about taking your existing bike and converting it into an e-bike? Kamingo has such a conversion kit that assembles in minutes, driving the rear wheel of your bike. They also won a 2026 CES Innovation Award for this product.

Kamingo

Bosch

This German company supplies e-bike drivetrains to 100 bike brands around the world, and it showed the Family Next e-bike, which uses the complete Bosch eBike system.

Bosch eBike System

C-Star

This year they showcased several new e-bikes for off-road use.

Alucard: CS-M4A

Hyper Bicycles

This vendor offers both traditional bikes and e-bikes, including e-bikes for kids and several models for off-road use.

Hyper Bicycles: 16in e-balance for kids

Macfox

Three e-bikes were on display for urban cyclists that prefer fat tires for a comfy ride.

Macfox

Leoguar Bikes

Hailing from Texas, this vendor provides four e-bike styles: Fat tire, Cruiser, Mountain and Folding.

Leoguar Bikes

Mimbob

Another Chinese supplier with a wide range of e-motos for off-road use.

Mimbob

Heybike

Multiple e-bike models were shown this week: Venus – cruiser, Helio F – foldable, Mars 3.0 – fat tire, Ranger 3.0 Pro – suspension, Polaris – touring, Omega – long distance, Villain – dirt bike.

Heybike: Polaris

Radar

I’ve started using the Garmin Varia radar on my road bike and it alerts me to approaching traffic from behind by beeping and then showing how close it is on my Garmin bike computer display. Several companies have entered the bike radar business.

Segway

For only $99 you get a bare-bones radar to fit on the back of your Segway bike.

Segway Rearview Radar

Seeru

A 2026 CES Innovation Award Honoree in mobile devices, the Seeru (short for See Rear for U) can be attached to a bicycle or a powered wheelchair to report approaching vehicles.

Seeru

Miscellaneous

Bosch added a new e-bike security feature: a stolen e-bike gets marked in the eBike Flow app, alerting dealers and authorities that it has been stolen.

Bosch e-bike security

EVs have used regenerative braking for several years now and a company called Hello Space showed off a Mag Drive system that charges your e-bike battery while the bike is moving, extending the battery range. I’d love to know how this works on an e-bike, because in my Tesla the regenerative braking slows, then stops my EV.

Hello Space: Mag Drive

Livall returned to CES with their PikaBoost e-bike converter and AI Visual Smart Taillight products.

PikaBoost
AI Smart Light

Hypershell

Tony Stark became Iron Man after donning a robotic suit, and Hypershell has the X Ultra Exoskeleton to deliver more power for cyclists.

Hypershell

BleeqUp brought their AI-powered glasses, called Ranger, to CES; they offer video recording, an intercom and AI sports assistance. I like the idea of keeping my eyes on the road while cycling combined with these features.

BleeqUp: Ranger

Speediance showcased their smart trainer, the VeloNix, with adjustable controls to fit your body shape and a screen with metrics to keep you informed of your workout progress.

Speediance: VeloNix

Daniel’s Gear

Outdoor riding:

  • 2022 Cervelo R5 road bike, Zipp 404 wheels, SRAM AXS Force components
  • Garmin 1040 bike computer
  • Garmin Dual heart rate monitor
  • Specialized S-Phyre cycling shoes
  • Garmin Varia radar
  • SpeedPlay pedals
  • Continental Grand Prix 5000S TR, tubeless tires

Indoor riding:

  • 2016 Cervelo R3 road bike, Zipp 404 wheels, SRAM Red components
  • Tacx Neo 2T smart trainer
  • Wahoo KICKR Desk
  • Zwift cycling app
  • Mac mini M4, wireless keyboard, wireless trackpad
  • Raycon earbuds for Discord app

Summary

The top road bike brands shun CES: Trek, Specialized, Giant, Canyon, Orbea, Cannondale, Cervelo, Scott, Pinarello, Bianchi, Colnago and Factor. Many of these bike brands do offer e-bikes, but CES is just not their kind of show.

What I saw virtually at CES this year is a continued strong presence of e-bikes from non-traditional, mostly recently founded bike companies, so it’s a crowded market for sure. E-bike sales topped 1 million in 2022, 63% of all new bikes sold from 2019 to 2023 were e-bikes, and US e-bike sales reached 1.7 million units in 2024.

Troubling for me when I ride my road bike around the Portland, Oregon area is the distinct trend of e-bike riders without helmets; protecting your head with a helmet is a must for safety. I wouldn’t be alive without a helmet after my 2018 bike crash. E-motos are involved in more crashes than e-bikes, but both categories call for increased safety measures like helmets and the use of hand signals.

Also Read:

Nvidia Overcoming the Challenges of Blending Hardware Verification Expertise with AI and ML

Acceleration of Complex RISC-V Processor Verification Using Test Generation Integrated with Hardware Emulation

Revolutionizing Hardware Design Debugging with Time Travel Technology


Podcast EP326: How PhotonDelta is Advancing the Use of Photonic Chip Technology with Jorn Smeets

by Daniel Nenni on 01-09-2026 at 10:00 am

Daniel is joined by Jorn Smeets, Managing Director for North America at PhotonDelta, an industry accelerator for photonic chip technology. Based in Silicon Valley, his mission is to advance the photonic chip industry by fostering collaboration between European and North American entities.

Dan explores the focus of PhotonDelta with Jorn, who describes the organization’s broad charter to support an end-to-end value chain for photonic chips that designs, develops, and manufactures innovative solutions to contribute to a better world. Jorn explains some of the impressive work PhotonDelta has done in collaboration with the worldwide supply chain to enhance the use of photonic chip technology.

Jorn also discusses the upcoming PIC Summit USA event. This event started last year as a small gathering of key players to explore how to better collaborate to expand the impact of photonic chip technology. He explained that this year the event has been expanded to include more topics and more participation from a world-class group of speakers and organizations.

The event will be held in Sunnyvale, CA on January 19, right before the SPIE Photonics West Exhibition in San Francisco. Attendance at PIC Summit USA is by invitation only. You can learn more about the event and request an invitation here.


Webinar: Why AI-Assisted Security Verification For Chip Design is So Important

by Mike Gianfagna on 01-09-2026 at 6:00 am


It is well-known that AI is everywhere, and the incredible power of this new technology is enabled by highly complex, purpose-built silicon. But there is a silent enemy of this substantial, world-changing progress. Something that has the power to steal a bright future from all of us. The hardware root of trust for those advanced custom chips is at the epicenter of the story. Simply put, AI advances have made the hardware root of trust vulnerable to attack. You can see the stories in news headlines. And it’s getting worse.

Thankfully, there are companies focused on this problem. Caspia Technologies is a bright spot among those companies. It is developing an AI platform of tools, a methodology and training to fortify chip design practices against this threat to secure future innovation. The company presented an important webinar on this topic recently. If you’re involved in advanced chip design, you need to see this webinar. A replay link is coming. Let’s first look at some details about why AI-assisted security verification for chip design is so important.

Who’s Presenting

The webinar contains three parts – an overview of the problem and Caspia’s solution, a live demonstration of how to find and fix security flaws in real chip designs and an interactive Q&A session with the webinar audience. There are two well-qualified members of the Caspia team presenting:

Beau Bakken first provides an overview of security risks all design teams face today. He then describes an effective strategy to minimize these risks and illustrates how it works. Beau is VP of Products at Caspia. He works on the definition of new products and the associated go to market strategies. Beau has been with Caspia for over five years. Before that, he spent time at the National Science Foundation.

Dr. Paul Calzada then presents a live demonstration of CODAx, Caspia’s security-aware static verification solution. You will see the analysis of a real design and the identification of security weaknesses. Paul is an R&D Application Engineer at Caspia. He works with customers to ensure effective deployment of Caspia’s solutions. Paul holds a PhD in Computer Engineering from the University of Florida.

What Is Covered

Beau begins the webinar with some eye-opening information regarding the growing vulnerability of the hardware root of trust and its associated firmware and microcode. He shares alarming trends regarding the growth of hardware-focused attacks and presents some real examples of the problem taken from news headlines. 

Beau then explains how AI is making it easier to attack the same hardware that is accelerating AI workloads. He points out that AI is both the problem and the solution to this dilemma. He explains that hardware is NOT patchable, and that chip security flaws can cost billions of dollars to recall and repair. Security flaws are simply not an option any longer.

Beau then explains the architecture of Caspia’s secure-by-design approach to addressing this important issue. He explains how Caspia’s tools easily integrate into existing design flows, how these tools find security flaws and assist in removing them early in the design process, before a disaster occurs in the field.

Since AI is causing the problem, the solution must also use AI to see what’s coming and remove the threats. Beau also describes Caspia’s generative and agentic technology that makes every designer a security expert.

Paul then demonstrates how to find and fix security flaws in a real open-source design. He uses Caspia’s CODAx static security verification tool to do this. You learn the depth of security checks that CODAx performs so subtle security weaknesses can be found and fixed early.

Watch the Webinar Replay Now!

If you are designing advanced chips that will be part of AI workload acceleration, this is a must-see event. Watch the replay now; you’ll be glad you did. Here is a link to watch the replay. And that’s how you can find out why AI-assisted security verification for chip design is so important.

Also Read:

A Six-Minute Journey to Secure Chip Design with Caspia

Large Language Models: A New Frontier for SoC Security on DACtv

Caspia Focuses Security Requirements at DAC


CEO Interview with Scott Bibaud of Atomera

by Daniel Nenni on 01-08-2026 at 4:00 pm


Scott Bibaud has served as President, Chief Executive Officer and a director since October 2015. Mr. Bibaud has been active in the semiconductor industry for over 25 years. Over his career he has built a number of businesses that grew to generate over $1 billion in revenue at some of the world’s largest semiconductor companies. Most recently he was Senior Vice President and General Manager of Altera’s Communications and Broadcast Division. Prior to that he was Executive Vice President and General Manager of the Mobile Platforms Group at Broadcom.

Tell us about your company.

Atomera Incorporated is a semiconductor materials and technology licensing company focused on deploying its proprietary, silicon-proven technology into the semiconductor industry. Our mission is to extend the life and performance of today’s semiconductor technologies through innovation at the materials level. Our Mears Silicon Technology™ (MST ®), a proprietary material/film technology, enhances transistor performance, power efficiency, and scalability, helping chipmakers achieve next-gen results utilizing their existing manufacturing infrastructure.

As AI accelerates demand for more powerful and efficient systems, advancements in materials are becoming the catalyst that makes those gains possible, enabling continued breakthroughs in power, performance, area, and cost (PPAC). Atomera is currently working with several of the world’s top semiconductor producers and playing a hands-on role in areas like advanced logic (GAA), DRAM, power, and wireless/radio frequency (RF), underscoring the industry’s growing recognition of MST’s potential benefits.

What problems are you solving?

Through MST, Atomera is attacking key bottlenecks in the industry, from performance and power to yield and cost.

For decades, Moore’s Law, which accurately predicted that the number of transistors on a chip would double roughly every two years, kept the industry moving forward. That steady pace of innovation is now being tested. As chips shrink to 3nm and beyond, FinFETs can no longer deliver the needed performance and efficiency, so manufacturers are switching to gate-all-around (GAA) technology. However, GAA isn’t perfect: while it addresses electrostatic challenges, it also introduces new ones. This is where Atomera’s advanced materials come into play.

This reality is driving a growing consensus across the industry that scaling alone is not enough. In fact, a recent survey found that 94% of respondents believe simply shrinking nodes will no longer be sufficient. Instead, 99% of those polled point to innovation in new materials and technologies as essential to unlocking the PPAC improvements needed to support AI workloads.

It turns out that materials, such as MST, can continue to improve transistor channel behavior by improving electron mobility, lowering leakage, or improving variability, enabling better performance or lower power, even as the industry carries on to the next node. These breakthroughs in advanced materials are empowering the industry to achieve higher performance with reduced space and energy requirements.

What application areas are your strongest?

One of MST’s most strategic advantages is its ability to be integrated with minimal equipment changes, allowing manufacturers to extract new levels of performance and efficiency from existing manufacturing processes, achieving more without the need for costly new tools.

In power electronics, MST helps push past today’s material and architectural limits by enabling devices to handle higher voltages and currents more efficiently through reduced on-resistance and improved breakdown voltage. This means MST can offer customers a path to better PPAC without major design changes or expensive process overhauls.

RF products are facing increasing signal performance issues with the transition to 5G. Low-noise amplifiers (LNAs) and advanced RF switching technologies are the unsung heroes ensuring signal strength, clarity, and battery life. Innovations like MST can improve signal performance and extend RF-SOI’s platforms, delivering higher gain, lower noise, and faster switching. For a mobile phone user, this translates to clearer signals, higher bandwidth, and longer battery life.

MST also has applications for memory integration. By reducing variability and leakage in memory transistors, MST enhances stability and density in DRAM and SRAM devices. And in the GAA arena, Atomera’s technology can be used to optimize performance in at least four different areas of the transistor.

Meanwhile, in GaN-on-silicon structures, MST enables improved yield for high-performance RF and power devices.

What keeps your customers up at night?

The gap between the needs of AI workloads and the capabilities of today’s silicon-based infrastructure is one of the largest industry challenges right now. The same survey, which polled more than 200 semiconductor engineers, materials scientists, and technical industry leaders in the United States, found that 76% of decision-makers believe data centers will fall short of meeting soaring AI and high-performance computing demands. To keep shrinking and improving chips for massive AI systems, engineers are turning to advanced materials as the lever for PPAC gains and to keep progress on track.

To meet the performance and efficiency demands of the AI market, designers are turning to the most advanced semiconductor processes using GAA transistors. Today, the first wafers with this architecture are entering production, but the power, performance, and yield still have significant room for improvement. Our customers are looking for any compelling solution to help achieve those goals, and Atomera’s technology is one key piece of the puzzle.

What does the competitive landscape look like and how do you differentiate?

It is widely understood in the industry that it takes a new material roughly 18 years from first concept to volume production in the semiconductor supply chain. Atomera’s material technology has successfully navigated this journey. This level of investment and persistence in an independent company is rare, and while there are other providers of advanced materials, we feel that most are complementary, or additive, to our technology, rather than competitors. Many of our largest customers have R&D teams internally who are trying to solve the same problems that we are addressing, and Atomera’s technology provides a very well-developed tool they can leverage, making MST uniquely positioned in the market. It takes a full ecosystem of solution providers to bring the most advanced nodes to market, and Atomera is happy to play our part.

What new features/technology are you working on?

Atomera is continually working to understand the challenges faced by companies across industries we serve and refining our film and integration techniques to deliver compelling solutions to customers. For example, companies making RF front ends for cellular applications are confronted with increasing demands as phones use more bandwidth and communicate over more and more frequency bands. RF components are significant consumers of battery power in mobile phones, but these new frequency bands require radios to scan an even wider area than before, placing pressure to lower the power of their LNAs. Recently, we determined that MST can be very effective in helping them meet this goal while simultaneously improving their RF switch efficiency. We are working on solutions like these in several different markets for a variety of customers.

Another area of growing focus is compound semiconductors. Our work in GaN shows how MST can be leveraged to bring physical improvements to a material that translate well into electrical advantages. We have exploratory projects underway in other compound semiconductors, which we expect will start maturing in the near term as well.

How do customers normally engage with your company?

Today, Atomera and MST are widely recognized across the semiconductor industry, and we collaborate with a broad range of leading companies. Oftentimes, after Atomera validates how MST can be used to help a customer with a known problem or to achieve device improvements, we will meet with the team to show the supporting data. Next, our teams will conduct Technology Computer-Aided Design (TCAD) simulations to understand how MST can be used in their fabs and then wafer demos are run where MST is deposited on their wafers and other tests are conducted. If all goes well, we will license our technology to them, install it on one of their production tools, and go into a period of tuning our film and their integration method to maximize performance. Atomera’s TCAD modeling, epi deposition, and integration engineering teams provide support the whole way, helping our customers transition to mass production. At that stage, our business model is to take a small royalty on every wafer they manufacture while partnering with them to optimize the next process they’re working on.

Contact Atomera

Also Read:

CEO Interview with Masha Petrova of Nullspace

CEO Interview with Eelko Brinkhoff of PhotonDelta

CEO Interview with Pere Llimós Muntal of Skycore Semiconductors


2026 Outlook with Volker Politz of Semidynamics

by Daniel Nenni on 01-08-2026 at 10:00 am


Tell us a little bit about yourself and your company.
I am the Chief Sales Officer for Semidynamics and I lead the global sales team and drive the overall sales process.

Semidynamics was founded in 2016 as a design service company with a focus on RISC-V. This was so successful that the CEO decided to pivot the company towards its own IP sales and started licensing IP from 2019.

We provide 64-bit RISC-V processor IP complemented with our leading-edge vector unit and tensor unit extensions. We have combined these technologies to form our All-In-One AI IP, which provides a much better way forward for AI projects as it is future-proof, easy to program, and easy for us to tailor into the exact hardware needed for a project. In addition, it incorporates our Gazzillion Misses technology for advanced data handling to ensure that the processor is never idle waiting for data. When it comes to handling large amounts of data, we have the fastest, best-in-class solution for big data applications.

In 2025, that positioning sharpened even further: AI is the center of gravity for us, and Cervell (our all-in-one RISC-V NPU) is the clearest expression of that focus.

What was the most exciting high point of 2025 for your company?

The highlight of 2025 was turning our all-in-one story into a complete, developer-ready stack: launching Cervell in May, and then following up with the Inferencing Tools in October to accelerate deployment from trained model to running product.

In parallel, we kept removing friction from AI software enablement—especially through ONNX Runtime integration—because adoption depends on how fast teams can get to first results.

What was the biggest challenge your company faced in 2025?

The overall economic weakness hit big and small companies alike, delaying spending, cutting budgets and forcing project rethinks as they tried to adapt to the fast-moving AI landscape and the overall global trade picture.

The AI market is extremely noisy, and serious customers are more disciplined than ever about evaluation, differentiation, and long-term software support before they commit. That raises the bar for any IP vendor, even when interest is strong.

How is your company’s work addressing this challenge?

We liaise closely with our customers to tailor our offering to their precise needs. In addition, we encourage them to engage early with us to avoid gaps in the product plans later on.

We also meet the “prove it fast” expectation by making AI deployment straightforward (ONNX enablement and higher-level inferencing tooling), so customers can validate value quickly and de-risk the decision.

What do you think the biggest growth area for 2026 will be, and why?

‘Anything AI’ is still driving a lot of new products – especially generative AI, large language models – because it makes possible a whole new set of features to drive innovation.

We expect increasing demand around AI in segments such as data center appliances, vision processing such as security cameras, mobile base stations and software defined vehicles and we are ideally positioned with our All-In-One AI IP to be the solution of choice.

I think 2026 is going to be a big year for RISC-V itself: you can see major industry players deepening their commitment, and consolidation through M&A is reshaping the landscape, making the remaining independent specialists more visible and important.

From a European perspective, that matters: Semidynamics is one of the few remaining independent RISC-V IP vendors in Europe with a clear AI-first product strategy.

How is your company’s work addressing this growth?

Our All-In-One AI processing element is based on highly configurable IP blocks that enable us to customize configurations on demand. Through dedicated engagement with partners we can also extend the IP with unique instructions and combine it with a customer’s own circuits.

We also have an AI software support strategy based on ONNX, which makes dedicated compilers unnecessary and enables customers to run a model downloaded in ONNX format out of the box. This helps them move quickly to a final product, as software and hardware can be developed in parallel.

And we’re extending that practicality with the Inferencing Tools layer on top of our ONNX Runtime Execution Provider for Cervell, so moving from model to deployment takes less specialist effort.

What conferences did you attend in 2025 and how was the traffic?

We attended various RISC-V.org events as well as dedicated events such as ICCAD in China, Computex in Taiwan, Embedded World in Germany and the AI Infra Summit in the USA.

The RISC-V Summits were especially important in 2025. For example, the RISC-V Summit Europe in Paris was a strong focal point for the EU ecosystem.

Will you participate in conferences in 2026? Same or more as 2025?

We aim to attend some new conferences to spread the word that our RISC-V IP can meet the processor needs of new projects, as well as some of the events we have previously attended. There is a huge wave of RISC-V adoption as a viable, exciting alternative to the two processor incumbents, and we are surfing that wave.

I expect the same or slightly more in 2026 than 2025, because the interest level is rising and we now have a broader product + tools story to take to customers.

How do customers normally engage with your company?

Customers can engage with our sales force or via contacts on our website and other sites where we post adverts. Once established, we have dedicated resources to facilitate the evaluation process and subsequent product selection and purchase.

Are you incorporating AI into your products?

Yes—AI is the core of our product direction. Cervell is designed as a scalable, all-in-one RISC-V NPU for AI workloads, and the software layer (ONNX enablement and inferencing tools) is explicitly about making AI deployment easier.

Is AI affecting the way you develop your products?

Absolutely. AI is changing the requirements faster than ever—especially around model portability, deployment speed, and workload diversity—so we design for programmability and customization first, and we invest heavily in a software path that keeps pace (ONNX Runtime integration, and tooling that shortens the route from trained model to running application).

Additional comments?

One final observation: between the accelerating RISC-V ecosystem support (including big-name commitments) and the consolidation happening through acquisitions, customers are looking hard at who can still offer true differentiation. Our answer is simple: flexible and scalable IP plus a pragmatic AI software path that helps teams get to working silicon faster and with less risk.

Contact SemiDynamics

Also Read:

Semidynamics Inferencing Tools: Revolutionizing AI Deployment on Cervell NPU

From All-in-One IP to Cervell™: How Semidynamics Reimagined AI Compute with RISC-V

Vision-Language Models (VLM) – the next big thing in AI?


Nvidia Overcoming the Challenges of Blending Hardware Verification Expertise with AI and ML
by Daniel Nenni on 01-08-2026 at 6:00 am


Verification Futures Conference 2025 Austin (USA). Keynote Challenge Paper – Sohil Sri Mani Yeshwanth Grandhi – NVIDIA Corporation

Hardware verification has always been one of the most demanding phases of system design, but today it faces an unprecedented crisis. As hardware systems grow exponentially in complexity, verification resources (time, compute, and human expertise) scale far more slowly. This widening gap has resulted in endless regression cycles, overwhelming debug workloads, and a persistent mismatch between coverage metrics and real-world correctness. In this environment, AI and ML offer powerful new tools, but only if applied with realism, discipline, and a clear strategy.

The core challenge in modern verification is not merely the size of designs, but the volume of data they generate. Simulation logs can reach hundreds of megabytes, regression suites can contain tens of thousands of tests, and subtle bugs may hide behind layers of seemingly unrelated failures. Traditional approaches—manual triage, static thresholds, and brute-force regressions—are increasingly inefficient. Verification engineers often find themselves “drowning in complexity,” spending more time managing data than extracting insight from it.

AI promises a way forward by enabling prediction, optimization, and insight at a scale humans cannot achieve alone. Machine learning models can detect patterns across historical data while LLMs can interpret and summarize unstructured text such as logs, specifications, and bug reports. However, the adoption of AI in verification is frequently hindered by common pitfalls. “Magic Wand” thinking leads teams to expect instant results without sufficient data preparation. Others apply the wrong tool to the problem, such as using an LLM where a simple statistical model would be more reliable. Finally, poor-quality or inconsistent data can undermine even the most sophisticated AI system.

To avoid these traps, a practical framework is needed to decide when to use ML versus LLMs. Traditional machine learning excels at structured, numerical data: test results, performance metrics, coverage statistics, and historical trends. It is well suited for tasks like predictive test selection, performance regression detection, and bug triage classification. LLMs, by contrast, shine when dealing with unstructured text. They can parse massive log files, summarize failure causes, correlate error messages across modules, and even generate documentation or coverage models from natural-language specifications. Understanding these complementary strengths is key to building an effective hybrid strategy.
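The decision framework above can be sketched as a simple router. This is a hypothetical illustration, not a real tool: the `Artifact` type and category names are assumptions chosen to mirror the structured/unstructured split described in the talk.

```python
from dataclasses import dataclass

@dataclass
class Artifact:
    kind: str       # e.g. "coverage_stats", "sim_log", "bug_report"
    payload: object

# Structured, numeric data suits classical ML; free text suits LLMs.
STRUCTURED = {"test_results", "perf_metrics", "coverage_stats"}
UNSTRUCTURED = {"sim_log", "spec_text", "bug_report"}

def route(artifact: Artifact) -> str:
    """Pick an analysis engine per the hybrid ML/LLM strategy."""
    if artifact.kind in STRUCTURED:
        return "ml"      # prediction, anomaly detection, classification
    if artifact.kind in UNSTRUCTURED:
        return "llm"     # summarization, correlation, generation
    return "manual"      # unknown data: keep the engineer in the loop

print(route(Artifact("perf_metrics", [100.0, 100.1])))   # ml
print(route(Artifact("sim_log", "ERROR [dma] timeout")))  # llm
```

The point of the `"manual"` fallback is the paper's own discipline: when the data does not clearly fit either tool, defer to the engineer rather than force an AI answer.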

Real-world case studies illustrate this distinction clearly. In compiler verification, for example, a code change may pass all functional tests yet introduce a subtle 2% performance regression on a critical benchmark. Legacy approaches based on static thresholds often fail to catch such issues reliably. A modern ML-based solution uses time-series anomaly detection, learning normal performance behavior over time and flagging deviations with much higher sensitivity and confidence. This approach reduces false positives while catching regressions early, before they reach customers.
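A minimal sketch of why the learned baseline beats a static threshold on the 2% regression case. The data, z-score cutoff, and threshold here are illustrative assumptions, not figures from NVIDIA's flow:

```python
from statistics import mean, stdev

def detect_regression(history, new_value, z_limit=4.0):
    """Flag new_value as a regression if it deviates from the learned baseline."""
    mu, sigma = mean(history), stdev(history)
    z = (new_value - mu) / sigma if sigma else 0.0
    return z > z_limit   # higher runtime = slower = regression

# Benchmark runtimes (seconds), normally varying ~0.2% run to run.
history = [100.0, 100.1, 99.9, 100.2, 99.8, 100.0, 100.1, 99.9]

print(detect_regression(history, 102.0))  # 2% slowdown -> True (z ~ 15)
print(102.0 / 100.0 - 1 > 0.05)          # static 5% threshold -> False, missed
```

Because the model learns that this benchmark is extremely stable, a 2% shift stands out as many standard deviations; a one-size-fits-all percentage threshold would wave it through.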

Similarly, intelligent log analysis with LLMs addresses one of verification’s most painful bottlenecks: debugging. When a complex simulation fails and produces a 100MB log file with interleaved messages from dozens of modules, manual inspection becomes impractical. LLMs can ingest these logs, identify the most relevant error sequences, summarize likely root causes, and even suggest next debugging steps. Rather than replacing the engineer, the model acts as a force multiplier, accelerating understanding and decision-making.

Building a successful AI-driven verification strategy requires thoughtful execution. Teams should start small by targeting a specific, high-impact problem rather than attempting a full-scale transformation. AI should augment human expertise, not replace it, keeping engineers firmly in the loop for validation and judgment. A solid data foundation—clean, labeled, and consistent—is essential, as AI systems are only as good as the data they learn from.

Bottom line: The verification crisis is fundamentally a data problem, and AI provides a powerful new toolbox to address it. By being strategic, choosing the right tools, and focusing on augmentation rather than automation, verification teams can regain control over complexity. The path forward does not require perfection—only a willingness to start now and evolve incrementally.

Verification Futures Conference

Also Read:

Assertion-First Hardware Design and Formal Verification Services

PDF Solutions’ AI-Driven Collaboration & Smarter Decisions

Reimagining Architectural Exploration in the Age of AI


2026 Outlook with Howard Pakosh of Tekstart
by Daniel Nenni on 01-07-2026 at 10:00 am

TekStart Cognitum Processor

Tell us a little bit about yourself and your company.

I founded TekStart Group in Ontario, Canada, in 1998 with a very clear objective: to help innovators turn breakthrough technology concepts into real, market-ready products. Over the past 25-plus years, we have worked across the full lifecycle of technology development, from early concept and architecture through funding, commercialization, and exit.

To date, TekStart has helped develop, fund, and successfully exit more than 120 companies. That experience has given us a deep appreciation for what it really takes to bring complex technologies to market, especially in semiconductors and systems where timelines are long and execution risk is high.

Today, I am most excited about the transformation underway as AI inferencing moves out of centralized data centers and into edge devices. We are seeing a fundamental shift in how intelligence is deployed, and that shift creates both technical and economic opportunities that simply did not exist a few years ago.

What was the most exciting high point of 2025 for your company?

Without question, the most exciting milestone for us in 2025 has been reaching the final stages of tapeout for our latest semiconductor product, Cognitum. This has been a five-year journey, and like most deep-tech efforts, it has involved plenty of unexpected turns along the way.

Bringing a new AI-focused chip to market is never straightforward. There were moments where progress felt incremental and others where challenges stacked up quickly. Still, seeing Cognitum reach this level of maturity has been extremely rewarding. If I ever do write a book, this program would certainly deserve a few chapters.

As we approach launch, the excitement comes not just from completing the silicon but from seeing how customers are already thinking about deploying it to solve real problems at the edge.

What was the biggest challenge your company faced in 2025?

The honest answer is that the biggest challenge in 2025 was everything at once. If there was a category of delay or disruption you could imagine, we likely encountered it.

Market uncertainty created constant pressure on planning and forecasting. We worked closely with customers whose own roadmaps were being affected by factors well beyond their control. On top of that, new tariffs introduced additional complexity around cost structures, supply chains, and contract negotiations.

Each issue on its own would have been manageable. Experiencing them simultaneously required continuous adjustment, clear communication, and a willingness to revisit assumptions more often than usual.

How is your company’s work addressing this challenge?

What I am most proud of is the resilience our team demonstrated throughout the year. There were moments when obstacles felt genuinely insurmountable, yet the team stayed focused on execution and problem-solving rather than distraction.

We concentrated on what we could control: engineering discipline, customer transparency, and forward momentum. That mindset allowed us to navigate uncertainty while continuing to move Cognitum toward final tapeout.

In semiconductors, persistence matters. Staying aligned as a team and maintaining confidence in the long-term vision is often the difference between programs that stall and programs that succeed.

What do you think the biggest growth area for 2026 will be, and why?

I firmly believe that Edge AI will be one of the most important growth areas in 2026. For several years, the industry’s focus has been heavily weighted toward cloud-based large language models and massive data center build-outs. While those investments are necessary, they do not fully address the needs of real-world autonomous systems.

There is a growing gap between what cloud LLMs are optimized for and what edge systems actually require. Latency, bandwidth, cost, power consumption, and data privacy all become critical constraints outside the data center.

In 2026, we will see greater emphasis on pushing intelligence closer to where data is generated, enabling faster, more predictable, and more economical decision-making at the edge.

How is your company’s work addressing this growth?

The demand for edge reasoning is expanding rapidly across robotics, industrial automation, infrastructure, IoT, and enterprise systems, where privacy and determinism are critical. These applications increasingly require local decision-making, yet cannot tolerate the latency, recurring costs, or lack of transparency associated with cloud-based inference.

Cognitum is designed specifically to address this gap. It enables a new class of low-cost devices capable of autonomous reasoning directly on the chip. By moving inference to the edge, customers can significantly reduce or eliminate recurring cloud inference costs.

This changes the economic model from per-token or usage-based billing to a fixed hardware cost, making large-scale deployments practical in scenarios that were previously uneconomical or operationally constrained.

What conferences did you attend in 2025, and how was the traffic?

We sponsored and attended the AI Infra Summit in September 2025, and the difference compared to the prior year was striking. Attendance was significantly higher, and the level of engagement was much stronger.

We saw more informed conversations, more qualified prospects, and a clearer understanding of why edge AI infrastructure matters. Being featured in a success story at the event was also valuable, as it allowed us to share our journey and lessons learned with a broader audience.

Will you participate in conferences in 2026? Same or more than 2025?

Conferences will continue to play an important role in our marketing and business development strategy. While digital engagement is effective, there is still no substitute for in-person conversations when discussing complex technologies.

We will kick off 2026 at CES and expect to maintain, if not increase, our level of conference participation throughout the year. The quality of interaction we see at these events continues to justify the investment.

How do customers normally engage with your company?

Our business is highly specialized, and we intentionally focus on a narrow, well-defined customer segment. We have served this market for many years and have built a reputation as a trusted partner rather than a transactional vendor.

That trust works to our advantage as we introduce new products. Being a known entity means customers are already familiar with our approach and capabilities. As a result, maintaining awareness, sharing meaningful updates, and staying engaged with key stakeholders remains one of the most effective ways we work with our customers.

Additional comments?

The world has become a more complex and challenging place, both personally and professionally. Technology continues to advance at an accelerating pace, and it can be difficult to keep up with everything that is changing.

That makes it even more important for technology providers to be thoughtful and deliberate about what they build and how those technologies are brought to market. The decisions made over the next few years are likely to have an impact that extends far beyond the near term.

If we approach this moment with care and responsibility, the opportunities ahead are significant. How we move forward matters.

Also Read:

CEO Interview with Rabin Sugumar of Akeana

Acceleration of Complex RISC-V Processor Verification Using Test Generation Integrated with Hardware Emulation

Quantum Computers: Are We There Yet?


Automotive Digital Twins Out of The Box and Real Time with PAVE360
by Bernard Murphy on 01-07-2026 at 6:00 am

Digital twin

Digital twins are amazing technology, virtual representations mirroring a real physical system. Twin virtual models span software, electrical/electronic and mechanical subsystems, closing the loop with feedback from real physical counterparts. The virtual model calibrates against real sensing feedback gathered in production and prototype testing spanning braking, cornering, road surfaces, weather and traffic conditions, and other factors, all with high fidelity. Digital twins greatly accelerate development, debug, and quality control for the advanced car systems we expect today, now even more essential as these systems become more complex and more safety critical.

David Fritz (VP of Hybrid Virtual and Physical AI Systems at Siemens Digital Industries) told me that, for all their benefits, building a twin by integrating multiple parts from multiple suppliers used to take large teams years. An immense effort, out of reach for most enterprises except the largest OEMs and Tier1s, and too slow even for them to adapt in fast-moving markets. To deliver on this promise in real time, Siemens have developed and announced at CES 2026 their “PAVE360 Digital Twin Blueprint”, a fully integrated platform providing a jumpstart not only for large OEMs and Tier1s but also for other innovators in the automotive ecosystem.

Real-time simulation for realistic workloads

Earlier generations of digital twins apparently would run a couple of orders of magnitude slower than real time. Software developers no longer consider that level of performance sufficient for their needs, given that today they must run millions of lines of code together with realistic AI workloads.

Siemens, working with Arm and AWS, announced a new approach last year based on Arm Zena CSS/cloud-native support. This can deliver near real-time performance for those target workloads, within the full scope of the digital twin. That’s a very big deal because developing and exercising ADAS, autonomous driving (AD) and infotainment (IVI) features in the twin requires that level of performance.

Equally important, SDV Digital Twin Blueprint provides a ready-to-deploy reference platform including virtual reference hardware and software stacks (AI functionality also) for ADAS, AD, and IVI. Running on day one, accessible for cloud-based collaboration. As needed you can swap in your own software stacks, replacing corresponding reference stacks. The platform builds on IPs such as Arm Zena CSS and provides mechanisms to connect to real vehicle hardware.

I asked (and I’m sure you wondered also) how AI models run inside this platform, whether cloud-based or on-prem. David said that if GPUs are available, the system will take advantage of them. If not, it will leverage Zena cores, appearing everywhere these days in cloud servers. David can’t share more details on this topic but I’m looking forward to learning more about Arm’s directions in server AI since they have already announced hardware acceleration options for AI in mobile.

Extending opportunity to the larger ecosystem

David said that they are seeing interesting adoption from organizations outside the traditional supply chain, such as engineering service suppliers. These are often OEM-sponsored ventures with trusted, specialized expertise in advanced capabilities that OEMs are targeting for 2030. Such ventures modify PAVE360 Automotive Blueprint with their own stack, to demonstrate to an OEM the value their solution can offer. Better yet, this can drop straight into the OEM’s own development platform if they also use PAVE360.

As one example, Silicon Auto is part of a joint venture with Stellantis and Foxconn, focused particularly on ADAS systems. They used PAVE360 Automotive Blueprint to model the silicon they wanted to build, putting it in the context of a whole vehicle. Silicon Auto systems are now planned to support Stellantis, Foxconn and other customers.

Another example is SAIC (Shanghai Automotive Industrial Corporation), which David calls “the Volkswagen of China”. SAICEC (EC is Engineering Corporation) is another joint venture providing design, system and applications services to SAIC and to the larger automotive industry in China. According to David, they are using PAVE360 in some interesting ways. Some are no doubt like the earlier example; others are exploring PAVE360 as a cloud-based certification service for OEMs.

Partnerships in addition to these include AWS, AMD, Microsoft, Wipro and Cognizant. I would not be surprised to hear of OEM, Tier1, Cloud Services Provider and other partnerships in the near future.

CES 2026 demo and availability

This story hinges heavily on real-time twin performance in the cloud. As proof that the capability is real, Siemens are demoing it at CES 2026, featuring a Volkswagen ID.Buzz in their booth connected to the Internet through Wi-Fi, with the brains of the car running in the cloud and controlling the car. You can (through the car) tell it where you want to go and watch it (virtually) navigate there. You can tell it to turn on the AC while the car is virtually en route. From command, to cloud, back to the car in real time. Pretty impressive.

SDV Digital Twin Blueprint will be available February 2026. You can learn more about Siemens digital twin technology HERE.

Also Read:

Addressing Silent Data Corruption (SDC) with In-System Embedded Deterministic Testing

Podcast EP323: How to Address the Challenges of 3DIC Design with John Ferguson

3D ESD verification: Tackling new challenges in advanced IC design


Acceleration of Complex RISC-V Processor Verification Using Test Generation Integrated with Hardware Emulation
by Daniel Nenni on 01-06-2026 at 8:00 am


Verification Futures Conference 2025 Austin (USA)

The rapid evolution of RISC-V processors has introduced unprecedented verification challenges. Modern high-end RISC-V cores now incorporate complex features such as vector and hypervisor extensions, virtual memory systems, multi-level caches, advanced interrupt architectures, and multi-hart out-of-order execution. While these capabilities enable powerful and flexible processor designs, they also dramatically increase verification complexity. To ensure correctness and quality, verification methodologies must evolve to handle massive state spaces, long-running workloads, and extreme performance demands.

One of the most striking realities of contemporary RISC-V verification is the sheer scale of execution required. Verifying a typical high-end RISC-V core can demand on the order of 10¹⁵ cycles—far beyond the reach of traditional simulation alone. Effective verification therefore requires three essential components: stimulus derived from deep understanding of the RISC-V specification, complex and lengthy test programs that drive the design into meaningful microarchitectural states, and a fast execution platform capable of achieving verification closure within practical timelines.
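A quick back-of-the-envelope calculation shows why 10¹⁵ cycles is out of reach for simulation alone. The throughput numbers below are assumed order-of-magnitude values for RTL simulation and emulation, not Synopsys figures:

```python
CYCLES = 1e15
SIM_HZ = 1e3    # ~kHz effective RTL simulation throughput (assumed)
EMU_HZ = 1e6    # ~MHz emulation throughput (assumed)

def years(hz):
    """Wall-clock years to execute CYCLES at a given throughput."""
    return CYCLES / hz / (3600 * 24 * 365)

print(f"simulation: {years(SIM_HZ):,.0f} years")  # ~31,710 years
print(f"emulation : {years(EMU_HZ):,.1f} years")  # ~31.7 years
```

Even at emulation speed a single platform would need decades, which is why the cycles must also be divided across many parallel tests and machines; the point stands that raw simulation cannot close this budget.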

The XiangShan RISC-V core exemplifies these challenges. Supporting RV64 with extensive extensions such as RVV 1.0, advanced memory management, large cache hierarchies with ECC, multi-level TLBs, and AIA-compliant interrupt handling, the design represents a realistic, high-complexity target. Verifying such a design requires stress-testing interactions across caches, memory ordering, interrupts, and multi-hart execution—scenarios that cannot be adequately covered by short, isolated tests.

To address these needs, Synopsys introduces STING, a bare-metal test generator purpose-built for RISC-V processor verification. STING employs a software-driven methodology that integrates multiple test generation approaches, including random stimulus, directed tests, workloads, and real-world scenarios. It generates both self-checking and pure stimulus tests that are portable across simulation, emulation, FPGA prototypes, and silicon. With comprehensive support for 32-bit and 64-bit RISC-V specifications, privilege modes, memory protection, virtualization, and multi-hart configurations, STING provides a scalable and reusable verification foundation. Its extensive library of over 100,000 test fragments enables rapid construction of complex test programs tailored to specific microarchitectural goals.

However, generating effective stimulus is only half the solution. Advanced RISC-V features such as cache coherency, memory ordering, atomicity, and synchronization demand long-running workloads to expose subtle corner cases. Scenarios involving true and false sharing, cache evictions, conflicting traffic, and fence ordering require sustained execution under varied conditions. For multi-processor platforms, repeating the same test sequence across different scheduling interleavings is essential to achieve thorough coverage. These demands make fast execution platforms indispensable.

Hardware-assisted verification (HAV) provides the necessary performance boost. By synthesizing the design under test and running it on emulation or prototyping platforms such as Synopsys ZeBu or HAPS, verification teams can execute tests at speeds orders of magnitude faster than simulation. In this approach, STING generates self-checking tests that embed reference model results directly into the executable. Tests are generated in parallel and streamed continuously into the hardware platform, ensuring that execution units remain fully utilized.

The streaming methodology is a key innovation in this solution. By avoiding repeated hardware re-initialization and redundant configuration cycles, and by enabling concurrent test generation and execution, streaming dramatically improves throughput. Results demonstrate performance improvements of up to 6000× per test when moving from simulation to emulation, making large-scale regression feasible for complex RISC-V designs.
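The streaming idea can be illustrated with a toy producer-consumer sketch: tests are generated concurrently and fed to the execution platform through a queue, so the hardware never idles waiting on generation or re-initialization. All names and timings here are illustrative, not part of the STING/ZeBu flow:

```python
import queue
import threading
import time

tests = queue.Queue(maxsize=4)   # bounded queue applies backpressure
DONE = object()                  # sentinel marking end of the test stream

def generator(n):
    """Produce tests concurrently with execution (stand-in for STING-style generation)."""
    for i in range(n):
        time.sleep(0.01)         # stand-in for per-test generation cost
        tests.put(f"test_{i}")
    tests.put(DONE)

def emulator(results):
    """Consume tests as they arrive, with no re-initialization between them."""
    while (t := tests.get()) is not DONE:
        results.append(t)        # stand-in for streaming execution on hardware

results = []
threading.Thread(target=generator, args=(8,)).start()
emulator(results)
print(results)   # test_0 through test_7, generation overlapped with execution
```

The key property is the overlap: generation of test *i+1* proceeds while test *i* executes, which is what keeps the emulator's execution units fully utilized in the real flow.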

Debugging failures discovered in high-speed regressions presents its own challenges. Because failures may depend on accumulated microarchitectural state across multiple tests, simple re-execution may not reproduce the issue. The recommended strategy involves replaying sequences of streaming-enabled tests to reconstruct the failing conditions. Hardware/software debug using tools such as Verdi enables synchronized analysis of CPU traces and waveforms, allowing engineers to step through execution while correlating software behavior with hardware signals.


Bottom line: Accelerating complex RISC-V processor verification requires a tightly integrated strategy combining intelligent test generation, hardware-assisted execution, and advanced debug methodologies. By uniting STING with high-performance emulation platforms, verification teams can achieve comprehensive stimulus coverage, unprecedented execution speed, and effective debug—making verification closure achievable even for today’s most sophisticated RISC-V processors.

Also Read:

TSMC based 3D Chips: Socionext Achieves Two Successful Tape-Outs in Just Seven Months!

CISCO ASIC Success with Synopsys SLM IPs

How PCIe Multistream Architecture Enables AI Connectivity at 64 GT/s and 128 GT/s