

CEO Interview with Jerome Paye of TAU Systems
by Daniel Nenni on 03-08-2026 at 2:00 pm


Jerome Paye has served as CEO of TAU Systems since late 2025, having joined the company shortly after its founding in 2022 as Chief Operating Officer. In that time, he has helped build TAU Systems into a high-performing team now focused on delivering the ultimate light source for semiconductor lithography.

Paye brings more than 20 years of industry leadership to the role. Most recently, as COO of Achates Power, he managed engine development programs with leading global manufacturers and oversaw all technical operations. Before that, he held senior roles at Renault SAS, leading early-stage electric vehicle development and directing value engineering for the company’s largest vehicle line, with responsibilities spanning global partnerships including Nissan. His career also includes multiple positions at Ford Motor Company, where he served as program management leader for the Mustang.

Earlier in his career, Paye conducted research in ultrashort pulse lasers and femtosecond laser systems, and contributed to the early design of France’s Laser Megajoule facility, work that directly informs TAU Systems’ mission today.

Tell us about your company?

TAU Systems is developing the next generation of light sources for semiconductor manufacturing through compact particle accelerators and X-ray free-electron lasers. Our laser wakefield acceleration technology creates electron beams with energies equivalent to conventional accelerators spanning hundreds of meters, but we achieve this in just centimeters. We then send these high-energy electrons through magnetic undulators to produce tuneable X-ray lasers with wavelengths significantly shorter than current EUV systems.
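For a rough sense of the scale involved (the gradients below are generic, illustrative figures rather than TAU's own numbers), the length needed to reach a given beam energy is simply energy divided by accelerating gradient:

$$ L = \frac{E}{G}: \qquad \frac{5\ \text{GeV}}{50\ \text{MV/m}} = 100\ \text{m} \ \ \text{(conventional RF)} \qquad \text{vs.} \qquad \frac{5\ \text{GeV}}{50\ \text{GV/m}} = 10\ \text{cm} \ \ \text{(laser wakefield)}. $$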

What makes TAU unique is our two-part strategy to cross the lab-to-fab chasm. We’re demonstrating economic viability today through radiation effects testing for the space industry, opening our Carlsbad, California, facility later this year. We will build manufacturing capacity through electron-based radiotherapy systems for cancer treatment. And we’re investing heavily in lithography R&D, using revenues from near-term activities to support long-term development. This approach refines our core technology with real customers while generating revenue.

What problems are you solving?

Current EUV lithography machines cost around $400 million each, weigh over 300,000 pounds, and are about as evolved as they can get with current technology. Only a few percent of the light reaches the wafer, dramatically limiting throughput. At the 13.5-nanometer EUV wavelength, chipmakers must use multi-patterning to create smaller features, which adds time, decreases throughput, and increases costs. ASML’s High-NA approach of increasing numerical aperture is reaching fundamental physical and economic limits.

We’re taking the alternative path: reducing the wavelength itself. Our X-ray lasers operate at tuneable wavelengths, which will be optimized for maximum transmission. Combined with wavelength-matched reflective optics offering higher reflectivity than current EUV mirrors, our technology delivers hundreds of watts of X-ray emission per compact machine, matching or exceeding ASML’s power at shorter wavelengths. The result is faster production, reduced multi-patterning, and dramatically improved energy efficiency.

What application areas are your strongest?

Near-term, we are proving the technology by applying it to radiation effects testing for space. Currently, only a handful of global facilities provide testing – totaling just a few thousand hours annually against an estimated 30,000 hours of demand. Our TAU Labs facility will provide 2,000 to 4,000 hours annually per accelerator unit, dramatically expanding critical testing capacity. This operational beachhead generates revenue while validating our technology.
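Taking the figures above at face value (roughly 3,000 hours of existing annual supply against 30,000 hours of demand, and 2,000 to 4,000 hours per accelerator, all as quoted rather than independently verified), the implied gap works out to on the order of ten machines:

$$ \frac{30{,}000 - 3{,}000\ \text{hours}}{2{,}000\ \text{to}\ 4{,}000\ \text{hours/unit}} \approx 7\ \text{to}\ 14\ \text{accelerator units}. $$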

What keeps your customers up at night?

Space customers face testing bottlenecks. Limited capacity creates project delays and introduces risks as satellite constellations and commercial space ventures scale rapidly.

Semiconductor manufacturers confront more fundamental concerns. The extreme appetite for AI has created massive demand for advanced chips, but current EUV technology cannot meet future requirements. Each node becomes exponentially more expensive with just marginal improvements. The industry knows atomic-level control will eventually require X-rays, but questions when viable solutions will emerge. They’re concerned about capital efficiency, throughput, and economic sustainability.

What does the competitive landscape look like and how do you differentiate?

In radiation testing, we compete against national laboratories and established facilities of which there are but a handful. Our differentiation: dramatically expanded capacity through compact accelerator systems deployable at commercial scale.

For lithography, ASML dominates EUV with a virtual monopoly. They’re pursuing higher numerical aperture optics, but this faces fundamental physical limits. We’re taking the alternative approach physics demands: shorter wavelengths through X-ray lasers. ASML machines are expensive and require extraordinary infrastructure. We’re developing systems housed in existing fab spaces with dramatically improved efficiency.

What truly differentiates TAU is our partnership approach. We’re collaborating with global leaders, including The University of Texas at Austin, Lawrence Berkeley National Laboratory and the Extreme Light Infrastructure Nuclear Physics facility, combining their world-leading expertise with our commercial focus.

What new features/technology are you working on?

We recently demonstrated intense coherent light pulses from a free-electron laser driven by laser-plasma acceleration in collaboration with Berkeley Lab. Published in Physical Review Letters, this work confirms compact X-ray FELs are technically viable for advanced lithography. Our accelerator delivers acceleration gradients 2,000 times stronger than conventional systems.

We’re focused on increasing average power to hundreds of watts per machine, wavelength optimization and tunability for maximum optical transmission, and system integration through our radiation testing facility as a technology proving ground. We’re also developing Very-high Energy Electron therapy systems, which share fundamental technology with our lithography platform.

The overarching goal is to demonstrate that compact laser-driven accelerators can deliver the brightness, stability, and wavelength control required for next-generation semiconductor manufacturing while remaining economically viable.

How do customers normally engage with your company?

Our customers come to us either with a problem they’re looking to solve, as academics pushing the boundaries of research, or as partners who wish to leverage our technology and expertise.

Our development and application facility, TAU Labs, is located in Carlsbad, California, and will officially open later in 2026, offering single-event effects radiation testing to ensure spacecraft operate as intended.

CONTACT TAU SYSTEMS

Also Read:

CEO Interview with Echo Yang of CSCERAMIC

CEO Interview with Juniyali Nauriyal of Photonect

CEO Interview with Aftkhar Aslam of yieldWerx



Things From Intel 10K That Make You Go …. Hmmmm
by Mark Webb on 03-08-2026 at 8:00 am


INTEL FORM 10-K

☑ ANNUAL REPORT PURSUANT TO SECTION 13 OR 15(d) OF THE SECURITIES EXCHANGE ACT OF 1934
For the fiscal year ended December 27, 2025.

1) Intel is constrained on manufacturing. Not by TSMC. But by IFS and mainly by Intel 7, a node from 2021. Normally constraints are good, it means you are running efficiently with lots of demand. But Intel is not growing. CCG (client)+DCAI (Datacenter) revenue is down. Intel is constrained by older technologies, not newer ones, during a time of non-growth…

2) Intel Margins are low (35% Gross Margins). Intel Margins were once the envy of all hardware and chip companies. GAAP operating margins are negative. Non-GAAP margins are lower than those of all memory companies, TSMC, NVIDIA, Broadcom, Qualcomm, AMD, etc. And this is a year after taking a one-time write-down of old assets and a change to their depreciation schedule that saves them billions per year right now.

3) If Intel Foundry found a magical external customer that instantly gave them $7B per year in revenue (equivalent to all of GlobalFoundries, and 25 times the external revenue they have today), at absolutely no cost at all to Intel… IFS would still lose money. Let that sink in.

4) Intel took write-off charges in 2025 on 18A for what I call LCM: Lower of Cost or Market. This is where you cannot claim inventory as a WIP asset because the cost is higher than the value of the chip. Margins are below zero and/or Intel had to throw away a lot of 18A production. (A sketch of the LCM mechanics follows this list.)

5) Intel’s newest fabs 34 and 52 require that partner companies get 50% of the profit from them. Through two very different contracts, Apollo and Brookfield get half the profits IF the fabs are successful, and get payment from Intel if they do not hit milestones.
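As a minimal illustration of the lower-of-cost-or-market mechanics referenced in point 4 (the unit counts and dollar figures below are hypothetical, not taken from Intel’s 10-K), inventory is carried at the lower of its accumulated cost and its market value, and the difference hits the income statement as a charge:

```python
def lcm_write_down(units, cost_per_unit, market_per_unit):
    """Lower-of-cost-or-market: carry inventory at min(cost, market).

    Returns (carrying_value, write_down). The inputs are hypothetical;
    this only illustrates the accounting mechanics, not Intel's numbers."""
    cost = units * cost_per_unit
    market = units * market_per_unit
    carrying_value = min(cost, market)
    return carrying_value, max(0.0, cost - market)

# Hypothetical example: wafers that cost more to build than they can be sold for.
value, charge = lcm_write_down(units=10_000, cost_per_unit=25_000, market_per_unit=18_000)
print(f"Carrying value: ${value:,.0f}, write-down charge: ${charge:,.0f}")
```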

Summary

These are all quite scary, and the stock was shaken. But in reality, they simplify the challenges in my mind.

Intel manufacturing does not seem to have line of sight to breaking even. If 18A and 14A ramp, the expenses are more headwinds, more losses. If they do not, the current margins remain bad for another 4-5 years. Panther Lake is widely recognized as a very good performance part… the first leadership product in 5+ years.

What if Intel followed AMD, Nvidia, Apple, Broadcom, Qualcomm, etc and focused on where it can be successful? How would the numbers be different?

Call us for more information and details

Mark Webb

www.mkwventures.com

Industry expert with 25+ years of experience in semiconductor and system engineering and manufacturing. Extensive experience and knowledge in NAND and SSD manufacturing, development, and system testing with industry leaders. Expertise in SSD/NAND/DRAM business models, cost/pricing models, competitive analysis, and supply chain. Leadership and management experience in contract manufacturing, ODM, OSAT, and foundry operations. Experience in device, product, and process integration engineering on logic, SoC, and all memory technologies.

Also Read:

Global 2nm Supply Crunch: TSMC Leads as Intel 18A, Samsung, and Rapidus Race to Compete

TSMC vs Intel Foundry vs Samsung Foundry 2026

Intel to Compete with Broadcom and Marvell in the Lucrative ASIC Business




Podcast EP334: The Unique Benefits of LightSolver’s Laser Processing Unit Technology with Dr. Chene Tradonsky
by Daniel Nenni on 03-06-2026 at 10:00 am

Daniel is joined by Dr. Chene Tradonsky, a physicist and the CTO and co-founder of LightSolver, where he leads the development of a proprietary physics-based computing system built on coupled laser dynamics to accelerate compute-heavy simulations and other computationally demanding workloads. Before moving into physics, he started in electrical engineering, a combination that helps him bridge advanced computing and complex physical systems.

Dan explores with Chene the unique and powerful Laser Processing Unit (LPU) developed by LightSolver, and Chene explains the technology’s benefits for a broad range of applications. Chene describes how the technology can accelerate the solution of partial differential equations (PDEs), which form the basis for many problems in engineering and science. Chene also explains how LightSolver technology can be added to current high-performance computing systems as a kind of co-processor to substantially accelerate results.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.

CONTACT LIGHTSOLVER



Global 2nm Supply Crunch: TSMC Leads as Intel 18A, Samsung, and Rapidus Race to Compete
by Daniel Nenni on 03-06-2026 at 6:00 am


The semiconductor industry is in the midst of a structural supply challenge that’s tightly coupled to exploding demand for advanced chips, especially those used in AI, HPC, and next-generation mobile and consumer devices. At the center of this vortex is the 2nm class of manufacturing technology, representing one of the most complex and expensive transitions in semiconductor history due to its reliance on nanosheet or GAA transistor architectures and extremely precise lithography tools.

TSMC and the 2nm Capacity Crunch

TSMC’s N2 process node officially entered volume production in late 2025, and early estimates of yield and ramp have been strong enough that the company is aggressively increasing capacity. N2 promises up to 15% performance gains or substantial power reductions versus previous nodes, making it extremely attractive for next-generation AI accelerators and flagship mobile chips.

The demand has been remarkable! Reports from the trenches indicate that much of TSMC’s N2 capacity is effectively sold out through 2026, with major customers like Apple, Nvidia, Qualcomm, and AMD reportedly locking in large shares of the initial output. This is partly because modern AI accelerators require much more wafer real estate per chip than traditional mobile processors, which exacerbates capacity constraints.

To meet this demand, TSMC has outlined plans to expand production aggressively across multiple fabs, including Hsinchu Baoshan and Kaohsiung in Taiwan and other international sites, with targets that could see monthly wafer starts reach well into six figures by 2026–2028. TSMC’s CAPEX is also a tell for things to come: in 2024 it was $29.8 billion, in 2025 a 37% increase to $40.9 billion, and a record $52-56 billion is planned for 2026. What this tells me is that TSMC will again dominate 2nm as it did 3nm, without question.
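A quick sanity check of the growth rates quoted above:

$$ \frac{40.9}{29.8} \approx 1.37\ (+37\%), \qquad \frac{52\ \text{to}\ 56}{40.9} \approx 1.27\ \text{to}\ 1.37\ (+27\%\ \text{to}\ +37\%). $$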

Intel’s 18A Process: A Competitive Alternative But Not a Complete Buffer

Intel’s 18A node is part of its post-Intel 7 roadmap and is roughly classed in the same generational tier as 2nm-class processes. It introduces both RibbonFET (a version of GAA) and PowerVia backside power delivery, which are intended to boost performance and power efficiency. Intel was first to production-quality GAA and first to BSPD (backside power delivery), semiconductor innovation at its finest.

Intel started production of 18A in 2025 targeting its own processors such as Panther Lake, but its use as a foundry alternative for external customers remains limited by comparison. While 18A yields have improved as of mid-2025, they are generally considered behind TSMC’s N2 yields, and Intel’s own foundry ecosystem is still small relative to TSMC’s global customer base.

Intel’s strategy is two-pronged: support its internal product leadership and expand foundry services. But it has historically struggled to win significant external foundry demand, a key reason why it has not yet materially alleviated the broader industry’s 2nm-class capacity squeeze. With Lip-Bu Tan as CEO that has changed, of course. The semiconductor Made in America brand has never been stronger; Intel will sign wafer agreements for 18A and 14A from the top semiconductor companies, without a doubt.

Samsung’s 2nm Efforts: Competitive But Challenged

Samsung was one of the first to deploy GAA technology on a smaller scale starting with its 3nm node, and has planned 2nm production (often referred to as SF2) as an extension of this progress. It has invested heavily in facilities such as the Taylor, Texas fab with the goal of hitting mass production timelines in 2026.

Despite this, Samsung has faced challenges around yield stability and customer adoption. While it offers very competitive pricing, the combination of yield issues and weak customer mindshare means that Samsung is not a viable alternative to TSMC for high-volume 2nm orders. Trust is the foundation of the semiconductor industry, and without predictable yield there can be no trust.

Rapidus: A New Entrant Trying to Carve Out Niche 2nm Capacity

One of the most intriguing developments in recent years has been the emergence of Rapidus, a Japan-based foundry backed by government and major corporate investors. Rapidus aims to begin 2nm-class chip production around 2027, with plans to ramp monthly wafer production significantly within a year of launch.

From what I have learned about Rapidus over the last year, there is little doubt in my mind that they will succeed. In fact, Rapidus just raised another $1.7B for a total of $11.3B in combined government subsidies and private investment. While this is a significant sum, it represents about 40% of the $32 billion the company estimates it will need for full-scale mass production of 2-nanometer chips by 2027, so stay tuned.

Unlike the giants, Rapidus is not attempting to directly compete on sheer volume, but rather offering “short turnaround times” and tailored services, which could appeal to custom chip designers, domestic Japanese technology firms, and organizations needing smaller-lot, highly customized silicon.

Though still years behind TSMC in mass production timing and total capacity, Rapidus represents a strategic move by Japan to regain presence in advanced semiconductor manufacturing and create additional supply chain options in a market heavily concentrated among a few players.

The Broader Context: A Global Capacity Tightrope

The combined reality of TSMC’s dominant position, Intel’s internal and emerging foundry efforts, Samsung’s technically capable but constrained 2nm push, and the Rapidus niche entry creates a semiconductor landscape in which demand continues to outrun supply at the highest performance nodes. Even as worldwide fab capacity grows, the pace of AI adoption and the strategic value companies place on leading-edge silicon means securing wafer slots early has become mission-critical for tech giants and a formidable bottleneck for others.

Bottom line: The 2nm capacity crunch isn’t a short-term supply hiccup; it is a fundamental outcome of how advanced computing, AI, and custom silicon strategies are reshaping the global semiconductor ecosystem for years to come. The strength of the foundry business has always been based on multi-sourcing, and we need to get that supply chain strength back, absolutely.

Also Read:

TSMC Process Simplification for Advanced Nodes

TSMC and Cadence Strengthen Partnership to Enable Next-Generation AI and HPC Silicon

TSMC vs Intel Foundry vs Samsung Foundry 2026



Reducing Risk Early: Multi-Die Design Feasibility Exploration
by Kalar Rajendiran on 03-05-2026 at 10:00 am


The semiconductor industry is entering a new era in system design. As traditional monolithic scaling approaches its economic and physical limits, multi-die architectures are emerging as a primary pathway for delivering continued improvements in performance, power efficiency, and integration density. By distributing system functionality across multiple dies or chiplets and integrating them within advanced packaging technologies, designers can create highly optimized heterogeneous systems tailored for demanding applications such as artificial intelligence, high-performance computing, and advanced automotive platforms.

However, the flexibility offered by multi-die architectures introduces a significant increase in design complexity. Unlike monolithic devices, where most interactions occur within a single silicon environment, multi-die systems introduce new dependencies across packaging technologies, power delivery networks, and interconnect strategies. Decisions made during the earliest stages of system architecture can have profound consequences on manufacturability, reliability, and performance. SemiWiki will be publishing a series of articles based on whitepapers released by Synopsys on the topic of multi-die systems.

This article, the first in a three-part series, examines the critical role of feasibility exploration in evaluating multi-die design architectures. Subsequent articles will explore how bump and Through-Silicon-Via (TSV) planning, followed by automated high-speed routing methodologies, translate architectural concepts into successful implementation.

The Growing Complexity of Multi-Die Design

Multi-die system integration requires designers to consider a broader and more interconnected set of constraints than ever before. System architects must simultaneously evaluate die placement and orientation, packaging configurations, interconnect density, and power delivery topology, while also accounting for thermal management strategies. Each design decision introduces ripple effects across the system, influencing both electrical and physical behavior.

Three interrelated performance metrics dominate early multi-die architectural evaluation. Power integrity, often measured through IR drop, directly affects functional reliability and timing margins. Electromigration introduces long-term reliability risks that can compromise device lifetime. Thermal performance determines whether the system can operate within safe temperature limits under peak workloads. In multi-die environments, these challenges become significantly more difficult to model accurately due to the interaction of multiple materials, stacked geometries, and distributed power and signal paths.

Why Early Feasibility Exploration Is Essential

Traditional design methodologies frequently relied on detailed physical implementation and signoff-level analysis to validate power, thermal, and reliability behavior. While highly accurate, these approaches are impractical during early architectural development because they require complete design data and extensive runtime. Feasibility exploration addresses this limitation by enabling designers to analyze architectural options using simplified but representative models.

Through feasibility workflows, designers can evaluate alternative floorplans, connectivity structures, and power distribution strategies without requiring full process design kits or finalized physical layouts. This abstraction dramatically accelerates iteration cycles, allowing teams to explore a broader design space and identify architectural weaknesses early in development. Early identification of issues such as excessive IR drop or thermal hotspots helps prevent costly redesign efforts later in the project lifecycle.

Feasibility vs. Prototyping: Understanding the Difference

Multi-die development typically progresses through two closely related but distinct exploration stages. The feasibility stage focuses on rapid architectural evaluation using abstract models and simplified floorplans. During this phase, designers assess whether proposed multi-die configurations can meet performance, power, and thermal goals without committing to detailed implementation.

Prototyping follows feasibility exploration and introduces technology-specific data and realistic physical structures. At this stage, interconnect models, packaging details, and implementation constraints become more accurate, providing a bridge between architectural exploration and production-ready design. Feasibility exploration therefore serves as the foundation upon which prototyping and detailed implementation are built.

Modeling Techniques That Enable Rapid Exploration

Successful feasibility exploration depends on modeling techniques that balance speed with predictive accuracy. One widely used approach involves pixel-based modeling of the power delivery network. By dividing the power distribution network (PDN) into regularly spaced elements with defined resistance characteristics across multiple dimensions, designers can efficiently evaluate voltage distribution and electromigration behavior across dies and packaging structures.
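A toy version of that pixel-based idea is sketched below. It assumes a uniform square grid of PDN segment resistances, supply bumps pinned to VDD on a regular pitch, and an identical current draw at every node; all values are hypothetical, and the goal is only to show how the nodal equations G·V = I yield a static IR-drop map.

```python
import numpy as np

def ir_drop_map(n=20, r_seg=0.05, vdd=0.75, i_node=2e-3, bump_pitch=5):
    """Solve a uniform n x n resistive PDN grid for static IR drop.

    r_seg      : resistance of each grid segment (ohms)         -- hypothetical
    i_node     : current drawn at every grid node (amps)        -- hypothetical
    bump_pitch : every bump_pitch-th node in x and y is a bump  -- hypothetical
    Returns an (n, n) array of node voltages.
    """
    N = n * n
    g = 1.0 / r_seg
    G = np.zeros((N, N))
    I = np.full(N, -i_node)            # each node sinks i_node amps

    def idx(x, y):
        return y * n + x

    # Stamp the conductance of each segment into the nodal matrix.
    for y in range(n):
        for x in range(n):
            k = idx(x, y)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                xx, yy = x + dx, y + dy
                if 0 <= xx < n and 0 <= yy < n:
                    G[k, k] += g
                    G[k, idx(xx, yy)] -= g

    # Pin bump nodes to VDD (Dirichlet condition) by overwriting their rows.
    for y in range(0, n, bump_pitch):
        for x in range(0, n, bump_pitch):
            k = idx(x, y)
            G[k, :] = 0.0
            G[k, k] = 1.0
            I[k] = vdd

    return np.linalg.solve(G, I).reshape(n, n)

vdd = 0.75
V = ir_drop_map(vdd=vdd)
print(f"Worst-case IR drop: {(vdd - V.min()) * 1e3:.1f} mV")
```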

Simplified bump and TSV models also play a crucial role in early exploration. Rather than relying on foundry-specific implementations, designers can define key interconnect characteristics such as size, placement, and resistance. These models provide sufficient accuracy to evaluate connectivity density and power distribution behavior while maintaining rapid simulation turnaround times.

Thermal modeling further complements feasibility exploration by enabling designers to define material properties, thermal conductivity boundaries, and cooling strategies. These models allow engineers to quickly assess temperature distribution across complex multi-die stacks and packaging configurations.
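The simplest such thermal model is a one-dimensional resistance stack from junction to ambient; the numbers below are purely illustrative, not values from the whitepaper:

$$ T_j = T_{amb} + P \sum_i R_{\theta,i} \;=\; 25\,^{\circ}\mathrm{C} + 50\ \mathrm{W} \times (0.1 + 0.2 + 0.3)\ \mathrm{K/W} \;=\; 55\,^{\circ}\mathrm{C}. $$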

Visualization: Turning Analysis into Insight

One of the most valuable aspects of feasibility exploration is the ability to visualize analysis results. Graphical heat maps and power integrity plots provide immediate insight into system behavior, allowing designers to identify high-risk regions and refine architectural choices quickly. Visualization enhances collaboration across design teams and supports data-driven decision-making throughout early development stages.

Synopsys 3DIC Compiler Platform-Driven Feasibility Exploration

Unified multi-die design platforms such as the Synopsys 3DIC Compiler platform have significantly improved the efficiency of feasibility exploration by integrating modeling, analysis, and visualization capabilities within a single environment.

The Synopsys 3DIC Compiler platform offers fast, flexible, integrated feasibility and prototyping capabilities to quickly and efficiently create, visualize, and analyze prototype designs. Because its power, PDN, and thermal models are simple and UI-based, no foundry process design kit (PDK) or technology data is required to create the prototype designs. Built-in IR drop, electromigration, and thermal analysis technologies designed for fast feasibility exploration allow designers to iterate on possible architectures quickly and easily.

Learn more by accessing the whitepaper here.

Setting the Stage for Interconnect Planning

Feasibility exploration establishes the architectural blueprint for multi-die implementation by validating die placement, power delivery strategies, and connectivity requirements. Once these architectural parameters are defined, designers must translate them into detailed interconnect structures. The next article in this series will examine how bump and TSV planning provides the physical foundation for scalable multi-die connectivity and prepares designs for implementation and routing.

Also Read:

How Customized Foundation IP Is Redefining Power Efficiency and Semiconductor ROI

Designing the Future: AI-Driven Multi-Die Innovation in the Era of Agentic Engineering

Hardware is the Center of the Universe (Again)



From Satellites to 5G: Ceva’s PentaG-NTN™ Lowers Barriers for Terminal Innovators
by Daniel Nenni on 03-05-2026 at 8:00 am


Ceva, Inc., a leading provider of silicon and software IP for the Smart Edge, has unveiled PentaG-NTN™, its groundbreaking 5G Advanced modem IP subsystem tailored for satellite user terminals in Low Earth Orbit (LEO) and Medium Earth Orbit (MEO) constellations. Announced at Mobile World Congress 2026 in Barcelona on March 3, 2026, this innovation marks a pivotal step in merging satellite and terrestrial cellular networks, enabling faster, more reliable global connectivity.

PentaG-NTN™ is the inaugural offering from Ceva’s third-generation PentaG platform, a robust, production-ready 5G-Advanced baseband architecture. This platform unifies baseband hardware accelerators, Layer 1 (L1) PHY software, and extensive verification tools into a cohesive, reusable subsystem. It serves as a versatile foundation for derivatives like PentaG-Edge™ for terrestrial 5G-Advanced edge and IoT uses. Compared to prior generations, it offers major gains in performance, scalability, and integration efficiency, supporting both satellite and ground-based deployments with up to 400 MHz bandwidth per component carrier across FR1 (sub-7 GHz) and FR2 (mmWave) bands.

The surge in satellite constellations driven by commercial ventures and government efforts for broader coverage and strategic sovereignty has intensified the push to embed 5G standards into space-based systems. Yet, 5G-Non-Terrestrial Networks (NTN) impose cellular-level complexity on an industry long focused on spacecraft, payloads, and orbital management rather than modem engineering. Satellite-native players, experts in constellation design and operations, now face the steep learning curve of 3GPP-compliant cellular basebands.

Ceva addresses this gap with PentaG-NTN™, a fully integrated, plug-and-play modem subsystem. It eliminates much of the complexity, shortens development timelines, and minimizes risks for teams without deep cellular expertise. As Jake Saunders, Vice President at ABI Research, observes, this convergence dismantles traditional satellite silos, harnessing the maturity, scale, and cost advantages of cellular ecosystems. Solutions that ease modem integration while preserving differentiation are essential for scaling 5G-NTN from prototypes to widespread commercial use.

PentaG-NTN™ is engineered for LEO’s demanding conditions, including severe Doppler shifts, timing offsets, and extended propagation delays. Key features include optimized Doppler compensation, frequency-offset mitigation, latency-tuned L1 processing for LEO/MEO channels, full 3GPP Release-18 compliance (with a roadmap to Release-19), Ka/Ku-band support, narrowband proprietary waveforms, and scalable throughput from 10 Mbps to 2 Gbps with 256-QAM modulation.
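For a sense of why Doppler compensation matters at these bands, here is a back-of-the-envelope estimate; the orbital altitude and carrier frequencies are illustrative assumptions, not Ceva's figures, and the result is a worst-case bound in which the full orbital velocity lies along the line of sight:

```python
import math

MU = 3.986e14          # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6      # Earth radius, m
C = 2.998e8            # speed of light, m/s

def max_doppler_hz(altitude_m, carrier_hz):
    """Upper-bound Doppler shift for a circular LEO orbit at the given altitude."""
    v_orbit = math.sqrt(MU / (R_EARTH + altitude_m))   # circular-orbit speed
    return (v_orbit / C) * carrier_hz

for band, f in (("S-band (2 GHz)", 2e9), ("Ka-band (20 GHz)", 2e10)):
    shift = max_doppler_hz(altitude_m=550e3, carrier_hz=f)
    print(f"{band}: up to ~{shift / 1e3:.0f} kHz Doppler at 550 km altitude")
```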

Crucially, the subsystem balances standards adherence with innovation freedom. It pairs hardware acceleration with programmable DSPs and flexible software interfaces, allowing licensees to layer proprietary algorithms, waveform enhancements, or application-specific optimizations atop the 3GPP foundation. This empowers satellite innovators to stand out in competitive markets.

Delivered as a complete modem subsystem, not mere IP blocks, PentaG-NTN™ includes optimized baseband hardware, L1 PHY software, and a thorough verification suite featuring a Virtual Platform Simulator (VPS), system-level simulations, test benches, and FPGA emulation. This enables early software validation and system testing pre-silicon, boosting predictability and speeding market entry.

Ceva estimates the third-generation PentaG platform slashes modem silicon development time by about 65% and reduces program costs by tens of millions compared to in-house builds requiring specialized R&D teams.

Guy Keshet, Vice President and General Manager of Ceva’s Mobile Broadband Business Unit, emphasizes: “5G-NTN brings cellular standards to the satellite domain, but satellite innovators shouldn’t need to become modem experts. PentaG-NTN™ lowers entry barriers with a proven, compliant foundation while enabling true differentiation.”

Bottom line: This launch underscores Ceva’s role in powering the Smart Edge, where connectivity, sensing, and inference converge. With over 20 billion devices shipped and partnerships spanning wearables, IoT, vehicles, and infrastructure, Ceva’s IP, encompassing 5G, Bluetooth, Wi-Fi, UWB, and Edge AI, fuels intelligent, always-connected products. As Physical AI emerges, PentaG-NTN™ positions satellite connectivity as a seamless 5G extension, promising ubiquitous coverage and transformative applications worldwide.

CONTACT CEVA
Also Read:

Ceva IP: Powering the Era of Physical AI

Ceva Wi-Fi 6 and Bluetooth IPs Power Renesas’ First Combo MCUs for IoT and Connected Home

Ceva-XC21 Crowned “Best IP/Processor of the Year”



Siemens Reveals Agentic Questa
by Bernard Murphy on 03-05-2026 at 6:00 am


There’s no denying that verification now leads the field in agentic AI announcements, accelerating the trend around this significant contribution to design automation. Siemens have just announced their Questa One Agentic Toolkit, their response to this trend, building on the core Questa One platform. Questa One provides integrated simulation, static verification, and embedded AI driving common management around those tools, as well as VIP in support of these functions. The Agentic Toolkit adds further automated creation and orchestration of verification tasks to provide end-to-end solutions. Boasting endorsements from NVIDIA and MediaTek among others, this update is worth a look.

Strategy and foundation

Announcements in this space are inevitably similar, so what makes the Siemens approach different? They are leading with a foundation of openness and organic development. Start with openness. The Agentic Toolkit provides MCP interfaces to underlying Questa One functions with the ability to connect to any agentic framework. Their own agentic app, Fuse, naturally makes full use of these interfaces and is the “preferred” option, but it does not limit other frameworks from connecting.

Sidebar on this point: tool providers have a natural advantage in knowing how best to use their own tools. That expertise can be captured in vectorized knowledge embedded in agentic apps. But how well does this work in a mix-and-match flow? More should be debated here between design teams and verification technology providers.

The organic differentiation is also interesting. Siemens have built their Fuse EDA AI system in-house. Supported workflows leverage the NVIDIA Llama-Nemotron reasoning framework and NVIDIA NIM inference microservices, enabling the platform to understand verification state in real time and maintain comprehensive awareness of the contextual relationships between designs, testbenches, test plans, and specifications. No doubt thanks to these foundation frameworks, this system apparently also works with mainstream AI coding applications, including GitHub Copilot, Claude Code, Cursor, and Cline, and can be used in command-line mode (for scripting) or through IDEs such as VS Code.

All this works with a multi-model EDA data lake, capturing baseline manuals and user documentation. An LLM exploits this information in assistants, reasoners, etc., to direct run objectives and orchestration.

They also add that building on the existing connected ecosystem between Questa One, Tessent™ software for DFT and the Veloce™ CS hardware-assisted verification and validation system, the Agentic Toolkit supports a broad range of design and verification objectives.

This system ships today with several pre-built and tested agents for customers who need a quick start in pilot trials: an RTL code agent, a Lint agent, a CDC agent, a verification planning agent and a debug agent. Expanding a little on the values they describe, the Verification Agent organizes tasks, coverage goals, and requirements, offering AI-driven suggestions for efficient resource allocation, from which engineers gain clarity, adaptability, and accelerated closure, ensuring comprehensive verification and faster project success.

The Debug Agent accelerates root-cause analysis by pinpointing issues in RTL and testbenches with AI-driven insights. It offers targeted suggestions, automates error tracing, and guides engineers to efficient resolution. With smart diagnostics, it reduces debug cycles and boosts productivity, helping teams deliver robust designs faster.

What about guardrails and trust?

This is a question I am now asking all agentic verification solution providers. The upside of hands-free automation is huge; the downside of unsupervised AI could be even more dramatic. I asked Abhi Kolpekwar (Sr. VP and GM at Siemens Digital Industries) for his views on this challenging balance.

Abhi agreed that while across all industries there is buzz around the potential of AI and agentic methods, there is equal buzz around hype outrunning reality and most pilot programs failing to translate to production. How do successful deployments navigate this challenge? Abhi had a two-part answer. First, while those surveys certainly highlight a problem, we shouldn’t underestimate the success AI and agentic methods already enjoy in quietly successful embedded use cases. Examples include cars (I have written recently about this), our phones, and factory automation. Another interesting example is a method to detect what you are saying by watching your facial muscles, even without needing to hear clear speech. Not available yet, but presumably coming to a phone, car, or other device near you in the not-too-distant future.

Intriguing, but in SoC and system design we are very sensitive to reliability and pinpoint accuracy. How can AI and agentic methods align with these needs? In Abhi’s view, one way to secure that level of quality is through guardrails implemented using proven, non-AI core EDA technologies: formal methods, simulation, and so on. The second way is to implement processes which require human-in-the-loop judgement at checkpoints. There can still be a big win, even though the whole process isn’t pushbutton. With agentic support, DV engineers graduate from being tool operators, knowing all the minutiae of how to run (and debug) scripts and tools, to becoming verification scientists, knowing how to judge outcomes at intermediate steps and what high-level correction they might want to try to improve an outcome.

I like it – what DV engineer wouldn’t want to upgrade their day-to-day workload to become a verification scientist?

Nice positioning. Still, the proof will be in how DV engineers and product managers will react in practice. You can read more about the release HERE and get more insight on product details HERE.

Also Read:

Functional Safety Analysis of Electronic Systems

Perforce and Siemens Collaborate on 3DIC Design at the Chiplet Summit

Siemens to Deliver Industry-Leading PCB Test Engineering Solutions



Functional Safety Analysis of Electronic Systems
by Daniel Payne on 03-04-2026 at 10:00 am


Safety engineers, hardware designers, and reliability specialists in safety-critical industries like automotive, aerospace, medical devices, and industrial automation use FMEDA (Failure Modes, Effects and Diagnostic Analysis). In the automotive sector, ISO 26262 compliance for ADAS, braking systems, and ECUs requires FMEDA. In case of a failure, the system has to respond with an approach that keeps people, property, and the surrounding environment safe. Siemens and Modelwise wrote a white paper on this topic as it applies to electronic systems, so I’ll share my findings.

Safety analysis reports can be written in natural language, but then someone has to interpret them, which can introduce errors. Leaving safety analysis until after a design is done can lead to design rework and stretch out the schedule. The approach from Modelwise is to use Paitron, which formalizes and automates the functional safety process, making for an accurate and efficient way to compute FMEAs and FMEDAs. Integrating Modelwise Paitron with Siemens HyperLynx AMS creates a methodology for the design, verification, and functional safety analysis of electronic systems.

For AMS schematic capture and simulation there are tools from Siemens: Xpedition Designer and HyperLynx AMS, respectively. Designers can simulate using component models in SPICE, VHDL-AMS, or custom formats.

HyperLynx AMS in the Xpedition Designer environment

HyperLynx AMS integrates PCB design capture with circuit simulation, then Paitron enables automatic functional safety analysis.

The Part Lister in Xpedition Designer generates bills of materials (BOMs) by extracting component property information directly from the schematic database. This enables Paitron to automatically map the components to the categories of the failure rate and failure mode databases, an otherwise manual task for engineers.

Consider an example functional safety analysis using a voltage monitor circuit where an output goes low whenever the input voltage falls outside a reference range.

Schematic
Simulation Results

The voltage divider R3/R4/R5 determines the voltage range, so consider a failure mode where resistor R4 has an increase in resistance. This increase makes the output window wider than expected, making it a safety violation.
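A small numerical sketch of that failure mode follows. The reference voltage and resistor values are hypothetical placeholders rather than the ones in the whitepaper’s schematic, and the divider topology (Vref, then R3, R4, R5 to ground, with the two taps feeding a window comparator) is an assumption for illustration:

```python
def window_thresholds(vref, r3, r4, r5):
    """Upper/lower trip points of a window comparator fed by a Vref-R3-R4-R5 divider."""
    total = r3 + r4 + r5
    v_high = vref * (r4 + r5) / total   # tap between R3 and R4
    v_low = vref * r5 / total           # tap between R4 and R5
    return v_low, v_high

# Nominal divider (hypothetical values).
print("nominal :", window_thresholds(vref=5.0, r3=10e3, r4=20e3, r5=10e3))
# R4 drifted +50%: the window widens, so out-of-range inputs may go undetected.
print("R4 +50% :", window_thresholds(vref=5.0, r3=10e3, r4=30e3, r5=10e3))
```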

Paitron can be used to speed up FMEA/FMEDA for this design through automation. Accuracy and standardization are achieved through template-based formalization.

In Paitron for this example the input and output variables, Vin and Vout are defined and a domain is assigned to each one.

Variable   Type     Partition      Value
Vin        Input    [0.1, 6.7)     Undervoltage
                    [6.8, 13.4]    Valid
                    [13.5, 30)     Overvoltage
Vout       Output   (-inf, 0.5]    Tripped
                    (0.5, inf)     Valid

Defined model variables and domains

System effects are next formalized using constraint expressions as shown below:

Effect                   Description
Monitor stuck valid      Vout is always HIGH
Monitor stuck tripped    Vout is always LOW
Missing overvoltage      Vout is HIGH for some overvolted input and LOW for others
Missing undervoltage     Vout is HIGH for some undervolted input and LOW for others
Tripped valid            Vout is LOW for some valid input voltages and HIGH for others

In Paitron, the effect for “Monitor stuck valid” is defined through a dialog.

The failure rate and failure mode distributions for each PCB component are used to compute the quantitative safety metrics. Paitron can use several sources for the failure mode and rates.

SN 29500 – Siemens standard providing failure rates for electrical and electronic components (failure rate: yes; failure mode: no)
IEC 61709 – Reference conditions for failure rates and stress models for conversion (failure rate: no; failure mode: yes)
MIL-HDBK-217F – Military handbook that predicts reliability of military electronic equipment and systems (failure rate: yes; failure mode: no)
MIL-HDBK-338B – Military handbook that provides reliability data and analysis for electronic devices and systems (failure rate: no; failure mode: yes)
Birolini – Textbook by A. Birolini (Reliability Engineering: Theory and Practice, 8th Edition, Springer Berlin Heidelberg) (failure rate: yes; failure mode: yes)

Failure mode and rate sources available in Paitron

Failure rate and failure mode categories are assigned based on component types, shown next:

C1, C2 – Capacitor; failure rate category: Ceramic | LDC (COG, NPO); failure mode category: Ceramic | NPO-COG
R1, R2, R3, R4, R5, R6, R7 – Resistor; failure rate category: Metal film; failure mode category: Metal film
XU1A, XU1B – Integrated Circuit | Comparator; failure rate category: Bipolar, BIFET | No. of transistors: ≤ 30; failure mode category: Portable and non-stationary use, ground vehicle installation
YA1 – Integrated Circuit | Digital | Gate | AND; failure rate category: BICMOS | Logic | No. of gates 1-100, No. of transistors 5-500; failure mode category: Portable and non-stationary use, ground vehicle installation

Once the analysis is set up, the automated FMEA/FMEDA is run: Paitron finds the resulting effects for each failure mode, creating a detailed safety report that includes an analysis summary per IEC 61508. Each component gets a detailed analysis with the determined effects for each failure mode, plus the distribution of the failure rates, which depends on the dangerous/safe classification and the diagnostic coverage.
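To make the quantitative side concrete, here is a minimal sketch of the kind of metrics such a report rolls up, using the standard IEC 61508 safe-failure-fraction definition; the failure rates, dangerous/safe splits, and diagnostic coverage values below are invented, and a real FMEDA classifies per failure mode rather than per component:

```python
def fmeda_metrics(components):
    """components: list of (lambda_fit, dangerous_fraction, diagnostic_coverage).

    lambda_fit          : total failure rate in FIT (failures per 1e9 hours)
    dangerous_fraction  : share of failures whose effect violates the safety goal
    diagnostic_coverage : share of dangerous failures detected by diagnostics
    Returns total FIT, safe-failure fraction (SFF), and residual dangerous FIT.
    """
    lam_total = lam_safe = lam_dd = lam_du = 0.0
    for lam, d_frac, dc in components:
        lam_d = lam * d_frac
        lam_total += lam
        lam_safe += lam - lam_d
        lam_dd += lam_d * dc          # dangerous but detected
        lam_du += lam_d * (1.0 - dc)  # dangerous and undetected
    sff = (lam_safe + lam_dd) / lam_total
    return lam_total, sff, lam_du

# Hypothetical BOM entries: (FIT, dangerous fraction, diagnostic coverage)
bom = [(2.0, 0.5, 0.0),    # resistor, no diagnostics
       (5.0, 0.6, 0.9),    # comparator, mostly covered by the monitor itself
       (3.0, 0.4, 0.6)]    # logic gate
total, sff, du = fmeda_metrics(bom)
print(f"Total: {total:.1f} FIT, SFF: {sff:.1%}, dangerous undetected: {du:.2f} FIT")
```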

Summary

FMEDA can be done manually or with automation, and the integration between Siemens and Modelwise provides the fastest functional safety analysis. This automation helps to cut failure rates and speed up the design cycle. Paitron and HyperLynx AMS are a proven tool combination for use on safety critical systems.

Read the 16-page white paper online after a brief registration.

Related Blogs



RVA23 Ends Speculation’s Monopoly in RISC-V CPUs
by Jonah McLeod on 03-04-2026 at 8:00 am


RVA23 marks a turning point in how mainstream CPUs are expected to scale performance. By making the RISC-V Vector Extension (RVV) mandatory, it elevates structured, explicit parallelism to the same architectural status as scalar execution. Vectors are no longer optional accelerators bolted onto speculation-heavy cores. They are baseline capabilities that software can rely on.

RVA23 doesn’t force scalar execution to become deterministic. It simply makes determinism viable because the scalar side is no longer responsible for throughput. The vector unit handles the parallel work explicitly, and the scalar core becomes a coordinator that can be simple, predictable, and low‑power without sacrificing performance.

To understand why this shift matters, it helps to recall how thoroughly speculative execution came to dominate high-performance CPU design. It delivered speed, but at increasing cost—in power, complexity, verification burden, and security exposure. RVA23 does not reject speculation. Instead, it restores balance. It acknowledges that predictable, vector-driven parallelism is now a credible, mainstream path for performance growth.

Mandatory vector support fundamentally changes the software performance contract. Compilers, libraries, and applications can now assume RVV exists on every compliant core. Optimization strategy shifts away from “let the CPU guess” toward explicit, structured parallelism. Toolchains must reliably emit vector code. Math and DSP libraries can reduce or eliminate scalar fallbacks. Application developers gain a predictable model for scaling loops and data-parallel workloads.
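As a language-neutral illustration of what "explicit, structured parallelism" means in practice, the sketch below mimics the vector-length-agnostic strip-mining pattern that RVV encodes in hardware. The fixed max_vl is a stand-in for whatever vector length a particular core chooses; real code would rely on compiler auto-vectorization or RVV intrinsics rather than this toy loop:

```python
def axpy_vla(a, x, y, max_vl=8):
    """y := a*x + y, processed in hardware-sized chunks.

    Each pass asks how many elements can be handled this iteration (the role
    of RVV's vsetvl), processes exactly that many as one vector operation,
    and advances -- the source never hard-codes a SIMD width.
    """
    out = list(y)
    i, n = 0, len(x)
    while i < n:
        vl = min(max_vl, n - i)        # vsetvl: hardware returns vl <= VLMAX
        for j in range(i, i + vl):     # one vector instruction's worth of work
            out[j] = a * x[j] + out[j]
        i += vl
    return out

print(axpy_vla(2.0, [1, 2, 3, 4, 5], [10, 10, 10, 10, 10], max_vl=4))
```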

The cultural shift is significant: parallelism becomes something software expresses directly, not something hardware attempts to infer. For hardware designers, the shift is different but equally profound. Vector units are now mandatory, yet the specification preserves microarchitectural freedom.

Implementers can choose lane width, pipeline depth, issue policy, and memory design. What changes is the performance center of gravity. Designers are no longer forced to rely exclusively on deeper speculation—larger branch predictors, wider reorder buffers, and increasingly complex recovery mechanisms—to remain competitive.

Instead, area and power can shift toward vector throughput and memory bandwidth. Simpler in-order cores with strong vector engines become viable for workloads that once demanded complex speculative machinery.

How Speculation Came to Dominate

Speculative execution did not appear overnight. It emerged gradually from techniques that loosened strict sequential execution. In 1967, Robert Tomasulo’s work on the IBM System/360 Model 91 introduced dynamic scheduling and register renaming, allowing instructions to execute out of order without violating program semantics. Around the same time, James Thornton’s scoreboard in the CDC 6600 kept pipelines active in the presence of hazards. These mechanisms did not speculate—but they removed structural barriers that once forced processors to stall. Once out-of-order execution became viable, speculation became irresistible.

In the late 1970s and early 1980s, James E. Smith formalized branch prediction, grounding speculation in probability. Memory ceased to be something processors simply waited on; it became something to anticipate. Data was fetched before it was confirmed to be needed. Caches evolved from locality optimizers into buffers that absorbed the turbulence of speculative execution.

Academia reinforced this direction. Instruction-level parallelism research at Stanford and Berkeley treated speculation as the path forward. John Hennessy framed speculation as a way to increase performance without abandoning sequential programming. David Patterson articulated the “memory wall,” encouraging deeper caching and hierarchical storage.

Industry followed. Intel’s Pentium Pro (P6) crystallized speculative out-of-order execution with deep cache hierarchies into the mainstream CPU template. IBM POWER and AMD Zen reinforced the same model: sustain ever larger volumes of in-flight speculative work by expanding buffering, bandwidth, and memory-level parallelism. Each generation scaled speculation rather than questioning it.

The Growing Costs

Over time, the costs became clearer. In his ISSCC 2014 plenary, Mark Horowitz argued that energy—not transistor density or raw logic speed—had become the primary constraint in computing. Arithmetic consumes only a few picojoules. Cache accesses cost an order of magnitude more. DRAM accesses cost two to three orders more. Data movement, not computation, dominates energy consumption.

Voltage scaling stalled and frequency scaling hit thermal limits. Simply adding cores no longer restored historical performance curves. Meanwhile, last-level caches and register files grew so large that they began consuming energy comparable to—and often exceeding—the cores they served. Modern memory hierarchies evolved not independently, but in symbiosis with speculative execution. They became the scaffolding required to sustain large volumes of in-flight, uncertain work. Speculation optimizes for the appearance of forward progress. The memory system exists to sustain that appearance—and to clean up when predictions fail.

At the DRAM level, Onur Mutlu showed how modern processors stress memory systems through interference, row conflicts, and unpredictable access patterns—many driven not by committed computation, but by speculation that would ultimately be discarded.

Seen in this light, modern CPU memory hierarchies did not evolve independently. They co-evolved with speculative, out-of-order execution, becoming the physical scaffolding required to sustain it. At its core, speculative execution optimizes for illusion—the illusion that a single sequential thread is progressing faster by guessing ahead.

Deterministic execution, by contrast, optimizes for what is known. It treats latency as schedulable rather than something to hide behind ever-increasing bandwidth. Where speculative architectures grow in complexity to compensate for uncertainty, deterministic architectures grow in predictability and sustained throughput.

The Path Not Taken

Speculation was not inevitable. Seymour Cray’s vector machines demonstrated that speculation was never the only path forward. They rejected it entirely, relying instead on predictable memory stride patterns, explicit vector lengths, and deterministic scheduling. Parallelism was exposed directly to the hardware, not inferred through guesswork, and latency was something to plan around rather than hide.

Their memory systems were engineered for stable, high‑throughput access rather than the guess‑and‑recover behavior that later speculative architectures required. In this sense, Cray’s approach aligns more closely with RVV’s structured, length‑agnostic model than with the speculative superscalar lineage that came to dominate general‑purpose CPUs.

Speculation won historically because it preserved sequential programming models and minimized software disruption. But that success created path dependence. Memory hierarchies were optimized for speculative throughput even as power consumption, verification complexity, and architectural opacity escalated.

RVA23 and the Nature of Modern Compute

AI, machine learning, and signal processing workloads are structured and inherently data-parallel. Their access patterns are often knowable rather than probabilistic. These are precisely the domains where explicit parallelism outperforms speculative guessing. By making RVV mandatory, RVA23 guarantees hardware support for such workloads. Structured parallelism moves from optional extension to architectural baseline. This does not eliminate speculation. It eliminates exclusivity.

Architectures such as deterministic, time-based scheduling approaches—like those explored at Simplex Micro—can now assume vector capability as a foundation. Rather than compensating for speculative inefficiency, they coordinate compute and memory explicitly. Performance scales through utilization and predictability rather than speculation depth. For vector and matrix workloads, this is less a revolution than a return to a lineage that speculation once displaced.

Structured Parallelism as First-Class Architecture

The significance of RVA23 goes beyond instruction encoding. Compiler infrastructures can assume vector support. Operating systems can schedule with vector resources in mind. Hardware implementations can optimize for vector efficiency without worrying whether the ecosystem will ignore it. For three decades, speculation received consistent architectural investment. Structured parallelism did not.

RVA23 changes that. It does not mandate abandoning speculation. It mandates architectural parity. Designers may deploy both where appropriate, but structured parallelism is no longer a second-class citizen. The false binary—scale through speculation or accept inferior performance—no longer applies.

Less to Speculate On

With RVA23, there is less uncertainty about vector capability, less doubt that deterministic approaches can achieve first-class performance, and less need to rely exclusively on speculation to scale. Today’s workloads are parallel by design, not by heroic compiler extraction from sequential code. For these workloads, speculation’s costs increasingly outweigh its benefits.

RVA23 does not end the era of speculation. It ends its monopoly. And that shift—more than any single technical feature—may be its most important contribution to processor architecture.

Also Read:

Reimagining Compute in the Age of Dispersed Intelligence

Two Open RISC-V Projects Chart Divergent Paths to High Performance

The Foundry Model Is Morphing — Again

The AI PC: A New Category Poised to Reignite the PC Market