Capability Hardware Enhanced RISC Instructions CHERI Alliance
by Daniel Nenni on 03-09-2026 at 8:00 am

CHERI Alliance Overview 2026

The CHERI Alliance is a non-profit organization dedicated to accelerating the global adoption of CHERI (Capability Hardware Enhanced RISC Instructions), a technology designed to improve computer security at the hardware level. Established as an independent entity, the Alliance brings together industry leaders, researchers, government bodies, and software communities to promote and support the implementation of CHERI across computing ecosystems. Its overall mission is to unite stakeholders and drive CHERI as an effective and widely adopted security standard.

One of the key motivations behind the creation of the CHERI Alliance is the growing need for stronger cybersecurity mechanisms. Modern systems are frequently vulnerable to memory-related security flaws, which account for a large portion of software vulnerabilities. CHERI technology addresses these issues by introducing capability-based memory protection directly in hardware, allowing systems to enforce fine-grained control over how memory is accessed and used. However, a technology specification alone is not enough to ensure adoption. The Alliance was therefore formed to build an ecosystem around CHERI that includes hardware vendors, software developers, researchers, and policymakers.
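
To make the class of problem concrete, here is a minimal, generic C fragment illustrating the kind of spatial memory-safety bug CHERI targets. It is a sketch for illustration only, not an example from the Alliance's materials: on a conventional CPU the out-of-bounds write is undefined behavior that typically corrupts neighboring data silently, whereas on CHERI hardware running a pure-capability build the pointer carries hardware-enforced bounds and the first out-of-bounds store traps.

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    char secret[16] = "fab recipe key";  /* pretend this is sensitive data */
    char buf[16];
    size_t n = 32;                       /* oversized length, e.g. from
                                            attacker-controlled input      */

    /* Classic spatial memory-safety bug: 32 bytes written into a 16-byte
     * buffer. On a conventional CPU this typically overwrites whatever
     * happens to sit next to 'buf' in memory. On CHERI hardware with a
     * pure-capability build, 'buf' is a capability whose bounds cover only
     * its 16 bytes, so the first out-of-bounds store raises a hardware
     * exception instead of corrupting memory. */
    memset(buf, 'A', n);

    printf("%s\n", secret);
    return 0;
}
```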

The Alliance operates as a non-profit Community Interest Company (CIC) based in Cambridge, United Kingdom. This structure ensures that the organization remains focused on public benefit rather than shareholder profit. The initiative is funded primarily by industry membership fees, providing financial sustainability while maintaining independence. Governance is handled by a governing board composed of elected representatives from member organizations along with directors of the CIC. Board members serve limited terms and make decisions regarding strategy, budgets, initiatives, and the approval of working groups.

A major role of the CHERI Alliance is technical coordination. Because CHERI can potentially be implemented across multiple instruction set architectures (ISAs), including architectures such as Arm, x86, MIPS, and RISC-V, alignment across the ecosystem is essential. The Alliance helps establish best practices, interoperability guidelines, and technical recommendations that ensure consistent and effective implementations. In addition, the organization supports the development and porting of software tools, operating systems, and open-source software so that developers can more easily build and run CHERI-enabled applications.

Promotion and education are also central activities of the Alliance. The organization produces technical and marketing materials that explain the benefits of CHERI and raises awareness among industry stakeholders, regulators, and the general technology community. By engaging with policymakers, the Alliance encourages regulatory frameworks that prioritize stronger security standards in digital products. The Alliance also manages the CHERI brand and communicates the value of capability-based security through media outreach, conferences, and industry events.

Community engagement is another key pillar of the Alliance’s work. The organization provides platforms for collaboration among members, including repositories for shared software projects, educational resources, and networking opportunities. Members can participate in committees and working groups focused on specific topics such as software porting, compliance standards, marketing, and technical alignment. These working groups allow experts from different sectors to collaborate, exchange knowledge, and contribute to the development of CHERI-related technologies and guidelines.

Membership in the CHERI Alliance offers several advantages. Organizations that join demonstrate leadership in cybersecurity and gain visibility within the emerging CHERI ecosystem. Members can influence standards and strategic directions by voting in governance elections or participating in working groups. They also benefit from networking opportunities with other industry leaders, researchers, and government representatives. In addition, members gain access to early technical developments, collaborative projects, and promotional opportunities through conferences and events organized by the Alliance.

The Alliance also organizes conferences and events aimed at promoting CHERI technologies and bringing the community together. These gatherings include technical talks, demonstrations, and workshops that showcase ongoing research and real-world implementations. Such events help foster collaboration between academia and industry while educating developers and decision-makers about the advantages of hardware-assisted memory safety.

Bottom line: The CHERI Alliance plays a crucial role in advancing a new generation of secure computing technologies. By coordinating technical development, promoting awareness, supporting open collaboration, and building an industry-wide ecosystem, the Alliance helps transform CHERI from a promising research concept into a practical and widely adopted security standard. As cybersecurity threats continue to evolve, initiatives like the CHERI Alliance will be increasingly important in shaping the future of safer computing systems.

CONTACT CHERI

Also Read:

CHERI: Hardware-Enforced Capability Architecture for Systematic Memory Safety

Securing RISC-V Third-Party IP: Enabling Comprehensive CWE-Based Assurance Across the Design Supply Chain

Caspia Technologies Unveils A Breakthrough in RTL Security Verification Paving the Way for Agentic Silicon Security


Operationalizing Secure Semiconductor Collaboration: Safely, Globally, and at Scale
by Kalar Rajendiran on 03-09-2026 at 6:00 am

Collaboration Framework

Semiconductor manufacturing is among the most complex industrial activities in existence. As device geometries shrink and systems become more interconnected, software has become as critical as process technology itself. Modern fabs depend on extensive automation, real-time analytics, and deep integration between tools, control systems, and external partners. This complexity dramatically expands the cyber-attack surface, making cybersecurity not an ad hoc or adjunct concern, but a core operational challenge.

A typical fab environment includes hundreds of thousands of software components sourced from hundreds of suppliers, continuously updated and interdependent. In such an environment, cybersecurity incidents are inevitable. A widely cited malware incident that halted production at a major fab several years ago marked a turning point for the industry, shifting cybersecurity from a theoretical IT concern to an operational imperative with direct revenue and safety implications.

Why Legacy Security Models Are Breaking Down

Historically, fabs relied on isolation, obscurity, and extreme caution around change. Systems were kept static, external connectivity was minimized, and any untested modification was treated as a production risk. While this approach reduced short-term disruption, it created brittle environments poorly suited to modern threat dynamics.

Today, effective cybersecurity requires continuous patching, operating system updates, software lifecycle management, and rapid response to emerging vulnerabilities. These requirements conflict directly with the reality that changes in a fab environment demand extensive validation. The result is long exposure windows at precisely the moment when attackers are becoming faster, more targeted, and more persistent, often aided by AI.

Security by obscurity no longer works. Semiconductor manufacturing is a high-value target, and fragmented VPNs, ad-hoc access paths, and inconsistent controls increase risk by reducing visibility and slowing response.

Standards Define Only the Foundation, Not the Full Solution

SEMI cybersecurity standards such as E187, E188, and E191 provide an essential baseline. They establish expectations for malware-free equipment, procedural cybersecurity practices, and automated software inventory reporting. Importantly, these standards define what must be protected while intentionally avoiding prescriptive architectural mandates.

This is by design. However, risk emerges when standards are treated as a ceiling rather than a floor. Compliance alone does not define a secure system. Architectures must be designed around zero trust principles such as least privilege, segmentation, continuous validation, and “assumed breach”. In practice, systems designed with these principles in mind are often better positioned to meet both current and future requirements.

Collaboration Is the New Security Frontier

The semiconductor industry now operates as a deeply collaborative global ecosystem. Technology development depends on constant interaction between fabs, solution providers, equipment suppliers, designers, OSATs, and advanced packaging partners. This level of collaboration cannot function in air-gapped environments, yet naive use of public internet connectivity introduces unacceptable risk.

Collaboration has therefore become the new security frontier. The challenge is no longer whether data and access should be shared, but rather how to enable collaboration without exposing proprietary process knowledge, yield signatures, or AI models that define competitive advantage.

Why Secure Collaboration Requires More Than VPNs

Point-to-point VPN models do not scale in this environment. As collaboration grows, VPN sprawl leads to operational complexity, inconsistent governance, and expanding attack surfaces. Each additional tunnel increases firewall exposure and administrative burden while reducing overall visibility.

What is required instead is a framework-based approach in which secure connectivity is centrally managed, segmented, and dynamically enabled. In such models, connectivity is simplified structurally while security is strengthened architecturally.

How secureWISE Operationalizes a Purpose-Built Framework

secureWISE, from PDF Solutions, illustrates how a governed connectivity framework can be purpose-built for semiconductor manufacturing rather than adapted from enterprise information technology (IT) or generic operational technology (OT) solutions.

Over more than 20 years of continuous deployment, secureWISE has been embedded into fab operations worldwide, supporting thousands of tools and global supplier ecosystems under real production constraints. That longevity reflects not only adoption, but sustained trust under conditions of high concurrency, strict uptime requirements, and evolving threat models.

secureWISE’s architectural strength lies in simplification as a security strategy. It replaces a dense mesh of OEM-specific VPNs with a single governed entry point per fab. By collapsing connectivity sprawl into a controlled framework, firewall routing complexity is reduced, custom network engineering for onboarding and offboarding endpoints is simplified, and the overall attack surface shrinks. What appears operationally simpler is, in fact, more secure.

Built around Zero Trust principles, secureWISE enforces identity-centric access with exceptional granularity. Access is not extended merely by network location but is constrained by role, action, context, and purpose.

Every session, file transfer, and operation is encrypted, monitored, and logged, providing continuous visibility and long-retention audit trails aligned with ISO 27001 controls and practical compliance with SEMI E187, E188, and E191.

Zero Trust as a Principle, Not a Product

Zero Trust is best understood as a mindset: always verify, assume a possible compromise, and monitor continuously. Effective implementations focus on containment, visibility, and control rather than the unrealistic goal of perfect prevention.
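
As a purely illustrative sketch (and not secureWISE’s actual interface), the default-deny idea behind least privilege can be reduced to a few lines of C: a request is granted only when an explicit grant matches the identity, role, action, and endpoint, and every decision, allow or deny, is logged for audit. All names below are hypothetical.

```c
#include <stdio.h>
#include <string.h>
#include <time.h>

/* Hypothetical, simplified illustration of a default-deny ("assume breach")
 * access check: nothing is reachable unless an explicit grant ties an
 * identity and role to one action on one endpoint, and every decision is
 * logged for audit. This is a generic sketch, not secureWISE's actual API. */

struct grant {
    const char *identity;  /* who (supplier engineer, tool vendor, ...) */
    const char *role;
    const char *action;    /* e.g. "read-logs", "push-patch"            */
    const char *endpoint;  /* e.g. "litho-tool-17"                      */
};

static const struct grant allow_list[] = {
    { "oem-support-01", "remote-maintenance", "read-logs",  "litho-tool-17" },
    { "oem-support-01", "remote-maintenance", "push-patch", "litho-tool-17" },
};

static int access_allowed(const char *id, const char *role,
                          const char *action, const char *endpoint)
{
    size_t n = sizeof(allow_list) / sizeof(allow_list[0]);
    int allowed = 0;
    for (size_t i = 0; i < n; i++) {
        const struct grant *g = &allow_list[i];
        if (strcmp(g->identity, id) == 0 && strcmp(g->role, role) == 0 &&
            strcmp(g->action, action) == 0 && strcmp(g->endpoint, endpoint) == 0) {
            allowed = 1;   /* explicit grant found: least privilege satisfied */
            break;
        }
    }
    /* Log every decision (allow or deny) for continuous monitoring/audit. */
    printf("%ld %s role=%s action=%s endpoint=%s -> %s\n",
           (long)time(NULL), id, role, action, endpoint,
           allowed ? "ALLOW" : "DENY");
    return allowed;
}

int main(void) {
    access_allowed("oem-support-01", "remote-maintenance", "read-logs", "litho-tool-17");
    access_allowed("oem-support-01", "remote-maintenance", "read-logs", "etch-tool-03");
    return 0;
}
```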

Private networks play a critical role in this approach. By avoiding public internet traffic exposure and explicitly defining allowed routes and participants, attack surfaces are reduced and anomalous behavior becomes easier to detect. Continuous monitoring of access patterns, behavior, and data flows is essential, not only to counter external threats but to manage insider and supply-chain risks.

Proven Architectures Build Long-Term Trust

Ultimately, security is measured by outcomes. Architectures that have enabled global collaboration for decades without material data breaches demonstrate the value of thoughtful design, layered controls, and disciplined execution. These approaches have proven that security can scale alongside the growing complexity of the semiconductor industry.

Standardized, governed connectivity at scale is not merely a security enhancement; it is a marker of operational maturity. It enables collaboration, visibility, and continuous assurance without disrupting production or slowing innovation.

Summary

The semiconductor industry no longer faces a binary choice between openness and security. Collaboration is already happening; the real decision is whether it continues through fragmented, opaque mechanisms or evolves into governed, auditable frameworks designed for scale.

Standards define the foundation. Compliance confirms intent. Long-term security emerges when architectures are purpose-built around real fab workflows, legacy constraints, and global ecosystems. Time-proven frameworks demonstrate that simplification and hardening are not opposing goals but rather mutually reinforcing goals. This is how secure semiconductor collaboration is operationalized safely, globally, and at scale, while staying ahead of an increasingly complex threat landscape.

Also Read:

Why PDF Solutions Is Positioning Itself at the Center of the Semiconductor Ecosystem

Manufacturing Is Strategy: Leadership Lessons from the Semiconductor Front Lines

PDF Solutions’ AI-Driven Collaboration & Smarter Decisions


Keynote: On-Package Chiplet Innovations with UCIe
by Daniel Nenni on 03-08-2026 at 4:00 pm

Chiplet Summit Keynote UCIe 2026

In the rapidly evolving landscape of semiconductor technology, the Universal Chiplet Interconnect Express (UCIe) emerges as a groundbreaking open standard designed to revolutionize on-package chiplet integrations. Presented by Dr. Debendra Das Sharma, Chair of the UCIe Consortium and Intel Senior Fellow, at the Chiplet Summit 2026, UCIe addresses the limitations of traditional monolithic SoC designs by enabling modular, high-performance chiplet architectures. As Moore’s Law slows, chiplets offer a path to overcome reticle size constraints, reduce development time, lower costs, and optimize yields through smaller dies and process-specific optimizations. UCIe positions System-in-Package as the new SoC, fostering an ecosystem where chiplets from diverse vendors can seamlessly interconnect, much like PCIe or USB at the board level.

The motivation for chiplets and UCIe stems from the need to scale innovation in an era where manufacturing processes lock certain IPs, and bespoke solutions demand flexibility. By mixing and matching dies with a standard interface, UCIe reduces portfolio costs, enables die reuse, and supports heterogeneous integrations across AI, HPC, cloud, edge, enterprise, 5G, automotive, and handheld segments. Founded in March 2022 and incorporated in June, the UCIe Consortium now boasts over 140 members, including industry giants like AMD, Arm, Intel, NVIDIA, Qualcomm, Samsung, and TSMC. It operates on guiding principles of openness, backward compatibility, optimized power-performance-cost metrics, and continuous innovation, drawing from decades of board-level standards experience.

UCIe’s evolution spans three generations of innovations, each building on the last for interoperability. UCIe 1.0, released in 2022, focuses on planar interconnects with a layered stack: physical layer for die-to-die I/O, adapter for reliable multi-protocol support (including PCIe, CXL, and streaming), and form factor definitions. It supports 2D (UCIe-S) for cost-effective longer distances and 2.5D (UCIe-A) for power-efficient high bandwidth density, enabling usages like I/O attachment, memory expansion, and accelerators. UCIe 1.1, from 2023, enhances automotive reliability with preventive monitoring and run-time testability via parity Flit injection, adds full-stack streaming protocol support, and optimizes costs for advanced packaging, all while maintaining backward compatibility.

UCIe 2.0 introduces vertical stacking with UCIe-3D, leveraging hybrid bonding for bump pitches under 1µm, delivering areal connectivity that boosts bandwidth density dramatically. This generation emphasizes low power through simple circuits, SoC-frequency operations, and cluster-level repair, achieving performance rivaling monolithic dies. It includes comprehensive testability, manageability, and debug infrastructure, using sideband channels for remote access and a management fabric based on MCTP standards. UCIe 3.0, released in 2025, doubles bandwidth to 48-64 GT/s, supports continuous transmission protocols for SoC-DSP connectivity, and adds power-saving features like runtime recalibration.

Key metrics underscore UCIe’s superiority. For UCIe-S (2D), bandwidth shoreline reaches 28-224 GB/s/mm (up to 1317 with 3.0), with power efficiency at 0.5-0.75 pJ/b. UCIe-A (2.5D) offers 278-370 GB/s/mm shoreline and 0.25-0.5 pJ/b, while UCIe-3D achieves up to 300,000 GB/s/mm² density and <0.01 pJ/b at 1µm pitches. Reliability targets near-zero FIT rates, with ESD protections scaling down. These KPIs ensure UCIe delivers high-bandwidth, low-latency, cost-effective interconnects across packages.
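
As a rough back-of-the-envelope check using the figures above (my arithmetic, not a number from the keynote), the best-case UCIe-A values imply a die-edge power of roughly

$$370\ \tfrac{\text{GB/s}}{\text{mm}} \times 8\ \tfrac{\text{bit}}{\text{B}} \times 0.25\ \tfrac{\text{pJ}}{\text{bit}} \approx 0.74\ \tfrac{\text{W}}{\text{mm}},$$

rising to about 1.5 W per millimeter of shoreline at the 0.5 pJ/b end of the range, which illustrates why the sub-0.01 pJ/b efficiency of UCIe-3D matters as bandwidth density climbs.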

Demonstrations highlight UCIe’s maturity: The 2023 Synopsys-Intel interoperability test showed successful linkup and data traffic, while Ayar Labs’ 2025 OFC demo featured an 8 Tbps optical chiplet. Adoption is surging, with Synopsys trends indicating most HPC/AI designs are multi-die, projecting a $411B chiplet market by 2035 at 15.7% CAGR.

Looking ahead, UCIe enables rack/pod-level composability via optical retimers carrying CXL protocols, extending on-package innovations off-package. The consortium invites contributor and adopter memberships to drive future enhancements. In conclusion, UCIe is poised to democratize chiplet ecosystems, accelerating innovation and efficiency in computing. Its open, evolving framework ensures long-term investment protection, marking a pivotal shift in semiconductor design.

Also Read:

Reducing Risk Early: Multi-Die Design Feasibility Exploration

Building the Interconnect Foundation: Bump and TSV Planning for Multi-Die Systems

Designing the Future: AI-Driven Multi-Die Innovation in the Era of Agentic Engineering


CEO Interview with Jerome Paye of TAU Systems
by Daniel Nenni on 03-08-2026 at 2:00 pm

Jerome Paye has served as CEO of TAU Systems since late 2025, having joined the company shortly after its founding in 2022 as Chief Operating Officer. In that time, he has helped build TAU Systems into a high-performing team now focused on delivering the ultimate light source for semiconductor lithography.

Paye brings more than 20 years of industry leadership to the role. Most recently, as COO of Achates Power, he managed engine development programs with leading global manufacturers and oversaw all technical operations. Before that, he held senior roles at Renault SAS, leading early-stage electric vehicle development and directing value engineering for the company’s largest vehicle line, with responsibilities spanning global partnerships including Nissan. His career also includes multiple positions at Ford Motor Company, where he served as program management leader for the Mustang.

Earlier in his career, Paye conducted research in ultrashort pulse lasers and femtosecond laser systems, and contributed to the early design of France’s Laser Megajoule facility, work that directly informs TAU Systems’ mission today.

Tell us about your company?

TAU Systems is developing the next generation of light sources for semiconductor manufacturing through compact particle accelerators and X-ray free-electron lasers. Our laser wakefield acceleration technology creates electron beams with energies equivalent to conventional accelerators spanning hundreds of meters, but we achieve this in just centimeters. We then send these high-energy electrons through magnetic undulators to produce tuneable X-ray lasers with wavelengths significantly shorter than current EUV systems.

What makes TAU unique is our two-part strategy to cross the lab-to-fab chasm. We’re demonstrating economic viability today through radiation effects testing for the space industry, opening our Carlsbad, California, facility later this year. We will build manufacturing capacity through electron-based radiotherapy systems for cancer treatment. And we’re investing heavily in lithography R&D, using revenues from near-term activities to support long-term development. This approach refines our core technology with real customers while generating revenue.

What problems are you solving?

Current EUV lithography machines cost around $400 million each, weigh over 300,000 pounds, and are about as evolved as they can get with current tech. Only a few percent of the light reaches the wafer, dramatically limiting throughput. At the 13.5-nanometer EUV wavelength, chipmakers must use multi-patterning to create smaller features, which adds time, decreases throughput, and increases costs. ASML’s High-NA approach of increasing numerical aperture is reaching fundamental physical and economic limits.

We’re taking the alternative path: reducing the wavelength itself. Our X-ray lasers operate at tuneable wavelengths which will be optimized for maximum transmission. Combined with wavelength-matched reflective optics offering higher reflectivity than current EUV mirrors, our technology delivers hundreds of watts of X-ray emission per compact machine, matching or exceeding ASML’s power but at shorter wavelengths. The result is faster production, reduced multi-patterning, and dramatically improved energy efficiency.

What application areas are your strongest?

Near-term, we are proving the technology by applying it to radiation effects testing for space. Currently, only a handful of global facilities provide testing – totaling just a few thousand hours annually against an estimated 30,000 hours of demand. Our TAU Labs facility will provide 2,000 to 4,000 hours annually per accelerator unit, dramatically expanding critical testing capacity. This operational beachhead generates revenue while validating our technology.

What keeps your customers up at night?

Space customers face testing bottlenecks. Limited capacity creates project delays and introduces risks as satellite constellations and commercial space ventures scale rapidly.

Semiconductor manufacturers confront more fundamental concerns. The extreme appetite for AI has created massive demand for advanced chips, but current EUV technology cannot meet future requirements. Each node becomes exponentially more expensive with just marginal improvements. The industry knows atomic-level control will eventually require X-rays, but questions when viable solutions will emerge. They’re concerned about capital efficiency, throughput, and economic sustainability.

What does the competitive landscape look like and how do you differentiate?

In radiation testing, we compete against national laboratories and established facilities of which there are but a handful. Our differentiation: dramatically expanded capacity through compact accelerator systems deployable at commercial scale.

For lithography, ASML dominates EUV with a virtual monopoly. They’re pursuing higher numerical aperture optics, but this faces fundamental physical limits. We’re taking the alternative approach physics demands: shorter wavelengths through X-ray lasers. ASML machines are expensive and require extraordinary infrastructure. We’re developing systems housed in existing fab spaces with dramatically improved efficiency.

What truly differentiates TAU is our partnership approach. We’re collaborating with global leaders, including The University of Texas at Austin, Lawrence Berkeley National Laboratory and the Extreme Light Infrastructure Nuclear Physics facility, combining their world-leading expertise with our commercial focus.

What new features/technology are you working on?

We recently demonstrated intense coherent light pulses from a free-electron laser driven by laser-plasma acceleration in collaboration with Berkeley Lab. Published in Physical Review Letters, this work confirms compact X-ray FELs are technically viable for advanced lithography. Our accelerator delivers acceleration gradients 2,000 times stronger than conventional systems.
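
To put that gradient figure in perspective, a quick back-of-the-envelope estimate (assuming a representative conventional RF gradient of about 50 MV/m, an assumed value rather than a figure from the interview):

$$E_{\text{wakefield}} \approx 2000 \times 50\ \text{MV/m} = 100\ \text{GV/m},\qquad L_{1\,\text{GeV}} \approx \frac{1\ \text{GeV}}{100\ \text{GV/m}} = 1\ \text{cm},$$

versus roughly 20 m of conventional accelerating structure for the same beam energy, which is the compression from hundreds of meters to centimeters described earlier.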

We’re focused on increasing average power to hundreds of watts per machine, wavelength optimization and tunability for maximum optical transmission, and system integration through our radiation testing facility as a technology proving ground. We’re also developing Very-high Energy Electron therapy systems, which share fundamental technology with our lithography platform.

The overarching goal is to demonstrate that compact laser-driven accelerators can deliver the brightness, stability, and wavelength control required for next-generation semiconductor manufacturing while remaining economically viable.

How do customers normally engage with your company?

Our customers come to us with a problem they’re looking to solve, as academics pushing the boundaries of research, or as partners who wish to leverage our technology and expertise.

Our development and application facility, TAU Labs, is located in Carlsbad, California, and will officially open later in 2026, offering single-event effects radiation testing to ensure spacecraft operate as intended in the future.

CONTACT TAU SYSTEMS

Also Read:

CEO Interview with Echo Yang of CSCERAMIC

CEO Interview with Juniyali Nauriyal of Photonect

CEO Interview with Aftkhar Aslam of yieldWerx


Things From Intel 10K That Make You Go …. Hmmmm
by Mark Webb on 03-08-2026 at 8:00 am

MKW Ventures Semiconductors

INTEL FORM 10-K

☑ ANNUAL REPORT PURSUANT TO SECTION 13 OR 15(d) OF THE SECURITIES EXCHANGE ACT OF 1934
For the fiscal year ended December 27, 2025.

1) Intel is constrained on manufacturing. Not by TSMC. But by IFS and mainly by Intel 7, a node from 2021. Normally constraints are good, it means you are running efficiently with lots of demand. But Intel is not growing. CCG (client)+DCAI (Datacenter) revenue is down. Intel is constrained by older technologies, not newer ones, during a time of non-growth…

2) Intel Margins are low (35% Gross Margins). Intel Margins were once the envy of all hardware and chip companies. GAAP operating margins are negative. Non-GAAP is lower than all memory companies, TSMC, NVIDIA, Broadcom, Qualcomm, AMD, etc, etc. This is a year after taking a one-time write-down of old assets and a change to their depreciation schedule that saves them billions per year right now.

3) If Intel Foundry found a magical external customer that instantly gave it $7B per year in revenue (equivalent to all of GlobalFoundries and 25 times the external revenue it has today), at absolutely no cost at all to Intel, IFS would still lose money. Let that sink in.

4) Intel took write-off charges in 2025 on 18A for what I call LCM: lower of cost or market. This is where you cannot claim inventory as a WIP asset because the cost is higher than the value of the chip. Margins are below zero and/or Intel had to throw away a lot of 18A production.
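
For readers unfamiliar with the mechanics, a purely illustrative example with made-up numbers (not Intel's): if a lot of 18A work in progress cost $10,000 to fabricate but the chips on it can realistically be sold for only $6,000, the lot must be carried at $6,000 and the difference hits cost of sales immediately:

$$\text{write-down} = \max\!\left(0,\ \text{cost} - \text{net realizable value}\right) = \max(0,\ \$10{,}000 - \$6{,}000) = \$4{,}000.$$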

5) Intel’s newest fabs 34 and 52 require that partner companies get 50% of the profit from them. Through two very different contracts, Apollo and Brookfield get half the profits IF the fabs are successful, and get payment from Intel if they do not hit milestones.

Summary

These are all quite scary, and the stock was shaken. But in reality, they simplify the challenges in my mind.

Intel manufacturing does not seem to have line of sight to breaking even. If 18A and 14A ramp, the expenses are more headwinds, more losses. If they do not, the current margins are bad for another 4-5 years. Panther Lake is widely recognized as a very good performance part, the first leadership product in 5+ years.

What if Intel followed AMD, Nvidia, Apple, Broadcom, Qualcomm, etc and focused on where it can be successful? How would the numbers be different?

Call us for more information and details

Mark Webb

www.mkwventures.com

Industry Expert with 25+ years experience in semiconductor and system engineering and manufacturing. Extensive experience and knowledge in NAND and SSD manufacturing, development, system testing with Industry leaders. Expertise in SSD/NAND/DRAM business models, Cost/pricing models, competitive analysis and supply chain. Leadership and management experience in Contract Manufacturing, ODM, OSAT, and Foundry Operations. Experience in Device, Product, and Process Integration Engineering on Logic, SOC and all Memory technologies.

Also Read:

Global 2nm Supply Crunch: TSMC Leads as Intel 18A, Samsung, and Rapidus Race to Compete

TSMC vs Intel Foundry vs Samsung Foundry 2026

Intel to Compete with Broadcom and Marvell in the Lucrative ASIC Business

 


Podcast EP334: The Unique Benefits of LightSolver’s Laser Processing Unit Technology with Dr. Chene Tradonsky
by Daniel Nenni on 03-06-2026 at 10:00 am

Daniel is joined by Dr. Chene Tradonsky, a physicist and the CTO and co-founder of LightSolver, where he leads the development of a proprietary physics-based computing system built on coupled laser dynamics to accelerate compute-heavy simulations and other computationally demanding workloads. Before moving into physics, he started in electrical engineering, a combination that helps him bridge advanced computing and complex physical systems.

Dan explores LightSolver’s unique and powerful Laser Processing Unit (LPU) with Chene, who explains the technology’s benefits for a broad range of applications. Chene describes how the technology can accelerate the solution of partial differential equations (PDEs), which form the basis for many problems in engineering and science. Chene explains how LightSolver technology can be added to current high-performance computing systems as a kind of co-processor to substantially accelerate results.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.

CONTACT LIGHTSOLVER


Global 2nm Supply Crunch: TSMC Leads as Intel 18A, Samsung, and Rapidus Race to Compete
by Daniel Nenni on 03-06-2026 at 6:00 am

TSMC 2NM Intel 18A Samsung 2nm Rapidus 2nm

The semiconductor industry is in the midst of a structural supply challenge that’s tightly coupled to exploding demand for advanced chips, especially those used in AI, HPC, and next-generation mobile and consumer devices. At the center of this vortex is the 2nm class of manufacturing technology, representing one of the most complex and expensive transitions in semiconductor history due to its reliance on nanosheet or GAA transistor architectures and extremely precise lithography tools.

TSMC and the 2 nm Capacity Crunch

TSMC’s N2 process node officially entered volume production in late 2025, and early estimates of yield and ramp have been strong enough that the company is aggressively increasing capacity. N2 promises up to 15% performance gains or substantial power reductions versus previous nodes, making it extremely attractive for next-generation AI accelerators and flagship mobile chips.

The demand has been remarkable! Reports from the trenches indicate that much of TSMC’s N2 capacity is effectively sold out through 2026, with major customers like Apple, Nvidia, Qualcomm, and AMD reportedly locking in large shares of the initial output. This is partly because modern AI accelerators require much more wafer real estate per chip than traditional mobile processors, which exacerbates capacity constraints.

To meet this demand, TSMC has outlined plans to expand production aggressively across multiple fabs, including Hsinchu Baoshan and Kaohsiung in Taiwan and other international sites, with targets that could see monthly wafer starts reach well into six figures by 2026–2028. TSMC’s CAPEX is also a tell for things to come: $29.8 billion in 2024, a 37% increase to $40.9 billion in 2025, and a record $52-56 billion planned for 2026. What this tells me is that TSMC will again dominate 2nm as it did 3nm, without question.

Intel’s 18A Process: A Competitive Alternative But Not a Complete Buffer

Intel’s 18A node is part of its post-Intel 7 roadmap and is roughly classed in the same generational tier as 2nm class processes. It introduces both RibbonFET (a version of GAA) and PowerVia backside power delivery, which are intended to boost performance and power efficiency. Intel was first to production quality GAA and first to BSPD, semiconductor innovation at its finest.

Intel started production of 18A in 2025 targeting its own processors such as Panther Lake, but its use as a foundry alternative for external customers remains limited by comparison. While 18A yields have improved as of mid-2025, they are generally considered behind TSMC’s N2 yields, and Intel’s own foundry ecosystem is still small relative to TSMC’s global customer base.

Intel’s strategy is two-pronged: support its internal product leadership and expand foundry services, but it has historically struggled to win significant external foundry demand, a key reason why it has not yet materially alleviated the broader industry’s 2nm-class capacity squeeze. With Lip-Bu Tan as CEO, that has changed, of course. The semiconductor Made in America brand has never been stronger, and Intel will sign wafer agreements for 18A and 14A from the top semiconductor companies, without a doubt.

Samsung’s 2 nm: Efforts Competitive But Challenged

Samsung was one of the first to deploy GAA technology on a smaller scale starting with its 3nm node, and has planned 2nm production (often referred to as SF2) as an extension of this progress. It has invested heavily in facilities such as the Taylor, Texas fab with the goal of hitting mass production timelines in 2026.

Despite this, Samsung has faced challenges around yield stability and customer adoption. While it offers very competitive pricing, the combination of yield issues and weak customer mindshare means that Samsung is not a viable alternative to TSMC for high-volume 2nm orders. Trust is the foundation of the semiconductor industry, and without predictable yield there can be no trust.

Rapidus: A New Entrant Trying to Carve Out Niche 2 nm Capacity

One of the most intriguing developments in recent years has been the emergence of Rapidus, a Japan-based foundry backed by government and major corporate investors. Rapidus aims to begin 2 nm class chip production around 2027, with plans to ramp monthly wafer production significantly within a year of launch.

From what I have learned about Rapidus over the last year, there is little doubt in my mind that they will succeed. In fact, Rapidus just raised another $1.7B for a total of $11.3B in combined government subsidies and private investment. While this is a significant sum, it represents roughly 35% of the $32 billion the company estimates it will need for full-scale mass production of 2-nanometer chips by 2027, so stay tuned.

Unlike the giants, Rapidus is not attempting to directly compete on sheer volume, but rather offering “short turnaround times” and tailored services, which could appeal to custom chip designers, domestic Japanese technology firms, and organizations needing smaller-lot, highly customized silicon.

Though still years behind TSMC in mass production timing and total capacity, Rapidus represents a strategic move by Japan to regain presence in advanced semiconductor manufacturing and create additional supply chain options in a market heavily concentrated among a few players.

The Broader Context: A Global Capacity Tightrope

The combined reality of TSMC’s dominant position, Intel’s internal and emerging foundry efforts, Samsung’s technically capable but constrained 2nm push, and the Rapidus niche entry creates a semiconductor landscape in which demand continues to outrun supply at the highest performance nodes. Even as worldwide fab capacity grows, the pace of AI adoption and the strategic value companies place on leading-edge silicon means securing wafer slots early has become mission-critical for tech giants and a formidable bottleneck for others.

Bottom line: The 2nm capacity crunch isn’t a short-term supply hiccup; it is a fundamental outcome of how advanced computing, AI, and custom silicon strategies are reshaping the global semiconductor ecosystem for years to come. The strength of the foundry business has always been based on multi-sourcing, and we need to get that supply chain strength back, absolutely.

Also Read:

TSMC Process Simplification for Advanced Nodes

TSMC and Cadence Strengthen Partnership to Enable Next-Generation AI and HPC Silicon

TSMC vs Intel Foundry vs Samsung Foundry 2026


Reducing Risk Early: Multi-Die Design Feasibility Exploration
by Kalar Rajendiran on 03-05-2026 at 10:00 am

Feasibility Thermal Map

The semiconductor industry is entering a new era in system design. As traditional monolithic scaling approaches its economic and physical limits, multi-die architectures are emerging as a primary pathway for delivering continued improvements in performance, power efficiency, and integration density. By distributing system functionality across multiple dies or chiplets and integrating them within advanced packaging technologies, designers can create highly optimized heterogeneous systems tailored for demanding applications such as artificial intelligence, high-performance computing, and advanced automotive platforms.

However, the flexibility offered by multi-die architectures introduces a significant increase in design complexity. Unlike monolithic devices, where most interactions occur within a single silicon environment, multi-die systems introduce new dependencies across packaging technologies, power delivery networks, and interconnect strategies. Decisions made during the earliest stages of system architecture can have profound consequences on manufacturability, reliability, and performance. SemiWiki will be publishing a series of articles based on whitepapers released by Synopsys on the topic of multi-die systems.

This article, the first in a three-part series, examines the critical role of feasibility exploration in evaluating multi-die design architectures. Subsequent articles will explore how bump and Through-Silicon-Via (TSV) planning, followed by automated high-speed routing methodologies, translate architectural concepts into successful implementation.

The Growing Complexity of Multi-Die Design

Multi-die system integration requires designers to consider a broader and more interconnected set of constraints than ever before. System architects must simultaneously evaluate die placement and orientation, packaging configurations, interconnect density, and power delivery topology, while also accounting for thermal management strategies. Each design decision introduces ripple effects across the system, influencing both electrical and physical behavior.

Three interrelated performance metrics dominate early multi-die architectural evaluation. Power integrity, often measured through IR drop, directly affects functional reliability and timing margins. Electromigration introduces long-term reliability risks that can compromise device lifetime. Thermal performance determines whether the system can operate within safe temperature limits under peak workloads. In multi-die environments, these challenges become significantly more difficult to model accurately due to the interaction of multiple materials, stacked geometries, and distributed power and signal paths.

Why Early Feasibility Exploration Is Essential

Traditional design methodologies frequently relied on detailed physical implementation and signoff-level analysis to validate power, thermal, and reliability behavior. While highly accurate, these approaches are impractical during early architectural development because they require complete design data and extensive runtime. Feasibility exploration addresses this limitation by enabling designers to analyze architectural options using simplified but representative models.

Through feasibility workflows, designers can evaluate alternative floorplans, connectivity structures, and power distribution strategies without requiring full process design kits or finalized physical layouts. This abstraction dramatically accelerates iteration cycles, allowing teams to explore a broader design space and identify architectural weaknesses early in development. Early identification of issues such as excessive IR drop or thermal hotspots helps prevent costly redesign efforts later in the project lifecycle.

Feasibility vs. Prototyping: Understanding the Difference

Multi-die development typically progresses through two closely related but distinct exploration stages. The feasibility stage focuses on rapid architectural evaluation using abstract models and simplified floorplans. During this phase, designers assess whether proposed multi-die configurations can meet performance, power, and thermal goals without committing to detailed implementation.

Prototyping follows feasibility exploration and introduces technology-specific data and realistic physical structures. At this stage, interconnect models, packaging details, and implementation constraints become more accurate, providing a bridge between architectural exploration and production-ready design. Feasibility exploration therefore serves as the foundation upon which prototyping and detailed implementation are built.

Modeling Techniques That Enable Rapid Exploration

Successful feasibility exploration depends on modeling techniques that balance speed with predictive accuracy. One widely used approach involves pixel-based modeling of the power delivery network. By dividing the power distribution network (PDN) into regularly spaced elements with defined resistance characteristics across multiple dimensions, designers can efficiently evaluate voltage distribution and electromigration behavior across dies and packaging structures.
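
As a purely illustrative sketch of the idea (not Synopsys' implementation), the pixel abstraction reduces static IR-drop estimation to solving Kirchhoff's current law on a small resistive grid, which the following C program does with Gauss-Seidel relaxation; all numbers are placeholder values.

```c
#include <stdio.h>

/* Minimal sketch of a pixel-based PDN feasibility model (illustrative only,
 * not Synopsys' implementation). One die's power grid is abstracted as an
 * N x N array of pixels joined by a uniform conductance; a few pixels sit
 * under supply bumps and are pinned to VDD, and every pixel draws a uniform
 * load current. Solving Kirchhoff's current law by Gauss-Seidel relaxation
 * yields a first-order static IR-drop map. */

#define N 32

int main(void)
{
    double v[N][N];
    int    is_bump[N][N];

    const double vdd    = 0.75;   /* supply voltage, V                   */
    const double g      = 10.0;   /* pixel-to-pixel conductance, siemens */
    const double i_load = 0.002;  /* load current drawn per pixel, A     */

    /* Place supply bumps on a coarse 8-pixel pitch; start every node at VDD. */
    for (int x = 0; x < N; x++) {
        for (int y = 0; y < N; y++) {
            v[x][y]       = vdd;
            is_bump[x][y] = (x % 8 == 0) && (y % 8 == 0);
        }
    }

    /* Gauss-Seidel: at each non-bump pixel, KCL gives
     *   v = (g * sum(neighbor voltages) - i_load) / (g * #neighbors). */
    for (int iter = 0; iter < 2000; iter++) {
        for (int x = 0; x < N; x++) {
            for (int y = 0; y < N; y++) {
                if (is_bump[x][y]) continue;
                double sum = 0.0;
                int    nb  = 0;
                if (x > 0)     { sum += v[x - 1][y]; nb++; }
                if (x < N - 1) { sum += v[x + 1][y]; nb++; }
                if (y > 0)     { sum += v[x][y - 1]; nb++; }
                if (y < N - 1) { sum += v[x][y + 1]; nb++; }
                v[x][y] = (g * sum - i_load) / (g * nb);
            }
        }
    }

    /* Report the worst-case static IR drop across the die. */
    double worst = 0.0;
    for (int x = 0; x < N; x++)
        for (int y = 0; y < N; y++)
            if (vdd - v[x][y] > worst)
                worst = vdd - v[x][y];

    printf("worst static IR drop: %.1f mV\n", worst * 1000.0);
    return 0;
}
```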

Simplified bump and TSV models also play a crucial role in early exploration. Rather than relying on foundry-specific implementations, designers can define key interconnect characteristics such as size, placement, and resistance. These models provide sufficient accuracy to evaluate connectivity density and power distribution behavior while maintaining rapid simulation turnaround times.

Thermal modeling further complements feasibility exploration by enabling designers to define material properties, thermal conductivity boundaries, and cooling strategies. These models allow engineers to quickly assess temperature distribution across complex multi-die stacks and packaging configurations.

Visualization: Turning Analysis into Insight

One of the most valuable aspects of feasibility exploration is the ability to visualize analysis results. Graphical heat maps and power integrity plots provide immediate insight into system behavior, allowing designers to identify high-risk regions and refine architectural choices quickly. Visualization enhances collaboration across design teams and supports data-driven decision-making throughout early development stages.

Synopsys 3DIC Compiler Platform-Driven Feasibility Exploration

Unified multi-die design platforms such as the Synopsys 3DIC Compiler platform have significantly improved the efficiency of feasibility exploration by integrating modeling, analysis, and visualization capabilities within a single environment.

The Synopsys 3DIC Compiler platform offers fast, flexible, integrated feasibility and prototyping capabilities to quickly and efficiently create, visualize, and analyze prototype designs. Because its power, PDN, and thermal models are simple and UI-based, no foundry process design kit (PDK) or technology data is required to create the prototype designs. Built-in IR drop, electromigration, and thermal analysis technologies designed for fast feasibility exploration allow designers to iterate on possible architectures quickly and easily.

Learn more by accessing the whitepaper here.

Setting the Stage for Interconnect Planning

Feasibility exploration establishes the architectural blueprint for multi-die implementation by validating die placement, power delivery strategies, and connectivity requirements. Once these architectural parameters are defined, designers must translate them into detailed interconnect structures. The next article in this series will examine how bump and TSV planning provides the physical foundation for scalable multi-die connectivity and prepares designs for implementation and routing.

Also Read:

How Customized Foundation IP Is Redefining Power Efficiency and Semiconductor ROI

Designing the Future: AI-Driven Multi-Die Innovation in the Era of Agentic Engineering

Hardware is the Center of the Universe (Again)


From Satellites to 5G: Ceva’s PentaG-NTN™ Lowers Barriers for Terminal Innovators
by Daniel Nenni on 03-05-2026 at 8:00 am

Ceva, Inc., a leading provider of silicon and software IP for the Smart Edge, has unveiled PentaG-NTN™, its groundbreaking 5G Advanced modem IP subsystem tailored for satellite user terminals in Low Earth Orbit (LEO) and Medium Earth Orbit (MEO) constellations. Announced at Mobile World Congress 2026 in Barcelona on March 3, 2026, this innovation marks a pivotal step in merging satellite and terrestrial cellular networks, enabling faster, more reliable global connectivity.

PentaG-NTN™ is the inaugural offering from Ceva’s third-generation PentaG platform, a robust, production-ready 5G-Advanced baseband architecture. This platform unifies baseband hardware accelerators, Layer 1 (L1) PHY software, and extensive verification tools into a cohesive, reusable subsystem. It serves as a versatile foundation for derivatives like PentaG-Edge™ for terrestrial 5G-Advanced edge and IoT uses. Compared to prior generations, it offers major gains in performance, scalability, and integration efficiency, supporting both satellite and ground-based deployments with up to 400 MHz bandwidth per component carrier across FR1 (sub-7 GHz) and FR2 (mmWave) bands.

The surge in satellite constellations driven by commercial ventures and government efforts for broader coverage and strategic sovereignty has intensified the push to embed 5G standards into space-based systems. Yet, 5G-Non-Terrestrial Networks (NTN) impose cellular-level complexity on an industry long focused on spacecraft, payloads, and orbital management rather than modem engineering. Satellite-native players, experts in constellation design and operations, now face the steep learning curve of 3GPP-compliant cellular basebands.

Ceva addresses this gap with PentaG-NTN™, a fully integrated, plug-and-play modem subsystem. It eliminates much of the complexity, shortens development timelines, and minimizes risks for teams without deep cellular expertise. As Jake Saunders, Vice President at ABI Research, observes, this convergence dismantles traditional satellite silos, harnessing the maturity, scale, and cost advantages of cellular ecosystems. Solutions that ease modem integration while preserving differentiation are essential for scaling 5G-NTN from prototypes to widespread commercial use.

PentaG-NTN™ is engineered for LEO’s demanding conditions, including severe Doppler shifts, timing offsets, and extended propagation delays. Key features include optimized Doppler compensation, frequency-offset mitigation, latency-tuned L1 processing for LEO/MEO channels, full 3GPP Release-18 compliance (with a roadmap to Release-19), Ka/Ku-band support, narrowband proprietary waveforms, and scalable throughput from 10 Mbps to 2 Gbps with 256-QAM modulation.
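
To give a sense of scale for the Doppler problem, an illustrative calculation with assumed values (not Ceva's specification): a LEO satellite moving at roughly 7.5 km/s relative to a terminal, seen on a 20 GHz Ka-band carrier, produces a worst-case shift of about

$$\Delta f \approx f\,\frac{v}{c} = 20\ \text{GHz} \times \frac{7.5\ \text{km/s}}{3\times 10^{5}\ \text{km/s}} = 500\ \text{kHz},$$

and that shift sweeps through its full range in minutes as the satellite passes overhead, which is why dedicated Doppler and frequency-offset compensation is built into the L1 processing.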

Crucially, the subsystem balances standards adherence with innovation freedom. It pairs hardware acceleration with programmable DSPs and flexible software interfaces, allowing licensees to layer proprietary algorithms, waveform enhancements, or application-specific optimizations atop the 3GPP foundation. This empowers satellite innovators to stand out in competitive markets.

Delivered as a complete modem subsystem, not mere IP blocks, PentaG-NTN™ includes optimized baseband hardware, L1 PHY software, and a thorough verification suite featuring a Virtual Platform Simulator (VPS), system-level simulations, test benches, and FPGA emulation. This enables early software validation and system testing pre-silicon, boosting predictability and speeding market entry.

Ceva estimates the third-generation PentaG platform slashes modem silicon development time by about 65% and reduces program costs by tens of millions compared to in-house builds requiring specialized R&D teams.

Guy Keshet, Vice President and General Manager of Ceva’s Mobile Broadband Business Unit, emphasizes: “5G-NTN brings cellular standards to the satellite domain, but satellite innovators shouldn’t need to become modem experts. PentaG-NTN™ lowers entry barriers with a proven, compliant foundation while enabling true differentiation.”

Bottom line: This launch underscores Ceva’s role in powering the Smart Edge, where connectivity, sensing, and inference converge. With over 20 billion devices shipped and partnerships spanning wearables, IoT, vehicles, and infrastructure, Ceva’s IP, encompassing 5G, Bluetooth, Wi-Fi, UWB, and Edge AI, fuels intelligent, always-connected products. As Physical AI emerges, PentaG-NTN™ positions satellite connectivity as a seamless 5G extension, promising ubiquitous coverage and transformative applications worldwide.

CONTACT CEVA
Also Read:

Ceva IP: Powering the Era of Physical AI

Ceva Wi-Fi 6 and Bluetooth IPs Power Renesas’ First Combo MCUs for IoT and Connected Home

Ceva-XC21 Crowned “Best IP/Processor of the Year”


Siemens Reveals Agentic Questa
by Bernard Murphy on 03-05-2026 at 6:00 am

Questa Agentic

There’s no denying that verification now leads the field in agentic AI announcements, accelerating the trend around this significant contribution to design automation. Siemens have just announced their Questa One Agentic Toolkit, their response to this trend, building on the core Questa One platform. Questa One provides integrated simulation, static verification, and embedded AI driving common management around those tools, plus VIP in support of these functions. The Agentic Toolkit adds further automated creation and orchestration of verification tasks to provide end-to-end solutions. Boasting endorsements from NVIDIA and MediaTek among others, this update is worth a look.

Strategy and foundation

Announcements in this space are inevitably similar, so what makes the Siemens approach different? They are leading with a foundation of openness and organic development. Start with openness. The Agentic Toolkit provides MCP interfaces to underlying Questa One functions with the ability to connect to any agentic framework. Their own agentic app, Fuse, naturally makes full use of these interfaces and is the “preferred” option, but it does not prevent other frameworks from connecting.

Sidebar on this point: tool providers have a natural advantage in knowing how best to use their own tools. That expertise can be captured in vectorized knowledge embedded in agentic apps. But how well does this work in a mix-and-match flow? More should be debated here between design teams and verification technology providers.

The organic differentiation is also interesting. Siemens have built their Fuse EDA AI system in-house. Supported workflows leverage the NVIDIA Llama-Nemotron reasoning framework and NVIDIA NIM inference microservices, enabling the platform to understand verification state in real time and maintain comprehensive awareness of the contextual relationships between designs, testbenches, test plans, and specifications. No doubt thanks to these foundation frameworks, this system apparently also works with mainstream AI coding applications, including GitHub Copilot, Claude Code, Cursor, and Cline, and can be used in command-line mode (for scripting) or through IDEs such as VS Code.

All this works with a multi-model EDA data lake, capturing baseline manuals and user documentation. An LLM exploits this information in assistants, reasoners, etc., to direct run objectives and orchestration.

They also add that building on the existing connected ecosystem between Questa One, Tessent™ software for DFT and the Veloce™ CS hardware-assisted verification and validation system, the Agentic Toolkit supports a broad range of design and verification objectives.

This system ships today with several pre-built and tested agents for customers who need a quick start in pilot trials: an RTL code agent, a Lint agent, a CDC agent, a verification planning agent and a debug agent. Expanding a little on the values they describe, the Verification Agent organizes tasks, coverage goals, and requirements, offering AI-driven suggestions for efficient resource allocation, from which engineers gain clarity, adaptability, and accelerated closure, ensuring comprehensive verification and faster project success.

The Debug Agent accelerates root-cause analysis by pinpointing issues in RTL and testbenches with AI-driven insights. It offers targeted suggestions, automates error tracing, and guides engineers to efficient resolution. With smart diagnostics, it reduces debug cycles and boosts productivity, helping teams deliver robust designs faster.

What about guardrails and trust?

This is a question I am now asking all agentic verification solution providers. The upside of hands-free automation is huge; the downside of unsupervised AI could be even more dramatic. I asked Abhi Kolpekwar (Sr. VP and GM at Siemens Digital Industries) for his views on this challenging balance.

Abhi agreed that while across all industries there is buzz around the potential of AI and agentic methods, there is just as much buzz around hype outrunning reality and most pilot programs failing to translate to production. How do successful deployments navigate this challenge? Abhi had a two-part answer. First, while those surveys certainly highlight a problem, we shouldn’t underestimate the success AI/agentic methods already enjoy in quietly successful embedded use cases. Examples include cars (I have written recently about this), our phones, and factory automation. Another interesting example is a method to detect what you are saying by watching your facial muscles, even without needing to hear clear speech. Not available yet, but presumably coming to a phone, car, or other device near you in the not-too-distant future.

Intriguing, but in SoC and system design we are very sensitive to reliability and pinpoint accuracy. How can AI/agentic methods align with these needs? In Abhi’s view, one way to secure that level of quality is through guardrails implemented using proven, non-AI core EDA technologies: formal methods, simulation, and so on. The second way is to implement processes which require human-in-the-loop judgement at checkpoints. There can still be a big win, even though the whole process isn’t pushbutton. With agentic support, DV engineers graduate from being tool operators, knowing all the minutiae of how to run (and debug) scripts and tools, to becoming verification scientists, knowing how to judge outcomes at intermediate steps and what high-level corrections they might want to apply.

I like it – what DV engineer wouldn’t want to upgrade their day-to-day workload to become a verification scientist?

Nice positioning. Still, the proof will be in how DV engineers and product managers will react in practice. You can read more about the release HERE and get more insight on product details HERE.

Also Read:

Functional Safety Analysis of Electronic Systems

Perforce and Siemens Collaborate on 3DIC Design at the Chiplet Summit

Siemens to Deliver Industry-Leading PCB Test Engineering Solutions