
The Next Hurdle AI Systems Must Clear

by Bernard Murphy on 03-11-2026 at 6:00 am


AI isn’t having an easy ride. The media and Wall Street swing wildly between extremes on any hint of a shift in AI sentiment. Dickens saw this coming: “It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of light, it was the season of darkness, it was the spring of hope, it was the winter of despair”. Beneath these headlines lurks an important problem for AI inference scaling: a widening gap between theoretical peak performance and what system providers can guarantee. This gap proves to have significant implications for power demand and safety.

What is this gap?

Large semiconductor systems make heavy use of pre-designed subsystems, developed in-house for earlier-generation products or sourced externally. This is particularly true for the chiplet-based designs now common in datacenters and increasingly in our cars. Best-in-class chiplets are available from industry experts: CPU server subsystems, AI accelerator subsystems, and high-bandwidth memory (HBM) subsystems, while other chiplets are fashioned by the semiconductor system prime. Connections between chiplets are managed through industry-standard UCIe interfaces.

Consider a system built from these components, each independently rated for high performance and connected through industry-standard interfaces. Why wouldn't it deliver near-optimum throughput? Simple economics dictates that a big, expensive semiconductor product must handle multiple inference tasks simultaneously. Individually, these chiplets have been designed to do just that, but none of them is responsible for managing traffic performance between chiplets. UCIe is designed to provide basic connectivity, not system-level traffic management. That management falls to the network subsystem between these chiplets, a system layer not unlike the internet but optimized for in-chip/in-package performance.

Multi-tenant inference platforms face unique traffic challenges. Traffic is managed through a common network for cost and power efficiency, as in any modern electronic system. However, AI traffic between CPU control, HBM and an AI accelerator is very lumpy: some of it is bursty yet demands high bandwidth, some is very sensitive to latency, and some, especially control data (valid, ready, credits, etc.), is critical to maintaining forward progress.

Lumpy traffic hogs bus bandwidth, not indefinitely but until a transaction is completed. The massively parallel nature of AI processing creates a second problem. A step can't start until all the data needed for that step has arrived. Until then, the step must stall. When multiple inferences are running at the same time it is not difficult to imagine frequent stalls, inferences sitting idle waiting for complete data before they can move on to the next step.

So far this may not sound too surprising: increasing traffic leads to lower per-inference performance. The shocker is that performance does not degrade gracefully. As traffic contention rises between inferences, just as in rush hour traffic, stalls build up. At some point, performance drops off a cliff. Net utilization of the system plummets from 80% to 45%.

Why not just increase bandwidth in the network? Unfortunately, that alone isn’t enough. Between lumpy traffic and synchronization stalls, the control information critical to manage fairness between inferences is progressively squeezed out and fairness between inferences collapses. Effective multi-tenant management needs more than just increased bandwidth. It needs to provide predictability.
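The squeeze-out effect can be illustrated with a toy model (my own sketch, not a description of any vendor's implementation): one shared link carries an endless bulk stream plus a steady trickle of small control flits. Under plain FIFO arbitration the control flits queue behind bursts and their latency grows without bound; with a dedicated priority lane for control traffic, they get through every cycle.

```python
from collections import deque

def avg_control_wait(priority_lane: bool, cycles: int = 1000, burst: int = 8) -> float:
    """Toy shared-link model. A bulk stream always has a 'burst'-beat
    transaction holding the link; one control flit arrives every cycle.
    Returns the average number of cycles a control flit waits to be sent."""
    ctrl = deque()          # arrival times of control flits not yet sent
    bulk_left = burst       # beats remaining in the current bulk transaction
    total_wait, sent = 0, 0
    for t in range(cycles):
        ctrl.append(t)                        # one control flit arrives
        if priority_lane:
            total_wait += t - ctrl.popleft()  # control preempts: sent at once
            sent += 1
        else:
            bulk_left -= 1                    # FIFO: the burst holds the link
            if bulk_left == 0:                # burst done; one flit slips out
                bulk_left = burst
                if ctrl:
                    total_wait += t - ctrl.popleft()
                    sent += 1
    return total_wait / max(sent, 1)
```

In this model `avg_control_wait(True)` is zero while the FIFO case backs up steadily, which is the mechanism behind collapsing fairness: bandwidth alone cannot fix it, but isolation and prioritization can.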

Fixing the gap

High performance AI accelerators, CPU subsystems, HBM, and UCIe interfaces are absolutely necessary for a chiplet-based AI product, but they are not sufficient. The product must also build on a traffic management network able to serve the unique challenges of multi-tenant AI inferencing, requirements well beyond the scope of best-effort networks. Interconnect design must be re-conceived to deliver predictability for these workloads.

Andy Nightingale (VP Product Management and Marketing at Arteris) shared some must-haves to ensure predictability. The network must support isolation between traffic streams from different tenants so that one inference can’t block another. Increasing load will naturally degrade throughput but it should do so gracefully. Coherency guarantees must be maintained, even under load, and behavior must be deterministic under load so that service level agreements can be guaranteed. A network designer can then craft a network fabric to meet their target use-case needs, building on a network IP that can support those guarantees.

Giant datacenters can't define pricing models against unpredictable performance. Without an inter-chiplet network architecture designed for the task, the only way to guarantee a service level agreement is to add more servers and more power stations. Clearly, the better solution is to use AI systems whose network architectures are designed for the task, delivering dependable utilization from the servers and power stations already budgeted.

I mentioned safety at the outset of this article. Chiplet-based design is now very popular in automotive systems for a host of reasons. Predictable power is certainly a concern in that domain, but even more important is predictability for safety. In cars, trucks and other vehicles, predictable response is not a performance preference. It is a certification requirement. The same network traffic considerations apply.

You can learn more about Arteris HERE.

Also Read:

Arteris Highlights a Path to Scalable Multi-Die Systems at the Chiplet Summit

The Next Hurdle AI Systems Must Clear

Securing RISC-V Third-Party IP: Enabling Comprehensive CWE-Based Assurance Across the Design Supply Chain


Why Your LLM-Generated Testbench Compiles But Doesn’t Verify: The Verification Gap Problem

by Admin on 03-10-2026 at 10:00 am


By Vikash Kumar, Senior Verification Architect | Arm | IEEE Senior Member. 

The Problem Every Verification Engineer Recognizes

You ask an LLM to generate a UVM testbench. It produces 25 files. Everything compiles. You run the simulation — and nothing happens. The scoreboard reports zero checks. The slave driver stops after 10 transactions. The simulation hangs.

This is not a hypothetical. In a controlled experiment generating a UVM testbench for an AHB2APB bridge using a state-of-the-art commercial LLM, this is exactly what happened — after an automated agentic repair loop had already resolved 37 compile errors across 4 iterations.

The core problem: compile success is nearly uncorrelated with functional correctness at the protocol level. Yet compile success is the dominant evaluation metric in LLM-for-hardware research. This article explains why that is the wrong metric, what the right metrics are, and what it means for verification teams trying to use LLMs in production.

What Compile Success Actually Tells You

A compiler verifies type consistency, scope resolution, and syntactic validity. It does not verify protocol timing, handshake sequencing, interface role semantics, or transaction counting.

Here are three failures from the AHB2APB case study — each catastrophic to verification, none producing a compiler error:

Role confusion: The LLM generated an APB slave driver that drives PADDR, PSEL, and PENABLE — the master’s outputs. An APB slave only drives PRDATA, PREADY, and PSLVERR. The simulation ran without complaint. The slave simply never responded.

Timing phase error: The AHB driver presented HWDATA in the same clock cycle as HADDR. AHB requires a one-cycle offset — HWDATA is valid in the cycle after HADDR. The testbench drove the wrong data on every single transaction.

Response deadlock: The master sequence called get_response() waiting for the driver to call put_response(). The driver never called it. The simulation hung silently at transaction 1.

A controlled taxonomy of eight failure modes from the case study breaks down as follows: one was detectable at compile time (L2: hallucinated sequence item field names), one surfaced at elaboration during VIF port resolution (L1), and six required simulation or waveform analysis to diagnose (L3–L8). The compiler caught one of eight.

Figure 1: Eight LLM failure modes by detection method — 1 at compile time, 1 at elaboration, 6 at simulation.

Three Metrics That Actually Measure the Gap

Repair Efficiency Score (RES)

RES = total compile errors / total repair calls. In the case study, 37 errors resolved in 15 calls gives RES = 2.47. A single repair call that fixed hallucinated sequence item field names collapsed 18 downstream errors simultaneously — demonstrating that errors cluster around shared root causes when an LLM misunderstands a core abstraction.

Verification Gap (VG)

VG is the fraction of functional failures that survive a compile-clean testbench. VG = 0.00 means the testbench is both compile-clean and functionally complete. VG = 0.80 after the automated repair loop means 80% of functional failures remained after full automation — invisible to the compiler throughout. This is the metric the field is not computing.

Specification Coverage Ratio (SCR)

SCR measures what fraction of the protocol specification the testbench actually exercises. A testbench covering only happy-path transactions — missing burst-interrupt termination, error-retry, and maximum-wait-state scenarios — can have SCR well below 1.0 while passing all simulation checks on normal traffic.
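All three metrics reduce to simple ratios. The sketch below formalizes them in Python; the RES inputs (37 errors, 15 repair calls) are the case-study figures quoted above, while the VG and SCR denominators are illustrative placeholders since the article does not give the raw counts.

```python
def res(compile_errors: int, repair_calls: int) -> float:
    """Repair Efficiency Score: compile errors resolved per repair call."""
    return compile_errors / repair_calls

def vg(failures_surviving: int, failures_total: int) -> float:
    """Verification Gap: fraction of functional failures that survive
    a compile-clean testbench. 0.0 = functionally complete."""
    return failures_surviving / failures_total

def scr(scenarios_exercised: int, scenarios_in_spec: int) -> float:
    """Specification Coverage Ratio: fraction of protocol-spec scenarios
    the testbench actually exercises."""
    return scenarios_exercised / scenarios_in_spec

case_study_res = round(res(37, 15), 2)   # 2.47, matching the text
illustrative_vg = vg(8, 10)              # e.g. 8 of 10 failures remain: 0.8
```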

Figure 2: VG and SCR progression across configurations. Human expertise closes the gap that automation cannot.

The Fix Is a Better Specification, Not a Bigger Model

The most counterintuitive finding from this study: the highest-leverage investment to improve LLM-based verification automation is not a more capable model. It is a more formal specification schema.

Timing phase failures exist because specifications encode timing in natural language: ‘HWDATA is valid one cycle after HADDR.’ No amount of model scale resolves the ambiguity between that prose and the precise simulator semantics of @(posedge HCLK) sequencing.

A manifest field encoding HWDATA_phase_offset: 1 gives the generation agent an unambiguous directive — the failure becomes preventable rather than debuggable. Role confusion failures become preventable if the manifest classifies interface roles explicitly: apb_slave: {role: reactor, perpetual: true}. In both cases, the fix is upstream specification formalization, not downstream repair.
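What such a structured manifest might look like is sketched below. The field names (`role`, `perpetual`, `HWDATA_phase_offset`) follow the article's examples, but the schema as a whole is a hypothetical illustration, not a published standard; the point is that a trivial lint over structured fields can reject ambiguity a compiler never sees.

```python
# Hypothetical manifest fragment; field names follow the article's examples,
# the overall schema is an illustrative sketch.
manifest = {
    "ahb_master": {
        "role": "initiator",
        "timing": {"HWDATA_phase_offset": 1},  # HWDATA valid 1 cycle after HADDR
    },
    "apb_slave": {
        "role": "reactor",   # responds only; must never drive PADDR/PSEL/PENABLE
        "perpetual": True,   # must keep serving transactions indefinitely
    },
}

def lint_manifest(m: dict) -> list[str]:
    """Flag interface entries that leave role or timing to prose."""
    problems = []
    for name, spec in m.items():
        if "role" not in spec:
            problems.append(f"{name}: interface role not declared")
        if spec.get("role") == "initiator" and "timing" not in spec:
            problems.append(f"{name}: timing phase offsets not encoded")
    return problems
```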

Eight of approximately 25 generated files required complete expert rewrites to achieve functional correctness. Every one of those rewrites addressed a failure the compiler never flagged.

The Real Bug the Testbench Found

After achieving functional correctness through expert collaboration, 30 randomized AHB transactions detected a previously unknown RTL race condition in the bridge’s xfer_pending clearing logic.

The bridge uses a registered clear that activates one clock cycle too late. The FSM reads stale xfer_pending = 1 and re-enters APB_SETUP, generating a phantom APB transfer with the previous transaction’s latched address. The scoreboard detected 6 PSEL assertions for 5 AHB transfers — a 1:1 AHB-to-APB ratio violation invisible to IP-level simulation.
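The invariant the scoreboard enforced can be stated in one line (a simplified sketch of the 1:1 bridging rule, not the actual UVM scoreboard code):

```python
def bridge_ratio_ok(ahb_transfers: int, psel_assertions: int) -> bool:
    """AHB2APB invariant: every AHB transfer maps to exactly one APB access,
    so any surplus PSEL assertion indicates a phantom transfer."""
    return psel_assertions == ahb_transfers

# The case-study run saw 6 PSEL assertions for 5 AHB transfers: a violation.
```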

This is precisely the class of integration bug that protocol-level testbench modeling exists to find — and it is why getting the testbench right matters. A compile-clean testbench with VG = 0.80 would never have run the checks that found it.

What This Means for Your Verification Flow

If you are evaluating LLM-based testbench generation tools, ask the vendor: what is your Verification Gap on a real protocol design? Compile success is not evidence of a working testbench. RES, VG, and SCR are.

If you are integrating LLMs into your verification flow, the eight-failure taxonomy gives you a concrete checklist. Check for role confusion in every driver. Check for timing phase errors at every AHB and APB interface. Check for liveness failures in every sequence that is supposed to run indefinitely. Check the elaboration log — not just the compile log.

If you are writing the specification that feeds the LLM, encode timing constraints, interface roles, and behavioral contracts as structured fields — not prose. The gap between compiles and verifies is the gap that matters. Start measuring it.

About the Author

The author is a Senior Verification Architect at Arm and an IEEE Senior Member, with 15+ years of experience in hardware verification including prior work at Intel. He specializes in subsystem-level verification for chiplet-based designs and protocol verification (AHB, AXI, CHI, PCIe, UCIe), and is active in IEEE standards and peer-review activities.

Also Read:

Building the Interconnect Foundation: Bump and TSV Planning for Multi-Die Systems

Designing the Future: AI-Driven Multi-Die Innovation in the Era of Agentic Engineering

Accelerating Static ESD Simulation for Full-Chip and Multi-Die Designs with Synopsys PathFinder-SC


Advanced Architectures for Hybrid III-V/Silicon Quantum Cascade Lasers

by Daniel Nenni on 03-10-2026 at 8:00 am


Mid-infrared (MIR) photonic integrated circuits are emerging as a key technology for applications ranging from environmental monitoring and medical diagnostics to defense and industrial process control. The MIR spectral region, often referred to as the molecular “fingerprint” region, exhibits strong absorption features for a wide variety of gases and chemical species. This property enables highly sensitive and selective sensing, provided that compact, efficient, and wavelength-stable laser sources can be integrated with photonic platforms. Among the available MIR sources, quantum cascade lasers (QCLs) stand out due to their broad wavelength coverage, high output power, and room-temperature operation. However, their integration on silicon remains a major technological challenge.

Quantum cascade lasers are unipolar semiconductor devices based on intersubband transitions in III-V heterostructures. Since their first demonstration in 1994, QCLs have undergone continuous improvements, achieving continuous-wave operation at room temperature and wall-plug efficiencies exceeding 20%. Despite these advances, most QCLs are still realized as discrete devices, limiting their scalability and integration with complex photonic systems. Silicon photonics, on the other hand, offers mature fabrication processes, high reproducibility, and large-scale integration capabilities, but silicon itself cannot provide optical gain in the MIR. Hybrid integration of III-V QCL gain regions onto silicon photonic platforms therefore represents a promising route toward compact and functional MIR photonic integrated circuits.

In this work, a high-index-contrast photonic integrated circuit platform is developed to enable the integration of III-V QCLs on silicon waveguides. The proposed architecture relies on molecular bonding of III-V epitaxial layers onto a silicon-on-insulator–based platform, including variants such as SONOI to extend MIR transparency. The platform combines several key functionalities: strong optical confinement for miniaturization, efficient and robust adiabatic coupling between the III-V gain region and silicon waveguides, and high-quality silicon-based distributed feedback structures for wavelength control. This approach enables the realization of distributed feedback (DFB) and distributed Bragg reflector (DBR) QCL architectures directly integrated with silicon photonic circuits.

Efficient optical coupling from the III-V active region into silicon waveguides is achieved using adiabatic tapers. Numerical studies show coupling efficiencies exceeding 95% over a wide range of taper lengths and geometrical parameters, demonstrating robustness against fabrication tolerances. This is a crucial requirement for wafer-scale fabrication and reproducible device performance. The silicon waveguides can further incorporate gratings or couplers for feedback, out-coupling, and on-chip routing toward sensing elements.

The fabrication process is fully compatible with 200-mm silicon wafers and includes silicon waveguide definition, dielectric deposition, III-V/Si molecular bonding, substrate removal, ridge and mesa etching, and metal contact formation. A usable surface ratio above 90% after III-V substrate removal highlights the maturity of the bonding approach. Laser characterization is performed at the wafer level using pulsed electrical injection, Peltier cooling, and automated probing, with optical output collected via grating couplers and analyzed using FTIR spectroscopy.

Experimental results demonstrate laser emission from hybrid DFB QCLs operating around 4.3 µm, a wavelength of particular interest for CO₂ detection. Clear lasing behavior is observed, with a threshold current of approximately 700 mA and a slope efficiency on the order of 0.22 mW/A under pulsed operation. Single-mode emission is achieved in the linear regime, while multimode behavior appears near rollover, mainly due to thermal limitations. These results confirm efficient light transfer from the III-V gain region into the silicon waveguide and validate the distributed feedback approach implemented in silicon.
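Above threshold, pulsed output power follows the usual linear L-I relation P = η(I − Ith). As a back-of-the-envelope check using the reported figures (the specific drive current below is my own example, not a data point from the work):

```python
def output_power_mw(i_ma: float, i_th_ma: float = 700.0,
                    slope_mw_per_a: float = 0.22) -> float:
    """Linear above-threshold L-I model: P = slope * (I - Ith), with the
    threshold (700 mA) and slope efficiency (0.22 mW/A) reported above."""
    if i_ma <= i_th_ma:
        return 0.0
    return slope_mw_per_a * (i_ma - i_th_ma) / 1000.0  # mA -> A

# e.g. driving at 1.2 A: 0.22 mW/A * 0.5 A = 0.11 mW
```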

Beyond DFB and DBR lasers, the platform also enables advanced diffractive and refractive architectures, such as photonic crystal surface-emitting lasers and micro-ring resonator-based QCLs. These concepts offer further reductions in footprint and threshold current, opening the way toward densely integrated MIR photonic systems. Overall, this work demonstrates a versatile and scalable hybrid III-V/silicon platform for quantum cascade lasers, representing a significant step toward fully integrated mid-infrared photonic circuits for sensing and beyond.

Also Read:

NanoIC Extends Its PDK Portfolio with First A14 Logic and eDRAM Memory PDK

TSMC’s 2026 AZ Exclusive Experience Day: Bridging Careers and Semiconductor Innovation

The Foundry Model Is Morphing — Again


Efficient Bump and TSV Planning for Multi-Die Chip Designs

by Daniel Nenni on 03-10-2026 at 6:00 am


The semiconductor industry has experienced rapid advancements in recent years, particularly with the increasing demand for high-performance computing, artificial intelligence, and advanced automotive systems. Traditional single-die chip designs are often unable to meet modern PPA requirements. As a result, engineers have turned to multi-die architectures, where multiple smaller dies are integrated within a single package. While this approach improves scalability and performance, it also introduces new challenges, especially in interconnect planning. One of the most critical aspects of multi-die integration is the efficient planning of bumps and TSVs that enable communication between different dies.

In multi-die designs, interconnectivity between chips is achieved through microbumps or hybrid bonding pads placed on the surfaces of dies. These bumps act as electrical connection points between dies, interposers, or substrates. Modern designs may require hundreds of thousands or even millions of such connections. As the number of dies and interconnects increases, the complexity of planning and managing these connections also rises dramatically. As the white paper notes, improper bump planning can degrade routability, routing quality, and overall design efficiency.

Traditionally, bump planning was done manually using simple graphical tools such as spreadsheets or diagram software. While this approach worked for earlier single-die flip-chip designs that contained only a few thousand connections, it is no longer practical for today’s large-scale multi-die systems. Manual planning is time-consuming and highly prone to human error. Furthermore, any modification to the bump layout of one die often requires corresponding changes in other dies or the package design. If these updates are not properly synchronized, significant design errors may occur later in the development cycle.

To address these challenges, modern EDA tools provide automated bump planning capabilities. These tools allow designers to define bump regions, which are rectangular or irregular areas on a die where bumps are placed. Within each region, bumps can follow specific patterns based on constraints such as pitch, spacing, and alignment. Once these patterns are defined, the software can automatically generate thousands of bumps quickly and accurately. If the region size or design constraints change, the bump layout automatically updates, saving designers significant time and effort.
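The region-plus-pattern idea can be sketched in a few lines (an illustrative toy, not any particular EDA tool's API): given a rectangular region and a pitch, enumerate the bump sites; change the region or the pitch and the layout regenerates automatically.

```python
def bump_sites(x0: float, y0: float, width: float, height: float,
               pitch: float) -> list[tuple[float, float]]:
    """Enumerate bump coordinates on a regular grid inside a rectangular
    bump region, honoring the pitch constraint."""
    cols = int(width // pitch) + 1
    rows = int(height // pitch) + 1
    return [(x0 + c * pitch, y0 + r * pitch)
            for r in range(rows) for c in range(cols)]

# A 1000 x 1000 um region at 100 um pitch yields an 11 x 11 = 121 bump array;
# shrinking the pitch to 50 um quadruples the density with no manual rework.
```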

Another key aspect of bump planning is signal assignment. Each bump must be connected to a specific signal, power line, or ground network. Designers may assign signals manually or use automated algorithms that optimize placement based on factors such as wire length and routing efficiency. Automatic signal assignment can analyze the entire multi-die system and determine the best possible mapping of signals to bumps, improving overall performance and reducing design complexity.
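Automated assignment can be sketched as a greedy wire-length minimizer (a deliberately simplified illustration; production tools optimize against far more constraints, including routing layers and power domains):

```python
import math

def assign_signals(signals: dict[str, tuple[float, float]],
                   bumps: list[tuple[float, float]]) -> dict[str, tuple[float, float]]:
    """Greedily map each signal's driver location to the nearest free bump,
    shortening estimated wire length one signal at a time."""
    free = list(bumps)
    mapping = {}
    for name, (sx, sy) in signals.items():
        best = min(free, key=lambda b: math.hypot(b[0] - sx, b[1] - sy))
        free.remove(best)            # each bump carries exactly one signal
        mapping[name] = best
    return mapping
```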

In addition to bump planning, designers must carefully plan through-silicon vias. TSVs are vertical electrical connections that pass through a silicon die, allowing signals and power to travel from the backside of the die to the frontside routing layers. TSVs are particularly important in 3D stacked chip designs where multiple dies are stacked vertically. However, TSVs are relatively large structures and require significant spacing and keep-out zones to avoid damaging nearby circuitry. Poor TSV placement can reduce the usable area for logic cells and negatively affect timing performance. Therefore, careful planning is necessary to ensure optimal TSV placement without compromising chip functionality.
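A keep-out verification reduces to a distance test between each TSV and nearby cell placements (again a toy sketch; the 5 um keep-out radius is a made-up default, not a process rule):

```python
import math

def keepout_violations(tsvs: list[tuple[float, float]],
                       cells: list[tuple[float, float]],
                       keepout_um: float = 5.0) -> list[tuple]:
    """Return (tsv, cell) pairs closer than the keep-out radius,
    i.e. logic placed dangerously close to a TSV."""
    return [(t, c) for t in tsvs for c in cells
            if math.hypot(t[0] - c[0], t[1] - c[1]) < keepout_um]
```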

Modern design platforms integrate bump and TSV planning into a unified workflow. This allows engineers to visualize connections in both two-dimensional and three-dimensional views, track engineering changes, and perform automated design rule checks. By detecting alignment errors, missing connections, or signal mismatches early in the design process, these tools help prevent costly mistakes during later stages of manufacturing.

Bottom line: Efficient bump and TSV planning plays a crucial role in the success of multi-die semiconductor designs. As chip architectures become more complex, manual planning methods are no longer sufficient. Automated design tools and structured planning methodologies enable engineers to manage millions of connections, maintain design accuracy, and accelerate time-to-market. With the continued growth of advanced technologies such as AI and high-performance computing, effective interconnect planning will remain a fundamental requirement in modern semiconductor design.

White Paper Registration

Also Read:

Reducing Risk Early: Multi-Die Design Feasibility Exploration

Building the Interconnect Foundation: Bump and TSV Planning for Multi-Die Systems

How Customized Foundation IP Is Redefining Power Efficiency and Semiconductor ROI


The Evolution of RISC-V and the Role of Andes Technology in Building a Global Ecosystem

by Daniel Nenni on 03-09-2026 at 10:00 am


During my frequent trips to Taiwan as a foundry relationship professional, I remember meeting Frankwell Lin, CEO of Andes, 15+ years ago. As I walked to TSMC HQ from the Hotel Royal (my second home for many years), Andes was about the midpoint, and Frankwell's door was always open. Sometimes just tea, sometimes technology, there was always a reason to talk to Frankwell.

Throughout my career I have always been excited about open standards as a platform to accelerate design starts. The semiconductor industry is all about design starts, right? Having been involved with many different open-standard initiatives, I have seen some succeed, but failure was quite common. RISC-V however has been a resounding success and I am honored to be a part of it, absolutely.

The evolution of RISC-V, the rise of Andes Technology, and the emergence of the RISC-V Now! Conference illustrate how open hardware architectures are transforming the semiconductor industry. Together, they represent a shift toward open standards, collaborative ecosystems, and new approaches to building processors for modern computing workloads.

The story begins with the development of RISC-V, an open instruction set architecture (ISA) derived from the principles of RISC. Earlier RISC architectures emerged in the 1980s as a simpler and more efficient alternative to complex instruction set computing. While architectures like ARM became highly successful, they remained proprietary. RISC-V, created in 2011 at the University of California, Berkeley, introduced a new concept: an ISA that is open and free for anyone to implement, modify, and extend. This openness allows companies and researchers to build custom processors without paying licensing fees, accelerating innovation across industries.

Over the past decade, the architecture has rapidly grown from an academic project into a major industry platform used in microcontrollers, embedded systems, AI accelerators, and even data-center research. Analysts estimate the architecture is on track to power tens of billions of chips worldwide and has reached significant market penetration across multiple computing sectors.

A key contributor to this ecosystem is Andes Technology, a Taiwanese semiconductor company specializing in RISC-V processor IP. As a founding premier member of the global RISC-V community, Andes has played a central role in commercializing the open ISA. The company designs 32-bit and 64-bit processor cores that can be integrated into SoC designs for applications ranging from consumer electronics to automotive systems and AI computing. Its processor portfolio includes high-efficiency embedded cores and high-performance multiprocessor clusters, many of which support advanced features such as vector processing, digital signal processing, and customizable instruction extensions. These capabilities allow hardware designers to tailor processors to specific workloads, which is one of the most attractive features of the RISC-V model. Over time, Andes-powered processors have been integrated into billions of chips used around the world.

As RISC-V adoption grew, Andes also began investing in community-building events to accelerate collaboration across the ecosystem. One example is the Andes RISC-V CON, a series of conferences designed to bring together engineers, researchers, and technology companies working on RISC-V platforms. These events typically include keynote speeches from industry leaders, technical sessions on processor design, and discussions on emerging applications such as artificial intelligence, automotive electronics, and communications systems. Conferences often feature multiple tracks, including developer sessions where engineers can learn about debugging tools, vector extensions, and custom instruction development. By providing a space for collaboration and knowledge sharing, these events have helped strengthen the RISC-V ecosystem and encourage wider adoption of the architecture.

Building on the success of these earlier events, Andes launched the RISC-V Now! conference series in 2026. Unlike earlier conferences that focused primarily on the technology and ecosystem of the architecture, RISC-V Now! emphasizes real-world deployment and commercial implementation. The conference brings together system architects, semiconductor executives, and engineers who are already building and shipping products based on RISC-V processors. Topics typically include system-level design trade-offs, strategies for integrating CPUs into complex SoCs, software enablement challenges, and lessons learned from production deployments. The first events in the series were scheduled in several global technology hubs, including Silicon Valley, Hsinchu, Shanghai, and Beijing, highlighting the global nature of the RISC-V movement.

The emergence of RISC-V Now! reflects a broader transition within the RISC-V ecosystem. Early adoption focused heavily on experimentation and research, but the current phase is centered on building commercial products and scalable computing platforms. As computing workloads grow more complex—especially in fields like artificial intelligence, automotive autonomy, and edge computing—companies are seeking processor architectures that offer flexibility, efficiency, and control. RISC-V provides these advantages by allowing designers to customize instructions, optimize for power or performance, and maintain full control over their hardware roadmap.

Bottom line: The evolution of RISC-V represents one of the most significant shifts in modern processor architecture. Andes Technology has played an important role in advancing this open hardware movement by providing commercial CPU IP and fostering community collaboration through conferences and ecosystem initiatives. The launch of the RISC-V Now! conference marks the next stage of this journey, focusing on real-world deployment and production systems. Together, these developments highlight how open architectures and collaborative innovation are reshaping the future of computing.

Also Read:

The Launch of RISC-V Now! A New Chapter in Open Computing

Pushing the Packed SIMD Extension Over the Line: An Update on the Progress of Key RISC-V Extension

Operationalizing Secure Semiconductor Collaboration: Safely, Globally, and at Scale


Capability Hardware Enhanced RISC Instructions CHERI Alliance

by Daniel Nenni on 03-09-2026 at 8:00 am


The CHERI Alliance is a non-profit organization dedicated to accelerating the global adoption of CHERI (Capability Hardware Enhanced RISC Instructions), a technology designed to improve computer security at the hardware level. Established as an independent entity, the Alliance brings together industry leaders, researchers, government bodies, and software communities to promote and support the implementation of CHERI across computing ecosystems. Its overall mission is to unite stakeholders and drive CHERI as an effective and widely adopted security standard.

One of the key motivations behind the creation of the CHERI Alliance is the growing need for stronger cybersecurity mechanisms. Modern systems are frequently vulnerable to memory-related security flaws, which account for a large portion of software vulnerabilities. CHERI technology addresses these issues by introducing capability-based memory protection directly in hardware, allowing systems to enforce fine-grained control over how memory is accessed and used. However, a technology specification alone is not enough to ensure adoption. The Alliance was therefore formed to build an ecosystem around CHERI that includes hardware vendors, software developers, researchers, and policymakers.

The Alliance operates as a non-profit Community Interest Company (CIC) based in Cambridge, United Kingdom. This structure ensures that the organization remains focused on public benefit rather than shareholder profit. The initiative is funded primarily by industry membership fees, providing financial sustainability while maintaining independence. Governance is handled by a governing board composed of elected representatives from member organizations along with directors of the CIC. Board members serve limited terms and make decisions regarding strategy, budgets, initiatives, and the approval of working groups.

A major role of the CHERI Alliance is technical coordination. Because CHERI can potentially be implemented across multiple instruction set architectures (ISAs), including architectures such as Arm, x86, MIPS, and RISC-V, alignment across the ecosystem is essential. The Alliance helps establish best practices, interoperability guidelines, and technical recommendations that ensure consistent and effective implementations. In addition, the organization supports the development and porting of software tools, operating systems, and open-source software so that developers can more easily build and run CHERI-enabled applications.

Promotion and education are also central activities of the Alliance. The organization produces technical and marketing materials that explain the benefits of CHERI and raises awareness among industry stakeholders, regulators, and the general technology community. By engaging with policymakers, the Alliance encourages regulatory frameworks that prioritize stronger security standards in digital products. The Alliance also manages the CHERI brand and communicates the value of capability-based security through media outreach, conferences, and industry events.

Community engagement is another key pillar of the Alliance’s work. The organization provides platforms for collaboration among members, including repositories for shared software projects, educational resources, and networking opportunities. Members can participate in committees and working groups focused on specific topics such as software porting, compliance standards, marketing, and technical alignment. These working groups allow experts from different sectors to collaborate, exchange knowledge, and contribute to the development of CHERI-related technologies and guidelines.

Membership in the CHERI Alliance offers several advantages. Organizations that join demonstrate leadership in cybersecurity and gain visibility within the emerging CHERI ecosystem. Members can influence standards and strategic directions by voting in governance elections or participating in working groups. They also benefit from networking opportunities with other industry leaders, researchers, and government representatives. In addition, members gain access to early technical developments, collaborative projects, and promotional opportunities through conferences and events organized by the Alliance.

The Alliance also organizes conferences and events aimed at promoting CHERI technologies and bringing the community together. These gatherings include technical talks, demonstrations, and workshops that showcase ongoing research and real-world implementations. Such events help foster collaboration between academia and industry while educating developers and decision-makers about the advantages of hardware-assisted memory safety.

Bottom line: The CHERI Alliance plays a crucial role in advancing a new generation of secure computing technologies. By coordinating technical development, promoting awareness, supporting open collaboration, and building an industry-wide ecosystem, the Alliance helps transform CHERI from a promising research concept into a practical and widely adopted security standard. As cybersecurity threats continue to evolve, initiatives like the CHERI Alliance will be increasingly important in shaping the future of safer computing systems.

CONTACT CHERI

Also Read:

CHERI: Hardware-Enforced Capability Architecture for Systematic Memory Safety

Securing RISC-V Third-Party IP: Enabling Comprehensive CWE-Based Assurance Across the Design Supply Chain

Caspia Technologies Unveils A Breakthrough in RTL Security Verification Paving the Way for Agentic Silicon Security


Operationalizing Secure Semiconductor Collaboration: Safely, Globally, and at Scale

by Kalar Rajendiran on 03-09-2026 at 6:00 am

Collaboration Framework

Semiconductor manufacturing is among the most complex industrial activities in existence. As device geometries shrink and systems become more interconnected, software has become as critical as process technology itself. Modern fabs depend on extensive automation, real-time analytics, and deep integration between tools, control systems, and external partners. This complexity dramatically expands the cyber-attack surface, making cybersecurity not an ad hoc or adjunct concern but a core operational challenge.

A typical fab environment includes hundreds of thousands of software components sourced from hundreds of suppliers, continuously updated and interdependent. In such an environment, cybersecurity incidents are inevitable. A widely cited malware incident that halted production at a major fab several years ago marked a turning point for the industry, shifting cybersecurity from a theoretical IT concern to an operational imperative with direct revenue and safety implications.

Why Legacy Security Models Are Breaking Down

Historically, fabs relied on isolation, obscurity, and extreme caution around change. Systems were kept static, external connectivity was minimized, and any untested modification was treated as a production risk. While this approach reduced short-term disruption, it created brittle environments poorly suited to modern threat dynamics.

Today, effective cybersecurity requires continuous patching, operating system updates, software lifecycle management, and rapid response to emerging vulnerabilities. These requirements conflict directly with the reality that changes in a fab environment demand extensive validation. The result is long exposure windows at precisely the moment when attackers are becoming faster, more targeted, and more persistent, often aided by AI.

Security by obscurity no longer works. Semiconductor manufacturing is a high-value target, and fragmented VPNs, ad-hoc access paths, and inconsistent controls increase risk by reducing visibility and slowing response.

Standards Define Only the Foundation, Not the Full Solution

SEMI cybersecurity standards such as E187, E188, and E191 provide an essential baseline. They establish expectations for malware-free equipment, procedural cybersecurity practices, and automated software inventory reporting. Importantly, these standards define what must be protected while intentionally avoiding prescriptive architectural mandates.

This is by design. However, risk emerges when standards are treated as a ceiling rather than a floor. Compliance alone does not define a secure system. Architectures must be designed around zero trust principles such as least privilege, segmentation, continuous validation, and “assumed breach”. In practice, systems designed with these principles in mind are often better positioned to meet both current and future requirements.

Collaboration Is the New Security Frontier

The semiconductor industry now operates as a deeply collaborative global ecosystem. Technology development depends on constant interaction between fabs, solution providers, equipment suppliers, designers, OSATs, and advanced packaging partners. This level of collaboration cannot function in air-gapped environments, yet naive use of public internet connectivity introduces unacceptable risk.

Collaboration has therefore become the new security frontier. The challenge is no longer whether data and access should be shared, but rather how to enable collaboration without exposing proprietary process knowledge, yield signatures, or AI models that define competitive advantage.

Why Secure Collaboration Requires More Than VPNs

Point-to-point VPN models do not scale in this environment. As collaboration grows, VPN sprawl leads to operational complexity, inconsistent governance, and expanding attack surfaces. Each additional tunnel increases firewall exposure and administrative burden while reducing overall visibility.
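
The scaling argument can be made concrete with back-of-the-envelope arithmetic. Point-to-point tunnels between fabs and suppliers grow multiplicatively, while a single governed entry point per fab grows additively. The counts below are invented for illustration:

```python
# Back-of-the-envelope sketch (hypothetical numbers): how tunnel count scales
# for point-to-point VPNs versus a single governed entry point per fab.

def point_to_point_tunnels(fabs: int, suppliers: int) -> int:
    # Every supplier maintains its own tunnel into every fab it serves
    # (worst case: all suppliers serve all fabs).
    return fabs * suppliers

def governed_entry_tunnels(fabs: int, suppliers: int) -> int:
    # Each fab and each supplier connects once to the managed framework.
    return fabs + suppliers

for fabs, suppliers in [(5, 20), (10, 100), (20, 300)]:
    p2p = point_to_point_tunnels(fabs, suppliers)
    hub = governed_entry_tunnels(fabs, suppliers)
    print(f"{fabs} fabs x {suppliers} suppliers: {p2p} tunnels vs {hub}")
```

Even at modest scale the gap is an order of magnitude, and every extra tunnel is firewall exposure, administrative burden, and lost visibility.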

What is required instead is a framework-based approach in which secure connectivity is centrally managed, segmented, and dynamically enabled. In such models, connectivity is simplified structurally while security is strengthened architecturally.

How secureWISE Operationalizes a Purpose-Built Framework

secureWISE, from PDF Solutions, illustrates how a governed connectivity framework can be purpose-built for semiconductor manufacturing rather than adapted from enterprise information technology (IT) or generic operational technology (OT) solutions.

Over more than 20 years of continuous deployment, secureWISE has been embedded into fab operations worldwide, supporting thousands of tools and global supplier ecosystems under real production constraints. That longevity reflects not only adoption, but sustained trust under conditions of high concurrency, strict uptime requirements, and evolving threat models.

secureWISE’s architectural strength lies in simplification as a security strategy. It replaces a dense mesh of OEM-specific VPNs with a single governed entry point per fab. By collapsing connectivity sprawl into a controlled framework, firewall routing complexity is reduced, custom network engineering for onboarding and offboarding endpoints is simplified, and the overall attack surface shrinks. What appears operationally simpler is, in fact, more secure.

Built around Zero Trust principles, secureWISE enforces identity-centric access with exceptional granularity. Access is not extended merely by network location but is constrained by role, action, context, and purpose.

Every session, file transfer, and operation is encrypted, monitored, and logged, providing continuous visibility and long-retention audit trails aligned with ISO 27001 controls and practical compliance with SEMI E187, E188, and E191.

Zero Trust as a Principle, Not a Product

Zero Trust is best understood as a mindset: always verify, assume a possible compromise, and monitor continuously. Effective implementations focus on containment, visibility, and control rather than the unrealistic goal of perfect prevention.

Private networks play a critical role in this approach. By avoiding public internet traffic exposure and explicitly defining allowed routes and participants, attack surfaces are reduced and anomalous behavior becomes easier to detect. Continuous monitoring of access patterns, behavior, and data flows is essential, not only to counter external threats but to manage insider and supply-chain risks.

Proven Architectures Build Long-Term Trust

Ultimately, security is measured by outcomes. Architectures that have enabled global collaboration for decades without material data breaches demonstrate the value of thoughtful design, layered controls, and disciplined execution. These approaches have proven that security can scale alongside the growing complexity of the semiconductor industry.

Standardized, governed connectivity at scale is not merely a security enhancement; it is a marker of operational maturity. It enables collaboration, visibility, and continuous assurance without disrupting production or slowing innovation.

Summary

The semiconductor industry no longer faces a binary choice between openness and security. Collaboration is already happening; the real decision is whether it continues through fragmented, opaque mechanisms or evolves into governed, auditable frameworks designed for scale.

Standards define the foundation. Compliance confirms intent. Long-term security emerges when architectures are purpose-built around real fab workflows, legacy constraints, and global ecosystems. Time-proven frameworks demonstrate that simplification and hardening are not opposing goals but rather mutually reinforcing goals. This is how secure semiconductor collaboration is operationalized safely, globally, and at scale, while staying ahead of an increasingly complex threat landscape.

Also Read:

Why PDF Solutions Is Positioning Itself at the Center of the Semiconductor Ecosystem

Manufacturing Is Strategy: Leadership Lessons from the Semiconductor Front Lines

PDF Solutions’ AI-Driven Collaboration & Smarter Decisions


Keynote: On-Package Chiplet Innovations with UCIe

by Daniel Nenni on 03-08-2026 at 4:00 pm

Chiplet Summit Keynote UCIe 2026

In the rapidly evolving landscape of semiconductor technology, the Universal Chiplet Interconnect Express (UCIe) emerges as a groundbreaking open standard designed to revolutionize on-package chiplet integrations. Presented by Dr. Debendra Das Sharma, Chair of the UCIe Consortium and Intel Senior Fellow, at the Chiplet Summit 2026, UCIe addresses the limitations of traditional monolithic SoC designs by enabling modular, high-performance chiplet architectures. As Moore’s Law slows, chiplets offer a path to overcome reticle size constraints, reduce development time, lower costs, and optimize yields through smaller dies and process-specific optimizations. UCIe positions System-in-Package as the new SoC, fostering an ecosystem where chiplets from diverse vendors can seamlessly interconnect, much like PCIe or USB at the board level.

The motivation for chiplets and UCIe stems from the need to scale innovation in an era where manufacturing processes lock certain IPs, and bespoke solutions demand flexibility. By mixing and matching dies with a standard interface, UCIe reduces portfolio costs, enables die reuse, and supports heterogeneous integrations across AI, HPC, cloud, edge, enterprise, 5G, automotive, and handheld segments. Founded in March 2022 and incorporated in June, the UCIe Consortium now boasts over 140 members, including industry giants like AMD, Arm, Intel, NVIDIA, Qualcomm, Samsung, and TSMC. It operates on guiding principles of openness, backward compatibility, optimized power-performance-cost metrics, and continuous innovation, drawing from decades of board-level standards experience.

UCIe’s evolution spans three generations of innovations, each building on the last for interoperability. UCIe 1.0, released in 2022, focuses on planar interconnects with a layered stack: a physical layer for die-to-die I/O, an adapter for reliable multi-protocol support (including PCIe, CXL, and streaming), and form factor definitions. It supports 2D (UCIe-S) for cost-effective longer distances and 2.5D (UCIe-A) for power-efficient high bandwidth density, enabling usages like I/O attachment, memory expansion, and accelerators. UCIe 1.1, from 2023, enhances automotive reliability with preventive monitoring and run-time testability via parity Flit injection, adds full-stack streaming protocol support, and optimizes costs for advanced packaging, all while maintaining backward compatibility.

UCIe 2.0 introduces vertical stacking with UCIe-3D, leveraging hybrid bonding for bump pitches under 1µm, delivering areal connectivity that boosts bandwidth density dramatically. This generation emphasizes low power through simple circuits, SoC-frequency operations, and cluster-level repair, achieving performance rivaling monolithic dies. It includes comprehensive testability, manageability, and debug infrastructure, using sideband channels for remote access and a management fabric based on MCTP standards. UCIe 3.0, slated for 2025, doubles bandwidth to 48-64 GT/s, supports continuous transmission protocols for SoC-DSP connectivity, and adds power-saving features like runtime recalibration.

Key metrics underscore UCIe’s superiority. For UCIe-S (2D), bandwidth shoreline reaches 28-224 GB/s/mm (up to 1317 with 3.0), with power efficiency at 0.5-0.75 pJ/b. UCIe-A (2.5D) offers 278-370 GB/s/mm shoreline and 0.25-0.5 pJ/b, while UCIe-3D achieves up to 300,000 GB/s/mm² density and <0.01 pJ/b at 1µm pitches. Reliability targets near-zero FIT rates, with ESD protections scaling down. These KPIs ensure UCIe delivers high-bandwidth, low-latency, cost-effective interconnects across packages.
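
Those pJ/b figures translate directly into link power, since power is simply bandwidth multiplied by energy per bit. A quick sanity check using the numbers quoted above (the helper function itself is illustrative, not from any UCIe tooling):

```python
# Sanity-check arithmetic on the KPIs above: power = bandwidth x energy-per-bit.
# 1 GB/s = 8e9 bits/s and 1 pJ = 1e-12 J, so:
#   watts = GB/s * 8e9 * (pJ per bit) * 1e-12

def link_power_watts(bandwidth_GBps: float, energy_pJ_per_bit: float) -> float:
    bits_per_second = bandwidth_GBps * 8e9
    return bits_per_second * energy_pJ_per_bit * 1e-12

# 1 mm of UCIe-A shoreline at 370 GB/s/mm and 0.25 pJ/b:
print(round(link_power_watts(370, 0.25), 3))   # roughly 0.74 W per mm of shoreline

# The same bandwidth at the UCIe-S worst case of 0.75 pJ/b costs 3x the power:
print(round(link_power_watts(370, 0.75), 3))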

Demonstrations highlight UCIe’s maturity: the 2023 Synopsys-Intel interoperability test showed successful linkup and data traffic, while Ayar Labs’ 2025 OFC demo featured an 8 Tbps optical chiplet. Adoption is surging: Synopsys trends indicate most HPC/AI designs are now multi-die, and projections put the chiplet market at $411B by 2035, a 15.7% CAGR.

Looking ahead, UCIe enables rack/pod-level composability via optical retimers carrying CXL protocols, extending on-package innovations off-package. The consortium invites contributor and adopter memberships to drive future enhancements. In conclusion, UCIe is poised to democratize chiplet ecosystems, accelerating innovation and efficiency in computing. Its open, evolving framework ensures long-term investment protection, marking a pivotal shift in semiconductor design.

Also Read:

Reducing Risk Early: Multi-Die Design Feasibility Exploration

Building the Interconnect Foundation: Bump and TSV Planning for Multi-Die Systems

Designing the Future: AI-Driven Multi-Die Innovation in the Era of Agentic Engineering


CEO Interview with Jerome Paye of TAU Systems

by Daniel Nenni on 03-08-2026 at 2:00 pm

Jerome Paye, CEO of TAU Systems

Jerome Paye has served as CEO of TAU Systems since late 2025, having joined the company shortly after its founding in 2022 as Chief Operating Officer. In that time, he has helped build TAU Systems into a high-performing team now focused on delivering the ultimate light source for semiconductor lithography.

Paye brings more than 20 years of industry leadership to the role. Most recently, as COO of Achates Power, he managed engine development programs with leading global manufacturers and oversaw all technical operations. Before that, he held senior roles at Renault SAS, leading early-stage electric vehicle development and directing value engineering for the company’s largest vehicle line, with responsibilities spanning global partnerships including Nissan. His career also includes multiple positions at Ford Motor Company, where he served as program management leader for the Mustang.

Earlier in his career, Paye conducted research in ultrashort pulse lasers and femtosecond laser systems, and contributed to the early design of France’s Laser Megajoule facility, work that directly informs TAU Systems’ mission today.

Tell us about your company?

TAU Systems is developing the next generation of light sources for semiconductor manufacturing through compact particle accelerators and X-ray free-electron lasers. Our laser wakefield acceleration technology creates electron beams with energies equivalent to conventional accelerators spanning hundreds of meters, but we achieve this in just centimeters. We then send these high-energy electrons through magnetic undulators to produce tuneable X-ray lasers with wavelengths significantly shorter than current EUV systems.

What makes TAU unique is our two-part strategy to cross the lab-to-fab chasm. We’re demonstrating economic viability today through radiation effects testing for the space industry, opening our Carlsbad, California, facility later this year. We will build manufacturing capacity through electron-based radiotherapy systems for cancer treatment. And we’re investing heavily in lithography R&D, using revenues from near-term activities to support long-term development. This approach refines our core technology with real customers while generating revenue.

What problems are you solving?

Current EUV lithography machines cost around $400 million each, weigh over 300,000 pounds, and are approaching the limit of what current technology can deliver. Only a few percent of the light reaches the wafer, dramatically limiting throughput. At the 13.5-nanometer EUV wavelength, chipmakers must use multi-patterning to create smaller features, which adds time, decreases throughput, and increases costs. ASML’s High-NA approach of increasing numerical aperture is reaching fundamental physical and economic limits.

We’re taking the alternative path: reducing the wavelength itself. Our X-ray lasers operate at tuneable wavelengths which will be optimized for maximum transmission. Combined with wavelength-matched reflective optics offering higher reflectivity than current EUV mirrors, our technology delivers hundreds of watts of X-ray emission per compact machine, matching or exceeding ASML’s power at shorter wavelengths. The result is faster production, reduced multi-patterning, and dramatically improved energy efficiency.

What application areas are your strongest?

Near-term, we are proving the technology by applying it to radiation effects testing for space. Currently, only a handful of global facilities provide testing – totaling just a few thousand hours annually against an estimated 30,000 hours of demand. Our TAU Labs facility will provide 2,000 to 4,000 hours annually per accelerator unit, dramatically expanding critical testing capacity. This operational beachhead generates revenue while validating our technology.

What keeps your customers up at night?

Space customers face testing bottlenecks. Limited capacity creates project delays and introduces risks as satellite constellations and commercial space ventures scale rapidly.

Semiconductor manufacturers confront more fundamental concerns. The extreme appetite for AI has created massive demand for advanced chips, but current EUV technology cannot meet future requirements. Each node becomes exponentially more expensive with just marginal improvements. The industry knows atomic-level control will eventually require X-rays, but questions when viable solutions will emerge. They’re concerned about capital efficiency, throughput, and economic sustainability.

What does the competitive landscape look like and how do you differentiate?

In radiation testing, we compete against national laboratories and established facilities of which there are but a handful. Our differentiation: dramatically expanded capacity through compact accelerator systems deployable at commercial scale.

For lithography, ASML dominates EUV with a virtual monopoly. They’re pursuing higher numerical aperture optics, but this faces fundamental physical limits. We’re taking the alternative approach physics demands: shorter wavelengths through X-ray lasers. ASML machines are expensive and require extraordinary infrastructure. We’re developing systems housed in existing fab spaces with dramatically improved efficiency.

What truly differentiates TAU is our partnership approach. We’re collaborating with global leaders, including The University of Texas at Austin, Lawrence Berkeley National Laboratory and the Extreme Light Infrastructure Nuclear Physics facility, combining their world-leading expertise with our commercial focus.

What new features/technology are you working on?

We recently demonstrated intense coherent light pulses from a free-electron laser driven by laser-plasma acceleration in collaboration with Berkeley Lab. Published in Physical Review Letters, this work confirms compact X-ray FELs are technically viable for advanced lithography. Our accelerator delivers acceleration gradients 2,000 times stronger than conventional systems.

We’re focused on increasing average power to hundreds of watts per machine, wavelength optimization and tunability for maximum optical transmission, and system integration through our radiation testing facility as a technology proving ground. We’re also developing Very-high Energy Electron therapy systems, which share fundamental technology with our lithography platform.

The overarching goal is to demonstrate that compact laser-driven accelerators can deliver the brightness, stability, and wavelength control required for next-generation semiconductor manufacturing while remaining economically viable.

How do customers normally engage with your company?

Our customers either come to us with a problem they’re looking to solve, or as academics pushing the boundaries of research, or partners who wish to leverage our technology and expertise.

Our development and application facility, TAU Labs, is located in Carlsbad, California, and will officially open later in 2026, offering single-event effects radiation testing to ensure spacecraft operate as intended.

CONTACT TAU SYSTEMS

Also Read:

CEO Interview with Echo Yang of CSCERAMIC

CEO Interview with Juniyali Nauriyal of Photonect

CEO Interview with Aftkhar Aslam of yieldWerx


Things From Intel 10K That Make You Go …. Hmmmm

by Mark Webb on 03-08-2026 at 8:00 am

MKW Ventures Semiconductors

INTEL FORM 10-K

☑ ANNUAL REPORT PURSUANT TO SECTION 13 OR 15(d) OF THE SECURITIES EXCHANGE ACT OF 1934
For the fiscal year ended December 27, 2025.

1) Intel is constrained on manufacturing. Not by TSMC. But by IFS, and mainly by Intel 7, a node from 2021. Normally constraints are good; they mean you are running efficiently with lots of demand. But Intel is not growing. CCG (client) + DCAI (datacenter) revenue is down. Intel is constrained by older technologies, not newer ones, during a time of non-growth…

2) Intel margins are low (35% gross margin). Intel’s margins were once the envy of all hardware and chip companies. GAAP operating margins are negative. Non-GAAP margins are lower than those of all memory companies, TSMC, NVIDIA, Broadcom, Qualcomm, AMD, etc. And this is a year after taking a one-time write-down of old assets and a change to their depreciation schedule that saves them billions per year right now.

3) If Intel Foundry found a magical external customer that instantly gave them $7B per year in revenue, equivalent to all of GlobalFoundries and 25 times the external revenue they have today, at absolutely no cost at all to Intel… IFS would still lose money. Let that sink in.

4) Intel took write-off charges in 2025 on 18A for what I call LCM: Lower of Cost or Market. This is where you cannot claim inventory as a WIP asset because its cost is higher than the value of the chip. It means margins are below zero, and/or Intel had to throw away a lot of 18A production.
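
The lower-of-cost-or-market mechanics behind that charge can be sketched in a few lines. The figures below are invented for illustration; they are not Intel's actual costs or the 10-K's numbers:

```python
# Hypothetical sketch of the lower-of-cost-or-market (LCM) inventory rule:
# work-in-progress is carried at cost unless its market value falls below
# cost, in which case the difference is written off. Figures are invented.

def lcm_carrying_value(cost: float, market_value: float) -> tuple:
    """Return (carrying_value, write_down) under the LCM rule."""
    carrying = min(cost, market_value)
    write_down = cost - carrying
    return carrying, write_down

# A wafer that cost $30k to process but whose dice will only fetch $18k:
carrying, write_down = lcm_carrying_value(cost=30_000, market_value=18_000)
print(carrying, write_down)   # 18000 12000 -- the $12k gap hits the P&L now

# If market value exceeds cost, inventory stays on the books at cost:
print(lcm_carrying_value(cost=10_000, market_value=15_000))
```

The point of the rule, and of the author's observation, is that the write-down is recognized immediately: an LCM charge on 18A is an admission that the chips were worth less than they cost to make at the time of the charge.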

5) Intel’s newest fabs, 34 and 52, require that partner companies get 50% of the profit from them. Through two very different contracts, Apollo and Brookfield get half the profits IF the fabs are successful, and receive payments from Intel if the fabs do not hit milestones.

Summary

These are all quite scary, and the stock was shaken. But in reality, these disclosures simplify the challenges in my mind.

Intel manufacturing does not seem to have a line of sight to breaking even. If 18A and 14A ramp, the expenses bring more headwinds and more losses. If they do not, the current margins stay bad for another 4-5 years. Panther Lake is widely recognized as a very good performance part… the first leadership product in 5+ years.

What if Intel followed AMD, Nvidia, Apple, Broadcom, Qualcomm, etc and focused on where it can be successful? How would the numbers be different?

Call us for more information and details

Mark Webb

www.mkwventures.com

Industry expert with 25+ years of experience in semiconductor and system engineering and manufacturing. Extensive experience and knowledge in NAND and SSD manufacturing, development, and system testing with industry leaders. Expertise in SSD/NAND/DRAM business models, cost/pricing models, competitive analysis, and supply chain. Leadership and management experience in contract manufacturing, ODM, OSAT, and foundry operations. Experience in device, product, and process integration engineering on logic, SoC, and all memory technologies.

Also Read:

Global 2nm Supply Crunch: TSMC Leads as Intel 18A, Samsung, and Rapidus Race to Compete

TSMC vs Intel Foundry vs Samsung Foundry 2026

Intel to Compete with Broadcom and Marvell in the Lucrative ASIC Business