
Configurable xSPI memory controller IP core is FuSa-ready

by Don Dingee on 05-13-2026 at 10:00 am

xSPI-MC block diagram

SPI, invented some four decades ago, is so successful as a low-pin-count interface for microcontrollers and processor cores that it spurred memory makers to incorporate both the physical signaling interface and advanced memory command protocols into serial flash and serial pseudo-SRAM (PSRAM) devices. Those protocols, however, fractured into manufacturer-specific versions. A few years ago, JEDEC expanded SPI into xSPI, aiming to gather the most popular proprietary over-SPI serial memory protocol variants into a single superset standard. Now that xSPI adoption is widespread across the industry, CAST’s xSPI-MC IP core is an attractive offering that blends a protocol engine and a serial memory controller in functional safety-ready configurations that are fully synthesizable and suitable for practically any serial flash or serial PSRAM application.

From four wires to plug & play auto-configuration

The power of SPI lies in its ability to communicate with a wide range of attached peripherals, including sensors, displays, control devices, and, of course, serial memory. A basic SPI implementation has four lines: a serial clock, a slave select, master-out-slave-in (MOSI, serial data output by a master), and master-in-slave-out (MISO, serial data output by a slave).
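To make the four-wire handshake concrete, here is a minimal Python sketch of one bit-banged SPI byte exchange (mode 0, MSB first). The function name and byte values are illustrative only; real drivers toggle GPIO pins, but the shift logic is the same.

```python
def spi_transfer_bit_banged(master_byte, slave_byte):
    """Simulate one full-duplex SPI byte exchange (mode 0, MSB first).

    On each clock cycle the master shifts a bit out on MOSI while the
    slave shifts a bit out on MISO, so both sides send and receive
    simultaneously over the four-wire interface.
    """
    miso_received = 0  # bits the master clocks in from MISO
    mosi_received = 0  # bits the slave clocks in from MOSI
    for bit in range(7, -1, -1):          # 8 clock cycles, MSB first
        mosi = (master_byte >> bit) & 1   # master drives MOSI
        miso = (slave_byte >> bit) & 1    # slave drives MISO
        # Rising clock edge: both sides sample their input line
        miso_received = (miso_received << 1) | miso
        mosi_received = (mosi_received << 1) | mosi
    return miso_received, mosi_received

# Master sends 0xA5 while the slave simultaneously returns 0x3C
print(spi_transfer_bit_banged(0xA5, 0x3C))  # (60, 165)
```

Note the full-duplex nature of the bus: a transfer always moves a byte in each direction, even when one side's data is a don't-care.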

Early implementations simply connected the pins and relied on software to manage transfers on a per-device basis. Eventually, to speed up software development, manufacturers created over-SPI protocols with device command sets that offered some consistency as long as designers were loyal to that manufacturer.

Without firm guidelines for details such as bit widths and clock rates (theoretically unlimited, though in practice signal integrity starts having an effect around 20 MHz), the over-SPI protocols diverged at the whim of manufacturers. While specifics differ, the general form of an over-SPI protocol is a command opcode followed by optional address, dummy, and data phases.

One command structure is standardized: obtaining the device ID. A master sends the 9Fh command to request a device ID, and the device responds with its unique JEDEC-registered ID. That implies a master can maintain a table of supported device IDs and their corresponding protocols, query the attached device, and initiate transfers using the correct commands and parameters.

xSPI defines just such an auto-configuration method for serial memory devices by tabulating support for various over-SPI protocols, selectable by device ID. The same scheme allows adding other over-SPI protocols to a controller, as long as the device IDs are unique.
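The lookup scheme can be sketched in a few lines of Python. The device IDs and protocol profiles below are purely illustrative (not real part numbers, and not CAST's actual table format); only the 9Fh Read Identification opcode is standard.

```python
# Sketch of xSPI-style auto-configuration: query the JEDEC device ID
# (command 9Fh) and look up the matching over-SPI protocol profile.
# IDs and profiles here are illustrative, not real part numbers.

RDID_COMMAND = 0x9F  # JEDEC Read Identification opcode

# Device-ID table, e.g. held off-core in ROM/OTP so it can be updated
PROTOCOL_TABLE = {
    (0xC2, 0x20, 0x18): {"protocol": "octal-xSPI", "dummy_cycles": 8},
    (0xEF, 0x40, 0x16): {"protocol": "quad-SPI",   "dummy_cycles": 4},
}

def auto_configure(read_id):
    """Query the device ID and return its protocol profile, if known."""
    device_id = read_id(RDID_COMMAND)   # (manufacturer, type, capacity)
    profile = PROTOCOL_TABLE.get(device_id)
    if profile is None:
        raise ValueError(f"unsupported device ID {device_id}")
    return profile

# Stand-in for a bus transaction against an attached flash device
fake_bus = lambda cmd: (0xC2, 0x20, 0x18) if cmd == RDID_COMMAND else None
print(auto_configure(fake_bus))  # {'protocol': 'octal-xSPI', 'dummy_cycles': 8}
```

Because the table is data rather than logic, adding support for a new device is an update to the table, not a redesign of the controller.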

A quick look at the CAST xSPI-MC IP core

It’s also possible to configure wider interfaces to speed up transfers while still keeping pin counts relatively low using the same scheme. CAST leverages this capability in the xSPI-MC IP core, supporting xSPI, HyperBus (a 12-pin interface originally designed by Cypress), and Xccela (a 12-pin interface created by Micron). Both HyperBus and Xccela fall into the octal SPI category, with eight data lines tuned for higher-frequency operation. The xSPI-MC SPI controller and PHY scale across single-, dual-, quad-, octal-, and 16x SPI buses.
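The payoff of wider buses is easy to quantify with back-of-the-envelope arithmetic. This sketch ignores command, address, and dummy-cycle overhead (which varies by protocol) and any double-data-rate signaling, so it shows only the raw data-phase scaling.

```python
# Rough clock-cycle count to move a payload across SPI lane widths.
# Illustrative only: overhead phases and DDR clocking are ignored.

def data_cycles(payload_bytes, lanes):
    """Clock cycles needed for the data phase at a given lane width."""
    bits = payload_bytes * 8
    return -(-bits // lanes)   # ceiling division

for lanes in (1, 2, 4, 8, 16):
    print(f"{lanes:2d} lanes: {data_cycles(256, lanes)} cycles for 256 bytes")
# 1 -> 2048, 2 -> 1024, 4 -> 512, 8 -> 256, 16 -> 128
```

Each doubling of lane count halves the data-phase time, which is why octal and 16x variants matter for XIP boot and framebuffer traffic.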

Run-time configurability enables users to detect a variety of devices. The configuration can be done with zero software overhead to initialize the controller and a memory device, or it can be modified in software. The device ID table is stored off-core, for instance, in ROM or OTP memory so that it can be updated easily. This configurability can be a project-saver – it’s not uncommon to get near the end of a design and discover, for supply chain or pricing reasons, the memory device has to switch to a different part.

The PHY is of particular interest – it’s a soft PHY specifically designed for SPI, implementing only the logic required for the interface with no excess, so it takes considerably less area. Unlike hard PHYs, which typically require qualification in a specific process node, CAST implemented the xSPI-MC soft PHY in fully synthesizable RTL, ready for any foundry, process node, or standard-cell library. This soft PHY also integrates into a standard ASIC or FPGA flow with no analog bring-up procedures, and it can drop directly into FPGA-based prototyping platforms for a smooth design workflow. These soft PHY attributes open up more implementation options, eliminate foundry dependencies, and make designs highly portable, reducing schedule risk.

Use cases include firmware, data, parameter, and log storage

xSPI-MC use cases span a range of application segments for serial flash and serial PSRAM:

  • For automotive applications, a popular use case, the xSPI-MC supports nine ISO 26262-ready core configurations spanning ASIL-B through ASIL-D. Enhancements such as spatial redundancy and CRC protection enable swifter functional safety (FuSa) certification. Engine control units with firmware storage and XIP boot, displays that store navigation, audio, or user interface data, and EV battery management systems with calibration tables and fault log storage are a few use-case examples.
  • In IoT and edge devices, sensor nodes with firmware and configuration parameters are a common use case. The onset of AI also requires storage of model weights, and wireless communication nodes can store upgradable protocols.
  • Industrial automation controllers often have firmware storage. Robotics platforms also need real-time parameter tables and event logging, while smart meters require non-volatile event and usage logging.
  • Consumer electronics often feature OTA (over-the-air) firmware updating. Small displays may use serial PSRAM for framebuffers and rendering. Earbuds and hearing aids present an ultra-small form factor, and the space saved by the soft PHY can make a difference. Gaming controllers and handheld consoles also need configuration and save-state storage.
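The CRC protection mentioned for functional safety can be modeled in software to show the idea. The polynomial and framing below are illustrative (the SAE J1850 CRC-8 is used as an example), not CAST's actual implementation; the point is that a corrupted transfer produces a mismatched check value.

```python
def crc8(data, poly=0x1D, init=0xFF):
    """Bit-serial CRC-8 (SAE J1850 polynomial shown as an example).

    Safety-oriented controllers append a check value like this to each
    transfer so corrupted data is detected end to end.
    """
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

payload = bytes([0xDE, 0xAD, 0xBE, 0xEF])
tag = crc8(payload)
# A single flipped bit in the payload changes the CRC, flagging the fault
assert crc8(bytes([0xDE, 0xAD, 0xBE, 0xEE])) != tag
```

In a FuSa configuration such a check runs in hardware alongside spatial redundancy, so faults are caught without software overhead.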

xSPI-MC IP core flexibility also includes Verilog parameters that enable feature selection, reset value definition, AHB bus configuration, and SPI controller configuration. More info on the xSPI-MC IP core and its applications is available on the CAST website:

xSPI-MC: xSPI, HyperBus™, & Xccela™ Serial Memory Controller


CEO Interview with Dr. Jekaterina Viktorova of Syenta

by Daniel Nenni on 05-13-2026 at 8:00 am


Dr. Jekaterina (Jeka) Viktorova is the CEO and Co-Founder of Syenta, an Australian deep-tech company developing breakthrough additive manufacturing technology for the semiconductor industry. With a background in chemistry, electrochemistry, and advanced manufacturing, she is the inventor of the core Syenta technology and leads its mission to solve the AI era’s memory bandwidth bottleneck by enabling high-resolution copper interconnects for next-generation advanced packaging.

Tell us about your company

Syenta is a next-generation semiconductor company focused on solving one of the most pressing challenges in AI infrastructure: how chips connect and communicate at scale. As systems move toward chiplet-based architectures, performance is increasingly limited not by compute, but by interconnect density and packaging constraints.

We’ve developed a proprietary manufacturing approach, Localized Electrochemical Manufacturing (LEM), that enables finer-pitch, high-density chip-to-chip connections using existing fabrication infrastructure. Our goal is to unlock higher performance and scalability for AI and high-performance computing systems while making advanced packaging more accessible and manufacturable at scale.

What problems are you solving?

The industry is hitting a fundamental bottleneck in advanced packaging. As AI systems scale, they rely on more chips working together, but current interconnect technologies can’t keep up with the bandwidth and density requirements.

At the same time, advanced packaging capacity is constrained and concentrated in a small number of facilities, making it difficult to scale production globally.

Syenta addresses both challenges. We enable higher-density interconnects to improve system performance, while also reducing manufacturing complexity by eliminating steps and leveraging existing infrastructure. This helps customers scale faster without needing entirely new fabrication approaches.

What application areas are your strongest?

Our strongest applications are in AI and high-performance computing, where system-level performance depends heavily on how efficiently chips are connected.

This includes hyperscale data centers, AI training and inference systems, and emerging architectures built around chiplets and heterogeneous integration. These environments require extremely high bandwidth, low latency, and scalable manufacturing approaches, all of which align directly with what our technology enables.

What keeps your customers up at night?

Customers are increasingly concerned about how to scale AI systems efficiently. They’re running into limits on interconnect density, which directly impacts bandwidth, performance, and power efficiency.

There’s also growing concern around supply chain constraints in advanced packaging. With capacity concentrated in a few regions and facilities, customers face risk when trying to scale production or bring new systems to market quickly.

Ultimately, they’re looking for solutions that improve performance without adding complexity or requiring entirely new manufacturing ecosystems.

What does the competitive landscape look like and how do you differentiate?

The advanced packaging ecosystem includes large foundries, OSATs, and equipment providers, all working to push the limits of interconnect density and system integration.

Where Syenta differentiates is in our approach. Instead of requiring new infrastructure or entirely new process flows, our LEM technology integrates into existing manufacturing environments.

We’re able to achieve micron-scale interconnects with fewer process steps, improving both performance and manufacturability. This combination of higher density and lower complexity is what sets us apart and makes our approach scalable in a way that many alternatives are not.

What new features or technology are you working on?

We’re focused on advancing LEM toward high-volume manufacturing and continuing to push the limits of interconnect density and performance.

At the same time, we’re working closely with partners across the semiconductor ecosystem to ensure our technology aligns with next-generation packaging roadmaps and evolving AI system requirements.

How do customers normally engage with your company?

Customers typically engage with us through early technical collaboration. We work closely with them to understand their system requirements and evaluate how our technology can be integrated into their packaging and manufacturing flows.

As we scale commercialization, we’re building deeper relationships with customers and partners globally, including through our expansion into the U.S., where we can collaborate more closely with leading semiconductor and AI companies.

Also Read:

CEO Interview with Matt Crowley of Scintil Photonics

CEO Interview with Dave Kelf, CEO of Breker Verification Systems

CEO Interview with Geoffrey Rodgers of Chameleon Semiconductor


Sensing. A Quantum Tech Ready for Market?

by Bernard Murphy on 05-13-2026 at 6:00 am

Quantum sensor analyzing a chip

While the quantum world revolves around quantum computing (QC), there are a couple of other quantum technologies of note. I covered one of these, quantum communication, in a recent blog. Here I’ll introduce the other, quantum sensing. The goal is to exploit the high sensitivity of an individual quantum state to external factors such as magnetic fields, sensing variations far too small for traditional sensors to detect. There are indications that at least some of these applications may be much closer to production-ready than QC.

Origins

Deep research methods aren’t particularly helpful here. Instead, I’ll reference the first implementation example in Building Quantum Computers, based on nuclear magnetic resonance (NMR). In or around 1997, attempts were made to build QCs using NMR to monitor precession frequencies in molecules such as chloroform in liquid suspension. These methods provided proof of concept but were completely unscalable. Nevertheless, NMR and similar techniques such as electron spin resonance are now proving effective in quantum sensing, as illustrated in this paper and this paper.

I certainly can’t claim an exhaustive review of quantum sensing techniques; however, the examples I have found cluster around nitrogen-vacancy centers in synthetic diamond. This name confused me at first. What it means is a nitrogen atom replacing a carbon atom in the diamond lattice, plus a vacancy in the lattice adjacent to the nitrogen atom. Valence electrons for a negatively charged state of this structure form a spin-triplet state in both the ground and excited primary states (sorry, bit of physics) with three possible magnetic quantum states: 0 and ±1. The 0 state is (energetically) lower than the ±1 states, and the ±1 states split further under the influence of a magnetic field, the difference being proportional to the field strength.

Detecting spin states is accomplished through fluorescence from excited states. The 0 state fluoresces brightly on relaxing back to the ground state; the ±1 states fluoresce less brightly since they often relax through an intermediate state. Sweeping the frequency of microwave pulses applied to the diamond changes the populations between the 0 and ±1 states, detectable as a dip in fluorescence at a frequency related to the magnetic field strength. This proves very useful in semiconductor defect detection.
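The relationship between dip frequency and field strength makes the readout arithmetic simple. The sketch below uses the published NV-center constants (zero-field splitting near 2.87 GHz, electron gyromagnetic ratio near 28 GHz per tesla); the function name and example dip frequencies are my own illustration.

```python
# Estimating magnetic field strength from the ODMR fluorescence dips.
# For an NV center the m_s = +1 and -1 dips move apart by ~28 GHz per
# tesla each, centered on the 2.87 GHz zero-field splitting.

GAMMA_NV_HZ_PER_T = 28.03e9   # NV electron gyromagnetic ratio, Hz/T
D_ZERO_FIELD_HZ = 2.87e9      # zero-field splitting between 0 and +/-1

def field_from_dips(f_minus_hz, f_plus_hz):
    """Infer the field along the NV axis from the two dip frequencies."""
    # The dips separate by 2*gamma*B, so B = (f_plus - f_minus) / (2*gamma)
    return (f_plus_hz - f_minus_hz) / (2 * GAMMA_NV_HZ_PER_T)

# Dips observed at 2.842 GHz and 2.898 GHz imply a field of about 1 mT
b_tesla = field_from_dips(2.842e9, 2.898e9)
print(round(b_tesla * 1e3, 3), "mT")  # ≈ 0.999 mT
```

Mapping dip positions across a surface is, in essence, what turns this spectroscopy into the magnetic-field imaging described next.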

An application for failure analysis

Quantum Diamonds has just installed a production system at Eurofins EAG Labs in Sunnyvale, aimed at supporting failure analysis for sub-5nm technology and 3D packaging. The system was first unveiled at Semicon Taiwan in 2025 and is especially valuable for non-destructive testing for semiconductor shorts, leakage, and opens in die and in 3DICs. The Quantum Diamonds Microscopy platform images the magnetic fields generated by current flow in a chip and can achieve remarkable localization resolution: <1 μm in x-y and down to 1 μm in depth, with a 3 mm x 3 mm field of view (which can be stitched to cover larger areas).

There are several interesting aspects to this technology. The usual constraints on QC don’t apply here: while additional defect centers may be used to amplify the light signal, there is no need for superposition or entanglement. This is a simple atomic physics application: pump electrons up to an excited state, then observe fluorescence as they relax back to the ground state. The technology also works at room temperature, with no need for special cooling. Diamond is especially stiff, minimizing the impact of thermal noise; it has a wide bandgap, minimizing electrical noise; and if the diamonds are grown using isotopically pure C12, there is no nuclear “noise” from C13. This room-temperature capability makes quantum sensing interesting in other areas, such as a hack-proof alternative to GPS based on detecting variations in the earth’s magnetic field.

You can read the Quantum Diamonds press release HERE and a more detailed paper for a different application HERE.

Also Read:

Panel Discussion: Beyond Moore’s Law and the Future of Semiconductor Manufacturing

RISC-V: From Niche Architecture to Strategic Foundation

Bringing mathematical rigour in the world of hardware – a journey into Formal Verification


Beyond Tool Interoperability: The Emerging Governed Convergence Problem in Semiconductor Design

by Admin on 05-12-2026 at 10:00 am


By Dr. Moh Kolbehdari

The semiconductor industry has spent decades optimizing tools. Today, however, the central challenge is no longer whether individual tools are powerful enough. The real question is whether increasingly specialized tools, domains, models, and organizations can still converge coherently into a manufacturable, reliable, high-performance system.

That distinction matters.

Recent discussions around interoperability frameworks such as Siemens’ Calibre Connectivity Interface (CCI) highlight an important and necessary evolution in the industry. By transforming LVS verification output into a structured, queryable data foundation for downstream tools, the industry is moving beyond isolated verification flows toward connected engineering ecosystems. This is a major step forward. But interoperability alone does not guarantee system-level convergence. That distinction is becoming increasingly important as semiconductor systems move from tool-limited complexity to convergence-limited complexity, as illustrated in Figure 1.

Figure 1.

At advanced nodes, particularly below 5nm and within heterogeneous integration architectures, fragmentation is no longer limited to tools alone. It now spans abstraction layers, physics domains, organizations, manufacturing flows, packaging ecosystems, and even decision authority itself. Modern system realization extends across architecture, silicon, package, interposer, PCB, thermal, mechanical, SI/PI, electromagnetic behavior, reliability, validation, and manufacturing integration. Each of these domains is represented by highly specialized tools that are often locally optimized but globally disconnected.

This creates a dangerous illusion: every domain can appear “correct” while the overall system still fails to converge.

The industry is beginning to encounter what can be described as a convergence scaling crisis.

As systems become more tightly coupled, the number of interactions between engineering domains grows nonlinearly. At some point, organizations encounter an “Entropy Wall,” where the complexity of cross-domain coordination grows faster than the organization’s ability to reason about it coherently. Historically, engineering scale was constrained by transistor count, lithography, frequency, or power density. Increasingly, the limiting factor is becoming convergence capacity itself. In other words, the industry may not be running out of compute capability nearly as quickly as it is running out of governed decision-making capability.
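The nonlinear growth claim can be made concrete with a simple counting argument: with n coupled engineering domains, the number of potential pairwise interactions grows as n(n-1)/2, outpacing any linear growth in coordination capacity. The domain counts below are illustrative.

```python
# Pairwise coupling count between n engineering domains: n*(n-1)/2.
# Doubling the domains roughly quadruples the interactions to manage.

def pairwise_interactions(n_domains):
    return n_domains * (n_domains - 1) // 2

for n in (4, 8, 12, 16):
    print(n, "domains ->", pairwise_interactions(n), "pairwise couplings")
# 4 -> 6, 8 -> 28, 12 -> 66, 16 -> 120
```

And pairwise coupling is the optimistic case; three-way interactions (e.g. thermal-mechanical-electrical) grow faster still, which is the intuition behind the Entropy Wall.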

Many organizations assume that AI will naturally resolve this fragmentation. In practice, however, uncontrolled AI risks amplifying fragmentation rather than reducing it. Most engineering environments already operate with conflicting models, inconsistent assumptions, disconnected evidence chains, isolated optimization loops, tool-specific abstractions, and incomplete causality tracking. AI systems trained across such fragmented ecosystems can accelerate local optimization while simultaneously increasing global instability.

This highlights a critical distinction: intelligence is not convergence. Prediction is not governance. Automation is not authority.

The next architectural transition therefore extends beyond interoperability toward what can be described as governed convergence. Interoperability ensures that tools can exchange information, but governed convergence ensures that fragmented engineering evidence can be normalized, causally bounded, and used to drive deterministic system-level decisions.

This distinction becomes especially important in chiplet ecosystems, advanced packaging, 2.5D and 3D integration, AI accelerator platforms, high-speed interconnect architectures, and multi-vendor manufacturing environments. In these systems, the challenge is no longer simply moving data between tools. The challenge is orchestrating convergence across the entire engineering stack.

Historically, packaging has been treated primarily as a physical integration layer. That assumption is rapidly changing. At multi-terabit bandwidths and in highly coupled heterogeneous systems, packaging increasingly functions as an active control plane through which power integrity, thermal behavior, signal integrity, manufacturability, and system stability must converge simultaneously. The package is no longer merely carrying the system; it is increasingly governing the system.

One of the least discussed realities in advanced semiconductor development is that governed convergence capacity itself is becoming scarce. The industry already possesses exceptional domain expertise, powerful simulation environments, advanced AI techniques, and enormous compute capability. What remains limited is the ability to coordinate these domains coherently, preserve authoritative evidence, bound uncertainty, and make deterministic cross-domain decisions at scale.

This is fundamentally an orchestration challenge.

Not merely a compute challenge. Not merely a tool challenge. And not merely an AI challenge.

The work being done around interoperable verification ecosystems is therefore extremely important because it establishes the foundation for this next stage. The ability to create trusted, queryable, authoritative engineering evidence layers is a prerequisite for any future convergence architecture.

However, as systems continue to scale in complexity, the industry may increasingly require governance architectures, orchestration frameworks, evidence-driven gate systems, and bounded AI-assisted convergence engines capable of operating across fragmented engineering ecosystems without losing determinism, traceability, or accountability.

The semiconductor industry has solved scaling through abstraction many times before.

The next scaling problem may be convergence itself.

By Dr. Moh Kolbehdari

Dr. Moh Kolbehdari is a Senior Lead Architect at Socionext, where he specializes in the industrialization of high-performance AI chiplets and 1.8-Tb/s interconnects. With over two decades of experience in SI/PI, electromagnetic field theory, and system-level architecture, he has been a pivotal force in bridging the gap between cutting-edge silicon design and high-volume manufacturing (HVM).

Dr. Moh is the creator of the SEGA™ (Systematic Engineering Governance Architecture) framework, a methodology designed to solve the “Crisis of Complexity” in heterogeneous integration. His work focuses on transforming the package into an Active Control Plane, utilizing field-confined EM Corridors and state-aware causality to ensure deterministic yield at 2nm and beyond. He is a frequent contributor to industry-standard committees and is recognized for his “Physics-First” approach to solving the semiconductor industry’s most challenging entropy walls.

Also Read:

Solving the EDA tool fragmentation crisis

Carbon in the Age of AI Chips: What the Semiconductor Industry Needs to Know This Earth Day

Speculation: Silicon’s Most Expensive Compulsion


#DAC2026 Marks Another Pivotal Moment for the Semiconductor Industry

by Daniel Nenni on 05-12-2026 at 8:00 am


The 2026 Design Automation Conference (DAC 2026) marks another pivotal moment for the semiconductor and electronic systems industry as artificial intelligence, chiplets, heterogeneous integration, and system-level optimization redefine the future of design automation. Held July 26–29, 2026, at the Long Beach Convention Center in California, DAC continues its evolution from a traditional Electronic Design Automation (EDA) conference into a broader “Chips to Systems” platform that connects silicon, software, packaging, AI infrastructure, and system architecture.

For more than six decades, DAC has been the premier venue for presenting advances in semiconductor design methodologies, verification technologies, physical implementation, IP integration, and system design. In 2026, however, the conference reflects a profound transformation occurring across the semiconductor ecosystem. AI is no longer simply an application workload driving demand for compute; it is now fundamentally reshaping the design process itself. DAC 2026 highlights this transition with more than half of the conference program focused on AI-related design topics and a significant increase in engineering and research submissions.

One of the central themes at DAC 2026 is AI-driven EDA. Machine learning and generative AI technologies are rapidly being incorporated into synthesis, floor-planning, verification, timing closure, and design-space exploration. Researchers and commercial EDA vendors are increasingly using large language models (LLMs), graph neural networks (GNNs), and reinforcement learning algorithms to optimize chip development flows and reduce time-to-market. Recent academic research discussed alongside DAC emphasizes autonomous or “agentic” chip design, where AI systems can iteratively refine architectures and physical implementations with minimal human intervention.

Another major focus area is chiplet-based design and heterogeneous integration. As Moore’s Law scaling becomes more economically challenging, semiconductor companies are moving toward modular architectures that combine multiple dies in advanced packages. DAC 2026 features extensive discussion around multi-die systems, 2.5D and 3D integration, die-to-die interconnect standards, thermal management, and co-optimization across packaging and silicon. The conference reflects the growing importance of system technology co-optimization (STCO), where performance, power, cost, and manufacturability are optimized simultaneously at the package and system level rather than purely at the transistor level.

Verification and security also remain critical topics at DAC 2026. Modern AI accelerators, automotive processors, and cloud-scale SoCs have become so complex that verification consumes the majority of design cycle resources. AI-assisted verification techniques are gaining traction, particularly in automated test generation, bug localization, assertion creation, and hardware/software co-validation. At the same time, hardware security concerns—including supply chain trust, side-channel attacks, and secure chiplet integration—are driving new research into resilient architectures and trusted design methodologies.

DAC 2026 also demonstrates the expanding intersection between semiconductor design and system-level applications. Sessions increasingly cover automotive electronics, data center infrastructure, robotics, aerospace, edge AI, and cloud-native design methodologies. This broadening scope reflects the reality that semiconductor innovation is now tightly coupled with software stacks, AI frameworks, networking architectures, and energy-efficient computing platforms. The conference’s “Chips to Systems” branding underscores this industry shift toward holistic platform engineering.

The exhibition floor at DAC 2026 showcases the commercial dimension of these technology trends. More than 120 companies are participating, including established EDA leaders, semiconductor manufacturers, cloud infrastructure providers, AI startups, and emerging design automation firms. The strong participation of AI-focused companies highlights how the EDA landscape is evolving beyond traditional CAD tools into intelligent design ecosystems powered by data analytics and automation.

Keynotes at DAC 2026 further reinforce the industry’s strategic direction. Speakers from quantum computing, advanced wireless systems, and AI infrastructure sectors emphasize the convergence of compute architectures, software intelligence, and semiconductor innovation. Topics such as AI-native infrastructure, quantum system design, and energy-efficient acceleration illustrate how future semiconductor competitiveness will depend on interdisciplinary optimization across the entire technology stack.

Bottom line: DAC 2026 reflects a semiconductor industry at a major inflection point. Traditional EDA methodologies are evolving into AI-augmented and increasingly autonomous design environments, while system complexity is driving unprecedented collaboration between chip designers, software developers, packaging engineers, and cloud architects. As the semiconductor ecosystem moves toward intelligent, heterogeneous, and system-centric computing platforms, DAC remains the industry’s most influential forum for defining the next generation of electronic design innovation.

REGISTER HERE



IPLM: Future Forward Webinar May 19th

by Daniel Nenni on 05-12-2026 at 6:00 am


Step into the future of semiconductor design management with IPLM: Future Forward, a product-led webinar showcasing the latest developments in Perforce IPLM. This focused session is designed to show how modern teams can tackle growing design complexity while still accelerating innovation.

Hosted by IPLM Senior Product Manager Hassan Shah and Senior Product Owner Rien Gahlsdorf, this webinar delivers a firsthand look at the latest enhancements, workflow improvements, and what’s coming next for IP lifecycle management.

Whether you’re managing IP, leading engineering teams, or driving product development strategy, you’ll gain practical insights into how to improve efficiency, performance, security, and traceability across your design environments.

What You’ll Learn

In this session, the Perforce product team will explore:

  • How new IPLM capabilities improve workflow efficiency
    Including enhancements like server-side conflict resolution and UI updates designed to reduce friction in daily design tasks.
  • Performance gains from a modernized tech stack
    Learn how updates like the move to Neo4j 5 help reduce latency and improve responsiveness at scale—especially for large, complex datasets.
  • Stronger end-to-end traceability across the design lifecycle
    See how new features give teams clearer visibility into IP status, usage, and history for reduced risk and greater design integrity.
  • A preview of what’s next for Perforce IPLM
    Get an early look at upcoming innovations shaping the future of IP lifecycle management, including MCP server integration.

Who Should Attend

  • Perforce IPLM users
  • Those curious about the Perforce IP management platform
  • Semiconductor design and engineering teams
  • IP management and CAD leaders
  • Product development and operations leaders
  • Anyone responsible for scaling design workflows and reducing complexity

A Practical Look at the Future

The Future Forward webinar on May 19th is ultimately about execution over theory. You’ll see how real product improvements come together via clear discussion and step-by-step demonstration. Attendees looking to modernize their IP lifecycle strategy will come away with fresh perspective and a clear path forward using the Perforce suite of tools.

Register Now

Also Read:

Perforce and Siemens Collaborate on 3DIC Design at the Chiplet Summit

2026 Outlook with Kamal Khan of Perforce

What’s New with IP Lifecycle Management (IPLM)


From Point Solutions to Agentic AI Ecosystems: Semiconductor Process Control Depends on Its Past

by Kalar Rajendiran on 05-11-2026 at 10:00 am

Agentic AI Natural Language Interaction with Analytics

Agentic AI is often presented as a revolutionary shift in semiconductor manufacturing, driven by large language models and generative AI. However, this framing overlooks an important reality: today’s advances are built on decades of prior work. As Jonathan Holt of PDF Solutions emphasizes in his recent keynote at the APCM 2026 conference, the capabilities associated with agentic AI are the result of 30 to 40 years of development in process control, data infrastructure, and system integration. Rather than a clean break, agentic AI represents the next stage in a long technological evolution.

A 40-Year Evolution: From Tool Control to Intelligent Systems

The journey began with early computer-integrated manufacturing (CIM) efforts in the 1980s, where tools were first digitized for monitoring and control. Over time, this evolved into widespread adoption of statistical process control, advanced process control, and run-to-run systems. By the 2010s, machine learning models were being used to predict yield, detect anomalies, and optimize performance.

Despite these advances, most systems remained isolated, designed to solve specific problems without broader coordination. This fragmentation limited the full potential of the data and intelligence being generated.
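
The run-to-run systems mentioned above can be sketched as a simple EWMA controller: after each run, an exponentially weighted moving average tracks the tool's drifting disturbance, and the next recipe is adjusted to compensate. This is a minimal illustrative sketch, not any vendor's implementation; the gain, target, and drift numbers are invented.

```python
# Minimal run-to-run (R2R) EWMA controller sketch. All numbers are
# illustrative. Model: measured output = gain * recipe + disturbance.
def ewma_update(disturbance_est, measured, recipe, gain, lam=0.3):
    """EWMA estimate of the process disturbance after one run."""
    return lam * (measured - gain * recipe) + (1 - lam) * disturbance_est

def next_recipe(target, disturbance_est, gain):
    """Recipe that hits the target given the current disturbance estimate."""
    return (target - disturbance_est) / gain

# Simulate a tool whose output drifts upward by 0.5 units per run
target, gain = 100.0, 2.0
recipe, est, drift = 50.0, 0.0, 0.0
for run in range(20):
    drift += 0.5                      # slow tool drift
    measured = gain * recipe + drift  # what metrology reports
    est = ewma_update(est, measured, recipe, gain)
    recipe = next_recipe(target, est, gain)

# The controller largely cancels the drift (a small lag remains on a ramp)
assert abs(gain * recipe + drift - target) < 2.0
```

Without the controller the output would be ten units off target after twenty runs; with it, the residual error is just the EWMA's lag on a ramp disturbance.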

The Limitation of “Point Solutions”

Traditional semiconductor systems have been highly specialized, addressing tasks such as fault detection, virtual metrology, and scheduling. While effective, these solutions were often siloed and connected through rigid, hard-coded integrations. This made systems difficult to adapt as processes evolved.

Agentic AI changes this paradigm by enabling systems to interact dynamically rather than through fixed pipelines, allowing for greater flexibility and scalability.

What Actually Makes Agentic AI Different

The key innovation of agentic AI is coordination. Instead of isolated tools, systems are structured as agents that can communicate, share context, and collaborate toward shared goals. These agents can break down complex problems, execute tasks, and adapt based on feedback.

This transforms AI from a passive analytical tool into an active participant in manufacturing workflows, capable of making real-time, goal-driven decisions.

The Role of LLMs and MCP: Accelerating the Transition

Recent advances in large language models (LLMs) and communication protocols such as Model Context Protocol (MCP) have accelerated this shift. They enable natural language interaction and standardized communication between systems, significantly reducing the effort required to build and connect workflows.

However, these technologies are accelerators, not foundations. Their effectiveness depends on the underlying infrastructure developed over decades.
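
To make the "standardized communication" point concrete: MCP transports messages as JSON-RPC 2.0, so an agent invoking a fab-analytics tool reduces to a small, uniform envelope. The tool name and arguments below are hypothetical; only the envelope shape follows the protocol.

```python
import json

# Sketch of an MCP-style tool invocation (JSON-RPC 2.0 envelope).
# "query_yield_history" and its arguments are hypothetical examples;
# MCP standardizes the envelope, not the tools themselves.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_yield_history",  # hypothetical fab-analytics tool
        "arguments": {"lot": "LOT-1234", "window_days": 30},
    },
}

wire = json.dumps(request)      # what actually crosses the wire
decoded = json.loads(wire)
assert decoded["method"] == "tools/call"
assert decoded["params"]["name"] == "query_yield_history"
```

Because every tool is reached through the same envelope, connecting a new analytics system to an agent becomes a registration step rather than a custom integration project.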

The Hidden Backbone: Infrastructure Built Over Decades

Agentic AI relies on a mature foundation that includes extensive sensor networks, standardized communication protocols, and robust data platforms. Digital twins, knowledge systems, and enterprise integration layers provide the context and structure needed for meaningful coordination.

This infrastructure is what makes agentic AI practical today, allowing systems to operate across tools, processes, and even geographically distributed factories.

Agentic AI in Practice: What’s Real Today

While the vision of fully autonomous manufacturing is compelling, current implementations are still semi-autonomous. The industry is roughly 70 to 80 percent of the way toward full autonomy, with human oversight still playing a critical role.

Today’s applications include automated model development, adaptive testing, and cross-stage decision-making. Systems can analyze data, recommend actions, and refine their behavior over time, but humans remain essential for validation and governance.

The Trade-Off: Intelligence vs. Trust

A key limitation of agentic AI is data sharing. While broader access to data would improve model performance, concerns around intellectual property restrict how information is exchanged. As a result, systems typically share only the necessary outputs rather than raw data.

This approach preserves security but limits the full potential of collaborative intelligence, highlighting a fundamental trade-off between capability and trust.

From Programming to Autonomy: A Useful Analogy

The evolution toward agentic AI mirrors the progression of software development: from low-level programming to object-oriented systems and now to autonomous, self-organizing workflows. In this new paradigm, AI not only uses predefined components but also manages and optimizes them dynamically.

The 8-Layer Stack of Agentic Manufacturing

Agentic manufacturing can be understood as a layered architecture, starting with physical equipment and sensors, followed by control systems, integration layers, data platforms, and multi-agent coordination. At the highest level lies the goal of fully autonomous orchestration.

Today, most organizations operate in the middle to upper layers of this stack, with full autonomy still a future objective.

The Real Bottleneck: Not Technology, But Integration

The primary challenges in adopting agentic AI are not related to algorithms but to integration, standardization, and organizational readiness. Aligning systems, managing data, and ensuring reliability across complex environments remain significant hurdles.

Strategic Implication: The Platform Is the Advantage

The effectiveness of agentic AI depends heavily on the strength of the underlying platform. Organizations with well-developed data infrastructure and integration capabilities are better positioned to scale and realize value. Without this foundation, even advanced AI systems risk remaining isolated.

Summary

Agentic AI is best understood as the culmination of decades of progress rather than a sudden breakthrough. Its value lies in connecting and enhancing existing systems, enabling a more adaptive and collaborative manufacturing environment.

The key question going forward is not what AI can do, but how effectively it can be integrated into the complex ecosystems that define modern semiconductor manufacturing.

To learn more:

“The Evolution of AI in Process Control: From Basic SPC to Agentic AI Systems”

“Evolution of Agentic AI for Process Control”

Also Read:

Two Paths for AI in Semiconductor Manufacturing: Platform Integration vs. Point Solutions

WEBINAR: Beyond Moore’s Law and The Future of Semiconductor Manufacturing Intelligence

Operationalizing Secure Semiconductor Collaboration: Safely, Globally, and at Scale


Panel Discussion: Beyond Moore’s Law and the Future of Semiconductor Manufacturing

Panel Discussion: Beyond Moore’s Law and the Future of Semiconductor Manufacturing
by Daniel Nenni on 05-11-2026 at 6:00 am

Beyond Moore's Law Future of Manufacturing

The semiconductor industry is entering a post-Moore’s Law era in which scaling transistor density alone is no longer sufficient to sustain historical performance growth. As discussed in the panel Beyond Moore’s Law: The Future of Semiconductor Manufacturing, the industry is increasingly dependent on advanced manufacturing intelligence, heterogeneous integration, AI-driven optimization, and data-centric infrastructure to continue innovation. Rather than relying solely on lithographic shrink, semiconductor progress is now driven by system-level efficiency, packaging technologies, and intelligent manufacturing ecosystems.

Panelists:
  • Dr. Jim Shiely, Director, R&D Calibre Semi Manufacturing, Siemens EDA
  • Dr. Larry Melvin, Sr. Director, Technical Product Management, Synopsys
  • Dr. Christophe Begue, VP, Corporate Strategic Marketing, PDF Solutions
  • Dr. Janhavi Giri, EDA & Semiconductor Industry Vertical Lead, NetApp
  • Daniel Nenni, Founder of SemiWiki.com  (Moderator)

One of the primary technical challenges highlighted is the physical limitation of traditional CMOS scaling. As transistor geometries approach atomic dimensions, issues such as leakage current, thermal dissipation, quantum tunneling, and variability become increasingly difficult to control. Advanced nodes below 3 nm require extreme ultraviolet (EUV) lithography, multi-patterning, and highly precise process controls, significantly increasing manufacturing complexity and cost. Consequently, fabs generate enormous volumes of operational data from process equipment, metrology systems, defect inspection tools, and yield management platforms.

This data explosion has created an opportunity for artificial intelligence and machine learning to become foundational technologies within semiconductor manufacturing. Modern fabs operate thousands of process steps with highly interdependent variables, making traditional rule-based optimization insufficient. Machine learning models can analyze terabytes of sensor data in real time to identify hidden process correlations, predict equipment failures, and optimize wafer yields. Predictive maintenance algorithms, for example, use telemetry data from deposition, etching, and lithography equipment to forecast tool degradation before catastrophic failures occur. This minimizes downtime and improves overall equipment effectiveness (OEE).
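
The predictive-maintenance idea described above can be illustrated with a deliberately crude stand-in: flag a tool when a sensor reading deviates sharply from its trailing window. Production systems use far richer models; this sketch, with invented telemetry, just shows the shape of the signal processing.

```python
# Illustrative telemetry drift detector (a much-simplified stand-in for
# production predictive-maintenance models). Flags readings whose rolling
# z-score against a trailing window exceeds a threshold.
from collections import deque
from statistics import mean, stdev

def drift_alarm(readings, window=20, threshold=3.0):
    """Return indices where a reading deviates more than `threshold` sigma
    from the trailing window -- a crude early warning of tool degradation."""
    history = deque(maxlen=window)
    alarms = []
    for i, x in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(x - mu) / sigma > threshold:
                alarms.append(i)
        history.append(x)
    return alarms

# Stable chamber-pressure telemetry, then a step change mimicking a failing seal
telemetry = [100.0 + 0.1 * ((i * 7) % 5 - 2) for i in range(40)] + [103.0] * 5
assert drift_alarm(telemetry)  # the step at index 40 trips the alarm
```

The real gain comes from acting on such alarms before catastrophic failure, which is exactly the OEE improvement the paragraph above describes.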

Another critical area discussed is design-technology co-optimization (DTCO). Historically, chip design and manufacturing were treated as relatively independent domains. However, advanced nodes now require deep integration between EDA workflows, process technologies, and packaging architectures. AI-assisted EDA tools are increasingly used to accelerate place-and-route optimization, power integrity analysis, and timing closure. Generative AI models are also being explored for automated verification, layout synthesis, and defect prediction. These approaches significantly reduce engineering iteration cycles while improving design quality and manufacturability.

The panel also emphasized the growing importance of advanced packaging technologies as a continuation of Moore’s Law. Chiplet-based architectures, 2.5D integration, and 3D stacking allow manufacturers to improve system performance without relying entirely on transistor shrink. High-bandwidth interconnects, through-silicon vias (TSVs), and heterogeneous integration enable CPUs, GPUs, memory, and accelerators to be combined into tightly integrated systems. This architecture is especially important for AI workloads, where memory bandwidth and interconnect efficiency are often more critical than raw transistor density.

Data infrastructure was identified as another strategic requirement for future semiconductor manufacturing. AI-driven fabs depend on scalable, low-latency storage systems capable of supporting high-throughput analytics pipelines. Semiconductor workflows involve distributed simulation environments, petabyte-scale datasets, and collaborative global engineering teams. Cloud-integrated storage architectures and high-performance parallel file systems are increasingly necessary to support EDA simulations, digital twins, and manufacturing analytics. Digital twins, in particular, allow fabs to model process behavior virtually, enabling rapid experimentation and optimization before physical implementation.

Cybersecurity and supply chain resilience were also addressed as emerging priorities. Semiconductor manufacturing is highly globalized, with dependencies across materials suppliers, foundries, packaging vendors, and equipment manufacturers. AI-enabled visibility into supply chain operations can help predict disruptions, optimize inventory, and improve production planning. At the same time, protecting intellectual property and manufacturing data has become critical as fabs adopt cloud-connected infrastructures.

Bottom line: The future of semiconductor manufacturing intelligence lies in the convergence of AI, data engineering, advanced process technologies, and heterogeneous system integration. The industry is transitioning from pure transistor scaling toward intelligent optimization across the entire semiconductor lifecycle. Success in this next era will depend not only on physics and materials science but also on the ability to extract actionable insights from massive manufacturing datasets. Semiconductor leaders that effectively combine AI-driven analytics with scalable infrastructure and advanced packaging technologies will define the next generation of computing innovation.

See the Replay Here.

Also Read:

GTC 2026: Agentic AI for Semiconductor Design and Manufacturing

Agentic EDA Panel Review Suggests Promise and Near-Term Guidance

Cloud-Accelerated EDA Development


CEO Interview with Matt Crowley of Scintil Photonics

CEO Interview with Matt Crowley of Scintil Photonics
by Daniel Nenni on 05-10-2026 at 2:00 pm

Matt Crowley headshot

Matt Crowley is Chief Executive Officer of Scintil Photonics. A physicist by training, Matt built his career transitioning advanced semiconductor technologies from development to volume manufacturing. Before Scintil, he led MEMS technology company Vesper Technologies to high-volume production and through its acquisition by Qualcomm. He holds a degree from Princeton University.

Tell us about your company?

Scintil is a photonic integration company. We build dense wavelength division multiplexing (DWDM) light engines using a heterogeneous integration process we call SHIP™, for Scintil Heterogeneous Integrated Photonics. SHIP runs in production on Tower Semiconductor’s silicon photonics line. Sylvie Menezo, our founder and CTO, originated the technology out of CEA-Leti in Grenoble, where we are headquartered, with operations now expanding into the US.

The shorter version: photonics is going through the transition that semiconductors made 40 years ago, from discrete components on boards to integrated circuits on wafers. SHIP is how we participate in that transition. By bonding III-V gain material directly onto a silicon photonic wafer in a foundry process, we put lasers, modulators, detectors, and passive photonics on the same die, in the same flow, on the same lines that already produce tens of millions of optical transceivers a year.

LEAF Light™ is our first commercial product on SHIP, a single-chip DWDM laser source for AI scale-up networks. Our $58M Series B last year included NVIDIA as a participant.

What problems are you solving?

The hardest problem in AI infrastructure right now is not compute. It is the network between compute elements. As accelerator clusters scale into thousands of processors, the network determines how much of the compute is actually usable, and the metrics that decide it are energy per bit, latency, and bandwidth at the edge of the package.

Copper is at the wall on all three. Single-wavelength co-packaged optics (CPO) is in production now and works well for scale-out. The next step, where the system gains compound, is DWDM CPO for scale-up. That step needs a multi-wavelength light source that can be manufactured on existing silicon photonics flows, at hyperscale volumes, with the wavelength precision and reliability the architecture demands. That has been the missing piece. Building it is what we do.

What application areas are your strongest?

AI scale-up networks. The economics and the technical requirements both point to DWDM CPO as the destination architecture, and that is where our platform fits most directly. LEAF Light targets the DWDM laser source, which is the highest-value, hardest-to-manufacture element of that architecture.

The same process supports a broader set of integrated photonic devices, including transceivers with integrated lasers, optical circuit switches with semiconductor optical amplifiers, and high-speed modulator arrays. These are not separate process developments. They are products that share a single foundry-resident process flow. We are starting with the application that has the most immediate market pull.

What keeps your customers up at night?

Two things, in this order. First, manufacturing capacity at the volumes hyperscale build-outs already require. AI infrastructure capex has been committed at a scale that assumes the optical layer will be there, in the bandwidth, density, power, and reliability budgets the architecture needs, when the buildings switch on. The architectural debate is largely settled. The execution debate is wide open, and it lives at the foundry.

Second, headroom. A first-generation specification is a starting line, not a destination. Customers planning multi-rack scale-up clusters need to see a path where bandwidth per fiber scales without re-doing the fiber plant, the package, or the per-channel electronics every two years. They want to know that the supplier they choose for the first step has a process flow that supports the second and third steps on the same lines. That second question is the harder one for most of the industry to answer.

What does the competitive landscape look like and how do you differentiate?

Most of the energy in the sector right now is around three architectural choices for the DWDM light source. Discrete distributed feedback laser arrays externally combined into a fiber. Mode-locked lasers producing a wavelength comb in a single cavity. And heterogeneous integration of III-V gain material onto a silicon photonic wafer.

Each works in the lab. They separate on the manufacturing curve. With discrete arrays or external combiners, every additional wavelength is another assembly step, another alignment, another bill-of-materials line. With heterogeneous integration at the wafer level, the next wavelength is another circuit element. The cost curve follows a semiconductor learning curve, not a linear assembly curve.
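
The cost-curve contrast can be made concrete with a back-of-envelope sketch. All numbers here are illustrative assumptions, not Scintil or competitor figures: discrete assembly adds a roughly fixed cost per wavelength, while wafer-level integration follows Wright's law, where unit cost falls by a fixed fraction every time cumulative volume doubles.

```python
# Back-of-envelope cost comparison (all numbers are illustrative
# assumptions, not vendor figures).
from math import log2

def assembly_cost(wavelengths, cost_per_step=5.0, base=10.0):
    """Linear: every extra wavelength is another alignment/assembly step."""
    return base + cost_per_step * wavelengths

def learning_curve_cost(cumulative_units, first_unit_cost=200.0, learning_rate=0.2):
    """Wright's law: cost(n) = c1 * n ** log2(1 - learning_rate)."""
    return first_unit_cost * cumulative_units ** log2(1 - learning_rate)

# Doubling the wavelength count doubles the marginal assembly cost...
assert assembly_cost(16) - assembly_cost(8) == 2 * (assembly_cost(8) - assembly_cost(4))
# ...while each doubling of cumulative volume cuts integrated unit cost ~20%
assert abs(learning_curve_cost(2_000) / learning_curve_cost(1_000) - 0.8) < 1e-9
```

The strategic point is the exponent: assembly cost grows with channel count per device, while wafer-level cost declines with cumulative volume across all devices.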

SHIP is a backside-on-box flow built on Tower’s PH18M silicon photonics process. We start with standard PH18M, perform a handle exchange to expose the buried oxide as a bonding surface, bond III-V die, and process them into integrated photonic devices in alignment with the silicon waveguides beneath. The flow is foundry-resident, with a process design kit, design rules, and the manufacturability discipline a real product line requires. Our differentiation lives there.

What new features/technology are you working on?

Last month at OFC we launched the LEAF Light Evaluation Kit. It is the first DWDM laser source EVK to move from internal validation into a customer-facing program. Each unit hosts laser optical sub-assemblies in 8-wavelength or 16-wavelength configurations and provides a defined integration path into the ELSFP module form factor the industry is converging on.

A piece of the platform we are pushing hard on is what we call WaveGuard™, on-chip frequency monitoring and trimming that holds DWDM channel spacing within tight tolerances across temperature, ageing, and package stress. Wavelength precision is one of the things that has historically held DWDM back in production, and intelligent on-die control is how we solve it.
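
The control idea behind on-die trimming can be sketched as a feedback loop: a monitor measures a channel's frequency error against its grid target, and a proportional controller adjusts a thermal tuner to cancel it. WaveGuard's actual mechanism is not public; the loop below, with invented gains and frequencies, only illustrates the principle.

```python
# Hypothetical sketch of closed-loop wavelength trimming. A heater tunes a
# laser channel; a proportional loop drives the monitored frequency error
# toward its grid target. All gains and values are invented for illustration.
def trim_channel(target_ghz, measured_ghz, heater_mw, gain_ghz_per_mw=2.0, kp=0.4):
    """One proportional-control step: return the updated heater power."""
    error = target_ghz - measured_ghz
    return heater_mw + kp * error / gain_ghz_per_mw

# Simulate a channel knocked 12 GHz off-grid by temperature drift
target, freq, heater = 193_100.0, 193_088.0, 10.0
for _ in range(30):
    new_heater = trim_channel(target, freq, heater)
    freq += (new_heater - heater) * 2.0   # plant response: 2 GHz per mW
    heater = new_heater

assert abs(freq - target) < 0.1  # locked back onto the grid
```

In practice such loops must also reject ageing and package-stress disturbances, which is why the monitoring half of the problem matters as much as the actuation.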

Beyond LEAF Light, the same SHIP flow supports integrated transceivers, optical circuit switches with semiconductor optical amplifiers, and other building blocks that hyperscalers will want next. We are walking that roadmap in step with customer evaluations rather than ahead of them.

How do customers normally engage with your company?

Through our website at www.scintil-photonics.com, the contact form, or directly at contact@scintil-photonics.com. Our LEAF Light EVK is available now to qualified customers through an early access program. Teams that want to bring real constraints to a technical discussion are exactly the conversations we are looking for.

Also Read:

CEO Interview with Dave Kelf, CEO of Breker Verification Systems

CEO Interview with Geoffrey Rodgers of Chameleon Semiconductor

CEO Interview with Xianxin Guo of Lumai


Podcast EP345: The Impact of the New proteanTecs PVT Plus Sensors with Nir Sever

Podcast EP345: The Impact of the New proteanTecs PVT Plus Sensors with Nir Sever
by Daniel Nenni on 05-08-2026 at 10:00 am

Daniel is joined by Nir Sever, Senior Director of Business Development at proteanTecs. Nir has over 30 years of experience in advanced VLSI engineering. Before joining proteanTecs, he served for 10 years as the COO of Tehuti Networks, a pioneer in high-speed networking semiconductors. Prior to that, he served for 9 years as Senior Director of VLSI Design and Technologies for Zoran Corporation.

Dan discusses proteanTecs’ unique business model and technology portfolio with Nir. They also explore in some detail the new PVT Plus sensors from proteanTecs. Nir describes the enhancements delivered with PVT Plus, which include more sensor measurement capabilities, easier integration with the proteanTecs portfolio to create a complete hardware/software solution for in-chip monitoring, and newly designed hardware for advanced processes. Nir then explains how a complete hardware/software in-chip monitoring capability can be built and describes some significant customer benefits that have been achieved.

Contact proteanTecs

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.