UX in Agentic Systems. Innovation in Verification
by Bernard Murphy on 04-28-2026 at 6:00 am


A switch this month to the principles behind building effective agentic systems, going beyond simply a new way to stitch together tools, agents, and orchestration to a deeper consideration of user experience and how we most effectively blend agentic automation with the human in the loop. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and lecturer at Stanford, EE292A) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Magentic-UI: Towards Human-in-the-loop Agentic Systems. The authors are from Microsoft Research. The paper was published on arXiv in 2025 and has 19 citations.

How can we ensure that agentic system assemblies don’t compound uncertainty and complexity in operation? This paper from Microsoft Research describes a web-based platform (open-source on GitHub) for researching how to optimize user experience (UX) and confidence in control by systematizing collaboration between agents and the human in the loop, a very relevant topic these days.

This isn’t a deeply technical paper, but it does offer plenty of interesting ideas on co-planning, co-tasking, action guards, and learning from task execution.

Paul’s view

Intriguing paper this month out of Microsoft Research, exploring how to keep a human “in the loop” during long-running, complex agentic AI tasks. With 2026 shaping up to be a big year for applying agentic AI to RTL design and verification, how best to keep DV engineers and RTL designers in the loop is a hotly debated topic with our customers.

The paper summarizes various methods to keep a human in the loop, grounded in the workflow-based architecture modern agents use to approach complex tasks. For example, an agent usually begins by asking the LLM to break the task down into a multi-step plan consisting of smaller, more manageable sub-tasks which are then swarmed out to sub-agents to complete. A quick summary of the methods proposed in the paper is as follows (a minimal code sketch follows the list):

  • Co-planning: check back in with the user after the initial multi-step plan is generated.
  • Co-tasking, part 1: have agents continuously communicate what they are doing to a live stream that the user can watch and intercept if an agent is going off the rails.
  • Co-tasking, part 2: provide instructions in prompts that guide agents to seek user clarification or confirmation in certain situations.
  • Memory: whenever an agentic session succeeds at a task, the user can save that session to a side file by running an agent that reviews and summarizes its live-stream traces into a prompt, what we now call a “skill”, that can guide a future agentic session. The live-stream traces also include all the human interventions, so the saved skill can include instructions on when to prompt the user.
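To make this concrete, here is a minimal sketch of the co-planning and memory mechanisms, assuming a hypothetical generate_plan() LLM call; the names and flow are my own illustration, not Magentic-UI’s actual API:

    # Minimal sketch of co-planning plus memory. Illustrative only;
    # generate_plan() stands in for an LLM call, not Magentic-UI's real API.
    from dataclasses import dataclass, field

    @dataclass
    class Plan:
        steps: list[str] = field(default_factory=list)

    def generate_plan(task: str) -> Plan:
        # Stand-in for an LLM call that decomposes the task into sub-tasks.
        return Plan(steps=[f"research: {task}", f"summarize findings for: {task}"])

    def co_plan(task: str) -> Plan:
        """Co-planning: show the generated plan, let the user accept or edit it."""
        plan = generate_plan(task)
        for i, step in enumerate(plan.steps, 1):
            print(f"  {i}. {step}")
        if input("Accept plan? [y = yes / e = edit] ").strip().lower() == "e":
            plan.steps = [s.strip() for s in input("New steps (';'-separated): ").split(";")]
        return plan

    def run(task: str, skills: list[str]) -> None:
        plan = co_plan(task)
        for step in plan.steps:
            print(f"executing: {step}")  # a sub-agent would execute this step
        # Memory: summarize the successful session into a reusable "skill" prompt.
        skills.append(f"For tasks like '{task}', follow steps: {plan.steps}")

    skills: list[str] = []
    run("compare two cache-coherence verification approaches", skills)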

The authors implement their methods in a system called Magentic-UI and benchmark it on a well-known agentic AI benchmark called GAIA by creating a special agent that operates as a surrogate for a real human. This “simulated user” is given a cheat sheet of golden, human-created reference plans for how to complete each of the tasks in the benchmark. Magentic-UI achieves a 30% score when it’s told never to prompt the user, and a 50% score when allowed to prompt the simulated user. A human performing the tasks entirely on their own with a web browser scores 90%. It’s hard to know whether the 50%-to-90% gap is due to limitations in the simulated user agent or in the human prompting methods themselves, but either way it’s a big gap, so there is plenty of room for further innovation here!

Raúl’s view

This month’s paper is about Magentic-UI: Towards Human-in-the-loop Agentic Systems, an open-source prototype interface from Microsoft Research for studying human-in-the-loop agentic systems. The premise is straightforward: today’s agents are not reliable enough to operate autonomously, so productivity comes from combining agent execution with human oversight.

Magentic-UI is built as a multi-agent architecture that explicitly treats the human as part of the agent team. Its main contribution is a set of six interaction mechanisms:

  • Co-planning (joint human–agent plan creation)
  • Co-tasking (shared execution)
  • Action guards (human approval of risky actions; see the sketch after this list)
  • Answer verification (post hoc validation of results)
  • Memory (reuse of prior plans)
  • Multi-tasking (parallel agent execution with human oversight)
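As a concrete illustration of the action-guard idea, here is a minimal sketch; the name-based risk classifier below is a toy stand-in, not the paper’s actual policy:

    # Toy action guard: risky or irreversible actions are held for human
    # approval before execution. The name-based classifier is illustrative.
    IRREVERSIBLE = {"delete_file", "send_email", "submit_form", "make_purchase"}

    def risk_of(action: str) -> str:
        return "high" if action in IRREVERSIBLE else "low"

    def guarded_execute(action: str, args: dict) -> bool:
        if risk_of(action) == "high":
            answer = input(f"Agent requests {action}({args}). Approve? [y/n] ")
            if answer.strip().lower() != "y":
                print(f"blocked: {action}")
                return False
        print(f"executing: {action}({args})")  # the real tool call would go here
        return True

    guarded_execute("read_page", {"url": "https://example.com"})  # runs freely
    guarded_execute("send_email", {"to": "boss@example.com"})     # needs approval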

Evaluations include benchmarks (GAIA, WebVoyager, etc.), simulated users, and a small qualitative study. The authors conclude that these mechanisms “have the potential to improve task success and reduce oversight burden.”

Two issues stand out.

First, the evaluation is weak. A ~10–12 person, one-hour user study is not statistically meaningful, and simulated users (LLMs) are a poor proxy for real human behavior. The authors themselves position the study as qualitative.

Second, the idea of a standard interface for agentic AI is questionable. History argues against convergence: different ecosystems optimize for different interaction models. Google tends toward minimalist, search-centric interfaces (one box, increasingly agentic underneath), while Microsoft favors feature-rich, layered interfaces (Office, now Copilot everywhere). Agentic systems will likely fragment by context: for consumers, largely invisible automation; for enterprises, audit-heavy workflows; for developers, programmable pipelines.

The system also remains far from human-level performance (roughly 30–50% task success vs. ~90% for humans on some benchmarks), and only 41.7% of users in the study said they would use it frequently. This reinforces the paper’s premise but also highlights that the interface does not solve the core capability gap.

Despite these limitations, the paper is worth reading. It clearly defines human–agent interaction patterns (co-planning, co-tasking, etc.), tightly integrates agents with UI, introduces practical safety ideas like action guards, and argues for a plan-centric interface that improves transparency and control. Most importantly, it shows how humans can collaborate with imperfect agents, rather than assuming near-term full autonomy.

Magentic-UI sits within a broader movement toward agentic interfaces. Google appears to be pushing toward invisible, search-centric agents, while Microsoft is embedding agents into rich, Office-like workflows. My view is that agent interfaces will fragment across use cases rather than converge.


Scalable Network-on-Chip Enables a Modular Chiplet Platform
by Daniel Nenni on 04-27-2026 at 10:00 am

MOSAICS Block Diagram

The semiconductor industry is undergoing a profound transformation as system complexity, performance expectations, and time-to-market pressures continue to rise. Traditional monolithic system-on-chip (SoC) designs are increasingly giving way to modular, chiplet-based architectures that enable flexibility, scalability, and faster innovation cycles. Within this evolving landscape, the collaboration between Menta and Arteris illustrates how a scalable NoC strategy can serve as the backbone of a modular silicon platform.

Founded in 2007, Menta has established itself as a pioneer in eFPGA IP, delivering highly configurable programmable logic solutions for integration into SoCs and ASICs. Its focus on measurable performance gains, power efficiency, and long-term sustainability has positioned the company strongly in high-value embedded markets such as edge AI, robotics, industrial automation, and smart vision systems. As system requirements expanded, Menta recognized the need for a more advanced integration framework capable of supporting heterogeneous chiplets across multiple generations and configurations.

This vision materialized in the MOSAICS platform, a modular chiplet-based architecture designed to rethink how custom silicon systems are built and deployed. At the center of this platform is the MOSAICS Hub, delivered as a Known Good Die to ensure predictable system integration. The Hub orchestrates communication among diverse chiplets, enabling system-in-package designs with significantly reduced risk. The platform promises up to ten times lower system costs and up to four times faster time-to-market, reflecting a strong emphasis on scalability and ecosystem enablement.

However, realizing this vision required overcoming significant technical challenges. A chiplet-based architecture demands a robust on-chip communication infrastructure capable of managing both bandwidth-intensive data transfers and latency-sensitive control transactions. The interconnect must scale across multiple chiplet generations while maintaining performance, area efficiency, and power constraints. Additionally, it must integrate seamlessly with a wide range of initiators and targets without increasing redesign effort or integration risk.

To address these requirements, Menta selected FlexNoC® interconnect IP from Arteris as the communication backbone of the MOSAICS Hub. FlexNoC is a silicon-proven, highly configurable NoC solution designed to optimize data movement in complex SoCs. By leveraging FlexNoC, Menta implemented a high-performance interconnect capable of supporting more than 30 initiators and targets operating at frequencies exceeding 500 MHz. This architecture enabled the team to balance scalability, performance, and silicon efficiency while accommodating diverse traffic profiles within a single coherent framework.

The configurability of FlexNoC proved particularly valuable. In a heterogeneous chiplet environment, predictable QoS, error detection and correction, and future functional safety capabilities are essential, especially for high-end HPC and AI applications in data centers. FlexNoC’s mature feature set provided these capabilities while integrating smoothly into Menta’s existing design framework. This reduced engineering complexity and minimized integration risk across the broader MOSAICS roadmap.

Another key benefit was accelerated development. The ease of configuration and integration allowed Menta’s engineering team to rapidly prototype, test, and validate the NoC implementation. Faster iteration cycles translated directly into reduced development timelines and improved confidence in meeting both performance and area targets. For a platform aimed at edge AI and other demanding embedded markets, this agility is critical.

The results demonstrate that a well-architected NoC foundation is more than a technical component; it is a strategic enabler. By standardizing on a scalable interconnect solution, Menta established a communication fabric capable of evolving alongside its chiplet ecosystem. This foundation supports current deployments while providing flexibility for next-generation platforms, new customer configurations, and emerging market requirements.

Bottom line: MOSAICS represents more than a single product initiative. It introduces a new model for designing and delivering custom silicon through modularity, ecosystem collaboration, and silicon-proven building blocks. The partnership between Menta and Arteris underscores the importance of strong technology alliances in achieving this vision. By combining programmable logic innovation with a scalable, power-efficient NoC architecture, the companies have laid the groundwork for a new generation of chiplet-based systems that are faster to develop, easier to integrate, and built for long-term scalability.

CONTACT ARTERIS IP

Also Read:

Arteris Smart NoC Automation: Accelerating AI-Ready SoC Design in the Era of Chiplets

WEBINAR: Why Network-on-Chip (NoC) Has Become the Cornerstone of AI-Optimized SoCs

The IO Hub: An Emerging Pattern for System Connectivity in Chiplet-Based Designs


The Shift to System-Level AI Drives Next-Generation Silicon
by Kalar Rajendiran on 04-27-2026 at 8:00 am

TSMC Advanced Technology Roadmap

At its 2026 Technology Symposium, TSMC delivered a clear message: the AI era has entered a new phase. The primary constraint is no longer model capability, but the systems required to run those models at scale. Addressing this shift will demand significant advances in semiconductor technology, spanning compute, memory, interconnects, and power efficiency.

From Model Scaling to System Scaling

Over the past several years, AI progress has largely been driven by scaling models: expanding parameter counts, improving training methods, and unlocking new reasoning capabilities. That paradigm is now evolving. In 2026, the bottleneck has shifted to system-level challenges such as compute throughput, memory bandwidth, interconnect efficiency, power delivery, and deployment scale. AI is becoming fundamentally a systems problem rather than a purely algorithmic one.

This transition is especially visible in the rise of enterprise AI agents. These systems are moving beyond narrow task assistance to orchestrating workflows, integrating enterprise data, and enabling more autonomous decision-making. As a result, they require high reliability, strong security, and sustained performance, all of which significantly increase infrastructure demands.

Explosive Growth in AI Compute Demand

AI compute demand continues to grow at an extraordinary pace, driven by both training and inference. On the training side, large language models have already driven roughly fivefold annual increases in compute requirements, and the shift toward multimodal AI, which combines text, vision, audio, and real-world signals, is accelerating this trend further. Training demand alone is expected to increase by another order of magnitude.

Even more striking is the growth in inference. Token generation has increased more than 500 times between 2022 and 2025, and new techniques such as chain-of-thought reasoning are significantly increasing compute per query. The emergence of agent-based AI systems could multiply this demand again, while large-scale multimodal deployments may push total inference workloads toward million-fold growth. As a result, inference is rapidly becoming the dominant driver of compute infrastructure expansion.
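For a sense of scale: a more-than-500x increase over the three years from 2022 to 2025 implies a compound annual growth factor of at least 500^(1/3) ≈ 7.9. In other words, token volume has been roughly octupling every year, before the agentic and multimodal multipliers described above are layered on top.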

AI Is Expanding Beyond the Cloud

AI is no longer confined to centralized cloud environments; it is rapidly expanding into edge and physical domains. At the edge, inference is increasingly being performed directly on devices such as PCs, smartphones, and wearables. This shift enables lower latency, improved privacy, and real-time responsiveness, and is driving the widespread adoption of dedicated AI accelerators like NPUs in consumer hardware.

At the same time, physical AI is bringing intelligence into the real world through robotics and embodied systems. These applications require tight integration of AI with sensing, actuation, and real-time control, all within strict power and reliability constraints. Together, these trends highlight the growing need for silicon solutions that can balance performance, efficiency, and compact form factors across a wide range of environments.

Data Center Scaling Enters Hyper-Growth

The rapid expansion of AI workloads is fundamentally reshaping data center infrastructure. Capacity additions, which previously grew at a steady rate of around 5 to 6 gigawatts per year, are now expected to reach 30 to 40 gigawatts per year. At the same time, overall data center investment growth has accelerated from roughly 10 percent annually before the rise of generative AI to more than 30 percent per year through the end of the decade.

This growth is not just about adding capacity; it is about delivering efficient, reliable, and scalable systems. Energy efficiency and total cost of ownership are becoming central concerns, making semiconductor-level improvements critical to the sustainability of AI infrastructure.

TSMC’s Technology Roadmap: Key Innovations

A14: Next-Generation Logic Platform (2028)

A14 represents TSMC’s next major step in logic technology, combining second-generation nanosheet transistors with NanoFlex Pro architecture and continued backend scaling innovations. Compared with the N2 node, A14 is expected to deliver a 10 to 15 percent speed improvement at the same power or a 25 to 30 percent power reduction at the same speed, along with approximately 1.2 times the logic density.

A central innovation in A14 is NanoFlex Pro, which enhances standard cell architecture to improve area efficiency and performance per watt. This is complemented by significant backend scaling advancements, including tighter metal pitch and reduced minimum metal area, enabling higher transistor density and improved overall efficiency. Together, these innovations demonstrate that progress at advanced nodes now depends on full-stack optimization rather than transistor scaling alone.

A13 and A12: Extending the Platform

Building on A14, TSMC is extending its roadmap with A13 and A12 technologies, both targeted for production around 2029. A13 further improves density and efficiency while maintaining backward compatibility with A14, enabling smoother design migration for customers. A12 introduces backside power delivery, a major innovation that improves power integrity and performance by separating power and signal routing. These developments reflect a broader shift toward holistic scaling, where power delivery and system-level considerations play an increasingly important role.

N2 Family: Nanosheet Era in Production

The N2 node marks TSMC’s transition from FinFET to nanosheet transistor architecture, delivering improved electrostatic control, reduced leakage, and lower operating voltage. These benefits translate into tangible efficiency gains in real-world applications.

The N2 family includes several variants designed to address different performance needs. The base N2 node entered production in 2025, followed by N2P in 2026 as an enhanced version. N2X, expected in 2027, targets high-performance applications with additional frequency gains, while N2U, planned for 2028, integrates NanoFlex Pro enhancements to further improve performance and power efficiency. This expanding family underscores the importance of offering flexible solutions tailored to diverse workloads.

Advanced Packaging and 3D Integration

As AI workloads continue to scale, advanced packaging technologies are becoming as critical as process nodes themselves. TSMC is advancing its chiplet and 3D integration capabilities with improvements such as second-generation CoWoS technology, which reduces interconnect resistance and enables higher bandwidth through finer I/O pitch.

These innovations allow for denser integration of compute and memory, improving performance and energy efficiency at the system level. In the AI era, packaging is no longer a secondary consideration but a key enabler of overall system performance.

N3: Today’s Workhorse Node

While future nodes attract significant attention, the N3 family remains the backbone of current high-performance computing. It is widely deployed across mobile devices, CPUs, AI accelerators, and networking applications, with multiple variants such as N3P and N3C supporting different use cases. Strong customer adoption and a robust pipeline of new designs highlight the continued importance of mature leading-edge nodes in delivering value across the ecosystem.

Summary

TSMC’s roadmap reflects a fundamental shift in the semiconductor industry. As AI continues to scale, the primary challenge is no longer developing more powerful models, but building the infrastructure required to support them efficiently. This requires innovation across the entire technology stack, from transistors and interconnects to packaging and system architecture.

In this new era, success will depend on the ability to deliver not just better chips, but better systems. The companies that can integrate performance, efficiency, and scalability at every level of the stack will define the future of AI—and increasingly, that future is being shaped at the silicon level.

Also Read:

TSMC Technology Symposium 2026 Overview

TSMC to Elon Musk: There are no Shortcuts in Building Fabs!

TSMC Technology Symposium 2026: Advancing the Future of Semiconductor Innovation


All in One Bluetooth Audio: A Complete Solution on a TSMC 12nm Single Die
by Daniel Nenni on 04-27-2026 at 6:00 am


The rapid evolution of wireless audio has placed unprecedented demands on system integration, power efficiency, and performance. Against this backdrop, the webinar “All-in-One Bluetooth Audio: A Complete Solution on a TSMC 12nm Single Die” offers a timely and technically rich exploration of how modern semiconductor design is meeting these challenges. For engineers, architects, and product leaders working in wireless audio, connectivity, or system-on-chip (SoC) design, this session provides both practical insights and a forward-looking perspective on integration trends shaping the industry.

REGISTER HERE

At the heart of the webinar is a detailed examination of a fully integrated Bluetooth audio solution implemented on a single die using advanced 12nm process technology from TSMC. Moving to a single-die architecture represents a significant shift from traditional multi-chip or module-based designs. By consolidating RF front-end, baseband processing, digital signal processing (DSP), memory, and power management into one silicon platform, designers can achieve tighter coupling between subsystems, reduced latency, and improved energy efficiency. This level of integration is particularly critical for applications such as true wireless earbuds, smart headsets, and embedded audio systems, where size, battery life, and performance must be optimized simultaneously.

One of the key reasons to attend this webinar is the opportunity to understand the architectural trade-offs involved in such high levels of integration. Designing on a 12nm node introduces both opportunities and constraints. While the process enables higher transistor density and lower power consumption, it also requires careful attention to analog/RF performance, noise isolation, and thermal considerations. The session is expected to walk through these challenges, offering insights into how designers balance digital scaling benefits with the sensitivities of RF and mixed-signal blocks.

Another compelling aspect of the webinar is its focus on system-level optimization. Bluetooth audio is no longer just about connectivity; it is about delivering high-quality, low-latency audio experiences under strict power budgets. Attendees will gain visibility into how DSP pipelines are structured for efficient audio processing, how coexistence mechanisms are implemented to handle interference, and how power management strategies are designed to extend battery life without compromising performance. These are not abstract concepts but practical considerations that directly impact product success in competitive consumer markets.

The webinar also promises to cover silicon validation and real-world performance metrics. This is particularly valuable because it bridges the gap between theoretical design and deployed systems. Understanding how a single-die solution performs in terms of power consumption, latency, RF robustness, and audio fidelity provides attendees with a benchmark for their own designs. It also offers a clearer picture of what is achievable with current process technology and integration techniques.

Beyond the technical depth, the webinar is relevant because it reflects a broader industry trend toward consolidation and platformization. As wireless audio devices become more ubiquitous, the ability to deliver complete, scalable solutions on a single chip is becoming a competitive differentiator. Engineers who understand these trends will be better positioned to design future-proof systems and make informed decisions about architecture, process nodes, and integration strategies.

Finally, attending this webinar is an efficient way to stay current in a fast-moving field. Instead of piecing together information from disparate sources, participants can gain a cohesive understanding of end-to-end Bluetooth audio system design in a single session. Whether you are an RF engineer looking to understand digital integration impacts, a DSP developer interested in system constraints, or a product engineer evaluating design trade-offs, the content is directly applicable to real-world challenges.

REGISTER HERE

Bottom line: This webinar is more than a product overview; it is a deep technical dive into the future of integrated wireless audio systems. By attending, you gain not only knowledge of a specific implementation but also a framework for thinking about integration, efficiency, and performance in next-generation designs.

Also Read:

From Satellites to 5G: Ceva’s PentaG-NTN™ Lowers Barriers for Terminal Innovators

Ceva IP: Powering the Era of Physical AI

Ceva Wi-Fi 6 and Bluetooth IPs Power Renesas’ First Combo MCUs for IoT and Connected Home


Closing the Reality Gap: A New Architecture for 1.8-Tb/s Chiplet Governance
by Admin on 04-26-2026 at 4:00 pm


By Dr. Moh Kolbehdari

Dr. Moh Kolbehdari is a Senior Lead Architect at Socionext, where he specializes in the industrialization of high-performance AI chiplets and 1.8-Tb/s interconnects. With over two decades of experience in SI/PI, electromagnetic field theory, and system-level architecture, he has been a pivotal force in bridging the gap between cutting-edge silicon design and high-volume manufacturing (HVM).

Dr. Moh is the creator of the SEGA™ (Systematic Engineering Governance Architecture) framework, a methodology designed to solve the “Crisis of Complexity” in heterogeneous integration. His work focuses on transforming the package into an Active Control Plane, utilizing field-confined EM Corridors and state-aware causality to ensure deterministic yield at 2nm and beyond. He is a frequent contributor to industry-standard committees and is recognized for his “Physics-First” approach to solving the semiconductor industry’s most challenging entropy walls.

The Entropy Wall at 2nm

The semiconductor industry is hitting a “Traceability Wall”. As we push toward 1.8-Tb/s interconnects and massive 2.5D/3D AI chiplet systems, the traditional design-then-verify flow is breaking down. We can no longer afford to treat the package as a passive “container” for silicon; at these speeds and densities, the package must be viewed as the Active Control Plane.

The “Reality Gap”—the delta between golden-state simulations and high-volume manufacturing (HVM) yield—is widening. Standard EDA tools excel at predicting nominal performance, but they often fail to account for the stochastic nature of the OSAT environment. To close this gap, we must move beyond “Nominal Design” and embrace Governed Convergence.

Introducing SEGA™: Systematic Engineering Governance Architecture

To address this complexity, I have developed SEGA™. It is a governance layer that sits above the standard EDA ecosystem, enforcing a unified “Readiness Loop” between simulation, lab measurement, and OSAT metrology. SEGA™ ensures that every picosecond of signal performance is backed by admissible evidence from the assembly floor.

Figure 1: The Governed Convergence Pyramid

As shown in the pyramid, SEGA™ establishes a three-tier hierarchy for system success:

  1. The Foundation: Packaging as the Control Plane. This tier treats the substrate as a dynamic hub that governs the convergence of SI, PI, Power, and Thermal stresses. By managing these variables in a unified hub, we prevent late-stage design “blow-ups” that typically occur when these domains are siloed.
  2. The Middle Tier: EM Corridor Architectures. Traditional PCB and package traces rely on “dirt road” routing that becomes chaotic at sub-THz frequencies. We implement field-confined physical pathways—EM Corridors—that ensure electromagnetic field continuity across the BGA transition zone.
  3. The Apex: Evidence Gating. This is the final filter. Only data that passes the State-Aware Causality filter is allowed to proceed to tape-out. This means every simulation result must be “certified” against known physical manufacturing modes (a toy sketch of this gating follows the list).
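Here is one way to picture evidence gating in code. This is a hypothetical sketch of the concept, not the actual SEGA™ implementation, and the mode names are invented:

    # Hypothetical sketch of evidence gating: a simulation result proceeds to
    # tape-out only if every physical assumption it relies on is backed by
    # measured OSAT evidence. Not the actual SEGA implementation.
    from dataclasses import dataclass

    @dataclass
    class SimResult:
        name: str
        assumptions: set[str]  # physical conditions the simulation assumed

    def gate(results: list[SimResult], certified: set[str]) -> list[SimResult]:
        admitted = []
        for r in results:
            missing = r.assumptions - certified
            if missing:
                print(f"REJECT {r.name}: uncertified assumptions {sorted(missing)}")
            else:
                admitted.append(r)
        return admitted

    measured = {"warpage<=30um", "bump_collapse<=5um", "temp<=105C"}
    sims = [
        SimResult("eye_1p8Tbps_nominal", {"warpage<=30um", "temp<=105C"}),
        SimResult("eye_1p8Tbps_stress", {"warpage<=50um"}),  # beyond evidence
    ]
    for ok in gate(sims, measured):
        print(f"ADMIT  {ok.name}")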

Confronting the OSAT Reality

The biggest threat to modern chiplet systems isn’t just signal decay; it is the physical variables of the assembly floor. Substrate warpage, solder-bump collapse, and thermal drift create an OSAT Reality that golden simulations often ignore. When a design moves from the lab to HVM, these physical stresses introduce “entropy” that degrades performance.

Figure 2: Governed Convergence – Closing the Reality Gap

By using State-Aware Causality, we link performance decay directly to specific deformation modes. For example, if a 1.8-Tb/s eye diagram closes during stress testing, the SEGA™ framework doesn’t just report a failure; it tells us exactly which manufacturing variable—such as a 30 µm substrate warp or lateral misalignment—caused the shift. This transforms “failure analysis” from a reactive guessing game into deterministic governance.

Deep-Dive Case Study: AI Chiplet PDN Impedance Flattening

The power of systematic governance is most visible in the Power Delivery Network (PDN). In high-performance AI systems, suppressing mid-frequency die resonance is critical for maintaining system stability under heavy workloads.

Figure 3: SEGA™ Case Study – PDN Impedance Flattening

Our case study on 2.5D AI Chiplet Power Architecture (CPA) demonstrates how implementing a Localized VRM (PCA) governs the PDN. Traditionally, VRMs placed on the PCB struggle to manage the resonance peaks occurring at the interposer and die levels. By aligning the VRM response directly with the in-package parasitics discovered via our state-mapping, we successfully suppressed the die resonance peak (occurring between 170–280 MHz) to remain consistently below the 0.09 Ω target.

This level of flattening ensures that the silicon sees a stable voltage environment regardless of the switching activity of adjacent chiplets. This is a result that “golden” simulations can suggest, but only a governed architecture like SEGA™ can guarantee in a mass-production environment.
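To see why localized capacitance flattens the profile, consider a toy model of the classic PDN anti-resonance between package loop inductance and local decoupling capacitance. This is my own illustration with invented component values, not Socionext’s data:

    # Toy PDN: on-package decap C in parallel with the series R-L path to the
    # VRM. The anti-resonance peak scales roughly as L/(R*C), so adding local
    # capacitance (what a localized VRM enables) shifts and suppresses it.
    import math

    def pdn_z(f_hz: float, r: float, l: float, c: float) -> float:
        """|Z| seen by the die at frequency f_hz."""
        w = 2 * math.pi * f_hz
        z_path = complex(r, w * l)           # series R + jwL supply path
        y = 1 / z_path + complex(0, w * c)   # in parallel with the decap
        return abs(1 / y)

    R, L = 0.4e-3, 6.3e-12  # 0.4 mOhm path resistance, 6.3 pH loop inductance
    for c in (100e-9, 400e-9):
        f0 = 1 / (2 * math.pi * math.sqrt(L * c))
        print(f"C={c * 1e9:.0f} nF: anti-resonance ~{f0 / 1e6:.0f} MHz, "
              f"peak |Z| ~{pdn_z(f0, R, L, c) * 1000:.0f} mOhm")
    # The 100 nF case peaks near 200 MHz at ~158 mOhm; quadrupling the local
    # capacitance drops the peak to ~40 mOhm, below a 90 mOhm-style target.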

The Path Forward: Industrializing the Interconnect

The move toward 2nm and beyond is not just a lithography challenge; it is a governance challenge. As we move toward 10 Tb/s UCIe targets and increasingly complex heterogeneous systems, the architects who can bridge the gap between simulation and the factory floor will define the future.

The next era of advanced packaging will be won by governed convergence, not by activity alone. By implementing SEGA™, we move the industry toward a future where “first-time-right” isn’t a goal—it is a deterministic outcome of the architecture itself.

Also Read:

Alchip’s Leadership in ASIC Innovation: Advancing Toward 2nm Semiconductor Technology

Synopsys Advances Hardware Assisted Verification for the AI Era

Scaling Multi-Die Connectivity: Automated Routing for High-Speed Interfaces


SemiWiki Q&A with Julie Rogers, Executive Director, ESD Alliance
by Daniel Nenni on 04-26-2026 at 2:00 pm

Julie Rogers

The Electronic System Design Alliance (ESD Alliance), a SEMI Technology Community, is an international association of companies providing goods and services throughout the semiconductor design ecosystem and a forum for addressing the technical, marketing, economic, and legislative issues affecting the entire industry. It acts as the central voice to communicate and promote the value of the semiconductor design industry as a vital component of the global electronics industry.

Tell us a little bit about yourself and your path to your new role as Executive Director of the Electronic System Design (ESD) Alliance.

I’m excited for what lies ahead for the ESD Alliance, also known as ESDA, and the design industry. My career has been shaped by curiosity about how technology evolves and reaches the market. Early in my career, I worked with a large hospital system writing grants and securing education funding from medical device companies, which sparked my interest in innovation and ultimately drew me to Silicon Valley.

More than a decade ago, while running my own marketing firm, I added the ESD Alliance—then EDAC—to my client list. When ESDA later merged into SEMI, I became Director of Marketing for SEMI Americas while continuing my work with ESDA, giving me a front-row seat to the evolution of the design ecosystem.

What are your responsibilities and priorities in your new role as Executive Director?

My focus is to support and advance our member companies and strengthen the role of ESDA within the broader semiconductor ecosystem. Key priorities include driving executive engagement, expanding thought-leadership platforms, broadening global outreach, and ensuring our programs deliver value.

A key priority is inspiring leadership participation: bringing executives together to collaborate, share insights, and address challenges that impact the entire design community. Our events appeal to everyone working in and with the design industry, bringing topics and companies together, which helps strengthen both established and emerging companies as the industry navigates rapid change.

What do you see as the biggest opportunity for the ESD Alliance and the EDA and IP community in 2026?

A major opportunity is helping members navigate complexity, from export regulations to AI-driven innovation, while continuing to support emerging companies through visibility and strategic connections. Design is increasingly central to all semiconductor end products and services, and ESDA plays a critical role in communicating this value.

I see tremendous opportunity to support emerging companies. These innovators are driving new ideas and technologies, and we can help them with strategy, visibility, and the right networking connections to accelerate their growth. I really enjoy this part of the work, when it all comes together to elevate the design ecosystem.

We have also expanded our resources for working with our regional offices at a global level. Pre-pandemic we held initial meetings worldwide, and we plan to enhance the visibility of ESDA and build design community momentum on a global scale.

How is the ESD Alliance working to address these opportunities?

We’re expanding both the scope and depth of our programs to address the most urgent member needs.

The ESD Alliance 2026 Executive Outlook, scheduled for Wednesday, June 10, at Cadence in San Jose, brings together senior leaders to discuss critical trends focusing on “How will Agentic AI Change Chip Design and Verification.” Panelists will survey the excitement surrounding the innovation in chip design and verification, collaboration between traditional EDA and agentic AI startups and broader implications for technological advancements.

The event will be held at Cadence, 2655 Seely Avenue in San Jose beginning at 5:30 p.m. with networking, dinner and beverages. The panel will follow at 6:30 p.m. Tickets for the event are free for SEMI/ESDA members and $40 per person for non-members. Registration is open.

We’re also continuing to grow our advocacy programs with another webinar, “Navigating Export Controls in EDA,” on Thursday, June 11. “Gen-AI for Chip Design and Security: A Look into the Future” will be held Thursday, August 27. Additional events and a webinar on workforce development are also in the works.

SEMICON West 2025 included a design program that was well attended. Will that continue?

Absolutely. The response was incredibly strong, so in 2026 we’re expanding the design program to a full day of content. We’re actively looking for speakers and fresh perspectives, and we want this to be a must-attend forum for the design community. We will be featuring a design keynote as well. It’s coming up October 13-15 at the Moscone Center in San Francisco.

How do companies in the EDA and IP space typically engage with the ESD Alliance?

Engagement happens at many levels. Executives participate in events such as the ESDA/CEDA Phil Kaufman Award Dinner and the ESDA Executive Outlook. Companies also engage their design engineers and other key staff through participation in our webinars, advocacy initiatives, education, networking events, workforce development efforts through our SEMI Foundation, speaking opportunities and working groups. We have groups focused on key industry challenges such as platform interoperability, license management, and anti-piracy and we produce a highly regarded quarterly market report, Electronic Design Market Data (EDMD). Details can be found on the ESDA website.

Or SemiWiki readers can contact me directly:

jrogers@semi.org
(916) 798-9919
Skype: Julierogers.200
WeChat ID: JulieARogers

Also Read:

Podcast EP333: A Look at the Broad, Worldwide Impact SEMI Has on the Semiconductor Industry with Ajit Manocha

The Name Changes but the Vision Remains the Same – ESD Alliance Through the Years

Podcast EP340: A Review of the Q4 2025 Electronic Design Market Data Report with Wally Rhines


CEO Interview with Xianxin Guo of Lumai
by Daniel Nenni on 04-25-2026 at 2:00 pm

Dr Xianxin Guo

Xianxin is the CEO and Co-Founder of Lumai, an Oxford University spin-out pioneering disruptive optical computing technologies for AI and data center acceleration. He brings over 15 years of experience in physics and engineering, and was previously an RCE 1851 Research Fellow, a prestigious fellowship whose past awardees include eight Nobel Laureates.

Tell us about your company.

Lumai is an optical compute company changing how AI compute is delivered at scale. Traditional silicon-only approaches are hitting fundamental limits in power efficiency, cost, and scalability. Our team brings together expertise across optics and AI systems with a shared belief: that new AI infrastructure requires a new approach, and that approach will use optical compute.

The technology is based on many years of research at the University of Oxford. At Lumai, we are building optical computing technology designed specifically for AI workloads. By leveraging light instead of electrons for key computational operations, we dramatically improve performance-per-watt and unlock a more sustainable path forward for large-scale AI deployment.

What problems are you solving?

AI has hit a power wall. Due to the limitations of silicon scaling, it is increasingly difficult to deliver a step-change in token generation within the fixed power constraints of a data center. A 1GW data center is limited to 1GW. Yet the goal of AI companies is to generate the maximum number of tokens – more tokens mean more intelligence and more revenue.

The core issue we address is compute inefficiency: data centers need to generate more tokens per Watt. Lumai tackles this through a hybrid optical and electronic approach, performing dense linear algebra (i.e. tensor operations) in light, alongside a standard digital chip where the software runs.

This hybrid design means the processor exposes a standard interface to the software stack and system interfaces, while offloading matrix computations to a far more efficient medium and dramatically reducing energy consumption.
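As a loose illustration of that hybrid split (my own sketch; Lumai’s actual interface is not public), dense matrix products are routed to an optical stand-in while nonlinearities stay on the electronic side:

    # Illustrative hybrid dispatch: dense matmuls go to a (hypothetical)
    # optical tensor unit; everything else stays on the digital host.
    import numpy as np

    class OpticalMatMul:
        """Stand-in for an optical tensor engine computing A @ B 'in light'."""
        def __call__(self, a: np.ndarray, b: np.ndarray) -> np.ndarray:
            # A real device would modulate the operands onto light and read
            # out the product; here we simply simulate the result digitally.
            return a @ b

    optical = OpticalMatMul()

    def linear_layer(x: np.ndarray, w: np.ndarray, bias: np.ndarray) -> np.ndarray:
        y = optical(x, w)                 # dense linear algebra, offloaded
        return np.maximum(y + bias, 0.0)  # ReLU stays on the electronic side

    x = np.random.rand(4, 256)
    w = np.random.rand(256, 128)
    print(linear_layer(x, w, np.zeros(128)).shape)  # (4, 128)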

In short, we are breaking through the bottlenecks slowing down AI systems.

What application areas are your strongest?

Our technology is particularly well suited to high-throughput AI inference workloads in data centers. This includes compute-bound applications such as large language models, recommendation systems, and video processing.

Lumai’s processor can be used as a prefill processor in a disaggregated compute architecture, alongside (for example) a GPU used for decode. It is especially effective in applications with long input contexts and large token volumes (e.g., workloads heavy in KV-cache generation).

What keeps your customers up at night?

Two things consistently come up: cost and scalability. AI is becoming central to business strategy, but the infrastructure required to support it is increasingly expensive and power-constrained.

Customers are concerned about how to scale their AI capabilities without hitting data center power limits or seeing costs spiral. At the same time, they want to achieve this without fundamentally changing their software workflows or models.

Ultimately, they’re asking: how do we continue advancing AI performance without running into a wall on cost and power?

What does the competitive landscape look like and how do you differentiate?

The landscape is evolving quickly, with innovation across GPUs, ASICs, and other AI accelerators. While these approaches deliver incremental improvements, they still rely on electronic architectures.

Lumai differentiates by taking a fundamentally different approach: optical compute. This allows us to bypass many of the inherent limitations of electronic-only systems, where increasing performance drives both higher power consumption and higher total cost of ownership (TCO).

Our focus isn’t just on building a faster processor; it is about redefining how compute is performed for AI workloads in the most efficient way. That enables step-function improvements rather than incremental gains.

We are building an architecture designed to scale generation after generation.

What new features/technology are you working on?

We are continuing to advance our optical compute platform, with a strong focus on integration and scalability. This includes developing the supporting electronics and software stack required for seamless deployment.

We have already proven the core technology; our current focus is on supporting trials, further increasing performance, and ensuring our platform integrates easily into existing AI infrastructure. Compatibility with current frameworks and workflows is key so customers can adopt it without major disruption.

As we move forward, you will see continued progress toward production systems that deliver meaningful performance and efficiency gains at scale.

How do customers normally engage with your company?

Engagement typically begins with collaborative discussions around specific AI workloads and infrastructure challenges. From there, we work closely with customers to evaluate where optical compute can deliver the most value.

We take a partnership-driven approach: whether through early access programs, joint development efforts, or pilot deployments. Close collaboration is key to ensuring successful integration and real-world impact.

Our goal is to meet customers where they are, help them break through the power wall, and transition to a more efficient, scalable AI compute platform that will serve them not only today but well into the future.

Also Read:

CEO Interview with Johan Wadenholt Vrethem of Voxo

CEO Interview with Dr. Hardik Kabaria of Vinci

CEO Interview with Steve Kim of Chips&Media


Podcast EP343: How Ethernet is Enabling Advances in AI with Dr. Mohan Kalkunte
by Daniel Nenni on 04-24-2026 at 10:00 am

Daniel is joined by Dr. Mohan Kalkunte, Vice President of Architecture & Technology in the Core Switch Products group at Broadcom, where he leads architecture for Ethernet switching and NIC products across data center, enterprise, and service provider markets. With over 35 years of industry experience, his previous stints include AT&T Bell Labs, AMD, and Nortel Networks. He holds over 150 patents, was named a Broadcom Fellow in 2009, was elected an IEEE Fellow in 2013, and was elected to the NAE in 2025 for his contributions to Ethernet switching.

Dan explores AI networking architectures with Mohan, who explains why AI networking is different and provides an excellent overview of how AI demands have impacted network architecture. This very informative discussion covers Ultra Ethernet, the Ultra Ethernet Consortium (UEC), other open standards, and the requirements for both scale-up and scale-out networks. Mohan explains that the overall goal is to make Ethernet viable across the entire AI network stack, and discusses how the pace of network innovation is accelerating to fuel next-generation AI technology.

Innovations at Broadcom that are advancing AI networking are reviewed. Mohan also describes how all the work underway will come together to enable future generations of AI technology.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


SemiWiki Acquires IPnest!
by Daniel Nenni on 04-24-2026 at 8:00 am


After more than 15 years of collaboration with Dr. Eric Esteve and IPnest, SemiWiki has acquired the famed IP reports, with Eric Esteve staying on through 2026 to ease the transition. Not only will SemiWiki continue to provide the industry-standard Interface IP and Design IP reports, it will also expand the depth and breadth of the coverage. Semiconductor IP has always been a critical enabler of modern semiconductor design and a great source of traffic for SemiWiki.com, so this is a 1+1=3 type of transaction.

When I started SemiWiki in 2011 I recruited the top two EDA bloggers: Dr. Paul McLellan and Daniel Payne. Paul has since retired and Daniel Payne is still writing for SemiWiki. I covered the foundries and recruited Eric Esteve to cover semiconductor IP. Eric was not a blogger when we met, but he turned out to be one of the top bloggers on SemiWiki. Eric wrote 427 IP-related blogs garnering more than 6 million views, and for this I am very grateful.

Eric Esteve:

After starting with 2-micron CMOS ASIC design in the 1980s, I enjoyed multiple projects, from a supercomputer to the Airbus engine control (CFM56), later used on the Rafale aircraft, and many others, ending my technical career as program manager for one of the first System-on-Chip (SoC) designs in 1995. We didn’t talk about IP then, but we were using it. Later I was an ASIC marketing manager in charge of the North American market and visited multiple customers, from Orlando to Salt Lake City, Chicago to Atlanta, learning a lot more about human contact. Finally, since 2005 my focus has been IP, working for PLDA and then Snowbush before creating IPnest in 2009 and developing a customer base of more than 20 companies: the top three EDA vendors and many IP vendors (Alphawave, Rambus, CEVA, Arteris, Silicon Creations…), but also TSMC, Intel, and most of the hyperscalers (Amazon, Google, Facebook). Most of them have been loyal customers year after year for 17 years!

Just before creating IPnest in 2009, I worked for Snowbush in Toronto, where I had to create a five-year business plan, and I learned two key lessons. The first was how important it is to manage SerDes design if you have to support interconnect protocols. The second was that the industry needed a source of intelligence about the interface protocol market, because I had to invent one in 2008 to build Snowbush’s business plan. IPnest’s first Interface IP report, built in 2009 and sold to my first customers, Synopsys and Cadence, was certainly not perfect, but the quality has constantly improved thanks to my customers challenging me and providing a feedback loop.

Just a side comment: in 2009 the Interface IP market was valued at $250 million; last year (2025) it was $2.5 billion. The interconnect IP market has been booming since 2010 and will continue to boom if you look at AI. You can improve compute by adding GPUs or redesigning them, but you also have to use more powerful interconnect when developing a new system, and that is possible because protocols have been created to support it. PCIe 1.0 ran at 2.5 GT/s in 2005; today, if you select PCIe 7.0, you enjoy 128 GT/s. Clearly, the future of interconnect IP protocols will be essential for building more powerful AI systems!

The Design IP report is simply a tool the IP industry needs: a market-share ranking by IP category that helps IP customers select their providers and IP vendors monitor their progress in a fast-growing market. Easy to describe, it is a little more complex to build, as many players prefer to keep their detailed results secret. Experience is the key word here.

It’s almost the end of a 43-year journey. Deciding to go into a business that nobody knew in 1983 was challenging, but it was also a way to be creative and to learn along the way. A special thanks to SemiWiki founder Daniel Nenni, who invited me to blog when I was starting IPnest. It helped to pay the bills and, more importantly, to network and be recognized as an IP expert on such a large platform. So I am more than happy to transfer IPnest to SemiWiki. I am confident that SemiWiki is the right place to develop the IPnest reports to an even higher level!

SemiWiki:

The entire SemiWiki team wishes Eric a happy retirement and we look forward to continuing his excellence in IP reporting.

Also Read:

AI Booming is Fueling Interface IP 23.5% YoY Growth

Design IP Market Increased by All-time-high: 20% in 2024!

AI Booming is Fueling Interface IP 17% YoY Growth

Semi Market Decreased by 8% in 2023… When Design IP Sales Grew by 6%!


Elon Musk Needs to Put His Fab Money Where his Mouth is!
by Daniel Nenni on 04-24-2026 at 6:00 am

Terafab: Elon Musk’s Vertically Integrated AI Chip Manufacturing Initiative with Intel

To me this is going to be one of the bigger Chicken Little moments in the history of semiconductors, mostly because we are now a click-driven society and the media is plagued by so-called influencers who live and die by clicks. In my 40+ years as a semiconductor professional I have witnessed many of these Chicken Little moments, with semiconductor outsiders saying that we cannot scale semiconductors for one reason or another. A common headline starting in the early 2000s was that “Moore’s Law is Dead”, right? Yet here we are making semiconductors that were deemed impossible to make, many times over.

This time, however, the richest man in the world says that we will not have enough 2nm capacity to satisfy the world’s demands due to the AI surge and the coming onslaught of AI-driven products. Please note that Elon Musk did not consult the top semiconductor manufacturing company in the world (TSMC), nor did he consult the semiconductor legend that is Intel, UNTIL AFTER HE SAID THIS. I’m sure there is a method to his madness, and I’m also sure that the semiconductor industry will respond appropriately; we always do.

The picture above is encouraging. If anyone can turn Elon Musk around it is Lip-Bu Tan. Remember when he turned Donald Trump around?

Donald Trump Truth Social (Aug 7, 2025)
“The CEO of INTEL is highly CONFLICTED and must resign, immediately. There is no other solution to this problem. Thank you for your attention to this problem!”

This was on Thursday. Lip-Bu met with Donald Trump the following Monday at the White House which resulted in this:

“I met with Mr. Lip-Bu Tan, the CEO of INTEL. The meeting was a very interesting one. His success and rise is an amazing story. We discussed many things, including the future of Intel and the importance of U.S. chip manufacturing. We will see what happens, but it was an honor to meet him.”

Not only did Lip-Bu Tan turn POTUS around, he got an incredible investment and backing from the US Government. Talk about taking lemons and making a lemon meringue pie!

At the TSMC Technical Symposium this week TSMC made it clear that there will be no issue with TSMC N2 (2nm) availability. In fact, TSMC N2 is ahead of TSMC N3 on the yield learning curve. Remember, TSMC N2 went into HVM in Q4 of 2025 so this is a fact not a forecast.

I remember when Apple turned to TSMC to manufacture the iProduct chips back at 20nm (iPhone 6). Semiconductor outsiders did not think TSMC could deal with Apple’s volume, or Apple would ruin TSMC as they did other partners, or TSMC could not handle Apple’s accelerated schedules. Just about every year the media claims Apple will consume all of TSMC NX so there would not be wafers left for anyone else. They now say that about Nvidia but of course that is also not true. Wafer agreements are legally binding contracts and are the life blood of contract semiconductor manufacturing.

The fact of the matter is that Apple and TSMC have a wafer agreement that spells out how many wafers Apple will get on a specific process node several years in advance so TSMC can build fabs for Apple. Every single TSMC customer has a wafer agreement, that is how we do business. In Apple’s case TSMC made processes tuned just for Apple and Apple had exclusive rights to it. Apple and TSMC would collaborate closely on process development and the process release would be timed for the Apple iPhone launches in the fall. This has been going on for 15 years and today the Apple iPhone is the most successful in the history of smartphones and the A Series SoC inside them is nothing short of miraculous.

Someone needed to explain to Elon Musk how semiconductor design and manufacturing really works and who better than Lip-Bu Tan?

Elon Musk needs to know that it takes longer for companies to design complex AI chips on leading edge processes than it takes TSMC and Intel to build the fabs that will manufacture them. Elon also needs to know that he will need to collaborate very closely with the semiconductor supply chain and make some very big investments to make sure his companies have the chips they need to be successful.

Lip-Bu Tan will explain this to him of course and Elon will tell the world so this is a win-win Chicken Little situation if I have ever seen one.

Intel is the big winner here of course and we can all thank Lip-Bu for that. As I have said many times before, do not bet against Lip-Bu Tan!

Tesla CEO Elon Musk said on Wednesday the EV maker plans to use Intel’s next-generation 14A manufacturing process to make chips at its Terafab project, an advanced AI chip complex Musk has envisioned in Austin.

TSMC and the semiconductor industry as a whole are both winners here as Intel will become a serious foundry competitor and that will push semiconductor innovation and supply chain strength to an even higher level which is for the greater good for all.

“We either build the Terafab or we don’t have the chips,” Musk had said during a presentation in Austin in March, adding that current global chip production would meet only a small fraction of his companies’ future needs.

The only loser that I can see here is Samsung Foundry. Not long after Samsung and Tesla signed an eight-year, $16.5B 2nm wafer agreement, Elon Musk announced to the world that not enough 2nm chips can be made? Ouch!

Bottom line: Elon Musk is a disrupter and I am glad to have him inside the semiconductor industry. The first thing I said in the SemiWiki forum after the Terafab announcement was made is that the only chance of success is for Elon to work with Lip-Bu Tan and Intel. So there you have it. I am still waiting for the financial details but I would bet that Elon Musk will in fact be putting his money where his mouth is, absolutely.

Also Read:

Is Intel About to Take Flight?

Disaggregating LLM Inference: Inside the SambaNova Intel Heterogeneous Compute Blueprint

Who’s Buying America’s Foundry Future?