Two Paths for AI in Semiconductor Manufacturing: Platform Integration vs. Point Solutions

by Kalar Rajendiran on 04-23-2026 at 10:00 am

Semiconductor manufacturing has become one of the most data-intensive industrial environments in the world, and AI is rapidly becoming central to how fabs operate and optimize. Yet, rather than converging on a single model for AI adoption, the industry is evolving along two distinct paths: one centered on platform-scale integration, the other on fast, point-level deployment. The two approaches reflect differences in market structure, customer expectations, vendor ecosystems, and cultural approaches to manufacturing. This article explores these two models not as competing answers, but as context-driven responses to different operating environments, with the long-term outcome still open.

Global Semiconductor AI Context

The increasing complexity of semiconductor devices and manufacturing processes has led to an explosion in data generation, often reaching millions of parameters per device and petabyte-scale datasets per fab. At the same time, the industry must accelerate new product introduction while maintaining yield and stability in mass production. These pressures have made AI a necessary layer in semiconductor manufacturing, enabling scalable analysis, automation of decision-making, and interpretation of complex process interactions.

Modern AI systems in this space are typically built on a combination of scalable analytics engines, structured workflows, and domain-specific knowledge, often enhanced by natural language interfaces. While these foundational elements are broadly consistent across regions, how they are packaged, deployed, and monetized varies significantly.

Reference Architecture (Platform AI as a Model)

One way to understand these differences is through a reference architecture that represents a platform-based approach to semiconductor manufacturing AI.

At its core is a scalable analytics engine designed to process extremely large and complex datasets using parallel computation. This enables real-time or near-real-time analysis across vast manufacturing data environments.

Above this sits a workflow layer that defines how analytics are structured and executed. In this model, workflows act as a system-level language, capturing logic, enabling reuse, and providing traceability. They also function as a form of long-term system memory, embedding best practices and analytical patterns.

A natural language interface layer allows engineers to interact with the system more intuitively, translating human intent into structured workflows. This improves accessibility but relies on the underlying system for execution.

Finally, a semiconductor domain knowledge layer provides the contextual intelligence needed to interpret data correctly. This layer encodes device physics, process interactions, and historical expertise, ensuring that AI outputs are grounded in real manufacturing behavior rather than purely statistical patterns.

This architecture reflects a platform-oriented philosophy, where AI is treated as infrastructure rather than a collection of isolated tools. At its latest Users Conference, PDF Solutions introduced the overall architecture of its platform and AI strategy, and demonstrated how LLMs integrated with this platform can be used to answer complex semiconductor manufacturing analytics questions using natural language.

Core Market Divergence: Platform vs. Point Solutions

In North America and Europe, semiconductor AI has largely evolved around platform-based deployment models. AI systems are designed as foundational layers that integrate with existing manufacturing systems and support multiple use cases over time. This approach emphasizes consistency, scalability, and long-term value creation. Because these environments often involve complex legacy systems, platforms serve as a unifying layer that can bridge data and processes across the fab. Deployment tends to be methodical, with significant emphasis on validation and integration.

In China, a different pattern has emerged. AI adoption is often driven by forward-deployed, point-specific solutions that address immediate manufacturing challenges. Many vendors are smaller and highly specialized and work closely with fabs and equipment providers to develop targeted applications for specific problems such as tool matching, chamber variation, or yield excursions. These solutions are deployed quickly, refined through direct feedback, and expanded incrementally to other problem areas. Rather than building a unified platform upfront, the system evolves through a series of practical implementations.

Structural Drivers Behind the Divergence

These two approaches are shaped by a combination of structural and cultural factors. In China, the prevalence of smaller AI vendors creates a strong need for rapid monetization, which favors solutions that can be developed and deployed quickly with clear, measurable outcomes. The manufacturing environment also places a high value on immediate operational improvements, reinforcing a preference for short deployment cycles and tangible results.

In North America and Europe, semiconductor companies often operate within more mature ecosystems that include extensive legacy infrastructure. This creates a need for integration and consistency across systems, making platform-based approaches more attractive. There is also a greater tolerance for longer deployment timelines when the expected outcome is system-wide optimization and long-term efficiency gains.

These differences do not reflect a gap in capability, but rather different priorities shaped by market conditions and organizational context.

Trade-offs Between the Two Approaches

Each model offers distinct advantages and challenges. Platform-based approaches provide a structured foundation that can support multiple use cases, enable cross-system optimization, and reduce redundancy over time. They are particularly well-suited for large-scale environments where integration and consistency are critical. However, they can require significant upfront investment and may take longer to deliver visible results.

Point-solution approaches, by contrast, are highly effective at delivering rapid impact. By focusing on specific problems, they can be deployed quickly and aligned closely with operational needs. This makes them well-suited for environments where speed and measurable outcomes are prioritized. At the same time, the accumulation of independent solutions can introduce fragmentation, making it more difficult to achieve system-wide optimization or maintain consistency across processes.

Importantly, these trade-offs are not absolute; they reflect different ways of balancing speed, scale, and integration.

Strategic Hybrid Possibility

Rather than converging entirely toward one model, there is growing recognition that elements of both approaches may coexist. A platform layer can provide structure, scalability, and domain intelligence, while localized solutions can continue to address specific operational challenges quickly and effectively. In such a model, platforms may evolve to incorporate or orchestrate point solutions, creating a layered system that combines integration with flexibility.

Whether and how this hybrid model develops will depend on how vendors and customers navigate the balance between standardization and customization, as well as how ecosystems mature over time.

Summary

The current landscape of semiconductor manufacturing AI reflects not a single direction of travel, but a divergence shaped by real-world constraints and priorities. Platform-centric and point-solution approaches each represent viable responses to different environments, and both are likely to continue evolving. Over time, one model may prove more scalable or sustainable, or a hybrid approach may emerge as dominant. For now, understanding the underlying drivers of each model is more important than determining a definitive long term winner.

Learn more at PDF Solutions.

Also Read:

WEBINAR: Beyond Moore’s Law and The Future of Semiconductor Manufacturing Intelligence

Operationalizing Secure Semiconductor Collaboration: Safely, Globally, and at Scale

Why PDF Solutions Is Positioning Itself at the Center of the Semiconductor Ecosystem


Carbon in the Age of AI Chips: What the Semiconductor Industry Needs to Know This Earth Day

by Admin on 04-23-2026 at 6:00 am

Stephen Russell: Senior Technical Fellow, TechInsights

Every April, Earth Day prompts a flurry of corporate sustainability pledges and green-tinted press releases. But for the semiconductor industry in 2026, the conversation has moved well past pledges. Carbon accountability is now a procurement requirement, a regulatory expectation, and increasingly a design constraint. This Earth Day, TechInsights is releasing a new sustainability report, Carbon in the Age of AI Chips. Authored by TechInsights Senior Technical Fellow Stephen Russell and Senior Sustainability Analyst Lara Chamness, the report examines where semiconductor emissions are actually coming from, why AI is accelerating the problem faster than most reporting methods can track, and what engineering, procurement, and sustainability teams can do about it right now.

Here’s a preview of what’s inside:

The Scale of the Problem Is Getting Harder to Ignore

Start with the headline numbers. Fabrication emissions are projected to reach 186 million metric tons of CO₂e in 2026, a record high, rising to approximately 247 million metric tons by 2030. Leading-edge technologies below 4nm will account for 26% of total emissions this year, climbing to 42% by 2030. Those are not abstract figures. They represent real consequences of real decisions: which fab to use, which memory configuration to specify, which supplier to source from.

What makes 2026 feel genuinely different from prior years is the convergence of three forces pushing carbon upstream into product decisions. Advanced manufacturing keeps getting more energy- and resource-intensive, especially at leading-edge logic and high-layer 3D NAND. AI demand is driving unprecedented silicon and memory intensity per system, not just more units but fundamentally heavier systems. And procurement teams are being asked, with increasing urgency, to defend supplier choices with traceable carbon logic rather than slide-deck narratives.

Manufacturing Carbon Is a Strategic Variable, Not a Fixed Cost

One of the report’s central arguments is that manufacturing carbon should not be treated as a black box or a rounding error. It is a strategic variable, and it responds to specific decisions.

The report’s Sustainability Matrix maps carbon hotspots across device types and toolsets. For advanced logic nodes, Scope 2 emissions driven by electricity are concentrated in lithography. For 3D NAND, dry etch can account for nearly half of total manufacturing emissions, driven by high-power plasma processes and high global warming potential gases. Some of those gases carry a 100-year GWP of around 25,000 times that of CO₂.
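The CO₂-equivalent accounting behind that GWP figure is straightforward to sketch. In the snippet below, the gas label and factor are illustrative placeholders keyed to the report's roughly 25,000× figure; exact GWP values vary by gas and by IPCC assessment report.

```python
# Illustrative CO2e conversion using 100-year GWP factors.
# The 25,000 factor mirrors the report's cited figure for some
# high-GWP etch gases; treat all values here as placeholders.
GWP_100 = {
    "CO2": 1,
    "high_gwp_etch_gas": 25_000,  # e.g., an SF6-class fluorinated gas
}

def co2e_tonnes(gas: str, mass_kg: float) -> float:
    """Mass of an emitted gas (kg) -> metric tons CO2-equivalent."""
    return mass_kg * GWP_100[gas] / 1000.0

# 10 kg of a 25,000-GWP gas escaping abatement ~ 250 t CO2e
print(co2e_tonnes("high_gwp_etch_gas", 10.0))
```

The takeaway is that tiny masses of unabated process gas can dominate a fab's direct (Scope 1) footprint, which is why abatement efficiency appears later in the report's list of levers.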

Perhaps the most striking case study involves backside power delivery (BSPD), a major scaling innovation that many assume carries a straightforward carbon penalty due to added process complexity. The reality is more nuanced. In an illustrative comparison of Intel 18A manufactured in the United States versus TSMC N2 manufactured in Taiwan, the Intel process results in lower manufacturing CO₂e per die. Not because it is simpler, but because the U.S. grid is cleaner. The electricity mix where a chip is fabricated can outweigh the complexity of the manufacturing process itself. That is a finding with immediate implications for anyone making sourcing or fab-selection decisions.
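The grid-mix arithmetic behind that comparison is easy to sketch: Scope 2 emissions per wafer are roughly electricity per wafer times grid carbon intensity. The figures below are hypothetical placeholders, not TechInsights data; they only illustrate how a cleaner grid can outweigh added process complexity.

```python
# Sketch of the fab-location effect: a more complex process (more
# kWh/wafer) on a cleaner grid can still beat a simpler process on a
# carbon-intensive grid. All numbers are hypothetical placeholders.

def scope2_kg_per_wafer(kwh_per_wafer: float, grid_kg_per_kwh: float) -> float:
    """Scope 2 CO2e (kg) per wafer = electricity x grid intensity."""
    return kwh_per_wafer * grid_kg_per_kwh

complex_clean_grid = scope2_kg_per_wafer(2400, 0.37)  # more steps, cleaner grid
simpler_dirty_grid = scope2_kg_per_wafer(2000, 0.55)  # fewer steps, dirtier grid
print(complex_clean_grid, simpler_dirty_grid)  # the complex process wins
```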

AI Hardware Is Scaling Emissions Faster Than Shipments

The report’s treatment of AI accelerators is where the numbers become genuinely striking. TechInsights’ Global AI GPU Manufacturing Carbon Emissions Forecast shows that by 2030, manufacturing emissions from AI GPU production are projected to rise more than twelvefold, from approximately 1.8 million metric tons CO₂e in 2024 to 21.6 million metric tons CO₂e. AI GPU manufacturing is expected to account for roughly 8.7% of total semiconductor die fabrication emissions by 2030. The average accelerator is expected to exceed one metric ton of CO₂e per unit by 2029.

The driver is not primarily bigger logic dies. It is memory, specifically high-bandwidth memory (HBM). The average AI accelerator is expected to integrate roughly 250 HBM dies by 2030. NVIDIA’s Rubin Ultra-class designs are projected to approach approximately 1 TB of HBM through higher stack counts and heights. As Stephen Russell notes, the AI-driven surge in HBM and advanced memory is likely to raise semiconductor manufacturing emissions in absolute terms, increasing memory wafer starts and adding process complexity even as leading manufacturers improve efficiency per transistor.

There is a subtler dimension the report explores carefully: yield. Stacking dies compounds yield loss, and in tall-stack HBM scenarios, stacking yields above 93% are necessary to prevent emissions per usable stack from rising sharply. That makes yield learning and process control first-order sustainability levers, not just cost levers.
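The yield-compounding arithmetic works as in this sketch, where per-step bonding yield compounds across the stack and emissions are amortized only over usable stacks. Stack height and yield values are illustrative, not from the report.

```python
# Sketch of yield compounding in stacked HBM: emissions per usable
# stack = emissions per attempted stack / compound yield.
# Numbers are illustrative, not TechInsights data.

def co2e_per_good_stack(co2e_per_attempt: float,
                        step_yield: float, n_steps: int) -> float:
    """Amortize per-attempt CO2e over usable stacks, assuming each of
    n_steps bonding steps succeeds independently with step_yield."""
    compound_yield = step_yield ** n_steps
    return co2e_per_attempt / compound_yield

# At 93% per-step yield across 11 bonding steps, under half the
# attempts survive, so CO2e per good stack more than doubles.
print(co2e_per_good_stack(100.0, 0.93, 11))
```

This is why the report can treat yield learning as a first-order sustainability lever: every point of stacking yield recovered directly divides the emissions attributed to each shippable stack.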

The Client Device Story: Carbon Paid Upfront

The report’s third major focus is consumer and enterprise devices, where on-device AI is often framed as an operational efficiency win. Fewer cloud calls, lower network load, specialized local hardware: all genuine benefits. But the manufacturing emissions for those devices are paid upfront, and they are concentrated in places that might surprise you.

Using teardown-based analysis of AI PCs including recent Microsoft Surface and ASUS Zenbook models, the report finds a consistent pattern: memory and storage, not the headline processor or NPU, account for the majority of embodied carbon in AI PC platforms. Across the examples evaluated, memory accounts for roughly 43% to 57% of packaged-IC CO₂e, while the applications processor accounts for only about 14% to 21%.

The supplier concentration finding is particularly actionable. In one Surface Laptop 7 configuration, three suppliers account for approximately 73% of packaged-IC carbon. In a Zenbook S14 model, three suppliers account for roughly 84%. A small number of parts and vendors determine most of the footprint, which means platform configuration and supplier selection are among the highest-leverage carbon choices a product team can make.

What Can Actually Be Done

The report identifies a clear set of high-leverage actions: pursuing cleaner electricity and power purchase agreements, reducing yield loss and rework especially late in the flow and in stacked memory, substituting low-GWP gases with higher abatement efficiency, improving tool energy and utilization, and optimizing bit density and platform configuration.

Semiconductor sustainability has become a decision problem. The highest-impact choices around fab location, memory configuration, supplier mix, and platform architecture are made before a product ships, carrying carbon consequences that most legacy reporting methods cannot capture. The full report, Carbon in the Age of AI Chips, is available now.

LINK: Carbon in the Age of AI Chips | Earth Day eBook | TechInsights

Stephen Russell: Senior Technical Fellow

As Senior Technical Fellow for Sustainability at TechInsights, Stephen provides expert insight into carbon footprint across the entire technology life cycle, from raw materials through product manufacturing, use and end of life. Stephen also works on unique initiatives to characterize Scope 3 emissions in the use phase of consumer electronics products, with further reaching implications for data center and automotive applications.

Stephen is internationally recognized for technical research contributions and collaborations. These include a 2018 best paper award from the IEEE Transactions on Power Electronics for the paper “High Temperature Electrical and Thermal Aging Performance and Application Considerations for SiC power DMOSFETs”. He led an exploratory research project in gallium oxide for power devices, presenting findings to the Royal Institution, London. While working in industry, he led the development of a new silicon IGBT product line and instigated a research and development project to use silicon carbide JFETs in circuit protection applications.

Also Read:

Kirin 9030 Hints at SMIC’s Possible Paths Toward >300 MTr/mm2 Without EUV

Cost, Cycle Time, and Carbon aware TCAD Development of new Technologies

5 Expectations for the Memory Markets in 2025


TSMC Technology Symposium 2026 Overview

by Daniel Nenni on 04-22-2026 at 12:00 pm

Yes, it is that time of year again: the 2026 TSMC Technology Symposium kick-off event in Silicon Valley. TSMC has never been in a better position to forecast the future of semiconductor technology and the industry itself. TSMC closely collaborates with the top semiconductor companies around the world and the top players in the semiconductor ecosystem. Never in the history of TSMC has the company been in such a prominent position, and the information that comes from that is astounding.

Dr. Kevin Zhang, Senior Vice President and Deputy Co-COO, again honored us with a press briefing before the event, which is what we are talking about today. Next week we will talk in more detail about the event itself and some of the announcements. Unlike most of the media, I will be there live with SemiWiki blogger Kalar Rajendiran.

Again, this is TSMC’s perspective on the semiconductor industry, but it is backed collectively by the entire semiconductor ecosystem, absolutely.

The semiconductor industry is entering a new phase of accelerated growth and architectural transformation, driven primarily by artificial intelligence (AI) and high-performance computing (HPC). Recent projections indicate that semiconductor market growth has significantly outpaced earlier expectations, rising from a forecasted 10% to an actual 23% annual increase, with future projections reaching approximately 45% growth. This rapid expansion is largely attributed to AI-driven demand, which is reshaping both technology development and system-level design.

Could this be the first year that the semiconductor industry outpaces TSMC? Hard to believe, but yes. I expect TSMC revenue to grow 30-40%. Here is the hitch: while wafer pricing is stable, chip pricing is not. The bulk of the 45% revenue growth is due to chip pricing versus chip unit sales. Memory pricing is a big part of this, but some of the AI chips (NVIDIA) are also selling at a premium.

A major milestone in this transformation is the advancement of global semiconductor revenue toward the $1 trillion mark, now expected to be achieved earlier than previously projected. As illustrated in the industry trend chart, AI represents the latest inflection point following previous computing waves such as PCs, the internet, and smartphones. By 2030, the semiconductor market is expected to exceed $1.5 trillion, with HPC and AI contributing over 55% of total demand, far surpassing other segments like smartphones (20%), automotive (10%), and IoT (10%).

At the core of this growth is continuous innovation in semiconductor process technology. The roadmap for advanced nodes demonstrates a steady progression from nanometer-scale fabrication toward angstrom-class technologies. Nodes such as TSMC N2 and its enhanced derivative TSMC N2U focus on improving power, performance, and area (PPA) through design-technology co-optimization (DTCO). According to the technical data presented, N2U offers a 3–4% speed improvement at constant power, up to 10% power reduction at the same speed, and a modest increase in logic density. These incremental improvements are critical for maximizing return on investment for chip designers while maintaining compatibility with previous node designs.

Further advancements are seen in next-generation nodes such as A13, which extend technology leadership through optical shrink techniques. A 97% optical shrink enables approximately 6% area reduction while preserving backward compatibility in design rules. This allows designers to benefit from improved density without requiring extensive redesign, thereby accelerating product deployment.
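The ~6% figure follows directly from the geometry: a 97% optical shrink applies to both die dimensions, so area scales with the square of the linear shrink.

```python
# Area reduction from a 97% linear (optical) shrink:
# area scales with the square of the linear dimension.
linear_shrink = 0.97
area_ratio = linear_shrink ** 2          # ~0.9409 of original area
area_reduction = 1 - area_ratio          # ~0.059, i.e., ~6%
print(round(area_reduction * 100, 1))    # 5.9
```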

While transistor scaling remains important, it is no longer sufficient to meet the exponential demands of AI workloads. Consequently, advanced packaging and system integration technologies have become central to performance scaling. Technologies such as CoWoS (Chip-on-Wafer-on-Substrate) and SoIC (System-on-Integrated-Chips) enable heterogeneous integration of logic and memory components. The HPC platform diagram illustrates how advanced logic dies, high-bandwidth memory (HBM), and photonic components are integrated into a single package to maximize compute density and efficiency.

The scaling of interposer size is a key enabler of this integration. Interposer capacity is expanding from 3.3 reticle sizes to over 14 reticles by 2029, supporting up to 24 HBM stacks. This expansion allows for massive increases in memory bandwidth and compute capability. Additionally, wafer-scale integration technologies such as System-on-Wafer (SoW) extend this concept further, enabling integration at scales exceeding 40 reticles, equivalent to 64 HBM stacks.

Three-dimensional stacking technologies also play a critical role in enhancing interconnect density and power efficiency. SoIC technology enables vertical integration with significantly higher interconnect density—up to 56× compared to traditional 2.5D approaches—and improved power efficiency. This shift from planar to vertical integration reflects a broader industry trend toward system-level optimization rather than purely transistor-level scaling.

The impact of these innovations is evident in system-level performance metrics. The number of compute transistors within a single CoWoS package is projected to increase by up to 48× between 2024 and 2029. Similarly, memory bandwidth is expected to scale by 34× during the same period, driven by advancements in HBM technology and integration techniques.
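For context, the compound annual growth implied by those multipliers can be computed directly. This is back-of-envelope arithmetic on the 2024-2029 window, not a figure presented at the symposium.

```python
# Implied compound annual growth rate (CAGR) behind the 48x compute-
# transistor and 34x memory-bandwidth projections over 2024-2029
# (5 years). Back-of-envelope only.

def cagr(ratio: float, years: int) -> float:
    """Annual growth rate implied by an overall multiplier."""
    return ratio ** (1 / years) - 1

print(f"transistors: {cagr(48, 5):.0%}/yr")  # roughly 117%/yr
print(f"bandwidth:   {cagr(34, 5):.0%}/yr")  # roughly 102%/yr
```

In other words, both metrics imply better-than-doubling every year, which is why packaging rather than transistor scaling has become the dominant lever.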

Another critical innovation is the adoption of co-packaged optics (CPO) for high-speed interconnects. Traditional electrical interconnects face limitations in power efficiency and latency. By integrating optical communication directly into the package, systems can achieve up to 10× improvements in power efficiency and 20× reductions in latency, as shown in a performance comparison chart. This transition from electrical to optical signaling is essential for scaling AI infrastructure, where massive data movement between compute units is required.

Beyond data centers, semiconductor advancements are also enabling the emergence of physical AI applications, particularly in automotive and robotics. Modern vehicles are evolving into compute-centric platforms with significantly increased silicon content, incorporating advanced processors, sensors, and connectivity modules. Looking forward, humanoid robots represent a convergence of digital AI and physical interaction, requiring integrated systems for sensing, computation, motion control, and power management.

Bottom line: The semiconductor industry is transitioning from traditional scaling paradigms to a holistic, system-level approach that integrates advanced process nodes, heterogeneous packaging, photonics, and AI-driven architectures. This convergence is enabling unprecedented growth in computational capability and will define the technological landscape of the next decade.

Also Read:

TSMC to Elon Musk: There are no Shortcuts in Building Fabs!

TSMC Technology Symposium 2026: Advancing the Future of Semiconductor Innovation

Global 2nm Supply Crunch: TSMC Leads as Intel 18A, Samsung, and Rapidus Race to Compete


Could Neutral Atoms Take the Lead in Quantum Computing?

by Bernard Murphy on 04-22-2026 at 10:00 am

As I noted in recent posts on quantum computing (QC), superconducting QC is the leading technology contender, exemplified in systems from IBM, Google, and Fujitsu. Other technologies such as ion trap, neutral atom, photonic and quantum dot have generally been viewed as intriguing but second tier. I just read a very recent paper suggesting that neutral atom methods may have more potential than I thought. The paper is technically very dense and I haven’t yet had a chance to fully digest some of the claims, but what I have read suggests some very interesting shifts in what might be possible through neutral atom-based QC.

A neutral atom QC architecture. Image courtesy Oratomic, Caltech and Berkeley (in paper referenced above)

Backgrounder on neutral atom QCs

Neutral atom technologies suspend an array of atoms in vacuum, each atom acting as a very pure qubit. Atoms with a single valence electron such as rubidium or cesium allow information to be stored in hyperfine states split by interaction between the valence electron spin and the nuclear magnetic moment. Atoms are held in place using laser-based optical tweezers.

Lasers operating at different frequencies perform multiple functions. Laser cooling minimizes thermal motion and initializes qubit states. Simple gating operations (Clifford gates like X, Hadamard and S) are implemented through targeted laser pulses addressing individual qubits. Two qubit gates are implemented by pumping the control atom/qubit to a highly excited (Rydberg) state with a greatly expanded electron wavefunction. This can interact with neighboring qubits, enabling entanglement. Finally, measurement is accomplished through targeted laser-stimulated resonance fluorescence. Lots of lasers, yet research indicates this complexity is becoming manageable.

Neutral atom methods offer a very intriguing advantage over fixed qubit technologies (most if not all other options). Fixed qubit methods (superconducting for example) only allow for communication between nearest-neighbor qubits. Algorithms which need to superpose or entangle geometrically distant qubits require multiple SWAP gates to cross the gap, amplifying error rates. In neutral atom arrays, optical tweezers can be steered to reconfigure qubits dynamically. If qubit A needs to interact with qubit B and they are not adjacent, steer them to be adjacent, perform the operation then steer those qubits back to their original positions.
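A toy model makes the SWAP-overhead argument concrete. The gate fidelity below is an assumed illustrative value, and the routing model (linear array, one SWAP per hop, three CNOTs per SWAP) is a standard simplification, not taken from the paper.

```python
# Toy model of routing cost on a fixed linear qubit array: bringing
# qubits at distance d adjacent costs (d-1) SWAPs, and each SWAP
# decomposes into 3 CNOTs. Gate fidelity is an assumed placeholder.

def success_prob(distance: int, cnot_fidelity: float = 0.995) -> float:
    """Rough probability the whole routed operation succeeds."""
    swaps = distance - 1           # one-way routing to adjacency
    cnots = 3 * swaps + 1          # 3 CNOTs per SWAP + the target gate
    return cnot_fidelity ** cnots

print(success_prob(2))    # near neighbors: few gates, high success
print(success_prob(20))   # distant qubits: success drops toward ~0.75
```

Steerable optical tweezers replace that geometric decay with a physical move, which is the advantage the paragraph above describes.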

A major step forward for quantum architectures

Reconfigurability may not seem like such a big deal, but it opens an important option for QC architectures. An architecture can be partitioned into distinct components (see the figure above): a memory, a processor and a resource for magic states (supporting operations beyond Clifford gates, like T-gates and Toffoli-gates). Communication between these functional units is accomplished through teleportation, a proven method to transfer quantum state from one qubit to another using entanglement (this is different from reconfiguring). Lots of quantum weirdness but now starting to look more familiar as a computer architecture.

There is a second benefit, in quantum error correction (QEC). You may remember that QEC is conceptually similar to classical error correction, using additional (ancilla) qubits to check and correct errors. Except that QEC can require 1000 physical qubits for every logical qubit, which is why it is proving so difficult to get to high logical qubit counts in fault-tolerant systems. It appears that reconfigurability in neutral atom systems can dramatically reduce ancilla qubit demand.

The state of the art for QEC in fixed qubit systems uses a mechanism called “surface codes” which has somewhat reduced the physical to logical qubit threshold but still requires hundreds of physical qubits for every logical qubit. The Oratomic paper suggests a Low-Density Parity Code method offering better than two orders of magnitude reduction over those surface codes in required physical qubits. Further, qubits not currently participating in active processing can be moved to a less noisy storage area (memory) with lower error potential until needed again. Overall, greatly reduced QEC impact in necessary ancilla qubits. Potentially making fault-tolerant QC much closer than we had thought.
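A back-of-envelope comparison shows what those overhead ratios mean at scale. The specific overhead factors below are illustrative stand-ins for the ranges quoted above, not numbers taken from the Oratomic paper.

```python
# Physical-qubit budgets for a hypothetical 1000-logical-qubit machine
# under different QEC overheads. Overheads are illustrative stand-ins:
# ~1000:1 for early estimates, a few hundred:1 for surface codes, and
# ~100x less than surface codes for LDPC-style approaches.

def physical_qubits(logical: int, overhead: float) -> int:
    return round(logical * overhead)

logical = 1000
print(physical_qubits(logical, 1000))  # 1,000,000 - early estimates
print(physical_qubits(logical, 300))   # 300,000   - surface-code era
print(physical_qubits(logical, 3))     # 3,000     - LDPC-style reduction
```

Dropping from hundreds of thousands of physical qubits to thousands is the difference between a distant roadmap item and something a startup can credibly target.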

Caution and some implications

The paper suggests that a few techniques proposed in support of this technology are still in development. That said, Oratomic launched in March this year, founded and staffed by authors of this paper, to deliver a reconfigurable neutral atom computer along these lines. Given how new they are, we should probably expect the usual teething problems.

Still, if they deliver, Shor’s algorithm may be implementable at cryptographically significant levels (RSA-2048 and ECC-256) sooner than we expected, potentially within this decade. That’s the downside, but the upside is that quantum compute for much more interesting applications may also be closer than we thought.

One more thought: compiling to, topologically organizing and simulating these systems will become a lot more interesting. Opportunities for EDA and compiler tech development!

Also Read:

Effective Defense Against Hacks at the Edge

Calibrating Quantum Computing Activity in Financial Services

An Upper Bound on Effective Quantum Computation?


Transitioning Voltage Regulator Design From Unidirectional To Bidirectional

by Daniel Nenni on 04-22-2026 at 8:00 am

An interesting article by Nazzareno Rossetti, published in How2Power Today, explores the transition from designing traditional unidirectional voltage regulators to bidirectional converters essential for modern energy management systems (EMSs). These systems optimize energy flow and storage among electric vehicles, photovoltaic arrays, home batteries, and the grid, requiring components like regulators and chargers to handle power in both directions seamlessly.

Most power electronics experience stems from unidirectional applications, such as buck converters supplying CPUs or mobile devices. Here, energy flows one way from a variable input to a regulated output powering passive loads. Transitioning to bidirectional operation, exemplified by the phase-shift dual active bridge (DAB) converter, represents a significant shift. Designers accustomed to unidirectional PWM buck control may feel unprepared for DAB complexities. Rossetti bridges this gap by demonstrating that bidirectionality is not entirely foreign—elements already exist in conventional designs.

Synchronous power stages reveal inherent bidirectionality. In a synchronously rectified buck converter (e.g., 12 V to 1.8 V), inductor current ripple at light load can dip negative, causing brief reverse energy flow from output to input. This “annoyance” disrupts regulation and efficiency, often requiring designers to disable synchronous rectification at low loads. Similarly, in PWM full-bridge motor drivers, current recirculates back to the input during off-time, risking capacitor overvoltage.

These examples show synchronous half-bridges naturally support bidirectional flow. Rossetti illustrates this by reconfiguring the same power train: placing feedback at the low-voltage side yields a buck converter (current from high to low), while placing it at the high-voltage side creates a boost (current from low to high). Thus, the topology can exchange energy between nodes bidirectionally, for instance allowing a depleted 1.8-V battery to draw from a 12-V battery or vice versa.

Conventional hard-switched converters suffer efficiency losses during switching transitions. Negative current in light-load scenarios enables zero-voltage switching (ZVS) by bootstrapping the switching node, but only when the ripple is excessive relative to the DC load current, which is impractical for most uses. The solution lies in embracing reverse current fully via two-stage topologies that generate a zero-average, 50% duty-cycle square-wave current for soft switching, then rectify it to DC.

The phase-shift DAB exemplifies this approach. It features isolated full bridges on primary and secondary sides, connected via a high-frequency transformer. Phase-shift modulation between bridges controls power direction and magnitude at fixed frequency. Primary and secondary phase-shift chains, with proportional-integral compensation, regulate the designated output port. Auxiliary flyback converters and pulse transformers provide high-side gate drive and isolated feedback. Dead-time generation prevents shoot-through.
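The article does not state the DAB control law explicitly; the textbook single-phase-shift relation below is a common way to sketch how the phase shift sets both the magnitude and the direction of power flow. All component values here are assumptions, not figures from Rossetti's design:

```python
import math

# Sketch: textbook single-phase-shift DAB power transfer (not from the article).
# P = n*V1*V2 * phi*(pi - |phi|) / (2*pi^2 * f * L); the sign of phi sets direction.
def dab_power(v1, v2, phi, f_sw=100e3, l_lk=20e-6, n=1.0):
    """Power through the DAB for phase shift phi (radians).
    f_sw, l_lk (series/leakage inductance), and turns ratio n are assumed values."""
    return n * v1 * v2 * phi * (math.pi - abs(phi)) / (2 * math.pi**2 * f_sw * l_lk)

p_fwd = dab_power(400.0, 400.0, math.pi / 6)    # primary leads: forward power
p_rev = dab_power(400.0, 400.0, -math.pi / 6)   # primary lags: power reverses
print(f"{p_fwd:.0f} W forward, {p_rev:.0f} W reverse")
```

Flipping the sign of the phase shift reverses the power flow with no change to the hardware, which is why phase-shift modulation at fixed frequency is such a natural fit for bidirectional ports.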

Simulations for grid-tied/off-grid storage applications (100 V to 5000 V DC link) show bipolar inductor current enabling consistent ZVS. Phase-shift control outperforms PWM by maintaining fixed frequency, simpler filtering, easier gate-drive design, and fewer diode conductions, yielding excellent full-load efficiency (though light-load performance degrades as ZVS weakens).

Rossetti concludes that emerging bidirectional markets challenge designers under tight deadlines. Yet, familiar tools like synchronous half-bridges form the foundation. A full bridge extends two half-bridges, and with imagination, unidirectional concepts evolve into mature bidirectional architectures like the DAB. This perspective eases the mindset shift from one-way to two-way energy control, helping engineers deliver innovative solutions for EMS applications.

CONTACT ALPHACORE 

Also Read:

Aerial 5G Connectivity: Feasibility for IoT and eMBB via UAVs

A Tour of Advanced Data Conversion with Alphacore

Analog to Digital Converter Circuits for Communications, AI and Automotive


How to Overcome the Advanced Node Physical Verification Bottleneck

by Mike Gianfagna on 04-22-2026 at 6:00 am


It is well known that advanced semiconductor process technology presents substantial challenges across the full design flow and global supply chain. In this piece, we will focus on a particularly difficult problem: physical verification. This design step is the final gate to manufacturing; producing a final tape-out GDS that complies with all foundry signoff rules is required to ensure functional silicon. As process nodes shrink, the number of rules and operations explodes, and the expanded interaction of subtle effects has caused the required number of checks to quadruple.

This problem represents a discontinuity in the design flow. The traditional and trusted methods of yesterday do not scale to address these new requirements. Conquering these requirements demands a modern software architecture and innovative methodologies. In this post we’ll explore the dimensions and challenges of physical verification and highlight a new approach that tames these issues. The graphic at the top of the post illustrates a high-level view of what design teams are facing. Let’s examine the problem in more detail and explore an effective solution as we explain how to overcome the advanced node physical verification bottleneck.

The Problem

Physical verification is an unforgiving task. Thanks to the expanded use of digital and analog chiplets integrated with advanced packaging technology, designs must pass 100% of all checks across the digital, analog and packaging flows. A partial list of these checks includes DRC, LVS, FILL, PERC/ESD, Antenna, and Stress.

The actual list is much longer. For many of these checks, complexity and run time are increasing exponentially. Full-chip runs often require days to weeks to complete. And sometimes those long runs don’t finish at all. The figure below summarizes a few of the critical bottlenecks design teams are facing.

Critical Bottlenecks

Beyond the specific number and complexity of tasks now involved in physical verification, there is a common thread in the table presented above – the ability to use compute resources in a flexible, elastic way. An approach like this will reduce overall cost and improve efficiency. More on this in a moment.

To highlight the size of the physical verification problem, here are a couple of facts about the various bottlenecks design teams face:

  • Full-chip antenna checks: Between N7 and N2 there has been an 8X increase in transistor count. Metal layers have grown from 14 to 25. And there are now 8x more property calculations.
  • Full-chip PERC ESD checks: These checks typically take weeks to complete. This impacts ESD verification and fixing, putting the tape-out schedule at risk. At N2, there are many more die-to-die interface checks and a larger ESD device footprint.

The Solution – Synopsys IC Validator

Synopsys IC Validator represents a fresh approach to physical verification, delivering the industry's best distributed-processing scalability. Its performance and scalability have enabled same-day design rule checking (DRC), antenna rule checking, layout versus schematic (LVS), and programmable electrical rule checking (PERC) turnaround on some of the industry's largest reticle-limit chips with billions of transistors.

Solving a problem this large requires a broad design flow footprint. IC Validator integrates seamlessly with the Synopsys Fusion Compiler™ RTL-to-GDSII solution, 3DIC Compiler, the IC Compiler® II place-and-route system, Custom Compiler, and StarRC for golden signoff parasitic extraction. This integrated physical verification solution handles the challenges presented by hyper-convergent chip design, that is, multiple technologies, architectures, and protocols integrated into a single, complex semiconductor design.

The Fusion Technology delivered by IC Validator accelerates design closure for manufacturing by enabling independent signoff-quality analysis and automatic repair within a unified implementation environment. Let’s look a bit closer at a couple of the more difficult checks and how IC Validator removes barriers.

Full-Chip Antenna Checks

Scaling for Full Chip Antenna Checks

Antenna complexity is increasing exponentially, making full-chip antenna runs a major bottleneck for tape-outs. The IC Validator HyperSync architecture improves connectivity and antenna scaling to help overcome these challenges. Across many design styles, including hyperscaler, network processor, CPU/GPU and complex ASICs, IC Validator delivers 2X to 4X faster turnaround time, often with 70% less CPU core usage compared to traditional, legacy approaches.

The diagram to the right illustrates how well the tool’s performance scales for full-chip antenna checks. And the diagram below illustrates the impact of IC Validator’s unique HyperSync architecture as process nodes advance.

Impact of HyperSync Architecture

Full-Chip PERC ESD Checks

To provide a bit of background, programmable electrical rules checking (PERC) is a method for assessing chip reliability issues that cannot be checked with DRC or LVS. These reliability checks are frequently electrostatic discharge (ESD) related, but they can extend to other areas as well, including electrical overstress and dielectric breakdown. The rules involve connectivity and netlist information but also need to support full customization from design to design, making the process challenging.

Scaling for Full Chip PERC ESD Checks

Full-path PERC ESD runs often take weeks to complete, significantly impacting ESD verification and fixing and risking the tape-out schedule. IC Validator's HyperSync architecture tackles flat ESD problems with intelligent scaling. Across designs including complex ASICs, CPU/GPUs, and mobile processors, IC Validator delivers 2X to 4X faster turnaround time for full-chip PERC ESD. In some cases, the product successfully completed runs that other tools could not finish at all.

The diagram to the right illustrates how well the tool’s performance scales for full-chip PERC ESD checks. And the diagram below illustrates the impact of IC Validator’s unique ESD-aware engine and HyperSync architecture as process nodes advance.

Impact of ESD Aware Engine

 

Optimal Compute Efficiency with Elastic Compute

ICV Scaling vs. Legacy Approaches

As mentioned above, the ability to use compute resources in a flexible, elastic way is a critical element in taming advanced node physical verification bottlenecks. IC Validator's architecture supports highly efficient, massive-scale distributed processing, dynamically utilizing CPU cores during runs without affecting overall turnaround time. This approach reduces compute cost by requiring up to 50% less hardware, eliminates queue time, and optimizes resource utilization. The system is highly optimized for both cloud and on-premises environments and eliminates the need for a customized deck-splitting approach to achieve target turnaround times.

The bottom line is that design teams can get the best runtime with 50% less compute resources for large SoC designs. The figure on the right illustrates the effectiveness of this architecture when compared with traditional, legacy approaches.

Early Exploration and In-Design Flows

To get to tape-out on time requires review and fixing of errors across many checks. Each one presents its own unique set of challenges and requirements. Thanks to the broad set of In-Design flows that are part of IC Validator, this process becomes more coherent, consistent and efficient. The figure below shows a view of the tools available.

Broad Set of In Design Flows

DRC Explorer uses advanced technology and signoff foundry decks to find the most important design problems quickly on dirty designs, using fewer resources while achieving faster runtimes.

LVS Explorer is similar, focusing on fast checks of top-level shorts and opens, avoiding complicated LVS debug efforts that complicate signoff.

Antenna Explorer allows the user to focus on the most common cause of antenna violations: those that occur at the interfaces between top-level routing and IP blocks. By eliminating unnecessary and irrelevant violations, designers clean interface antenna issues efficiently.

Automatic DRC fixing combines the signoff quality of IC Validator results with the efficiency of the Fusion Compiler router. This integrated flow cuts down on painful manual iterations between the two tools.

Similarly, timing-aware fill is able to identify timing critical nets and provide adequate spacing to foundry fill shapes, thus performing the necessary metal fill step while maintaining the timing closure status.

When DRC checking and metal fill are done in the In-Design environment, incremental updates are possible after manual modifications to the layout by the user.  DRC checks become lightning fast, and metal fill is inserted surgically, minimizing the impact to parasitics, timing, and signal integrity.

Customer Adoption and Foundry Certification

IC Validator adoption in the market is accelerating with industry-leading customers leveraging its full-chip performance and hardware efficiency for signoff applications.

IC Validator is certified by leading foundries across advanced process nodes, with qualified runsets available directly from the foundries.

To Learn More

We have touched on some of the significant challenges of taping out an advanced node design. A new approach is required to meet these challenges; time-tested traditional methods can no longer be relied upon.

IC Validator offers a modern, fully unified approach that delivers results for comprehensive physical verification sign-off for advanced designs. We’ve discussed how its HyperSync architecture enables massive scalability for full-chip PERC ESD and antenna checks. With Elastic Compute, IC Validator reduces compute cost with 50% less compute resources. And thanks to early exploration and In-Design applications using foundry-certified runsets, design convergence is accelerated.

In future posts, we will share results from top-tier customers who are using IC Validator to tape-out some of the world’s most sophisticated designs. Until then, you can learn more about Synopsys IC Validator here or contact Synopsys if you have any questions. And that’s how to overcome the advanced node physical verification bottleneck.

Also Read:

Podcast EP342: The Evolution and Impact of Physical AI with Hezi Saar

WEBINAR: Beyond Moore’s Law and The Future of Semiconductor Manufacturing Intelligence

From Wooden Boards to White Gloves: How FPGA Prototyping and Emulation Became Two Worlds of Verification… and How the Convergence Is Unfolding


Is Intel About to Take Flight?

by Jonah McLeod on 04-21-2026 at 10:00 am

Chip to Austin

The Pan Am–Boeing playbook and what Musk’s Terafab order could mean for Intel Foundry

“We either build the Terafab or we don’t have the chips.” That’s Elon Musk, speaking to Reuters, stating a supply constraint as plainly as anyone has stated one. TSMC is sold out. Samsung is committed. The existing supply chain can’t expand fast enough to meet what his companies will need for AI, robotics, and space. So he went looking for a different kind of supplier — one with capacity, knowhow, and no queue.

Pan Am’s Juan Trippe found himself in the same position in the mid-1960s with a seat shortage. Existing aircraft couldn’t move enough people at a price that made mass air travel possible. Boeing was hemorrhaging money chasing Concorde, consumed by the prestige race for supersonic speed. Trippe didn’t ask for the fastest plane, but one with enough capacity to move the most people, to the destinations they dreamed of but couldn’t afford.

Trippe walked into Boeing with a vision of a world that had never flown before, and left with Boeing committed to building something beyond its own comprehension. Boeing president Bill Allen built a factory larger than anything the company had ever constructed. Trippe ordered planes Pan Am couldn't fully pay for. The supersonic program died. The 747 reshaped global travel for fifty years.

What made both men effective was their audacity — the alpha instinct to get out in front of the pack and identify where the big kill is before anyone else sees it. Trippe walked into Boeing with a betcha neither company could afford to lose. In his mind the 747 with Pan Am’s blue and white livery was already crossing every ocean, connecting cities worldwide, carrying people who had never flown.

Musk walks into Intel with the same completed picture — a conveyor belt of silicon feeding his robots, his cars, his satellites, a future he can already see in full resolution even if no one else yet can. He walked in as a partner proposing a betcha Intel can’t afford to lose alone. Both pairs of men at the bridge’s edge — cord untested, betting it holds.

In mid-April, Intel CEO Lip-Bu Tan sent a memo to his staff — two days after Intel announced its Terafab involvement via a 60-word post on X with no press release published to its own website. “Musk’s expansive vision across AI, transportation, communications, robotics and space travel relies heavily on an ample and uninterrupted supply of silicon chips,” Tan wrote of Musk. “Intel is thus a natural partner to help him realize his vision.” He noted that he and Musk had held “wide-ranging and deep conversations,” from which “both sides quickly realized that working together would be mutually beneficial.” That’s the Allen moment — the handshake that commits the company before the engineering is fully proven.

The prestige race is with TSMC. And like Boeing’s pursuit of supersonic transport, it has been expensive, consuming, and largely beside the point of what Intel actually needs to survive.

TSMC built its dominance not by inventing new physics but by running sophisticated machines with greater discipline, consistency, and yield than anyone else. Five companies supply those machines and collectively control the world supply of leading-edge chip production. ASML holds a monopoly on EUV lithography — the only process capable of printing patterns that define a leading-edge transistor, with no alternative anywhere on earth. Applied Materials encodes deposition and materials engineering. Lam Research owns plasma etch. KLA dominates inspection and metrology. Tokyo Electron controls thermal processing and the track systems every wafer passes through before lithography. One Dutch, one Japanese, three American. Remove any single link from the chain and leading-edge production stops globally. Not slows. Stops.

These companies represent the accumulated process expertise of an entire generation of engineers, encoded into capital equipment over decades. The lithography knowledge that once lived in the minds of physicists at Philips now lives in a $400 million machine. The etch knowledge that took Lam’s engineers thirty years to develop runs in software on a plasma chamber. The inspection intuition KLA’s founders built from first principles is now a metrology system examining wafers at resolutions the human eye cannot approach.

The encoding worked — up to a point. What it captured was the physics, the measurement, the repeatable process. What it couldn’t capture was the judgment — what to do when you find something unexpected in a combination of circumstances that has never occurred before in exactly that way. That knowledge lives in people. It always has.

ASML and TSMC didn’t arrive at dominance separately. They built each other. TSMC needed a lithography partner willing to develop equipment around a pure-play foundry model. ASML needed a customer whose volume commitments made the economics of EUV development viable — a technology that took twenty years and billions of dollars to become real. TSMC’s process discipline gave ASML a proving ground. ASML’s roadmap gave TSMC the node leadership no integrated device manufacturer could match.

They didn’t plan to build a duopoly. They kept showing up for each other when the technology needed one more generation of commitment to become real. The three American equipment companies — Applied Materials, Lam, KLA — built the tools that made TSMC possible. The Dutch-Taiwanese partnership built the manufacturing culture that knew how to run them. The tools stayed in California. The judgment went to Hsinchu.

Pan Am and Boeing ran the same play a decade earlier. Neither knew they were building something that would reshape global travel. They were solving immediate problems for each other — Pan Am needed seats, Boeing needed a customer large enough to justify the factory. The 747 was the output of that mutual dependency, not a grand vision executed from the top down.

Tan’s memo reads the same way. Not a grand vision — a natural partnership. A customer with an uninterrupted need for silicon, a supplier with allocated capacity and no queue. Tan has assigned his chief of staff and interim CTO, Pushkar Ranade — an 18-year Intel veteran — to manage the engagement directly, with Tan overseeing it personally. “I have asked Pushkar to assemble and engage select technologists across the company to contribute to this project,” Tan wrote. That is not a business development assignment. That is the company’s institutional memory being pointed at the problem.

What makes the situation urgent is that the judgment Hsinchu accumulated is exactly what Hillsboro is now losing.

The process technicians laid off at National Semiconductor in the 1980s, whose departure preceded an immediate yield collapse, embody the oldest story in semiconductor manufacturing: management cutting what it couldn't measure and losing what it couldn't replace. Intel's layoffs are running the same script right now, with engineers who ran 14nm and Intel 4 through production ramps walking out the door carrying knowledge no machine has recorded.

The fab managers executing those layoffs made locally rational decisions. So did every executive who offshored assembly, every investor who rewarded the fabless model, every university that defunded process engineering programs because students wanted to study AI. Each decision looked good on its own terms. The aggregate is TSMC’s Arizona operation running identical machines to Taiwan at lower yields — because the machines transferred and the judgment didn’t.

You cannot allocate budget for the intuition that forms when enough people who know what they are doing work in close enough proximity for long enough that the knowledge stops being individual and becomes environmental. The CHIPS Act funded the machines. It didn’t fund the community. AI is making this worse in a way the policy apparatus hasn’t fully reckoned with.

AI is doing to semiconductor process engineering education what offshoring did to semiconductor manufacturing employment — making it invisible as a career path at exactly the moment it becomes strategically critical. A Stanford graduate who could become a process integration engineer at Hillsboro is instead becoming a machine learning researcher at a billion-dollar startup. The pay, the status, and the trajectory all point the same direction, and it isn’t toward a fab.

AI runs on chips. The foundation models, inference engines, training clusters — all of it requires leading-edge silicon that requires the process engineering expertise the AI industry is pulling talent away from. The demand that makes AI possible is eroding the supply chain that makes AI possible. The next generation of potential process engineers is being pulled into AI’s orbit before they ever acquire the knowhow that needed recording. The reservoir isn’t just draining from the top. It is failing to refill from the bottom.

Musk isn’t requesting Intel’s most sophisticated capability. Tesla’s inference chips for Optimus and FSD are demanding but structurally simpler than the hyperscaler XPUs that exposed Intel’s yield limitations with Broadcom. Where Broadcom throws fastballs — face-to-face stacking, “Correct By Construction” yield requirements, zero tolerance for first-pass failure — Musk lobs softballs. Single-die inference chips, EMIB packaging Intel already owns, volumes that ramp gradually. Intel can hit that. Every wafer it runs for Tesla is a yield learning cycle on 18A — batting practice that makes the fastball more hittable the next time Broadcom steps onto the mound.

Intel also has something Terafab couldn’t acquire independently: ASML machines already installed, allocated, and running on a node with no external customer queue. Musk doesn’t need to build a fab. He needs to fill one that already exists and is currently running below its potential. TSMC’s allocation is spoken for years out. Samsung is committed. There is no line to cut at either foundry. Intel is sitting on capacity the rest of the industry can’t offer.

Pan Am’s 747 order didn’t make Boeing immediately competitive with Concorde. It kept Boeing solvent and learning while the prestige race burned money in the background. Musk’s inference chip orders may do the same for Intel Foundry — keeping the organization alive, building process confidence, absorbing displaced Hillsboro talent into a program with real volume and real deadlines before that talent disperses entirely.

The skepticism is warranted. Brad Gastwirth, global head of research at supply chain firm Circular Technology, noted last week that while “the ambition implied is significant,” visibility into execution remains limited. “There is no defined timeline to high volume manufacturing, no disclosure around capital intensity or cost per wafer, and no guidance on yield ramp expectations — which are critical given how sensitive advanced node production remains.” Those gaps are real. Tan’s promise to disclose the scope in coming weeks will either close them or widen them.

The warning the Pan Am analogy carries is the one Boeing learned the hard way. Pan Am pushed faster than the engineering was ready, contributing to early reliability problems that nearly killed the 747 before it found its footing. Musk’s timelines carry the same risk. The customer who saves you can also be the customer who breaks you if the pressure outruns the capability. Tweets never rescued a process node that couldn’t sustain yield.

The machines are extraordinary. The equipment suppliers encoded more process expertise into capital than any industry in history. But yield still lives in people. And the people are leaving. The window is closing one career change at a time, the same way the knowledge was lost.

Tan’s memo said Intel is a natural partner. The 747 flew. Whether Intel’s version does is the question the next few years will answer.

Also Read:

Disaggregating LLM Inference: Inside the SambaNova Intel Heterogeneous Compute Blueprint

Who’s Buying America’s Foundry Future?

Intel, Musk, and the Tweet That Launched a 1000 Ships on a Becalmed Sea


proteanTecs at Chiplet Summit – Changing the Game for Health & Performance Monitoring of Chiplets

by Mike Gianfagna on 04-21-2026 at 6:00 am


The recent Chiplet Summit 2026 was a great place to learn about new chiplet designs, emerging standards, and a growing array of support technologies to help design and manufacture chiplet-based systems. In my travels at the show, I found a lot of technology that fit these descriptions. But there were also companies at the show that took a different approach to support chiplet design. We’re all aware of the importance of a well-centered design that delivers optimal performance and power consumption in the smallest footprint.

Starting with a solid design that is extensively verified before tapeout is an important step to achieve this goal. But production and real-world uncertainties can create problems for even the best design. proteanTecs is one of the companies that uses novel technology to approach this problem. Their approach delivers accurate information about the chip throughout its lifetime and takes action on this data to ensure consistent health while optimizing performance and power. And all this is done without impacting die size. Let’s take a look at how proteanTecs is changing the game for health & performance monitoring of chiplets.

My Tour Guide

Nir Sever

I met Nir Sever, Senior Director of Business Development for proteanTecs in the company’s booth at Chiplet Summit. Nir has been with proteanTecs for over six years. Previously, he had a long history in executive management, chip design and design methodology at companies such as Tehuti Networks, 3dfx, Cadence and Zoran.

Nir explained the significant capabilities offered by proteanTecs and he demonstrated how it all worked in a live demo on the show floor with a real commercial application.

The Demo

The first point Nir covered was the scope of what proteanTecs delivers to its customers. proteanTecs offers monitoring that goes well beyond the simple concept of embedded sensors. Leveraging its proprietary ML-driven software engine, the solution analyzes the details of the design, including elements such as block size and the number of clock and power domains.

Demo System

Then, based on this analysis, a recommended configuration of monitoring agents and Hardware (HW) Monitoring System infrastructure is deployed. Once this configuration is agreed to, proteanTecs software does additional analysis to determine where to place each of the agents. A key point here is that this work is done *after* the design is placed and routed, so the agents literally fit in the white space of the design, avoiding any additional area overhead. Agent placement is an important step that optimizes proximity to the areas being monitored and ensures tight interaction between the agents.

Once implemented, the HW Monitoring System delivers critical information about the circuit without impacting die size or performance. Nir demonstrated the solution, which is compatible with all major implementation flows, on an Alphawave Semiconductor system with a datacenter-grade high-speed optical networking chip built in a TSMC 5nm process.

The diagram below shows the configuration and placement of the agents for the demo circuit.

Configuration and Placement of the Agents

With the agents embedded in the design, Nir highlighted some key capabilities. The first capability he showed was Continuous Performance Monitoring, which periodically reads the information from all agents and visualizes it. This information helps system designers understand how different chip configurations or workloads impact overall performance. It's similar to a logic analyzer or virtual scope embedded in the system. The diagram below is a screen capture of a fully populated display.

Continuous Performance Monitoring

Margin to Timing Failure tracks the critical path of the chip; each trace in this display represents the information from one of the Margin Agents (blue dots) in the "Floorplan" figure. DC Voltage displays the voltage at various locations. V&T Stress shows the impact of voltage and temperature stress over time. Frequency depicts the stress from the toggle rate, which reflects aging in the chip. Clock Cycle-to-cycle Jitter measures noise on the clock, and Effective Cycle Time Delta measures power supply noise. Together, these deliver a complete view of chip status and behavior in real time.

This mode is a passive display of the chip's performance. Next, Nir moved to a mode where the embedded system performed active optimization and enabled adaptive voltage scaling. He explained that a typical chip operates with a voltage that has sufficient margin to ensure the chip operates correctly over its lifetime. While this strategy delivers predictable performance, it wastes power in the early stages of the chip's lifetime, when it can safely operate at a lower voltage.

The adaptive voltage scaling function delivers the capability to “tune” the operating voltage to a safe level over the lifetime of the chip. The diagram below summarizes how this works.

Adaptive Voltage Scaling

The left display has the familiar Margin to Timing Failure traces. The green line below the traces shows the safe minimum timing value. You can see the system’s default operation is significantly above this value, meaning it is operating at an unnecessarily high voltage. The system then begins to lower the operating voltage as shown on the right. This continues until the chip is operating at the lowest safe timing threshold. On the far left, you can see a power saving of 11.64% has been achieved. And because the chip is operating at a lower voltage the lifetime of the chip has been extended by 16.28%. Nir explained that the code to perform this optimization is embedded in the on-chip application. It is completely self-contained and requires no external cloud access.
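As a rough consistency check (ours, not proteanTecs'), the reported 11.64% saving lines up with roughly a 6% supply reduction under the standard dynamic-power model P ≈ C·V²·f; the 6% figure is our assumption, not a number from the demo:

```python
# Sketch: sanity-checking the reported saving against the P ~ C*V^2*f
# dynamic-power model. The 6% voltage trim is an assumption, not demo data.
def dynamic_power_saving(v_scale):
    """Fractional dynamic power saved when supply voltage scales by v_scale."""
    return 1 - v_scale**2

saving = dynamic_power_saving(0.94)   # assume the supply is trimmed by ~6%
print(f"{saving:.2%}")                # prints 11.64%
```

This only illustrates why modest voltage reductions yield outsized power savings; the actual saving in the demo comes from the measured timing margin, not from a fixed percentage trim.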

To demonstrate the safety net, Nir forced a lower operating voltage to drive the system below the safe operating margin. Within two clock cycles the problem was sensed and corrected, keeping the chip exactly at the minimum safe operating point. This is one of many potential applications that can be deployed using proteanTecs; Nir explained that customers can develop their own applications as well. Since most of the processing is done on the embedded hardware, a typical application is quite small, at 10 to 50 kilobytes. It became clear to me that the proteanTecs technology delivers safe, robust operation across the lifetime of the chip with very little overhead.

To Learn More

During my time with Nir at the proteanTecs booth I saw an impressive demonstration of what proteanTecs can deliver to optimize any design over its lifetime. This discussion only scratches the surface of what the company offers.

You can learn more about their power reduction solution and AVS Pro here. You can also learn more about the extensive capabilities offered by proteanTecs here. And that’s proteanTecs at Chiplet Summit – changing the game for health & performance monitoring of chiplets.

Also Read:

Intelligent Networks: Power, Reliability, and Maintenance in Telecom — Webinar Preview

Accelerating NPI with Deep Data: From First Silicon to Volume

Failure Prevention with Real-Time Health Monitoring: A proteanTecs Innovation


WEBINAR: Intrinsic Techniques in RF Power Amplifier Design
by Don Dingee on 04-20-2026 at 10:00 am

Intrinsic node overview

Load-pull power amplifier (PA) design techniques determine the optimal impedances at the power transistor’s extrinsic reference plane, the physically accessible boundary for measurement or simulation. This reference plane can be at the packaged transistor leads, die bond pads, or IC chip terminals, and measurements there include the parasitic resistance, capacitance, and inductance inside the GaN device. A simulation-based GaN PA design can supplement the load-pull technique by providing access to the power device’s intrinsic current source. Enhanced GaN models in Keysight’s Advanced Design System (ADS), along with intrinsic techniques like load-line analysis, help RF engineers design high-frequency, highly efficient PAs.

The second webinar session in Keysight’s RF PA design master class series, led by Matt Ozalas, Principal Product Owner for ADS and Scientist, and Joe Schultz, RF Solutions Engineer, explores intrinsic techniques in detail. (The first session focused on extrinsic techniques, particularly load-pull analysis.) Webinar 2 introduces a simple Class J PA design with an intrinsic-node model, examines the history of intrinsic modeling and analysis techniques, and walks through in-depth simulations of the Class J amplifier in an ADS intrinsic analysis workspace.

Waveform engineering applied to a Class J PA

An intrinsic node is a virtual construct designed for simulation. Matt starts with a simplified GaN transistor model that surrounds an ideal current generator with non-ideal parasitic elements. Representing parasitics faithfully increases behavior fidelity, which in turn enables a more accurate simulation of efficiency. In the chart below, power tracks between extrinsic and intrinsic analyses, but drain efficiency differs considerably between the two techniques, with the intrinsic analysis being more accurate.

That’s important for a Class J PA, which begins with a construct similar to a Class B amplifier but tunes the harmonic load impedances so that power delivery is zero at the harmonics, theoretically allowing maximum PA efficiency at full voltage swing. Reaching that efficiency requires waveform engineering, strategically reducing the overlap between voltage and current waveforms to minimize resistive power dissipation. “Once we’ve designed these waveforms mathematically, it’s possible to apply a Fourier transform, and then, rather than looking at time-domain waves, we look at harmonic frequency tones,” Matt says. Voltage and current tones at each harmonic frequency yield computed impedances. By applying those Fourier-transformed impedances back to the idealized current generator in the intrinsic transistor model, energy organizes itself to maximize achievable efficiency.
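The waveform-engineering step Matt describes, transforming time-domain waveforms into harmonic tones and computing the impedance each tone sees, can be sketched numerically. The following Python sketch uses textbook Class-J-style idealized waveforms, not data or code from the webinar:

```python
import numpy as np

# Illustrative waveform-engineering step: extract harmonic tones from
# time-domain voltage and current waveforms via FFT, then compute the
# impedance each harmonic sees. Waveforms are textbook idealizations.

N = 1024
theta = 2 * np.pi * np.arange(N) / N

# Half-wave-rectified cosine drain current (the Class B starting point)
i_d = np.maximum(np.cos(theta), 0.0)

# Class-J-style drain voltage: DC, fundamental, plus a reactive
# second-harmonic component (all normalized)
v_d = 1.0 - np.cos(theta) + 0.5 * np.sin(2 * theta)

# One-sided FFT, scaled so bin n holds half the amplitude of harmonic n
V = np.fft.rfft(v_d) / N
I = np.fft.rfft(i_d) / N

# Impedance presented at each harmonic (load convention: Z_n = -V_n / I_n)
for n in (1, 2, 3):
    if abs(I[n]) < 1e-12:
        print(f"harmonic {n}: open (no current tone)")
    else:
        print(f"harmonic {n}: Z = {complex(-V[n] / I[n]):.3f}")
```

In this idealized case the fundamental sees a real load, the second harmonic sees a purely reactive impedance, and the third harmonic carries no current, consistent with the Class J goal of zero power delivery at the harmonics.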

Building comprehensive intrinsic analysis in an ADS workspace

Joe is a recent addition to the Keysight team, with three decades of RF power amplifier design experience spanning Motorola, Freescale, and NXP. He begins with a bit of a digression, mentioning the power amplifier work of Steve Cripps, particularly his 1999 text, “RF Power Amplifiers for Wireless Communications,” and a seminal 2009 article by Paul Tasker on “Practical Waveform Engineering” in IEEE Microwave Magazine.

Joe also shares his own design experience. He worked with Motorola’s LDMOS transistor technology in his early career, which was state-of-the-art at the time. GaN technology can deliver four or five times the power density of LDMOS in amplifier applications, but only if designers pay proper attention to efficiency and power dissipation in PA designs.

After some slides on LDMOS versus GaN, Joe dives into the core of his presentation: a highly technical discussion of an intrinsic approach to RF PA design. The foundation of the analysis is Cripps’ load-line method, which uses swept DC current-voltage (DCIV) analysis with full-scale voltage and current excursions to generate power contours by tracing constant-resistance and constant-conductance circles for different power points on a Smith chart.

Joe also discusses conduction angle, the portion of the RF cycle during which current flows, which is one of the key differentiators between amplifier classes. The presentation then builds into a GaN FET analysis in ADS, using a new Keysight ASM-HEMT GaN FET demo model and running DCIV, S-parameter, AC sweeps, harmonic balance, and both load-pull and load-line analysis to establish matching impedances. The power of ADS’s data display with the full results is evident in one of Joe’s concluding slides, though a screenshot alone doesn’t do the narrative justice.
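As a rough illustration of how conduction angle separates amplifier classes, the sketch below (Python, with textbook bias points rather than Keysight model data) measures the conduction angle of a clipped-cosine drain current:

```python
import numpy as np

# Illustrative conduction-angle calculation for a clipped-cosine drain
# current i(theta) = max(Iq + Ipk*cos(theta), 0). Bias values are textbook
# examples, not taken from the webinar.

def conduction_angle_deg(i_q, i_pk, n=100_000):
    """Fraction of the RF cycle with nonzero drain current, in degrees."""
    theta = np.linspace(-np.pi, np.pi, n, endpoint=False)
    i_d = np.maximum(i_q + i_pk * np.cos(theta), 0.0)
    return 360.0 * np.count_nonzero(i_d > 0) / n

print(conduction_angle_deg(0.0, 1.0))   # Class B bias: zero quiescent current
print(conduction_angle_deg(0.5, 1.0))   # deeper into Class AB
```

Class B bias (zero quiescent current) yields a 180° conduction angle; raising the quiescent current toward Class A widens it.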

If you are interested in RF power amplifier design, or in learning more about extrinsic load-pull and intrinsic load-line analysis with ADS, hearing from Keysight’s experts makes the time spent viewing these two webinars worthwhile. And yes, the ADS workspaces used in these webinars are available online for hands-on learning. Register for both on-demand sessions at this link:

RF Power Amplifier Design MasterClass Webinar Series

Also Read:

WEBINAR: Two-Part Series on RF Power Amplifier Design

On the high-speed digital design frontier with Keysight’s Hee-Soo Lee

2026 Outlook with Nilesh Kamdar of Keysight EDA


Analog Bits Demos Real-Time On-Chip Power Sensing and Delivery on N2P at the TSMC 2026 Technology Symposium
by Mike Gianfagna on 04-20-2026 at 6:00 am


Analog Bits has a way of stealing the show at every event they attend. The formula is actually quite straightforward: come to the show with the most relevant, highest-impact IP running on the most advanced process. The company will be applying this strategy again at the upcoming TSMC 2026 Technology Symposium with an array of real-time on-chip sensing and delivery IP on TSMC’s N2P process.

The latest megawatt-class AI and HPC systems using multi-kilowatt SoCs face thermal, power efficiency, performance variability, and reliability challenges that digital design alone cannot solve. Traditional approaches are no longer effective as transistors speed up and voltages scale down, increasing power density. Multi-chip packaging compounds these power problems as well.

Addressing these power density challenges requires new approaches that leverage advanced processes along with architectural-level optimizations to achieve power targets. These are the capabilities Analog Bits will bring to the TSMC event. Let’s look a bit closer as Analog Bits demos real-time on-chip power sensing and delivery on N2P at the TSMC 2026 Technology Symposium.

The Hardware

Below is a photo of the TSMC N2P test board with the test chip inserted.

ABITCN2P2 Test Board : Test Chip

And here is the test chip layout.

ABITCN2P2 Test Chip Layout

The Demos

Here is an overview of some of the capabilities that you will see at the show.

LDO: A linear regulator with a small difference between input and output voltage, for example 50–100 mV. Benefits of the on-die LDO include improved power efficiency and signal integrity, fast transient response and efficient regulation, voltage scalability, integration and space savings, and noise reduction.

Target customer use cases include high-performance CPU (ARM) cores and high-lane-count, high-performance SerDes. There are multiple working silicon examples in N3P. The N2P LDO delivers a 30% area reduction and ultra-high-bandwidth operation.
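The power-efficiency benefit of a small dropout follows from the fact that a linear regulator’s best-case efficiency is roughly Vout/Vin. A quick back-of-the-envelope sketch makes this concrete; the voltages here are hypothetical illustrations, not Analog Bits specifications.

```python
# Back-of-the-envelope LDO efficiency: an ideal linear regulator dissipates
# (Vin - Vout) * Iload, so best-case efficiency is Vout / Vin and a small
# dropout keeps losses low. Voltages are illustrative, not vendor specs.

def ldo_efficiency(v_out, dropout):
    """Ideal linear-regulator efficiency (quiescent current ignored)."""
    v_in = v_out + dropout
    return v_out / v_in

for dropout_mv in (50, 100, 300):
    eff = ldo_efficiency(v_out=0.75, dropout=dropout_mv / 1000)
    print(f"{dropout_mv} mV dropout -> {eff:.1%} efficient")
```

At a hypothetical 0.75 V output, a 50–100 mV dropout keeps the regulator around 90% efficient, whereas a conventional few-hundred-millivolt dropout wastes noticeably more power as heat.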

The Droop Detector provides an always-on sensor for detecting security hacks on TSMC N2P.

The Glitch Catcher provides high-frequency detection for high reliability on TSMC N2P.

The Ultra-Low-Power PLL supports microwatt-class low-power applications on TSMC N2P.

A low-jitter C2C PLL (up to 20GHz), ultra-low-power PLL, and patented pinless core-powered PLL are all available on TSMC N2P.

A high-accuracy PVT sensor and pinless high-accuracy PVT sensor are both available on TSMC N2P.

Also being featured for the first time are highly accurate remote pinless PVT sensors with ±3.5°C accuracy (untrimmed), and low-power PLLs with microwatt-class power levels of 0.5 µW/MHz.

These new IPs will provide significant advantages to customers seeking PPA optimization and intelligent on-chip power management for advanced SoCs on TSMC N2P process technology. Mahesh Tirupattur, CEO of Analog Bits, commented, “Our integrated on-die LDOs provide an exceptionally clean form of power delivery. Coupled with the glitch catcher and droop detector features, this makes power observable in real time and enables fast corrective actions to be taken almost instantaneously.”

To Learn More

The 2026 TSMC Technology Symposium will be held on April 22, 2026, at the Santa Clara Convention Center. You can register for the event here. Be sure to stop by the Analog Bits booth (#608) to see a demo of over 12 IPs on the TSMC N2P test chips. And that’s how Analog Bits demos real-time on-chip power sensing and delivery on N2P at the TSMC 2026 Technology Symposium.

Also Read:

2026 Outlook With Mahesh Tirupattur of Analog Bits

Podcast EP322: A Wide-Ranging and Colorful Conversation with Mahesh Tirupattur

Analog Bits Steps into the Spotlight at TSMC OIP