
GlobalFoundries, MIPS, and the Chiplet Race for AI Datacenters

by Jonah McLeod on 10-09-2025 at 6:00 am


GlobalFoundries’ (GF) acquisition of MIPS in 2025 wasn’t a nostalgic move to revive a legacy CPU brand. It was a calculated step into one of the most lucrative frontiers in semiconductors: AI, high-performance computing (HPC), and datacenters. As Nvidia, AMD, Intel, and hyperscalers embrace chiplet architectures, GF is betting that owning CPU IP will secure it a central role in the modular compute era.

Chiplets Reshape the Datacenter

The shift to chiplets in datacenters is now undeniable. AI training and inference workloads have pushed beyond the limits of monolithic scaling. Traditional GPUs and accelerators face reticle-size ceilings, yield problems, and soaring power and cooling demands. Chiplets solve these constraints by breaking compute into smaller dies—CPUs, GPUs, accelerators, memory, and I/O—then stitching them together with advanced interconnects.

The model is already proven. AMD’s EPYC CPUs and Instinct GPUs rely on chiplet designs. Intel uses Foveros and EMIB (Embedded Multi-die Interconnect Bridge) in its Xeon and GPU hybrids. Nvidia adopted chiplets in its Hopper and Blackwell roadmaps. Hyperscalers like Google, AWS, Microsoft, and Meta are developing modular dies to balance cost, performance, and yield.

Market forecasts highlight the speed of adoption. Fortune Business Insights valued the global chiplet market at $44.82 billion in 2024 and projects it will reach $233.81 billion by 2032—nearly 23 percent CAGR. The fastest growth is in AI, HPC, and datacenter deployments, where modular compute is shifting from experiment to mainstream infrastructure.
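As a quick sanity check, the implied compound annual growth rate can be recomputed from the two market figures quoted above (a back-of-the-envelope sketch using only the Fortune Business Insights numbers):

```python
# Recompute the CAGR implied by the chiplet market forecast:
# $44.82B in 2024 growing to $233.81B by 2032.
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate, returned as a fraction (0.23 = 23%)."""
    return (end_value / start_value) ** (1 / years) - 1

growth = cagr(44.82, 233.81, 2032 - 2024)
print(f"Implied CAGR: {growth:.1%}")  # ~22.9%, consistent with "nearly 23 percent"
```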

Why GF Bought MIPS

Within this context, GF’s acquisition of MIPS secures direct access to CPU IP tuned for RISC-V. MIPS’ Atlas cores bring multithreading, functional safety, and custom instructions optimized for AI and datacenter workloads. With MIPS under its roof, GF can deliver silicon-proven CPU chiplets already mapped to its 22FDX and 12LP+ FinFET nodes. Customers gain a known-good CPU die—tested, interoperable, and ready for integration.

GF President and COO Niels Anderskouv stated in a recent quarterly report, “MIPS brings a strong heritage of delivering efficient, scalable compute IP tailored for performance-critical applications, which strategically aligns with the evolving demands of AI platforms across diverse markets.”

Automotive adds credibility. While GF has never named Mobileye as the primary driver of the deal, MIPS’ long partnership with Mobileye in ADAS applications underscores its reputation in safety-critical environments. That track record likely factored into GF’s decision.

This clarity of purpose contrasts sharply with Intel. Intel’s CPU business depends on keeping x86 as the anchor die in datacenters, making RISC-V chiplets look like a threat to its core franchise. Intel Foundry Services, however, must sell wafers, packaging, and chiplets to all comers—including RISC-V adopters. Its partnership with SiFive shows the dilemma: enabling RISC-V strengthens Intel’s foundry arm but undermines x86; resisting it protects x86 but limits foundry growth.

The analogy to the earlier “known-good-die” (KGD) era illustrates the stakes. In the 2010s, KGD built the trust system designers needed before investing in costly multi-die packaging. Without it, 2.5D and 3D adoption would have stalled. Today, datacenter chiplets face the same barrier: customers want assurance that CPUs, GPUs, and accelerators will interoperate seamlessly.

GF’s MIPS acquisition positions it to provide exactly that—CPU chiplets validated for AI and HPC. Beyond technology, it also plays into geopolitics. As governments and hyperscalers seek sovereign compute platforms independent of Nvidia and Intel, GF’s ownership of RISC-V CPU IP positions it as a neutral supplier of trusted, open chiplets.

Packaging Power Plays

Owning CPU IP is only the first step. To compete in the chiplet era, GF must also master advanced package integration. This is not unfamiliar ground: after acquiring Chartered Semiconductor in 2010, GF operated an Assembly & Test business in Singapore that offered wirebond, flip-chip, and wafer-level packaging. But as the company refocused on front-end wafer technology, those back-end operations were wound down, leaving GF dependent on OSAT (Outsourced Semiconductor Assembly and Test) partners.

That reliance is problematic because packaging has become the new choke point. TSMC has spent more than a decade perfecting CoWoS (Chip-on-Wafer-on-Substrate), InFO (Integrated Fan-Out), and SoIC (System on Integrated Chips), elevating packaging to a first-class capability. Nvidia’s GPUs, AMD’s server CPUs, and Apple’s processors all depend on these lines. Industry media report that packaging capacity is now one of the primary bottlenecks in global AI chip supply.

Despite rising wafer output, advanced steps such as 2.5D interposers, HBM integration, and 3D stacking are straining available lines. Most of TSMC’s CoWoS capacity is already booked by Nvidia, AMD, and Apple, leaving little room for smaller players. Demand for AI accelerators has outpaced back-end investment, creating long lead times and fierce competition for packaging slots. Packaging has shifted from a back-end process to a strategic resource, as critical to performance and delivery as the fabs themselves.

Intel and Samsung are investing heavily in EMIB, Foveros, and X-Cube, but TSMC remains the entrenched leader with unmatched scale and ecosystem depth. For GF, the MIPS acquisition provides the anchor CPU chiplets, but without building in-house packaging capacity—or forging deeper ties with leading OSATs such as ASE, Amkor, and JCET—it risks being shut out of the highest-value datacenter opportunities. These partners bring expertise in 2.5D/3D integration, fan-out wafer-level packaging, and automotive-grade reliability—areas where GF still trails.

Foundries, OSATs, and Marketplace Models

Intel promotes EMIB and Foveros but still leans on OSATs. Samsung offers I-Cube and X-Cube with heavier OSAT reliance. GF, by contrast, does not yet operate advanced packaging at this scale and relies primarily on partnerships. Meanwhile, TSMC is building a neutral chiplet marketplace, with partners such as SiFive, Andes, and Arm supplying CPU IP. For each, TSMC maintains the GDSII layouts, test data, and packaging rules.

Chiplets are then fabricated, validated, and packaged on demand. This model positions TSMC as an impartial enabler, offering interoperable building blocks that customers can combine with memory, accelerators, or custom logic. Unlike Arm’s original design-time macros, TSMC delivers finished dies, ready for integration.

GF is pursuing a hybrid path: it owns CPU IP through MIPS to offer anchor dies but must rely on OSAT partnerships for packaging. Samsung leverages its memory dominance, Rapidus emphasizes sovereignty through Renesas, and Intel struggles with neutrality because of x86.

Physical AI and the Sovereignty Play

GF frames the MIPS acquisition as enabling “Physical AI”—processors that not only sense and infer but act in real time in robotics, autonomous driving, and SmartNIC DPUs. But the immediate battleground is datacenter AI. With chiplets becoming the architecture of choice, GF is positioning itself not just as a wafer supplier but as a strategic partner, offering anchor dies for sovereign compute platforms.

In practice, GF’s strongest near-term opportunities lie in automotive and edge AI, where its process nodes align with safety, reliability, and low-power requirements. Its datacenter ambitions may depend less on competing head-to-head with TSMC and more on offering sovereign, open alternatives through chiplets. In the race to define the backbone of AI datacenters, GF’s MIPS bet is less about reviving a legacy brand and more about ensuring relevance in a modular, sovereign compute era.

Also Read:

Breaking the Thermal Wall: TSMC Demonstrates Direct-to-Silicon Liquid Cooling on CoWoS®

Teradyne and TSMC: Pioneering the Future of Semiconductor Testing Through the 2025 OIP Partner of the Year Award

Sofics’ ESD Innovations Power AI and Radiation-Hardened Breakthroughs on TSMC Platforms


Moores Lab(AI): Agentic AI and the New Era of Semiconductor Design

by Kalar Rajendiran on 10-08-2025 at 10:00 am

Silicon Engineering at the Speed of AI

For decades, chip design has been a delicate balance of creativity and drudgery. Architects craft detailed specifications, engineers read those documents line by line, and teams write and debug thousands of lines of Verilog and UVM code. Verification alone can consume up to 35 percent of a project’s cost and add many months to the schedule.

Moores Lab(AI), a young company founded by veterans of chip engineering and artificial intelligence, believes the era of “manual everything” is ending. Its vision is agentic AI that not only assists engineers but actively orchestrates the entire pre-fabrication workflow—from architecture specification all the way to a verified GDSII hand-off.

At the AI Infra Summit 2025, I interviewed Shelly Henry, the founder of Moores Lab(AI). Following is a synthesis of that conversation.

The Spark Behind Agentic AI

The founders’ “aha” moment came when they recognized that large language models could finally tackle what had always been an understanding problem, not merely an automation problem. Before modern AI, no tool could read a specification document, truly comprehend it, and then translate that intent into correct, production-ready RTL or UVM code.

Today Moores Lab(AI) can do exactly that. Its platform reads and interprets a specification much like a human designer, generates Verilog and UVM code, and iterates automatically until the design compiles cleanly. It integrates with industry-standard verification suites—Synopsys VCS, Cadence Xcelium, Siemens Questa, and others—fixing issues as it goes. The goal is not to replace EDA tools but to remove the manual glue work that has held the process together for decades.

Middleware on the Backend, Automation on the Frontend

Moores Lab(AI)’s technology works like a two-sided engine. On the backend it serves as middleware that ties together heterogeneous EDA tools and manages constraints, switches, and re-runs—tasks that once required endless scripting. On the frontend it tackles the most labor-intensive work: writing Verilog, building UVM testbenches, debugging, and optimizing for power, performance, and area. This combination makes the platform both an orchestration layer and an agentic AI assistant.

Three Operating Modes for Real-World Designs

A system-on-chip rarely begins from a clean slate. Some blocks are reused, others need modification, and some must be built from scratch. Moores Lab(AI) trains its agent tools to mirror that reality with three operating modes: “Scratch” for brand-new IP blocks, “Edit” to modify existing designs, and “Complete” to integrate third-party or legacy IP unchanged. Engineers simply declare the status of each block and the platform selects the right strategy.
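Moores Lab(AI) has not published its interface, so purely as an illustration, here is a hypothetical sketch of how per-block mode declarations might look. All names, types, and strategy strings below are assumptions for demonstration, not the product's actual API:

```python
# Hypothetical sketch of per-block operating-mode declarations.
# These names and strategies are invented for illustration only;
# they are not Moores Lab(AI)'s real interface.
from enum import Enum

class BlockMode(Enum):
    SCRATCH = "scratch"     # brand-new IP: generate RTL from the spec
    EDIT = "edit"           # existing design: modify the supplied RTL
    COMPLETE = "complete"   # third-party/legacy IP: integrate unchanged

def select_strategy(mode: BlockMode) -> str:
    """Map a declared block status to a generation strategy."""
    strategies = {
        BlockMode.SCRATCH: "generate Verilog and UVM testbench from the specification",
        BlockMode.EDIT: "apply spec-driven edits to existing RTL, then re-verify",
        BlockMode.COMPLETE: "wrap and integrate as-is, verifying interfaces only",
    }
    return strategies[mode]

# Engineers declare the status of each block; the platform does the rest.
soc = {"npu_core": BlockMode.SCRATCH, "pcie_ctrl": BlockMode.COMPLETE}
for block, mode in soc.items():
    print(block, "->", select_strategy(mode))
```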

Measurable, Independent Productivity Gains

According to the company, customer pilots using Moores Lab(AI)’s VerifAgent™ have shown striking improvements. Creating a test plan that once took three weeks can now be done in about ten minutes. Building a testbench that typically required three months can be completed in less than a day. Implementing test cases, which used to take two to three months, can also be finished in a single day. These results translate to productivity improvements of roughly 92 to 97 percent and schedule accelerations of seven to ten times, even before full SoC automation is available. Current tools focus on digital blocks, with analog support on the roadmap.

Security and Control Built In

Customer intellectual property protection is paramount. All Moores Lab(AI) tools run on-premises or in a customer’s private cloud. The LLM connection is entirely under the customer’s own subscription—Azure, OpenAI, Anthropic Claude, Google Gemini, or another provider. EDA licenses remain customer-owned. Moores Lab(AI) never sees customer data, making the platform a self-contained, enterprise-grade solution.

As I Was Finishing My Interview

As I was finishing my interview with Shelly, I realized a value proposition that may be hiding within Moores Lab(AI)’s technology. Many semiconductor companies have historically locked into a predominantly single-vendor EDA flow for valid reasons—licensing simplicity, team expertise, and risk management among them. That approach, however, limits flexibility and can force compromises on tool quality.

What if there were a painless way to enable easy mix-and-match of EDA tools on a project-by-project or design-by-design basis? I asked Shelly if that was possible, and he affirmed it and elaborated.

Freedom to Mix and Match EDA Tools More Easily

With a few configuration settings, engineering teams can create bespoke multi-vendor flows for every chip or block, choosing the best simulator, synthesizer, or place-and-route engine for each project. Traditionally, achieving this kind of heterogeneity required weeks of fragile scripting and complex license management. Moores Lab(AI) abstracts those differences, making the process almost plug-and-play. The implications are significant: easier adoption of best-in-class tools, reduced vendor lock-in, and even new leverage when negotiating future EDA license deals.

A Complement, Not a Competitor

Crucially, Moores Lab(AI) does not replace core EDA engines such as synthesis or place-and-route. It layers on top, automating the manual tasks that once linked those engines together. That makes partnerships—not displacement—the natural relationship with incumbents such as Synopsys, Cadence, and Siemens.

Looking Ahead

Over the next three to five years, Moores Lab(AI) expects agentic AI to extend across RTL, verification, and backend flows with growing autonomy, while humans remain essential for architectural decisions and final sign-off. Longer term, fully automated design is conceivable but will always require human oversight.

Summary

Whether customers adopt it first for the eye-catching seven-to-ten-times schedule acceleration or for the newfound ability to tailor EDA workflows to each project, Moores Lab(AI) is quietly re-architecting how chips are built. For an industry eager to shorten design cycles, cut costs, and escape vendor lock-in, that combination may be transformative.

Learn more at MooresLab.ai

Also Read:

AI Everywhere in the Chip Lifecycle: Synopsys at AI Infra Summit 2025

Neurosymbolic code generation. Innovation in Verification

Podcast EP308: How Clockwork Optimizes AI Clusters with Dan Zheng


Breaking the Thermal Wall: TSMC Demonstrates Direct-to-Silicon Liquid Cooling on CoWoS®

by Daniel Nenni on 10-08-2025 at 8:00 am


In a landmark presentation at the 2025 IEEE Electronic Components and Technology Conference (ECTC), TSMC unveiled a groundbreaking advancement in thermal management: Direct-to-Silicon Liquid Cooling integrated directly onto its CoWoS® platform. This innovation, detailed in the paper “Direct-to-Silicon Liquid Cooling Integrated on CoWoS® Platform,” addresses the escalating thermal challenges posed by HPC and AI applications, where power densities are surging beyond traditional cooling limits. As AI accelerators and data center chips push thermal design power toward kilowatt levels, TSMC’s solution promises to shatter the “thermal wall,” enabling denser, faster, and more efficient semiconductor designs.

The “thermal wall” refers to the fundamental barrier where heat generation outpaces dissipation capabilities, throttling performance and reliability in advanced nodes. With the rise of 2.5D/3D packaging technologies, chips now integrate multiple dies, high-bandwidth memory (HBM) stacks, and interposers on a single package, amplifying power densities to over 4.8 W/mm². Air cooling, once sufficient for consumer-grade processors, falls short for HPC workloads. Even advanced air-cooled heatsinks struggle with TDPs exceeding 1 kW, leading to hotspots that degrade silicon integrity and limit clock speeds. Liquid cooling has emerged as a necessity, but conventional methods—relying on bulky external loops or thermal interface materials (TIMs)—introduce inefficiencies, adding thermal resistance and manufacturing complexity.

TSMC’s Direct-to-Silicon Liquid Cooling redefines this paradigm by embedding microfluidic channels directly into the silicon structure, bypassing TIMs for near-zero thermal impedance. At the heart of this technology is the Si-Integrated Micro Cooler (IMC-Si), a silicon-based solution fusion-bonded to the chip’s backside. Demonstrated on a 3.3X-reticle CoWoS®-R package—a massive ~3,300 mm² interposer supporting multiple logic dies and HBM stacks—the system achieves junction-to-ambient thermal resistance (θJA) as low as 0.055 °C/W at a coolant flow rate of 40 ml/s. This outperforms lidded liquid cooling with TIMs (0.064 °C/W) by about 14 percent, enabling sustained operation at over 2.6 kW TDP with a temperature delta under 63°C.
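The headline comparison can be checked directly from the two θJA values quoted above. Note that the percentage advantage depends on which configuration is taken as the baseline (a quick sketch from the quoted numbers, not a calculation from the paper itself):

```python
# Compare the reported junction-to-ambient thermal resistances:
# direct-to-silicon cooling (0.055 C/W) vs. lidded liquid cooling
# with TIMs (0.064 C/W). The percentage depends on the baseline chosen.
theta_direct = 0.055   # C/W, Si-Integrated Micro Cooler at 40 ml/s
theta_lidded = 0.064   # C/W, lidded liquid cooling with TIM

reduction = (theta_lidded - theta_direct) / theta_lidded   # vs. the lidded baseline
advantage = (theta_lidded - theta_direct) / theta_direct   # vs. the new solution

print(f"Resistance reduced by {reduction:.1%}")        # ~14.1%
print(f"Lidded solution is {advantage:.1%} worse")     # ~16.4%
```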

CoWoS® is TSMC’s flagship 2.5D packaging technology, pivotal for AI giants like NVIDIA’s GPUs and AMD’s Instinct accelerators. It stacks chips on a silicon interposer for ultra-high interconnect density, supporting up to 12 HBM4 stacks in future “Super Carrier” iterations spanning 9 reticles. However, as interposers scale to 2,500 mm² or larger, heat flux intensifies, risking electromigration and yield loss. The IMC-Si integrates seamlessly into CoWoS®-R and upcoming CoWoS®-L variants, which incorporate backside power delivery networks (BSPDN) and embedded deep trench capacitors (eDTCs) for enhanced power stability. Microchannel designs—featuring square pillars, trenches, or flat planes—optimize fluid dynamics, with pillar structures proving superior for turbulent flow and heat extraction.

The demonstration highlights practical viability. TSMC tested prototypes with deionized water as coolant, achieving power densities exceeding 7 W/mm² on logic chip backsides. Fusion bonding ensures hermetic seals, preventing leaks in high-pressure environments, while low-temperature processes maintain compatibility with 1.6nm-class nodes. Early results show no degradation in electrical performance, with signal integrity preserved across hybrid bonding interfaces.

This breakthrough extends beyond cooling; it’s a cornerstone of TSMC’s 3DFabric ecosystem, aligning with “More than Moore” strategies like hybrid bonding and CMOS 2.0. By eliminating TIMs, it reduces assembly costs and variability, while enabling trillion-transistor monolithic-like systems. For data centers, it slashes rack-level power—potentially halving cooling infrastructure needs—and supports immersion-compatible designs. In edge AI and 5G, compact IMC-Si modules could fit mobile HPC, boosting efficiency in autonomous vehicles and AR/VR.

Challenges remain: scaling microfluidic fabrication to high volumes, ensuring coolant purity to avoid corrosion, and integrating with emerging materials like silicon carbide interposers for even higher thermal conductivity. Yet, TSMC’s track record—powering 80% of advanced AI chips—positions it to lead commercialization by 2027.

Dr. Kevin Zhang, TSMC’s Deputy Co-COO and Senior Vice President, emphasized: “Direct-to-Silicon Liquid Cooling breaks the thermal wall, unlocking the full potential of CoWoS® for exascale AI. This isn’t just incremental; it’s transformative for sustainable computing.”

As AI workloads explode, TSMC’s innovation heralds a cooler, greener future for semiconductors, where heat is no longer the limiter but a solved equation.

Also Read:

Teradyne and TSMC: Pioneering the Future of Semiconductor Testing Through the 2025 OIP Partner of the Year Award

Sofics’ ESD Innovations Power AI and Radiation-Hardened Breakthroughs on TSMC Platforms

Synopsys and TSMC Unite to Power the Future of AI and Multi-Die Innovation

Revolutionizing AI Infrastructure: Alchip and Ayar Labs’ Co-Packaged Optics Breakthrough at TSMC OIP 2025


From Prompts to Prompt Engineering to Knowing Ourselves

by Bernard Murphy on 10-08-2025 at 6:00 am


I am on a voyage of discovery through prompting and prompting technologies because these are the critical interfaces between what we want (or roughly imagine we want) from AI, and AI’s ability to deliver. I have seen suggestions that any deficiencies today are a detail that will soon be overcome. I’m not so sure. Yes, prompting technology will continue to advance but there are hurdles along the way which may require rethinking how we humans interact with AI. For this blog I have drawn on a widely cited paper which studied how people who are not AI experts interact with a recipe assistant chatbot to optimize its behavior for maximum user friendliness. Not difficult to see how lessons learned might apply in any domain.

First, what already works well

Before I get to problems, some GenAI applications are already easy to use (once built). Drafting a reply to an email is an example. You start with a prompt, “Draft a friendly reply and make the following points in response…” The chatbot generates a draft which you can then edit as needed before sending. Refining your prompt might be a nice-to-have but is not essential since you have final control over the content, especially important to be able to correct any glaring errors in the draft.

An advance over a simple prompt would be an “appified” prompt. An end user is offered a menu of pre-determined options, possibly allowing for some level of parameterization. Scaling this kind of app to a wide user base with no expertise in AI should not be challenging. On the other hand, developing the app requires significant prompt engineering expertise and a well-characterized understanding of expected use models. To develop and support such apps you no longer need an expert software engineer. Now you need an expert prompt engineer!

Beyond these and similar use cases, expected use models start to look more like regular prompt engineering for chatbots, designed to support non-AI experts within some domain. An example might be a help chatbot to get advice on how to use a simulator feature, or help generate a complex assertion, or to generate a script to accomplish some task. This is where the study I mentioned above becomes interesting.

The study experiment

GenAI systems have already advanced beyond simple one-time prompts to allow for multiple rounds of edit in refining an initial attempt. This is commonly called multi-turn prompting. Some systems even switch roles, prompting you to provide answers to a series of questions, in expectation that answering simple questions may reduce opportunities for ambiguity. The study considered here somewhat combines these two approaches.

The study is based on non-experts working with a recipe-development chatbot, refining how it instructs end users to prepare and cook veggie tempura. Participants were supported by an example from a real TV show in which a chef guides would-be cooks in preparing the same dish. They ranged from academics to professional programmers, all with STEM backgrounds, and none had meaningful experience with AI of any kind.

While this example may not seem very relevant to most chatbot applications, what caught my attention is not the detail of the application but rather the prompt engineering experiences of these participants and the problems they ran into along the way, most of which are very relevant (I think) to almost any kind of human-driven prompt engineering.

Findings

The important insights here are much more around our human limitations than in limitations in chat technologies. First, the study finds that participants almost exclusively used an opportunistic, ad hoc approach to prompt exploration.

You might ask, “As opposed to what?” Professionals in almost any domain are comfortable with the need to systematically analyze options to choose a best next step, but this is not how most of us approach prompting. We expect to converge quickly on what we intuitively want without need for disciplined structure or semantics in our prompt or prompt refinements.

In a similar vein, participants were prone to over-generalize from success/failure on individual prompt changes and clearly modeled their interaction on human-to-human exchanges, not appreciating differences in how AI processes feedback.

In over-generalization, they generally aimed to find a desirable behavior in one or two attempts. If that worked it was good enough for them, if not they assumed what they asked for was beyond the capabilities of the AI. I can relate. If I am using prompt refinement to get to a goal and I get close, why would I push further? If I don’t get close as a non-expert I have neither skill, nor time nor interest in systematically exploring how different changes to the prompt will affect the outcome.

The assumption that human-to-AI dialog mirrors human-to-human dialog leads to some interesting disconnects. One example is in participants preferring to use direct instruction (do this… ) rather than examples, even when examples are available (from the TV show). This is a common issue in storytelling where we should “show rather than tell”. Show by example rather than by direct instruction. This seems counterintuitive to us, but bots agree. They respond more effectively to examples than to direct instructions. If we’re honest, even we humans would agree. Direct instruction works well in textbooks, not always so well in conversation.

In a similar vein, participants would often use negatives to try to direct behavior. They were surprised to find these directions were often ignored (LLMs struggle with negatives). They were encouraged by study leaders to repeat themselves multiple times to reinforce a direction but apparently felt this was unnatural, even when they were shown it could be effective. Another area where human-human dialog and human-AI dialog diverge.

Conclusions

If these findings are self-evident to you, congratulations. You are one of a small and elite group of prompt engineering experts. But that’s not very helpful to the large market of non-AI-expert users in business, engineering, and other domains (in which I include myself). The behaviors we must learn to effectively prompt chatbots are not at all intuitive to us. Perhaps we should think of chatbots as very experienced children. They know a lot about the domain that interests us and can provide very useful answers to our questions, but we need to coax those answers from them through conversational gambits, rather different from the approach we would use in talking with an experienced adult.

Or maybe there is a way chatbots can treat us as very experienced children (!), guiding us through a series of simple prompts to what we really want.

By the way, shout out to David Zhi LuoZhang (CEO of Bronco AI) for pointing me to Gemini Nano Banana, my new favorite for AI-based image generation!


GaN Device Design and Optimization with TCAD

by Daniel Payne on 10-07-2025 at 10:00 am


I’ve read articles about power electronics, RF systems, and high-frequency applications using SiC and GaN transistors, especially in EVs and chargers, but hadn’t looked into the details of GaN devices. A recent Silvaco webinar proved to be just the format I needed to learn more about GaN design and optimization. Udita Mittal, Field Application Engineer at Silvaco, presented this webinar aimed at process engineers, simulation engineers, and fabrication engineers.

GaN and SiC devices provide a wide bandgap, high electron velocity, and a high temperature range, making them ideal for power electronics applications. Here’s where silicon, SiC, and GaN technologies are being used across the power and frequency spectrums.

The first type of GaN device introduced in detail was a GaN HEMT (High Electron Mobility Transistor) dubbed a pGaN, where the cross-section is shown below:

The pGaN has a high sheet carrier density at the red-dashed interface, caused by the unique polarization-induced electric field. TCAD tools accurately model the Schottky gate, AlGaN barrier, high electric fields, SiN passivation, and the buffer layer. Many parameters impact device performance, including polarization charge, bulk traps, interface traps, metal workfunction, electron/hole mobility along the interface, and the substrate stack. The good news is that Silvaco has many TCAD and EDA tools to accurately design and optimize GaN devices:

Process simulation is driven by the physics, where the GDSII layouts create each process layer, etching and deposition are modeled, implantation is simulated, and even stress and strain effects from local lattice mismatch and thermal mismatch are calculated. Here’s the GaN mobility model produced with velocity saturation effects included.

Victory DoE is both a project manager and a design-of-experiments tool, while Victory Analytics provides analytics along with machine learning. You provide the desired targets, and Victory Analytics feeds back the input values needed to achieve them. This approach gives insight into which parameters are dominant.

Udita presented the fabrication flow for a normally off GaN HEMT to optimize the device structure and electrical performance. Users can then visualize the process cross-section and device characteristics.

Using TCAD, the key inputs affecting steady-state IV characteristics were identified. A DoE analysis showed how the Id-Vd curves were impacted by the AlGaN barrier thickness and the Al mole fraction in the barrier. Even the breakdown voltage can be modeled in lateral HEMTs.
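To make the DoE idea concrete, here is a generic two-level full-factorial sketch (not Silvaco's Victory tools; the response function and its coefficients are invented purely to illustrate how main effects reveal a dominant factor):

```python
# Generic two-level full-factorial DoE sketch. The toy response stands in
# for a simulated drain current; its coefficients are invented for
# illustration, not taken from any real GaN HEMT model.
import itertools

levels = [-1, +1]  # factors coded to low/high levels
design = list(itertools.product(levels, levels))

def response(thickness: float, mole_fraction: float) -> float:
    """Toy drain-current response; coefficients are illustrative only."""
    return 100 + 8 * thickness + 15 * mole_fraction + 2 * thickness * mole_fraction

runs = [(t, x, response(t, x)) for t, x in design]

# Main effect = (mean response at +1) - (mean response at -1);
# with 4 runs this equals sum(y * coded_level) / 2.
effect_thickness = sum(y * t for t, x, y in runs) / 2
effect_mole = sum(y * x for t, x, y in runs) / 2
print(f"Main effect, AlGaN barrier thickness: {effect_thickness:.1f}")
print(f"Main effect, Al mole fraction:        {effect_mole:.1f}")  # dominant factor
```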

The final device example discussed was a vertical GaN device (CAVET), where Victory Process and Victory Device were used to create IV curves, dominant model parameters were selected with Victory Analytics, and sensitivity analysis was performed in Victory DoE. Victory Analytics also generated an ML regression model and visualized FoM (Figure of Merit) trends.

Creating SPICE models from TCAD results is accomplished with UtmostIV, then simulated with SmartSpice. UtmostIV creates CMC-standard GaN FET models or third-party models.

Summary

Silvaco has broadened its offerings to include TCAD, EDA, and semiconductor IP. The growing demand for power electronics is being met with GaN devices. Using TCAD tools, an engineering team can now design and optimize their GaN devices, resulting in faster time to market and improved performance. The Victory DoE and Victory Analytics tools make it possible to explore the design space on the path to an optimized transistor structure modeled as a digital twin.

Customers like STMicroelectronics, Wavetek, and Fraunhofer ISIT have used Silvaco tools in their GaN device projects for DTCO.

View the archived webinar online.

 

Related Blogs


How the Father of FinFETs Helped Save Moore’s Law

by Daniel Nenni on 10-07-2025 at 8:00 am


In the early 2000s, Moore’s Law—the observation that the number of transistors on a chip doubles roughly every two years—was facing an existential crisis. As semiconductor nodes shrank below 90nm, planar transistors suffered from debilitating issues: leakage currents soared, power efficiency plummeted, and scaling became unsustainable. Enter Dr. Chenming Hu, widely regarded as the “Father of FinFETs,” whose invention of the Fin Field-Effect Transistor revolutionized semiconductor design and breathed new life into Moore’s Law, enabling the modern era of computing.

Moore’s Law, formulated by Gordon Moore in 1965, had driven decades of exponential growth in computing power, fueling everything from PCs to smartphones. By the late 1990s, however, planar MOSFETs (Metal-Oxide-Semiconductor Field-Effect Transistors) hit physical limits. At smaller nodes, short-channel effects caused electrons to leak, increasing power consumption and heat. By 2003, leakage power in 90nm chips nearly equaled dynamic power, threatening performance and battery life. Scaling transistors further seemed impossible without sacrificing reliability or efficiency, prompting industry leaders to declare Moore’s Law “dead.”

Dr. Hu, a professor at UC Berkeley and a veteran of semiconductor research, proposed a radical solution: the FinFET. Unlike planar transistors, which lie flat on the silicon surface, FinFETs are three-dimensional structures with a thin, fin-like channel protruding vertically. This “fin” is surrounded by a gate on three sides, providing superior electrostatic control over the channel. Introduced in a seminal 1999 paper, Hu’s FinFET design reduced leakage current by orders of magnitude, improved switching efficiency, and enabled scaling to sub-20nm nodes. His team’s simulations showed that FinFETs could operate at lower voltages while maintaining high performance, a critical breakthrough for power-constrained devices.

The impact was profound. By 2011, Intel adopted FinFETs for its 22nm Ivy Bridge processors, marking the technology’s commercial debut. TSMC and Samsung followed, integrating FinFETs into 16nm and 14nm nodes by 2014. FinFETs allowed chipmakers to pack more transistors into smaller areas without the catastrophic leakage of planar designs. For example, Intel’s 22nm FinFET process achieved a 37% performance boost at the same power or a 50% power reduction at the same performance compared to 32nm planar chips. This revitalized Moore’s Law, enabling the development of power-efficient CPUs, GPUs, and AI accelerators.

Hu’s innovation wasn’t just technical; it was a paradigm shift. FinFETs required rethinking transistor architecture, fabrication processes, and design tools. The 3D structure demanded precise lithography and new materials, like high-k dielectrics, to manage capacitance. TSMC’s 7nm FinFET node, powering chips like Apple’s A12 Bionic, achieved transistor densities of over 90 million per mm², a feat unimaginable with planar technology. By 2025, FinFETs remain the backbone of advanced nodes, with TSMC’s 3nm process pushing densities to 200 million transistors per mm², driving AI, 5G, and HPC applications.

Beyond technical merits, Hu’s work had economic and societal impacts. FinFETs extended the viability of Moore’s Law, sustaining the semiconductor industry’s growth. The global chip market, valued at $600 billion in 2024, owes much to FinFETs’ ability to meet demands for faster, smaller, and greener devices. From smartphones to data centers, FinFETs underpin modern technology, enabling AI models like those powering chatbots and autonomous vehicles. Hu’s contributions earned him the 2016 IEEE Medal of Honor and recognition as a visionary who “saved” Moore’s Law.

Challenges remain as scaling approaches 1nm. Quantum tunneling and heat dissipation threaten further miniaturization, prompting exploration of gate-all-around (GAA) transistors and 2D materials. Yet, FinFETs laid the foundation for these innovations, proving that architectural ingenuity could overcome physical limits. Dr. Hu’s legacy is not just in sustaining Moore’s Law but in inspiring a generation of engineers to rethink the impossible. As semiconductors evolve, his FinFET remains a cornerstone, ensuring Moore’s Law endures for years to come.

Read full article here.

Also Read:

TSMC’s Push for Energy-Efficient AI: Innovations in Logic and Packaging

MediaTek Develops Chip Utilizing TSMC’s 2nm Process, Achieving Milestones in Performance and Power Efficiency

TSMC 2025 Update: Riding the AI Wave Amid Global Expansion


A Remote Touchscreen-like Control Experience for TVs and More

by Bernard Murphy on 10-07-2025 at 6:00 am


How do you control your smart TV? With a remote control of course, already quite capable since it allows voice commands to find a movie or TV show without needing all that fiddly button-based control and lookup. But there’s a range of things you can’t do that we take for granted on a tablet or phone screen: point and click on an object, drag and drop, swipe, or draw freehand (maybe circle an actor to ID). This is what the emerging generation of smart automation for remotes will make possible for TVs, for gaming, and for other applications such as interacting with images cast from a PC to a large monitor. LG Smart TVs with the Magic Remote have already proven the appeal of an earlier generation of pointing control: LG has included this technology in its flagship and main-line Smart TVs since 2010, and over 600 OEMs have licensed its underlying webOS®, whose pointing control is enabled by Ceva’s MotionEngine™ sensor fusion software. Now Ceva has extended MotionEngine further, to bring true remote touchscreen-like possibilities to how we interact with home entertainment and commercial presentation devices.

The opportunity

The global market for smart TVs alone was around a quarter of a trillion dollars in 2023/2024 and is expected to show a CAGR of between 11.5% and 12.8% through 2033. This is closing fast on, and may ultimately surpass, the size of the smartphone market. I guess in a busy, always-on world we still need our passive entertainment down time at the end of the day, much though we like our phones.

Better ways to interact with home (and office) applications have always spurred innovation, as seen in the wide popularity of voice control. Looking for further innovation or parity, TV makers will need to encourage interaction in ways that mirror modern paradigms for touchscreens, without needing to touch the screen. Why not point and drag or wave controls off-screen with your remote? Or shake the remote or draw a symbol on the screen to prompt a user-selected action (jump to your favorite news channel perhaps). I expect that once a trend becomes apparent here it will quickly turn into a flood.

Behind a touchless touchscreen

These systems depend on capabilities in both the host (say a TV) and the remote. The remote uses sensors, such as an inertial measurement unit (IMU), fuses that information to determine the pointing position on the target, and communicates it to the host-side software.

Historically these systems have used 3DoF (3 degrees of freedom: pitch, yaw, and roll) to sense where the remote is pointing, but more is possible with 6DoF systems combining a 3-axis accelerometer, a 3-axis gyroscope, and UWB-based position sensing. These can determine position and orientation with high spatial and temporal accuracy.
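To give a flavor of the kind of sensor fusion involved (this is a generic textbook complementary filter, not Ceva's MotionEngine algorithm), gyroscope and accelerometer readings can be blended to track one orientation angle; the axis convention and `alpha` weighting here are illustrative assumptions:

```python
import math

def complementary_pitch(pitch_prev, gyro_rate, accel, dt, alpha=0.98):
    """Blend an integrated gyro rate (rad/s) with an accelerometer pitch estimate.

    accel is (ax, ay, az) in g. alpha weights the smooth but drift-prone
    gyro path against the noisy but drift-free gravity reference.
    """
    # Pitch implied by the gravity vector (x-forward, z-down convention assumed)
    accel_pitch = math.atan2(-accel[0], math.hypot(accel[1], accel[2]))
    # High-pass the gyro contribution, low-pass the accelerometer contribution
    return alpha * (pitch_prev + gyro_rate * dt) + (1.0 - alpha) * accel_pitch

# A level, stationary remote should hold near zero pitch over many samples
pitch = 0.0
for _ in range(100):
    pitch = complementary_pitch(pitch, gyro_rate=0.0, accel=(0.0, 0.0, 1.0), dt=0.01)
```

A production system layers much more on top of this sketch — bias calibration, tremor rejection, and absolute positioning — which is exactly the gap between a prototype and a shippable solution discussed later in this article.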

The Ceva-MotionEngine Hex software supports this capability with some unique advantages. First, as a software-only library it is designed to allow OEMs to build around their preferred processor and sensor choices. It also provides absolute cursor positioning, essential to support interactions like precise point and click, on- and off-screen gestures, and drag and drop.

Further, the library provides the must-have features essential in any application of this nature. Whatever sensors you may be using, Ceva-MotionEngine Hex handles automatic bias calibration, correcting for drift without the need for manual calibration. For a hand-held remote, small tremors from holding the remote or pressing buttons could be damaging to accuracy; the library stabilizes against such movements. It supports superior slow-motion detection to select small targets. And it offers very low latency to ensure naturally responsive behavior.

Unsurprisingly given Ceva’s strength in low power applications, the embedded library is designed for systems with very constrained resources and can operate efficiently on low power MCUs or DSPs, just what you need for a TV or gaming remote or a stylus pen designed to write over a PC or projected presentation.  It can also reside in the TV operating system (Android®, Linux®, webOS®) simplifying deployment across many platforms and enabling additional advanced features.

The Ceva-MotionEngine Hex advantage

This software library is built on more than two decades of experience working with MEMS IMUs and partners building motion-aware systems. Among these, LG Electronics built the library into its Smart TV and Magic Remote back in 2010, and adoption has grown across LG-branded Smart TVs and many third-party brand partners since then.

It might seem at first glance that an OEM could choose to build this capability itself. According to Chad Lucien (VP and GM for the sensor and audio business unit at Ceva), this isn’t as easy as it might appear. Delivering a production solution is much more complex, including challenges like robustly handling sensor drift over time, understanding the many possible user experiences that could be modeled, and mitigating the impact of users’ hand tremors, which can result in large pointing inaccuracies if not addressed.

Translating from a prototype to a robust production solution is where the promise of a remote touchscreen-like experience could fall apart, unless you work with a supplier which has significant and proven experience in this field. To learn more, check HERE.

Also Read:

WEBINAR: What It Really Takes to Build a Future-Proof AI Architecture?

WEBINAR: Edge AI Optimization: How to Design Future-Proof Architectures for Next-Gen Intelligent Devices

Podcast EP291: The Journey From One Micron to Edge AI at One Nanometer with Ceva’s Moshe Sheier


Teradyne and TSMC: Pioneering the Future of Semiconductor Testing Through the 2025 OIP Partner of the Year Award

by Daniel Nenni on 10-06-2025 at 10:00 am


In a significant milestone for the semiconductor industry, Teradyne was honored as the 2025 TSMC Open Innovation Platform® Partner of the Year for TSMC 3DFabric® Testing. This award, announced on September 25, 2025, underscores the deep collaboration between Teradyne, a leader in automated test equipment and robotics, and TSMC, the world’s premier semiconductor foundry. The recognition highlights their joint efforts in advancing multi-die test methodologies for chiplets and TSMC’s CoWoS® advanced packaging technology, marking a pivotal step in the shift toward chiplet-based architectures essential for AI and high-performance computing.

Teradyne, headquartered in North Reading, Massachusetts, specializes in designing and manufacturing automated test solutions for semiconductors, electronics, and robotics systems. Its portfolio ensures high-quality performance across complex devices, from wafer-level testing to final assembly. TSMC dominates the foundry market with cutting-edge process nodes and packaging innovations. The partnership traces back to at least 1999, when TSMC adopted Teradyne’s automatic test equipment for 0.18-micron test chips. Over the years, this alliance has evolved, with Teradyne contributing to TSMC’s ecosystem through innovations in test strategies for heterogeneous integration.

At the heart of this award is TSMC’s OIP, launched in 2008 to foster collaboration among design partners, IP providers, and ecosystem members. OIP accelerates innovation by integrating process technology, EDA tools, and IP, enabling faster implementation of advanced designs. Celebrating its 15th anniversary in 2023, OIP has grown from 65nm nodes onward, addressing rising design complexities. Within this framework, the 3DFabric Alliance, introduced in 2023, focuses on overcoming challenges in 3D integration and advanced packaging.

TSMC 3DFabric® represents a comprehensive suite of 3D silicon stacking and advanced packaging technologies, encompassing both 2.5D and 3D architectures like CoWoS and InFO. These enable heterogeneous integration, boosting system-level performance, power efficiency, and form factors for applications in AI accelerators, 5G, and HPC. CoWoS, in particular, supports multi-die packages by stacking chips on silicon interposers, ideal for demanding AI workloads.

Through the 3DFabric Alliance, Teradyne and TSMC have pioneered test methodologies that enhance silicon bring-up efficiency and test quality. Teradyne’s investments in UCIe, GPIO, and streaming scan test solutions facilitate scalable, high-quality testing of die-to-die interfaces. UCIe, an open standard for chiplet interconnects, ensures seamless data transfer between dies, while streaming scan enables high-speed testing over these interfaces at wafer sort or probing stages. This reduces defect escapes, lowers quality costs, and accelerates time-to-market for 3D ICs used in AI and cloud datacenters.

Shannon Poulin, President of Teradyne’s Semiconductor Test Group, emphasized the value of TSMC’s collaborative ecosystem: “At Teradyne, we strongly believe in the open and collaborative ecosystem approach of TSMC’s Open Innovation Platform and look forward to continuing our partnership to drive innovation and deliver exceptional value to our customers.” Aveek Sarkar, Director of TSMC’s Ecosystem and Alliance Management Division, congratulated Teradyne, noting their contributions to improving silicon bring-up and enabling AI proliferation through energy-efficient compute.

The award was unveiled at the 2025 TSMC North America OIP Ecosystem Forum in Santa Clara, California, on September 24, 2025. This event gathered industry leaders to explore AI’s role in next-generation designs for TSMC’s advanced nodes like A16, N2, and N3. Highlights included discussions on AI-accelerated chip design, multi-die systems, and 3DFabric advancements, with partners showcasing tools for HPC and energy efficiency.

This partnership not only strengthens Teradyne’s position in AI hardware testing but also propels the industry toward more efficient, scalable semiconductor solutions. As demand for AI and cloud infrastructure surges, collaborations like this will be crucial in shortening development cycles and enhancing reliability. Looking ahead, Teradyne and TSMC’s ongoing innovations promise to redefine heterogeneous integration, driving the next wave of technological breakthroughs.

Also Read:

Synopsys and TSMC Unite to Power the Future of AI and Multi-Die Innovation

Revolutionizing AI Infrastructure: Alchip and Ayar Labs’ Co-Packaged Optics Breakthrough at TSMC OIP 2025

Analog Bits Steps into the Spotlight at TSMC OIP


How Secondary Electrons Worsen EUV Stochastics

by Fred Chen on 10-06-2025 at 8:00 am


Increasing dose not only faces diminishing returns, but also lets electron noise dominate over photon noise.

The EUV lithography community should now be well aware that rather than EUV photons driving the resist chemical response directly, they release photoelectrons, which further release secondary electrons, which in turn cause the photon’s energy to be deposited over many molecules in the resist [1]. While this directly leads to a blurring effect that can be expressed as a quantifiable reduction of image contrast [2], it also has consequences for the well-known stochastic effects. The stochastic behavior in EUV lithography has often been attributed in large part to (absorbed) photon shot noise [3], but until now there has been no consideration of the direct contribution from the electrons themselves.

There is a randomness in the number of electrons released per absorbed EUV photon [4]. The upper limit of 9 can be taken to be the maximum number of lowest energy losses (~10 eV) from an absorbed 92 eV photon, while a lower limit of 5 can be estimated from considering Auger emission as well as the likely loss of an electron through the resist interface with the underlayer or the hydrogen plasma ambient above the resist. Intermediate values are also possible, e.g., two secondary electrons may precede an Auger emission, leading to 7 electrons total. Thus, unlike the classical split or thinned Poisson distribution which characterizes photon absorption [5], a uniform distribution of integers from 5 to 9 as the probability mass function can be reasonably used, at least as a starting point. Let’s now take a closer look at the statistics from such a distribution.
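A quick Monte Carlo sketch makes the compound statistics concrete (the mean photon count and trial count are illustrative assumptions; the uniform 5–9 PMF is the starting point described above). Photon absorption is drawn as Poisson, and each absorbed photon then releases a uniform 5–9 electrons, so the total electron count carries both noise sources:

```python
import numpy as np

rng = np.random.default_rng(0)
mean_photons = 20       # illustrative mean absorbed photons per pixel
n_trials = 50_000

# Absorbed photon count per trial: classical Poisson shot noise
photons = rng.poisson(mean_photons, n_trials)

# Electrons per absorbed photon: uniform over {5, 6, 7, 8, 9},
# mean 7 and variance 2, summed over the photons in each trial
electrons = np.array([rng.integers(5, 10, size=p).sum() for p in photons])

# Relative noise (sigma/mean). For this compound distribution,
# Var(total) = lambda * (Var_E + E[E]^2) = lambda * (2 + 49),
# so the electron count is noisier than the photon count alone.
photon_noise = photons.std() / photons.mean()
electron_noise = electrons.std() / electrons.mean()
print(f"photon-only: {photon_noise:.4f}, with electron randomness: {electron_noise:.4f}")
```

The excess over pure photon shot noise comes entirely from the per-photon electron-count randomness, and unlike photon noise it cannot be averaged away by raising the dose.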


Sofics’ ESD Innovations Power AI and Radiation-Hardened Breakthroughs on TSMC Platforms

by Daniel Nenni on 10-06-2025 at 6:00 am


In the fast-evolving semiconductor landscape, electrostatic discharge (ESD) protection is pivotal for ensuring chip reliability amid shrinking nodes and extreme applications. Sofics, a Belgian IP provider specializing in ESD solutions for ICs, has cemented its leadership through strategic collaborations showcased at TSMC’s 2025 Open Innovation Platform Ecosystem Forum. By delivering Power-Performance-Area optimized ESD IP across TSMC nodes from 250nm to 2nm, Sofics enables innovations in AI infrastructure and harsh-environment electronics.

A prime example is Sofics’ partnership with Celestial AI, tackling AI’s “memory wall” bottleneck. As AI models explode in size—410x every two years for Transformers—compute FLOPS have scaled 60,000x over 20 years, but DRAM bandwidth lags at 100x and interconnects at 30x, wasting cycles on data movement. Celestial AI’s Photonic Fabric™ revolutionizes this with optical interconnects, delivering data directly to compute points for superior bandwidth density, low latency, and efficiency. Traditional optics demand DSPs and re-timers, inflating power and latency, but Photonic Fabric uses linear-drive optics, eliminating DSPs via high-SNR modulators and grating couplers.

Sofics customized ESD IP for TSMC’s 5nm process, proven in production, to protect Photonic Fabric’s sensitive interfaces. Tx/Rx circuits operate at ~1V with <20fF parasitic capacitance for 50-100Gbps signals, ensuring signal integrity while fitting dense packaging. ESD ratings hit 50V CDM with <100nA leakage, supporting thin-oxide circuits without GPIO cells. Power clamps handle non-standard voltages (1.2V-3.3V) in small areas, vital for EIC-PIC integration. This collaboration, highlighted at OIP, breaks bandwidth barriers, enabling multi-rack AI scaling. Celestial AI’s August 2025 Photonic Fabric Module, a TSMC 5nm MCM with PCIe 6/CXL 3.1, exemplifies this, backed by $255M funding.

Equally groundbreaking is Sofics’ alliance with Magics Technologies, enabling radiation-hardened (rad-hard) ICs for nuclear, space, aerospace, and medical sectors. Demand surges for rad-hard electronics amid space exploration and nuclear fusion research like ITER, where ICs must endure >1MGy TID and >62.5 MeV·cm²/mg SEE without malfunction. Magics, a Belgian firm with 10+ years in rad-hard-by-design, offers chips like wideband PLLs (1MHz-3GHz, -99dBc/Hz phase noise) and series for motion, imaging, time, power, and AI processing.

Sofics provides rad-hard ESD clamps for Magics’ TSMC CMOS designs, supporting voltages like 1.2V/3.3V with >2kV HBM, <20nA leakage, and <1700um² area. Key features include cold-spare interfaces (latch-up immune, SEE-insensitive up to 80MeV·cm²/mg) and stacked thin-oxide devices for 1.2V GPIOs on 28nm, bypassing thick-oxide limitations. This 15-year TSMC-Sofics tie, via IP & DCA Alliances, ensures early access and quality. Magics’ €5.7M funding in April 2025 accelerates commercialization.

Bottom line: These partnerships underscore TSMC’s ecosystem strength, with Sofics supporting 90+ customers in AI/datacenters (40+ projects) and space (e.g., Mars rover, CERN). By optimizing ESD for photonics and rad-hard apps, Sofics drives innovation, from hyperscale AI to fusion reactors, proving ESD IP’s role in overcoming physical limits.

For more information, contact Sofics.

Also Read:

Synopsys and TSMC Unite to Power the Future of AI and Multi-Die Innovation

Revolutionizing AI Infrastructure: Alchip and Ayar Labs’ Co-Packaged Optics Breakthrough at TSMC OIP 2025

TSMC’s Push for Energy-Efficient AI: Innovations in Logic and Packaging