
Revolutionizing AI Infrastructure: Alchip and Ayar Labs’ Co-Packaged Optics Breakthrough at TSMC OIP 2025

by Daniel Nenni on 10-01-2025 at 6:00 am


In the relentless race to power next-generation artificial intelligence (AI) systems, data connectivity has emerged as the critical bottleneck. As AI models balloon in size—from billions to trillions of parameters—compute resources alone are insufficient. According to Ayar Labs, approximately 70% of AI compute time is wasted waiting for data, an inefficiency that escalates with system scale. Traditional copper-based electrical I/O, while reliable for intra-rack connections, falters under the demands of multi-rack AI clusters. Power consumption soars, latency spikes, and bandwidth caps out, rendering electrical solutions obsolete for hyperscale datacenters. Enter the strategic collaboration between Alchip Technologies and Ayar Labs, unveiled in September 2025, which promises to shatter these barriers through co-packaged optics (CPO) and advanced packaging innovations.

Unveiled at the TSMC North America Open Innovation Platform (OIP) Ecosystem Forum on September 26, the partnership fuses Alchip’s expertise in high-performance ASIC design and 2.5D/3D packaging with Ayar Labs’ pioneering optical I/O chiplets. This isn’t mere integration; it’s a holistic ecosystem leveraging TSMC’s COUPE (Compact Universal Photonic Engine) technology to embed optical engines directly onto AI accelerator packages. The result? A reference design platform that enables seamless, multi-rack scale-up networks, transforming AI infrastructure from rigid, power-hungry monoliths into flexible, composable architectures.

At the heart of this solution lies Ayar Labs’ TeraPHY™ optical engines, silicon photonics-based chiplets that replace cumbersome pluggable optics with in-package optical I/O. Each TeraPHY engine employs a stacked Electronic Integrated Circuit (EIC) and Photonic Integrated Circuit (PIC) architecture, utilizing microring modulators for dense, efficient light-based data transmission. The EIC, fabricated on advanced nodes, handles protocol-specific features like UCIe-A (Universal Chiplet Interconnect Express-Advanced) for logic protocols such as CHI, while the PIC manages optical signaling. A detachable optical connector simplifies manufacturing, assembly, and testing, ensuring high-volume scalability. Protocol-agnostic by design, TeraPHY supports endpoints like UALink, PCIe, and Ethernet, with forward error correction (FEC) and retimer logic delivering raw bit error rates below 10^-6 for PAM4 CWDM optics—achieving single-hop latencies of 100-200 nanoseconds. Future DWDM variants promise even lower 20-30 ns latencies and BERs under 10^-12.
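To get a feel for why forward error correction matters at these raw error rates, here is a back-of-the-envelope sketch (ours, not from the article; the 256-byte flit size is an illustrative assumption):

```python
# Back-of-the-envelope sketch (ours): even at a raw BER of 1e-6, an
# uncorrected frame sees errors often enough that FEC is essential.
raw_ber = 1e-6          # raw bit error rate quoted for PAM4 CWDM optics
flit_bits = 256 * 8     # hypothetical 256-byte flit (illustrative)

# Probability that at least one bit in the flit is corrupted without FEC
p_flit_error = 1 - (1 - raw_ber) ** flit_bits
print(f"P(flit error, no FEC) ~= {p_flit_error:.2%}")  # about 0.2% per flit
```

At traffic rates of billions of flits per second, a 0.2% per-flit error rate is untenable without correction, which is why the retimer and FEC logic sit in the electrical path.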

Alchip complements this with its I/O protocol converter chiplets, bridging UCIe-A (streaming mode) to scale-up protocols, and integrated passive devices (IPDs) that optimize signal integrity through custom capacitors. Their prototype, showcased at Booth 319 in Taipei and Silicon Valley, integrates two full-reticle AI accelerators, four protocol converters, eight TeraPHY engines, and eight HBM stacks on a common substrate. This configuration unlocks over 100 Tbps of scale-up bandwidth per accelerator and more than 256 optical ports, dwarfing electrical I/O’s limits. Power density remains manageable, as optics reduce end-to-end energy per bit by minimizing electrical trace lengths and avoiding the thermal overhead of pluggables.
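A quick sanity check on those aggregate figures (our arithmetic, not Alchip's): dividing the quoted bandwidth across the quoted port count gives the implied per-port rate.

```python
# Rough arithmetic (our sketch) on the quoted aggregates:
# >100 Tbps of scale-up bandwidth across >256 optical ports.
total_gbps = 100 * 1000   # 100 Tbps expressed in Gbps
ports = 256
per_port_gbps = total_gbps / ports
print(f"~{per_port_gbps:.0f} Gbps per optical port")  # ~391 Gbps
```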

The implications for AI workloads are profound. In scale-up networks, where XPUs (AI processing units) must act as unified entities—scaling from 100 to 1,000 units—the joint solution enables XPU-to-XPU, XPU-to-switch, and switch-to-switch connectivity with path diversity for ultra-low latency. Extended memory hierarchies, pooling DRAM across racks via optical links, boost application metrics like training throughput by 2-3x, per preliminary simulations. Energy efficiency improves dramatically: optical I/O consumes as little as one-tenth the power of copper equivalents, critical as AI racks approach 100kW densities. For hyperscalers like those deploying GPT-scale models, this means greener, more interactive datacenters capable of real-time inference at exascale.

This collaboration underscores a broader industry shift toward disaggregated, photonics-driven computing. By addressing reach limitations beyond copper’s 1-2 meter horizon and enhancing radix for massive parallelism, Alchip and Ayar Labs are not just solving today’s challenges but future-proofing AI. As Vladimir Stojanovic, Ayar Labs’ CTO and co-founder, notes, “AI has reached an inflection point where traditional interconnects limit performance, power, and scalability.” Erez Shaizaf, Alchip’s CTO, echoes this, emphasizing the need for “innovative, collaborative advanced packaging.” With production-ready test programs and reliability qualifications, the duo is poised to accelerate adoption, potentially slashing AI deployment costs by 30-50% through efficiency gains.

Bottom line: This partnership heralds a new era of AI infrastructure: scalable, flexible, and composable. As models grow unabated, optical CPO will be indispensable, and Alchip-Ayar Labs’ blueprint offers a proven path forward. Hyperscalers take note—this is the optics revolution AI has been waiting for.

Contact Alchip

Also Read:

Alchip’s 3DIC Test Chip: A Leap Forward for AI and HPC Innovation

Alchip Launches 2nm Design Platform for HPC and AI ASICs, Eyes TSMC N2 and A16 Roadmap

Alchip’s Technology and Global Talent Strategy Deliver Record Growth


AI Everywhere in the Chip Lifecycle: Synopsys at AI Infra Summit 2025

by Kalar Rajendiran on 09-30-2025 at 10:00 am


At the AI Infra Summit 2025, Synopsys showed how artificial intelligence has become inseparable from the process of creating advanced silicon. The company’s message was clear: AI is an end-to-end engine that drives every phase of chip development. Three Synopsys leaders illustrated this from distinct vantage points. Godwin Maben, Synopsys Fellow, delivered a talk on designing ultra–energy-efficient AI chips. Arun Venkatachar, Vice President of AI & Central Engineering, joined a dynamic panel on the impact of AI on semiconductor startups. And Frank Schirrmeister, Executive Director of Strategic Programs, gave a talk on meeting the quadrillion-cycle verification demands of modern datacenter designs. Together, their sessions formed a comprehensive narrative of how Synopsys enables greener, faster, and more reliable silicon.

Designing for Energy Efficiency

Godwin set the context with a stark statistic: AI workloads are projected to grow fifty-fold by 2028, threatening to overwhelm datacenter power budgets. He described how next-generation system-on-chip (SoC) designs must balance unprecedented performance with aggressive energy targets. Domain-specific accelerators, near-memory compute, and chiplet-based 3D integration all play a role, but Godwin emphasized that the real breakthrough comes when AI-driven electronic design automation (EDA) is applied at every level. Machine-learning-guided synthesis, floor-plan optimization, and continuous energy regression allow architects to cut energy per operation while meeting tight performance and area goals. His talk highlighted that sustainable AI silicon isn’t a final step—it’s architected in from the start.

Accelerating the Design Flow

In the panel session “The Impact of AI on Semiconductor Startups,” Arun explained how Synopsys began weaving machine learning into its design tools nearly a decade ago, well before today’s hype cycle. That early start is paying off. AI now informs power, performance, and area (PPA) optimization, verification, and manufacturing test, enabling some customers to cut design times by as much as 40 percent. Arun stressed that this isn’t about replacing engineers; rather, AI acts as a force multiplier, automating routine tasks and surfacing optimal design choices so human experts can focus on system-level creativity. He painted a picture of a continuous AI-assisted pipeline where improvements in one stage cascade into the next, shortening schedules and reducing the risk of late-stage surprises. For startups, this means faster paths to market and the ability to compete with established players.

Arun also discussed how cloud-based EDA accelerates startups’ time to results: scalable, AI-powered development tools let small teams pursue breakthrough innovation more efficiently.

Verifying at Quadrillion-Cycle Scale

Frank turned the spotlight on the most daunting phase: verification. Modern datacenter chips integrate multi-die architectures, Arm-based compute subsystems, and heterogeneous accelerators. Ensuring these complex systems meet power, performance, and reliability targets requires quadrillions of verification cycles. Traditional simulation alone can’t keep pace. Frank outlined how Synopsys addresses the challenge with hardware-assisted verification (HAV), including ZeBu emulation, HAPS prototyping, and EP-Ready hardware that seamlessly switches between emulation and prototyping. These platforms support early software bring-up, high-fidelity power analysis, and rapid coverage closure, allowing design teams to meet yearly product refresh cycles without compromising quality or budget.

Customer proof points from companies like Microsoft, NVIDIA, and AMD underscore the productivity gains.

A Unified AI-Driven Pipeline

What emerges from these three perspectives is a powerful theme: AI is now the connective fabric of the semiconductor lifecycle. Godwin’s focus on energy efficiency, Arun’s account of AI-assisted design flows, and Frank’s hardware-accelerated verification all reveal different facets of the same strategy. Synopsys doesn’t merely provide point tools; it delivers a continuous, AI-enabled ecosystem that shortens time-to-market, minimizes energy consumption, and raises confidence in first-silicon success.

The Summit sessions also highlighted Synopsys’s broader collaborations—with cloud providers for scalable compute, with Arm for ready-to-integrate IP, and with hyperscalers eager to validate enormous AI workloads. These partnerships reinforce the company’s role as the hub where cutting-edge design, advanced verification, and manufacturing readiness converge.

Summary

AI Infra Summit 2025 confirmed that designing the next generation of silicon is no longer about isolated breakthroughs. It is about orchestrating every step—architecture, layout, verification, and manufacturing—as one AI-driven continuum. Synopsys has been preparing for this moment for years, and the results are clear: faster design cycles, lower power footprints, and reliable chips that scale from edge devices to the largest datacenters.

By bringing together energy-efficient architecture, accelerated design flows, and quadrillion-cycle verification under a single AI umbrella, Synopsys demonstrated that “AI everywhere in the chip lifecycle” is the roadmap for the future of semiconductor innovation.

Also Read:

Synopsys Announces Expanding AI Capabilities and EDA AI Leadership

The Rise, Fall, and Rebirth of In-Circuit Emulation (Part 1 of 2)

eBook on Mastering AI Chip Complexity: Pathways to First-Pass Silicon Success


Neurosymbolic code generation. Innovation in Verification

by Bernard Murphy on 09-30-2025 at 6:00 am


Early last year we talked about state space models, a recent advance over large language modeling with some appealing advantages. In this blog we introduce neurosymbolic methods, another advance in foundation technologies, here applied to automated code generation. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and lecturer at Stanford, EE292A) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Neural Program Generation Modulo Static Analysis. The authors are from Rice University, UT Austin and the University of Wisconsin. The paper was published in Neural Information Processing Systems 2021 and has 27 citations.

Amazing as they are, LLMs and neural networks have limitations that are becoming apparent in “almost correct” code generation from Copilot, the underwhelming release of GPT-5, and other examples. Unsurprising. Neural nets excel at perception-centric learning from low-level detail, but they do not excel at reasoning based on prior knowledge, an area where symbolic methods fare better. Neurosymbolic methods fuse the two approaches to leverage complementary strengths. Neurosymbolic research and applications are not new but have been overshadowed by LLM advances until recently.

This month’s blog uses a neurosymbolic approach to improve accuracy in automated software generation for Java methods.

Paul’s view

LLM-based coding assistants are rapidly becoming mainstream. These assistants rely mostly on their underlying LLM’s ability to generate code from a user text prompt and the surrounding code already written. Under the hood, these LLMs are “next word” predictors that write code one word at a time: beginning with the prompt as input, they append each newly generated word to form a successor prompt, which is then used to generate the next word, and so on.
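That autoregressive loop can be sketched minimally (our illustration; the stand-in "model" below is a canned word source, not a real LLM):

```python
# Minimal sketch (ours) of autoregressive "next word" generation: the model
# repeatedly picks a word given the prompt plus everything emitted so far.
def generate_code(prompt, next_word, max_words=50):
    words = list(prompt)
    for _ in range(max_words):
        w = next_word(words)   # a real LLM would score its vocabulary here
        if w is None:          # end-of-sequence
            break
        words.append(w)
    return words

# Stand-in "model": emits a fixed snippet, then stops
canned = iter(["x", "=", "1", None])
out = generate_code(["int"], lambda ctx: next(canned))
print(" ".join(out))  # int x = 1
```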

This paper observes that unlike natural language, all programming languages must conform to a formal grammar (search “BNF grammar” in your browser). These grammars map source code into a “syntax tree” structure. It’s entirely possible to make a neural network that is a syntax tree generator rather than a word generator. Such a network recursively calls itself to build a syntax tree in a left-to-right depth-first search approach.
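As a toy illustration of tree-structured generation (ours, not the paper's actual model), a tiny grammar can be expanded recursively, left to right and depth first, with a seeded random choice standing in for the neural network that scores candidate productions:

```python
import random

# Toy CFG (illustrative, not the paper's Java grammar):
#   expr -> expr '+' expr | var | const
GRAMMAR = {
    "expr": [["expr", "+", "expr"], ["var"], ["const"]],
}

def generate(symbol, rng, depth=0):
    """Expand `symbol` into a syntax tree, left-to-right depth-first.
    A trained network would choose the production; a seeded RNG stands in."""
    if symbol not in GRAMMAR:          # terminal: emit as-is
        return symbol
    # Bias toward terminal rules as depth grows so recursion terminates
    rules = GRAMMAR[symbol][1:] if depth > 2 else GRAMMAR[symbol]
    production = rng.choice(rules)
    return (symbol, [generate(s, rng, depth + 1) for s in production])

tree = generate("expr", random.Random(0))
print(tree)
```

The recursion mirrors the paper's idea: each call decides one production, and the tree grows node by node rather than token by token.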

The authors further propose to annotate nodes in a syntax tree with a “symbol table” containing all declared variables and their types, drawn from the surrounding code already written and the portion of the syntax tree generated so far. The symbol table is created by a traditional non-AI algorithm, as in a software compiler, and is used during training of the network as a weak supervisor: generated code that assigns variables or function arguments in violation of the symbol table is labeled as bad code.
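The weak-supervision signal can be sketched as follows (our simplification; the variable names and types are made up):

```python
# Sketch (ours) of the weak-supervision check: generated assignments are
# compared against a compiler-built symbol table, and violations mark the
# sample as bad code during training.
symbol_table = {"count": "int", "name": "String", "total": "double"}

def label_assignment(var, value_type):
    """True (good code) iff `var` is declared and its type matches."""
    return symbol_table.get(var) == value_type

print(label_assignment("count", "int"))     # True: declared int, assigned int
print(label_assignment("count", "String"))  # False: type mismatch -> bad code
print(label_assignment("missing", "int"))   # False: undeclared -> bad code
```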

The authors train and benchmark a 60M parameter “neurosymbolic grammar” (NSG) syntax tree-based code generator for Java code generation. They use a large database of Java classes with 1.6M methods total, randomly removing the body of one method from a class, and asking their NSG to re-generate code for that method based only on the surrounding code for the rest of the class, including its comments. They compare NSG to a variety of baseline LLMs from 125M to 1.3B parameters using a combination of syntax correctness checks and checks for similarity to the golden method used for training. NSG is a lot better: 86% of NSG generated code passes all syntax correctness checks vs. 67% from the best alternative (CodeGPT 125M parameters), and NSG generated code has a 40% average similarity score to golden vs. 22% from the best alternative (GPTNeo 1.3B parameters).

Of course, with today’s 100B+ parameter LLMs using multi-shot reasoning, which can include feeding software compiler errors back to the LLM and asking it to fix them, the benchmark results could prove less compelling. As the authors themselves point out in this paper, more research here would be welcome!

Raúl’s view

Neuro-symbolic approaches in artificial intelligence combine neural methods such as deep learning with symbolic reasoning based on formal languages and logic. The goal is to overcome the weaknesses of both approaches: neural networks excel at pattern recognition from large datasets but cannot easily take advantage of coded expert knowledge and are “black boxes” that make understanding their decision-making processes hard; symbolic systems can encode precise rules and constraints, but they are brittle and hard to scale. The first paper in the trio we blog about this week gives a brief introduction to this topic.

The monograph “Neurosymbolic Programming” more specifically addresses integrating deep learning and program synthesis. Strategies include neural program synthesis, where neural networks are trained to generate programs directly; learning to specify, in which models learn to complete or disambiguate incomplete specifications; neural relaxations, which use the parameters of a neural network to approximately represent a set of programs; and distillation, where trained neural networks are converted back into symbolic programs approximating their behavior.

Against this backdrop, the NeurIPS 2021 paper Neural Program Generation Modulo Static Analysis presents one specific approach to program (Java methods) generation using a neuro-symbolic approach.  The authors argue that large language models of code (e.g., GPT-Neo, Codex) often fail to produce semantically valid long-form code such as full method bodies, and contain basic errors such as uninitialized variables, type mismatches, and invalid method calls. The key thesis of the paper is that static program analysis provides semantic relationships “for free” that are otherwise very hard for neural networks to infer. The paper is self-contained but assumes knowledge of compilation of formal languages and neural models to fully understand the approach.

The models built, Neurosymbolic Attribute Grammars (NSGs), extend context-free grammars with attributes derived from static analysis, such as symbol tables, type information, and scoping. During generation, the neural model conditions not only on syntactic context but also on these semantic attributes (“weak supervision”). This hybrid system improves the model’s ability to respect language rules while still benefiting from statistical learning.

The system was evaluated on the task of generating Java method bodies. Training used 1.57 million Java methods, with a grammar supporting a subset of Java. The NSG model itself had 63 million parameters, which is modest compared to billion-parameter transformers like GPT-Neo and Codex. NSGs substantially outperform these larger baselines on static checks (ensuring no undeclared variables, type safety, initialization, etc.) and fidelity measures (similarity of generated code to ground truth, Abstract Syntax Tree (AST) structure, execution paths). For example, NSGs achieved 86% of generated methods passing all static checks, compared to ~65% for GPT-Neo and ~68% for CodeGPT. On fidelity metrics, NSGs nearly doubled the performance of transformers, showing they not only generate valid code but also code that more closely matches intended behavior.

This work illustrates the power of neuro-symbolic methods in generating programming languages where semantics matter deeply; unlike natural language, code is governed by strict syntactic and semantic rules. Verification and generation of digital systems, e.g., (System)Verilog, obviously benefit from such techniques.

Also Read:

Cadence’s Strategic Leap: Acquiring Hexagon’s Design & Engineering Business

Cocotb for Verification. Innovation in Verification

A Big Step Forward to Limit AI Power Demand


Analog Bits Steps into the Spotlight at TSMC OIP

by Mike Gianfagna on 09-29-2025 at 10:00 am


The TSMC Open Innovation Platform (OIP) Ecosystem Forum kicked off on September 24 in Santa Clara, CA. This is the event where TSMC recognizes and promotes the vast ecosystem the company has created. After watching this effort grow over the years, I feel that there is nothing the group can’t accomplish thanks to the alignment and leadership provided by TSMC. Some ecosystem members are new and are finding their place in the organization. Others are familiar names who have provided consistent excellence over the years. Analog Bits is one of the companies in this latter category. Let’s examine what happens when Analog Bits steps into the spotlight at TSMC OIP.

What Was Announced, Demonstrated and Discussed

Analog Bits always arrives at industry events like this with exciting news about new IP and industry collaboration. At TSMC OIP, the company announced its newest LDO, power supply droop detectors, and embedded clock LC PLLs on the TSMC N3P process. Clocking, high-accuracy PVT sensors, and droop detectors were also announced on the TSMC N2P process.

Here is a bit of information about these fully integrated IP offerings:

  • The scalable LDO (low drop-out) regulator macro addresses typical SoC power supply and other voltage regulator needs.
  • The droop detector macro addresses SoC power supply and other voltage droop monitoring needs. It includes an internal bandgap-style voltage reference circuit, used as a trimmed reference against which the sampled voltage is compared.
  • The PVT sensor is a highly integrated macro for monitoring process, voltage, and temperature on chip, allowing very high precision even in untrimmed usage. The device consumes very little power in operational mode, and only leakage power once temperature measurement is complete.
  • The PLL addresses a large portfolio of applications, ranging from simple clock de-skew and non-integer clock multiplication to programmable clock synthesis for multi-clock generation.

These announcements were also backed up with live demonstrations in the Analog Bits booth at the show. The demos included:

  • High accuracy PVT sensors, high performance clocks, droop detectors, and more on the TSMC N2P process
  • Programmable LDO, droop detector, high accuracy sensors, low jitter LC PLL and more on the TSMC N3P process
  • Automotive grade pinless high accuracy PVT, pinless PLL, PCIe SERDES on the TSMC N5A process
Analog Bits booth at TSMC OIP

Analog Bits also participated in the technical program at OIP with two joint papers. 

One with Socionext titled “Pinless PLL, PVT Sensor and Power Supply Spike Detectors for Datacenter, AI and Automotive Applications”.

The other was with Cerebras titled “On-Die Power Management for SoCs and Chiplet” at the virtual event.

While discussing Analog Bits’ new intelligent energy and power management strategy, Mahesh Tirupattur, CEO at Analog Bits commented:

“Whether you are designing advanced datacenters, AI/ML applications, or automotive SoC’s, managing power is no longer an afterthought, it has to be done right at the architectural phase or deal with the consequences of not doing so. We have collaborated with TSMC and trailblazed on our IP development with advanced customers to pre-qualify novel power management IP’s such as LDO, droop detectors, and high-accuracy sensors along with our sophisticated PLL’s for low jitter. We welcome customers and partners to see our latest demos at the Analog Bits booth during this year’s TSMC OIP event.”

Recognition

TSMC also recognizes outstanding achievement by its ecosystem partners with a series of awards that are announced at the show. For the second year in a row, Analog Bits received the 2025 OIP Partner of the Year Award from TSMC in the Analog IP category for enabling customer designs with a broad portfolio of IPs to accelerate design creation. This is quite an accomplishment. Pictured at the right is Mahesh Tirupattur receiving the award at OIP.

Mahesh also created a short video for the TSMC event. In that video, he discusses the significance of the collaboration with TSMC, not just in 2025 but over the past two decades. He talks about the age of AI explosion, and the focus Analog Bits has to deliver safe, reliable, observable and efficient power. He talks about the benefits of delivering advanced low power mixed signal IP on TSMC’s 2 and 3 nm technologies. It’s a great discussion, and you can now view it here.

To Learn More

You can learn more about what Analog Bits is doing around the industry on SemiWiki here. You can begin exploring the IP solutions Analog Bits has delivered on the company’s website here. The TSMC Open Innovation Platform Ecosystem Forum will be held in other locations around the world, and Analog Bits will be attending those as well. You can learn more about this important industry event here. And that’s what happens when Analog Bits steps into the spotlight at TSMC OIP.


Synopsys Collaborates with TSMC to Enable Advanced 2D and 3D Design Solutions

by Daniel Nenni on 09-29-2025 at 6:00 am


Synopsys has deepened its collaboration with TSMC, certifying the Ansys portfolio of simulation and analysis tools for TSMC’s cutting-edge manufacturing processes, including N3C, N3P, N2P, and A16. This partnership empowers chip designers to perform precise final checks on designs targeting applications in AI acceleration, high-speed communications, and advanced computing. Additionally, the companies have developed an AI-assisted design flow for TSMC’s Compact Universal Photonic Engine (COUPE™) platform, streamlining photonic design and enhancing efficiency.

Multiphysics and AI-Driven Design Innovations

Synopsys and TSMC are advancing multiphysics analysis for complex, hierarchical 3DIC designs. The multiphysics flow integrates tools like Ansys RedHawk-SC™, Ansys RedHawk-SC Electrothermal™, and Synopsys 3DIC Compiler™ to enable thermal-aware and voltage-aware timing analysis. This approach accelerates convergence for large-scale 3DIC designs, addressing challenges in thermal management and signal integrity critical for high-performance chips.

For TSMC’s COUPE platform, Synopsys leverages AI-driven tools like Ansys optiSLang® and Ansys Zemax OpticStudio® to optimize optical coupling systems. These tools, combined with Ansys Lumerical FDTD™ for photonic inverse design, allow engineers to create custom components, such as grating couplers, while reducing design cycle times and improving design quality through sensitivity analysis. This AI-assisted workflow is transformative for photonic applications, enabling faster development of high-speed communication interfaces.

Certifications for Advanced Process Technologies

The collaboration includes certifications for key Synopsys tools across TSMC’s advanced nodes. Ansys RedHawk-SC and Ansys Totem™ are certified for power integrity verification on TSMC’s N3C, N3P, N2P, and A16™ processes, ensuring reliable chip performance. Ansys HFSS-IC Pro™, designed for electromagnetic modeling, is certified for TSMC’s N5 and N3P processes, supporting system-on-chip electromagnetic extraction. These certifications enable designers to meet stringent requirements for AI, high-performance computing (HPC), 5G/6G, and automotive electronics.

Additionally, Ansys PathFinder-SC™ is certified for TSMC’s N2P process, offering electrostatic discharge current density (ESD CD) and point-to-point (P2P) checking. This tool enhances chip resilience against electrical overstress, accelerating early-stage design validation and improving product durability, particularly for complex 3DIC and multi-die systems. Synopsys is also working with TSMC to develop a photonic design kit for the A14 process, expected in late 2025, further expanding support for photonic applications.

Industry Impact and Strategic Partnership

This collaboration underscores Synopsys’ leadership in providing design solutions for next-generation technologies.

“Synopsys provides a broad range of design solutions to help semiconductor and system designers tackle the most advanced and innovative products for AI enablement, data center, telecommunications, and more,” said John Lee, vice president and general manager of the semiconductor, electronics, and optics business unit at Synopsys. “Our strong and continuous partnership with TSMC has been a key factor in maintaining our position at the forefront of technology while providing consistent value to our shared customers.”

“TSMC’s advanced process, photonics, and packaging innovations are accelerating the development of high-speed communication interfaces and multi-die chips that are essential for high-performance, energy-efficient AI systems,” said Aveek Sarkar, director of the ecosystem and alliance management division at TSMC. “Our collaboration with OIP ecosystem partners such as Synopsys has delivered an advanced thermal, power and signal integrity analysis flow, along with an AI-driven photonics optimization solution for the next generation of designs.”

Bottom line: By combining Synopsys’ simulation expertise with TSMC’s advanced process technologies, this partnership accelerates the development of robust, high-performance chips, solidifying both companies’ roles in shaping the future of semiconductor design.

The full press release is here.

About Synopsys
Synopsys, Inc. (Nasdaq: SNPS) is the leader in engineering solutions from silicon to systems, enabling customers to rapidly innovate AI-powered products. We deliver industry-leading silicon design, IP, simulation and analysis solutions, and design services. We partner closely with our customers across a wide range of industries to maximize their R&D capability and productivity, powering innovation today that ignites the ingenuity of tomorrow. Learn more at www.synopsys.com.

Also Read:

Synopsys Announces Expanding AI Capabilities and EDA AI Leadership

The Rise, Fall, and Rebirth of In-Circuit Emulation (Part 1 of 2)

448G: Ready or not, here it comes!


Via Multipatterning Regardless of Wavelength as High-NA EUV Lithography Becomes Too Stochastic

Via Multipatterning Regardless of Wavelength as High-NA EUV Lithography Becomes Too Stochastic
by Fred Chen on 09-28-2025 at 10:00 am

Fred Chen EUV 2025

For the so-called “2nm” node and beyond, the minimum metal pitch is expected to be 20 nm or even less, while contacted gate pitch is being pushed to 40 nm [1]. We can therefore expect via connections as narrow as 10 nm (Figure 1)! It is natural to expect High-NA EUV lithography to be the go-to method for patterning at such small scales. However, High-NA’s resolution benefit is offset by its reduced depth of focus [2,3]. A 20 nm defocus can be expected to reduce the width of a spot projected by a High-NA EUV system by 10%, so the resist cannot be much thicker than 20 nm. This, in turn, reduces the density of absorbed photons: a 20 nm thick chemically amplified resist absorbs only about 10% of the incident dose. For an incident dose of 60 mJ/cm2, only ~4 photons are absorbed per square nanometer! The Poisson statistics of shot noise then entail a 2-sigma variation of 100%, which is detrimental to edge roughness (Figure 1). Edge stochastics become prohibitive for High-NA EUV.

Figure 1. Stochastic EUV photon absorption (6 mJ/cm2 averaged over 80 nm x 40 nm). The illumination in the High-NA EUV system is shown on the left.
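The shot-noise arithmetic above can be checked directly (our sketch; the ~92 eV photon energy of 13.5 nm EUV light is the one number assumed rather than taken from the text):

```python
import math

# Reproducing the article's shot-noise arithmetic (our sketch). All inputs
# except the EUV photon energy (~92 eV at 13.5 nm) come from the text.
incident_dose = 60e-3            # J/cm^2 (60 mJ/cm^2)
absorbed_fraction = 0.10         # ~10% absorbed in a 20 nm CAR film
photon_energy = 92 * 1.602e-19   # J per EUV photon (assumed value)
nm2_per_cm2 = 1e14               # nm^2 in one cm^2

photons_per_nm2 = incident_dose * absorbed_fraction / photon_energy / nm2_per_cm2
two_sigma = 2 / math.sqrt(photons_per_nm2)  # relative 2-sigma of a Poisson count

print(f"absorbed photons per nm^2 ~= {photons_per_nm2:.1f}")  # ~4
print(f"relative 2-sigma shot noise ~= {two_sigma:.0%}")      # ~100%
```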

In fact, even with a 0.33 NA EUV system, for 44 nm center-to-center vias or smaller, stochastic edge placement error (EPE) is projected to violate a 5 nm total EPE spec [4]. Therefore, 2nm via patterning is expected to involve serious multipatterning, regardless of whether EUV is used.

Figure 2 shows a representative layout for via connections to gates and diffusion areas, where the minimum center-to-center via spacing is 40 nm and the track pitch is 20 nm. Due to the 40 nm minimum center-to-center spacing constraint, two masks are needed for the gate contacts, while four masks are needed for the source-drain contacts. Note that the mask count is the same for ArF immersion and EUV: two vias separated by 40 nm require two separate masks either way.

Figure 2. A representative layout for via connections to gate and diffusion areas for contacted gate pitch of 40 nm and track pitch of 20 nm. Different number labels on the vias indicate different masks used for that layer.
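The mask-count argument can be viewed as a spacing-constrained coloring problem: any two vias closer than the 40 nm minimum center-to-center distance must land on different masks. A minimal greedy sketch (the via coordinates below are hypothetical, not taken from Figure 2):

```python
import math

MIN_CC = 40.0  # minimum same-mask center-to-center spacing, nm

def assign_masks(vias):
    """Greedy coloring: vias closer than MIN_CC get different masks."""
    masks = {}
    for v in vias:
        # Masks already used by conflicting (too-close) neighbors
        used = {masks[u] for u in masks if math.dist(u, v) < MIN_CC}
        m = 0
        while m in used:
            m += 1
        masks[v] = m
    return masks

# Hypothetical contact row on a 20 nm track pitch: neighbors are
# 20 nm apart, so masks must alternate along the row (two masks),
# while denser 2D clusters can force more.
row = [(20.0 * i, 0.0) for i in range(6)]
print(assign_masks(row))  # alternating mask 0 / mask 1
```

For a one-dimensional row at 20 nm pitch, two masks suffice; the four masks quoted for the source-drain contacts arise from the denser two-dimensional arrangement in Figure 2.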

For the BEOL, we expect fully self-aligned vias to be used [5]. We can also expect vias to be placed on a crossed diagonal grid [6]. When these arrangements are combined, the number of multipatterning masks can be minimized [7]. Note that multipatterning is still necessary even with EUV, as the expected center-to-center distances are still too small (Figure 3). One mask would be needed to overlay a diagonal line grid that blocks half of the crosspoint locations where the two adjacent metal layers overlap (Figure 4). The pitch of this diagonal line grid would require self-aligned quadruple patterning (SAQP) with ArF immersion lithography, while the stochastic defect density cliff below 36 nm pitch [8,9] would likely force EUV to use self-aligned double patterning (SADP). From the remaining locations, two ArF immersion masks or one EUV mask would be used to select which sites are kept for forming vias (Figure 5). Note that this leaves a diagonal line signature on the vias (Figure 6).

Figure 3. The center-to-center distances above are too small to allow single exposure patterning.

Figure 4. Diagonal line block mask blocks half of the crosspoint locations where the two adjacent metal layers overlap. The blocked overlap locations are in red.

Figure 5. After the diagonal line block grid is in place, a keep mask or masks would select which metal layer locations would be kept for forming vias.

Figure 6. A diagonal line signature is left on the formed vias.
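The effect of the diagonal block grid can be sketched numerically: blocking alternate crosspoints of the metal-layer overlap grid leaves a checkerboard whose nearest allowed via spacing grows from the track pitch to the track pitch times sqrt(2). The 20 nm pitch below is illustrative, and the checkerboard selection is a simplified stand-in for the diagonal line grid:

```python
import math

PITCH = 20.0  # illustrative track pitch, nm

# Crosspoints of two orthogonal metal layers; the diagonal grid is
# modeled as blocking every site where (i + j) is odd, leaving a
# checkerboard of allowed via locations.
allowed = [(i * PITCH, j * PITCH)
           for i in range(8) for j in range(8)
           if (i + j) % 2 == 0]

nearest = min(math.dist(a, b)
              for a in allowed for b in allowed if a != b)
print(f"nearest allowed spacing: {nearest:.1f} nm")  # 28.3 nm
```

The nearest allowed spacing rises from 20 nm to about 28 nm, which eases, but as the text notes does not eliminate, the multipatterning burden for the remaining keep masks.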

If we note the possible center-to-center distances on the crossed diagonal grid, we can see that relying on the basic repeated litho-etch (LE) approach to multipatterning leads to up to three masks for EUV and up to four masks for DUV, making use of an underlying 80 nm x 80 nm pitch tiling (Figure 7). The DUV LE4 approach would still be more cost-efficient than EUV LE3 [10]. Hence, any approach that makes the requisite multipatterning more efficient, such as the diagonal line grid approach above or even directed self-assembly (DSA) [11], would help ensure getting to 20 nm track pitch and 40 nm contacted gate pitch.

Figure 7. Brute force repeated LE approach could require up to three EUV masks or four DUV masks.
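The LE4-versus-LE3 cost comparison can be sketched with a normalized per-exposure model. The 3x relative cost of an EUV pass below is an illustrative assumption for the sketch, not a figure from reference [10]:

```python
# Rough exposure-cost comparison for the two multipatterning options
# discussed in the text: LE4 with ArF immersion DUV vs LE3 with EUV.
DUV_PASS_COST = 1.0  # normalized cost per ArF immersion litho-etch pass
EUV_PASS_COST = 3.0  # assumed EUV pass cost relative to DUV (illustrative)

duv_le4 = 4 * DUV_PASS_COST
euv_le3 = 3 * EUV_PASS_COST

# As long as an EUV pass costs more than 4/3 of a DUV pass,
# four DUV exposures undercut three EUV exposures.
print(f"DUV LE4: {duv_le4:.1f}, EUV LE3: {euv_le3:.1f}")
```

Under this assumption, four DUV passes cost less than half of three EUV passes, consistent with the direction of the comparison in the text.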

References

[1] I. Cutress (TechTechPotato), How We Get Down to 0.2nm CPUs and GPUs.

[2] A. Burov, A. V. Pret, R. Gronheid, “Depth of focus in high-NA EUV lithography: a simulation study,” Proc. SPIE 12293, 122930V (2022).

[3] F. Chen, High-NA Hard Sell: EUV Multipatterning Practices Revealed, Depth of Focus Not Mentioned.

[4] W. Gao et al., “Simulation investigation of enabling technologies for EUV single exposure of via patterns in 3nm logic technology,” Proc. SPIE 11323, 113231L (2020).

[5] V. Vashishtha, L. T. Clark, “ASAP5: A predictive PDK for the 5 nm node,” Microel. J. 126, 105481 (2022).

[6] Y-C. Hsiao, W. M. Chan, K-H. Hsieh, US Patent 9530727, assigned to TSMC; S-W. Peng, C-M. Hsiao, C-H. Chang, J-T. Tzeng, US Patent Application US20230387002; F. Chen, Routing and Patterning Simplification with a Diagonal Via Grid.

[7] F. Chen, Multipatterning Reduction with Gridded Cuts and Vias; F. Chen, Exploring Grid-Assisted Multipatterning Scenarios for 10A-14A Nodes.

[8] Y-P. Tsai et al., “Study of EUV stochastic defect on wafer yield,” Proc. SPIE 12954, 1295404 (2024).

[9] Y. Li, Q. Wu, Y. Zhao, “A Simulation Study for Typical Design Rule Patterns and Stochastic Printing Failures in a 5 nm Logic Process with EUV Lithography,” CSTIC 2020.

[10] L-Å. Ragnarsson et al., “The Environmental Impact of CMOS Logic Technologies,” EDTM 2022; E. Vidal-Russell, “Curvilinear masks extend lithography options for advanced node memory roadmaps,” J. Micro/Nanopattern. Mater. Metrol. 23, 041504 (2024).

[11] Z. Wu et al., “Quadruple-hole multiplication by directed self-assembly of block copolymer,” Proc. SPIE 13423, 134231O (2024).


CEO Interview with Jiadi Zhu of CDimension 

CEO Interview with Jiadi Zhu of CDimension 
by Daniel Nenni on 09-28-2025 at 8:00 am


Jiadi Zhu is the CEO and founder of CDimension, a company rethinking chip design to shape the next generation of computing. Under his leadership, CDimension is creating the next generation of building blocks for chips, starting with materials and scaling up to full systems that can power everything from today’s AI and advanced computing to the quantum breakthroughs of tomorrow.

With a Ph.D. in Electrical Engineering from MIT and over a decade of work at the frontier of 2D materials and 3D integration, Jiadi has been widely recognized for his originality in device design and novel semiconductor materials. His research has been published in top journals like Nature Nanotechnology and presented at leading conferences including IEEE’s International Electron Devices Meeting.

Tell us about your company.

CDimension is pioneering a bottom-up strategy for the future of computing, beginning at the foundation of the chip stack: materials. We develop and supply next-generation two-dimensional (2D) semiconductors and insulators, enabling breakthroughs in both semiconductor circuits and quantum computing.

On the semiconductor side, our atomically thin 2D semiconductors significantly reduce power consumption and help overcome the global power wall in computing. On the quantum side, our single-crystalline 2D materials reduce noise at the source, enabling longer coherence times, higher fidelities, and scalable integration of quantum qubits.

Backed by more than 20 patents and a growing base of industrial and academic customers, our roadmap delivers wafer-scale materials today and pilot-scale quantum chips within 18 months. With this foundation, CDimension is positioned as the chip provider of the quantum era, building the material and device layer that will power tomorrow’s computing revolution.

What problem is CDimension solving?

The core issue we are addressing is that silicon has hit its limits. Put simply, chips are getting more power-hungry and harder to integrate. Most of the industry’s fixes so far are top-down, such as reconfiguring GPUs, but those deliver only incremental gains, not the 100x or even 1000x leap that future applications in AI and quantum will require over the next decade.

That’s why we believe the solution has to start at the bottom, with the materials themselves. For decades, 2D materials were stuck in the lab as tiny flakes with high defects and no scalability. We’ve broken that barrier by producing wafer-scale, high-quality semiconductors and insulators that customers can actually build on today.

I like to think of it this way: trying to build the future of chips with silicon alone is like trying to make a book out of bricks. It worked for a long time, but now the bricks are too bulky to stack any further. Our 2D materials are like pages: ultra-thin, smooth, and stackable. From there, you can build the books, or integrated circuits, and eventually even the writing inside, which is applications like quantum computing.

What application areas are your strongest?

Currently, our technology is making the biggest difference in quantum. While the field is gaining momentum, progress has been stalled by one critical bottleneck: noise. These random disturbances cause errors that prevent systems from scaling. Existing approaches struggle to fix it because they rely on imperfect materials or lack robustness at the materials level.

Our wafer-scale, single-crystalline 2D insulators reduce noise at the source. Instead of trying to scale by simply adding more qubits, we’re improving each individual qubit by lowering noise and extending coherence time. That allows more qubits to be connected reliably and makes them more robust to the surrounding environment, which is the critical step toward practical, large-scale quantum computing.

What keeps your customers up at night?

What keeps them up at night is the realization that the conventional approaches, such as silicon circuits and oxide dielectrics, can’t solve these problems on their own. Even the best design tweaks or architectural optimizations only squeeze out marginal gains, while demand from AI, data centers, and quantum continues to grow exponentially. Without a new foundation, they risk hitting hard limits: power budgets that data centers can’t sustain and quantum devices that can’t scale beyond prototypes.

That’s why CDimension resonates with them. Our wafer-scale, defect-free 2D materials represent a fundamental shift. For semiconductors, that means atomically thin transistors that slash power use. For quantum, it means ultra-smooth, single-crystalline insulators that suppress noise at the source. Together, these advances open the path from today’s bottlenecks to tomorrow’s scalable, high-performance systems.

What does the competitive landscape look like and how do you differentiate?

The competitive landscape is crowded at the system and algorithm layers of computing: everyone from hyperscalers to startups is working on GPUs, AI accelerators, or new quantum modalities. But very few are tackling the materials bottleneck head-on, because 2D materials were long assumed to be stuck in the lab as fragile, defect-prone flakes with no path to scale.

CDimension is different. We’ve broken through that barrier, delivering wafer-scale, defect-free semiconductors and insulators, protected by more than 20 patents and developed by a team that combines world-class materials science with deep semiconductor engineering. This foundation lets us uniquely address the toughest pain points in the industry (power, noise, and robustness) at their source.

In short, while others focus on optimizing the top of the stack, CDimension is building the bottom layer, the material and device foundation that makes those optimizations truly scalable.

What new features/technology are you working on?

Our primary focus right now is quantum hardware. We’ve begun supplying single-crystalline 2D insulators that directly reduce noise in quantum devices, a critical step toward scalable, error-corrected quantum systems. These ultra-smooth interfaces extend coherence, improve fidelities, and enable more robust coupling, pushing quantum closer to real-world integration. Looking ahead, our goal is hybrid semiconductor–quantum systems within 2–3 years, with scalable quantum hardware arriving much sooner than most expect.

In parallel, we’re expanding a full suite of 2D materials (semiconductors, insulators, and conductors), all designed to integrate seamlessly with existing silicon workflows. Our first commercial release, announced recently, was ultra-thin MoS₂ grown directly on silicon wafers using a proprietary low-temperature process. These monolayers demonstrated up to 1,000x improvement in transistor-level energy efficiency compared to silicon and are already being sampled by customers across academia and industry. From MoS₂ to n-type, p-type, metallic, and insulating films at wafer scale, our platform is building the materials backbone for vertically integrated chips that unify compute, memory, and power in a single architecture.

How do customers normally engage with your company?

Today, customers are already purchasing our 2D materials for their own device research. Universities like Carnegie Mellon, Duke, and UC San Diego are currently using our wafers to bypass material synthesis and focus directly on building and testing new devices. Industry R&D teams are doing the same, evaluating our wafers for integration into semiconductor and quantum workflows.

But our offering is not just about supplying wafers; it’s about building the foundation for the next era of computing. As customers themselves scale, we aim to be a true technical partner, providing equipment, supporting integration, and eventually working together on full system solutions.

Also Read:

CEO Interview with Howard Pakosh of TekStart

CEO Interview with Barun Kar of Upscale AI

CEO Interview with Adam Khan of Diamond Quanta


Podcast EP308: How Clockwork Optimizes AI Clusters with Dan Zheng

Podcast EP308: How Clockwork Optimizes AI Clusters with Dan Zheng
by Daniel Nenni on 09-26-2025 at 10:00 am

Daniel is joined by Dan Zheng, VP of Partnerships and Operations at Clockwork. Dan was the General Manager for Product and Partnerships at Urban Engines which was acquired by Google in 2016. He has also held roles at Stanford University and Google.

Dan explores with Daniel the challenges of operating massive AI hardware infrastructure at scale. It turns out modern GPU clusters can be quite difficult to operate efficiently. Communication bottlenecks within and between clusters, stalled pipelines, network issues, and memory issues can all contribute to the problem. Debugging these issues can be difficult, and Dan explains that restarts from prior checkpoints can happen many times in large AI clusters, with each such event wasting many thousands of GPU hours.

Dan also describes Clockwork’s FleetIQ Platform and how this technology addresses these situations by providing nanosecond-accurate visibility correlated across the stack. The result is more efficient and productive AI clusters, allowing far more work to be accomplished. This provides more AI capability with the same hardware, essentially democratizing access to AI.

Contact Clockwork

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview with Howard Pakosh of TekStart

CEO Interview with Howard Pakosh of TekStart
by Daniel Nenni on 09-26-2025 at 6:00 am


Howard Pakosh is a serial entrepreneur and angel investor. Mr. Pakosh is also Founder & CEO of the TekStart Group, a Toronto-based boutique incubator focusing on Fractional-C business development support, as well as developing, promoting and licensing technology into markets such as blockchain, Internet-of-Things (IoT) and Semiconductor Intellectual Property (SIP). TekStart is currently an early-stage investor with Piera Systems (CleanTech), Acrylic Robotics (Robotics), Low Power Futures (Semiconductors), ChipStart (Semiconductors), and Freedom Laser (Digital Health).

Mr. Pakosh has been involved in all phases of semiconductor development for over 30 years and was instrumental in the delivery of the first commercially available USB subsystem at his first IP startups, Xentec and Elliptic Technologies Inc. (both sold to Synopsys Inc.). Other ventures he’s recently led include the development of Micron’s Hybrid Memory Cube controller and the development of the most power-efficient crypto processing ASICs for the SHA-256 and SCRYPT algorithms.

Tell us about your company.

When I started TekStart® in 1998, the mission was clear: give bold ideas the resources and leadership they need to become thriving businesses. The semiconductor field has always been high-stakes, demanding both creativity and flawless execution. Over time, TekStart has shifted from a commercialization partner to a true venture builder, now concentrating heavily on semiconductors and AI. Our purpose hasn’t changed. We exist to help innovators succeed. What has changed are our methods, which have adapted to an industry that’s become more global, more competitive, and far more complex.

What new features or technology are you working on?

I’m excited to share a breakthrough we have achieved through our ChipStart® business unit. With Cognitum by ChipStart, we’ve proven we’re not only enabling innovation but driving it ourselves. Achieving up to 65 TOPS of performance at under 2 watts is a leap forward, unlocking a new level of performance-per-watt that opens doors to applications once thought impossible.

What problems are you solving?

The semiconductor industry faces three defining challenges: fragile supply chains, the demand for radical energy efficiency, and the relentless race to market. Cognitum by ChipStart is built to meet these challenges head-on. Instead of designs tied to exotic nodes, we enable resilient architectures that keep innovation moving, even in uncertain times. Instead of incremental power gains, we push for performance that redefines efficiency by delivering more capability per watt. Instead of waiting on the pace of new fabs, we help innovators leap from concept to production silicon faster than ever. Cognitum isn’t just solving today’s problems. It’s shaping the future of how chips get built.

What application areas are your strongest?

We see the greatest impact in Edge devices that demand real-time intelligence without relying on the cloud. Security and surveillance systems, for example, need to analyze video on-site to detect threats instantly, without the latency of sending data off-premise. In agriculture, sensors and vision systems powered by AI can monitor crops, optimize water use, and detect early signs of disease, helping farmers boost yields sustainably. AR/VR wearables require high-performance AI that runs efficiently in small, battery-constrained form factors, enabling immersive experiences without bulky hardware. And in industrial automation, factories are increasingly reliant upon AI-driven systems to inspect products, predict equipment failures, and streamline processes. These are just a few of the areas where Edge AI is not just useful but transformative, and where Cognitum by ChipStart is purpose-built to deliver.

What keeps your customers up at night?

The pace of innovation in semiconductors and AI has never been faster, and it’s only accelerating. Our customers worry about launching a product only to find it outdated months later. Staying relevant requires moving from concept to market at unprecedented speed – and doing so without compromising quality or performance. That’s where TekStart, through Cognitum by ChipStart, makes a real difference. We partner closely with innovators to compress development cycles and deliver silicon that keeps pace with today’s AI-driven world. By helping our partners beat obsolescence, we ensure they stay ahead in markets where timing is everything.

What does the competitive landscape look like and how do you differentiate?

Competition in our space revolves around two unforgiving dimensions: time-to-market and innovation. Both demand relentless execution to stay ahead. We differentiate by combining deep semiconductor expertise with an ecosystem of partners who bring complementary strengths in design, manufacturing, and deployment. Our team has decades of hands-on experience across ASIC design, operations, and AI applications. When combined with our extended network, we’re able to anticipate shifts in technology and deliver solutions that arrive ahead of the curve. This balance of speed and foresight is what keeps our customers competitive and what sets us apart in a crowded landscape.

How do customers normally engage with your company?

We typically engage through close collaboration across the semiconductor supply chain. That means working side-by-side with fab houses, manufacturers, and technology partners to ensure our products integrate seamlessly into their final deliverables. By embedding our solutions at the heart of their systems – whether it’s in smart cameras, connected devices, or industrial machinery – we help our partners to accelerate their own roadmaps. These collaborations go beyond transactions. They’re strategic partnerships designed to align our innovation with their market needs.

Also Read:

TekStart Group Joins Canada’s Semiconductor Council

CEO Interview with Barun Kar of Upscale AI

CEO Interview with Adam Khan of Diamond Quanta


SkyWater Technology Update 2025

SkyWater Technology Update 2025
by Daniel Nenni on 09-25-2025 at 10:00 am


SkyWater Technology, a U.S.-based pure-play semiconductor foundry, has made significant strides in 2025, reinforcing its position as a leader in domestic semiconductor manufacturing. Headquartered in Bloomington, Minnesota, SkyWater specializes in advanced innovation engineering and high-volume manufacturing of differentiated integrated circuits. The company’s Technology as a Service model streamlines development and production, serving diverse markets including aerospace, defense, automotive, biomedical, industrial, and quantum computing.

A major milestone in 2025 was SkyWater’s acquisition of Infineon Technologies’ 200 mm semiconductor fab in Austin, Texas (Fab 25), completed on June 30. This acquisition added approximately 400,000 wafer starts per year, significantly boosting SkyWater’s capacity. Fab 25 enhances the company’s ability to produce foundational chips for embedded processors, memory, mixed-signal, RF, and power applications. By converting this facility into an open-access foundry, SkyWater strengthens U.S. semiconductor independence, aligning with national security and reshoring trends. The acquisition, funded through a $350 million senior secured revolving credit facility, also added about 1,000 employees to SkyWater’s workforce, bringing the total to approximately 1,700.

On July 29, SkyWater announced a license agreement with Infineon Technologies, granting access to a robust library of silicon-proven mixed-signal design IP. Originally developed by Cypress Semiconductor, this IP is validated for high-volume automotive-grade applications and is integrated into SkyWater’s S130 platform. The portfolio includes ADCs, DACs, power management, timing, and communications modules, enabling customers to design high-reliability mixed-signal System-on-Chips within a secure U.S. supply chain. This move positions SkyWater as a trusted partner for both commercial and defense markets, reducing design risk and accelerating time to market.

SkyWater’s financial performance in 2025 reflects steady progress. The company reported second-quarter results at the upper end of expectations, with trailing 12-month revenue of $290 million as of June 30. However, its Advanced Technology Services (ATS) segment faced near-term softening due to federal budget delays impacting Department of Defense funding. Despite this, SkyWater remains confident in achieving record ATS revenue in 2025, provided funding issues are resolved. The company’s stock price stands at around $10.56, with a market capitalization of $555 million and 48.2 million shares outstanding.

Strategically, SkyWater is capitalizing on emerging technologies. Its collaboration with PsiQuantum to develop silicon photonic chips for utility-scale quantum computing highlights its expertise in cutting-edge applications. Additionally, SkyWater adopted YES RapidCure systems for its M-Series fan-out wafer level packaging (FOWLP) in partnership with Deca Technologies, enhancing prototyping speed and reliability for advanced packaging. These initiatives align with SkyWater’s focus on high-margin, innovative solutions, positioning it as a strategic partner in quantum computing and photonics.

SkyWater’s commitment to U.S.-based manufacturing and its DMEA-accredited Category 1A Trusted Foundry status underscore its role in supporting critical domestic markets. The company’s facilities are certified for aerospace (AS9100), medical (ISO13485), automotive (IATF16949), and environmental (ISO14001) standards, ensuring high-quality production. Despite challenges like funding delays and integration risks from the Fab 25 acquisition, SkyWater’s focus on innovation, strategic partnerships, and capacity expansion positions it for long-term growth. Analysts view SkyWater as a strong investment, with a 24-36 month price target of $20, reflecting confidence in its de-risked business model and alignment with U.S. reshoring trends.

Also Read:

Podcast EP307: An Overview of SkyWater Technology and its Goals with Ross Miller

Rapidus, IBM, and the Billion-Dollar Silicon Sovereignty Bet

TSMC 2025 Update: Riding the AI Wave Amid Global Expansion

Intel’s Commitment to Corporate Responsibility: Driving Innovation and Sustainability