
Neurosymbolic code generation. Innovation in Verification

Neurosymbolic code generation. Innovation in Verification
by Bernard Murphy on 09-30-2025 at 6:00 am


Early last year we talked about state space models, a recent advance over large language modeling with some appealing advantages. In this blog we introduce neurosymbolic methods, another advance in foundation technologies, here applied to automated code generation. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and lecturer at Stanford, EE292A) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Neural Program Generation Modulo Static Analysis. The authors are from Rice University, UT Austin and the University of Wisconsin. The paper was published in Neural Information Processing Systems 2021 and has 27 citations.

Amazing as they are, LLMs and neural networks have limitations that are becoming apparent in the “almost correct” code generated by Copilot, the underwhelming release of GPT-5, and other examples. Unsurprising. Neural nets excel at perception-centric learning from low-level detail, but they do not excel at reasoning based on prior knowledge, an area where symbolic methods fare better. Neurosymbolic methods fuse the two approaches to leverage complementary strengths. Neurosymbolic research and applications are not new but until recently have been overshadowed by LLM advances.

This month’s blog uses a neurosymbolic approach to improve accuracy in automated software generation for Java methods.

Paul’s view

LLM-based coding assistants are rapidly becoming mainstream. These assistants rely mostly on their underlying LLM’s ability to generate code from a user text prompt and the surrounding code already written. Under the hood, these LLMs are “next word” predictors that write code one word at a time: beginning with the prompt as their input, they append each newly generated word to form a successor prompt, which is then used to generate the next word, and so on.
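To make that loop concrete, here is a minimal, purely illustrative Python sketch. The predict_next stub and its canned continuation are hypothetical stand-ins for the LLM; real assistants predict subword tokens with a large transformer, not words from a fixed list.

```python
# Minimal, purely illustrative sketch of the word-by-word loop described above.
PROMPT = ["//", "increment", "the", "counter"]

def predict_next(context):
    # Hypothetical stand-in for an LLM: emit a fixed continuation, then stop.
    continuation = ["count", "=", "count", "+", "1", ";"]
    i = len(context) - len(PROMPT)          # how many words generated so far
    return continuation[i] if i < len(continuation) else "<eos>"

def generate(prompt, max_tokens=20):
    tokens = list(prompt)                   # the prompt is the initial input
    for _ in range(max_tokens):
        nxt = predict_next(tokens)          # predict one word from everything so far
        if nxt == "<eos>":
            break
        tokens.append(nxt)                  # successor prompt = prompt + words so far
    return " ".join(tokens[len(prompt):])

print(generate(PROMPT))                     # -> count = count + 1 ;
```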

This paper observes that, unlike natural language, all programming languages must conform to a formal grammar (search “BNF grammar” in your browser). These grammars map source code into a “syntax tree” structure. It’s entirely possible to make a neural network that is a syntax tree generator rather than a word generator. Such a network recursively calls itself to build a syntax tree in a left-to-right, depth-first manner.
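A toy sketch of such a grammar-driven generator is shown below. It is not the paper’s model: the grammar is a made-up four-rule BNF fragment, and random.choice stands in for the neural network that would score candidate productions at each node. The recursion naturally produces the left-to-right, depth-first expansion described above.

```python
import random

# Toy BNF grammar for tiny assignment statements. Uppercase names are
# nonterminals; lowercase entries are terminals (output words).
GRAMMAR = {
    "STMT": [["VAR", "=", "EXPR", ";"]],
    "EXPR": [["VAR"], ["NUM"], ["EXPR", "+", "EXPR"]],
    "VAR":  [["x"], ["y"]],
    "NUM":  [["0"], ["1"]],
}

def expand(symbol, depth=0):
    """Expand a nonterminal into a syntax tree, left to right, depth first."""
    if symbol not in GRAMMAR:
        return symbol                                   # terminal: a leaf
    productions = GRAMMAR[symbol]
    if depth > 4:                                       # keep the toy example finite
        productions = [p for p in productions if symbol not in p]
    rule = random.choice(productions)                   # neural scoring would go here
    return (symbol, [expand(child, depth + 1) for child in rule])

def to_code(tree):
    """Flatten the syntax tree back into source text."""
    if isinstance(tree, str):
        return tree
    _, children = tree
    return " ".join(to_code(c) for c in children)

print(to_code(expand("STMT")))                          # e.g. "x = y + 1 ;"
```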

The authors further propose to annotate nodes in a syntax tree with a “symbol table” containing all declared variables and their types, derived from the surrounding code already written and the portion of the syntax tree generated so far. The symbol table is created by a traditional non-AI algorithm, as would be done by a software compiler, and is used during training of the network as a weak supervisor: generated code that assigns variables or function arguments in violation of the symbol table is labeled as bad code.
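The sketch below illustrates the flavor of such a symbol-table check, assuming a toy two-entry table; it is not the paper’s implementation, just the compiler-style rule that would label violating assignments as bad training examples.

```python
# Illustrative sketch of a symbol-table check (not the paper's implementation).
# The table maps declared names to types, as a compiler front end would build it.
SYMBOL_TABLE = {"count": "int", "name": "String"}

def check_assignment(lhs, rhs):
    """Return (ok, reason) for a generated assignment 'lhs = rhs;'."""
    if lhs not in SYMBOL_TABLE:
        return False, f"undeclared variable '{lhs}'"
    rhs_type = "int" if rhs.isdigit() else SYMBOL_TABLE.get(rhs)
    if rhs_type is None:
        return False, f"undeclared variable '{rhs}'"
    if rhs_type != SYMBOL_TABLE[lhs]:
        return False, f"type mismatch: {SYMBOL_TABLE[lhs]} = {rhs_type}"
    return True, "ok"

print(check_assignment("count", "42"))     # (True, 'ok')
print(check_assignment("count", "name"))   # (False, 'type mismatch: int = String')
print(check_assignment("total", "1"))      # (False, "undeclared variable 'total'")
```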

The authors train and benchmark a 60M-parameter “neurosymbolic grammar” (NSG) syntax tree-based code generator for Java. They use a large database of Java classes with 1.6M methods in total, randomly removing the body of one method from a class and asking their NSG to regenerate code for that method based only on the surrounding code for the rest of the class, including its comments. They compare NSG to a variety of baseline LLMs from 125M to 1.3B parameters using a combination of syntax correctness checks and checks for similarity to the golden method used for training. NSG is a lot better: 86% of NSG-generated code passes all syntax correctness checks vs. 67% from the best alternative (CodeGPT, 125M parameters), and NSG-generated code has a 40% average similarity score to golden vs. 22% from the best alternative (GPT-Neo, 1.3B parameters).

Of course, with today’s 100B+ parameter LLMs using multi-shot reasoning, which can include feeding software compiler errors back to the LLM and asking it to fix them, the benchmark results could prove less compelling. As the authors themselves point out in this paper, more research here would be welcome!

Raúl’s view

Neuro-symbolic approaches in artificial intelligence combine neural methods such as deep learning with symbolic reasoning based on formal languages and logic. The goal is to overcome the weaknesses of both approaches: neural networks excel at pattern recognition from large datasets but cannot easily take advantage of coded expert knowledge and are “black boxes” that make understanding their decision-making processes hard; symbolic systems can encode precise rules and constraints, but they are brittle and hard to scale. The first paper in the trio we blog about this week gives a brief introduction to this topic.

The monograph “Neurosymbolic Programming” more specifically addresses integrating deep learning and program synthesis. Strategies include neural program synthesis, where neural networks are trained to generate programs directly; learning to specify, in which models learn to complete or disambiguate incomplete specifications; neural relaxations, which use the parameters of a neural network to approximately represent a set of programs; and distillation, where trained neural networks are converted back into symbolic programs approximating their behavior.

Against this backdrop, the NeurIPS 2021 paper Neural Program Generation Modulo Static Analysis presents one specific neuro-symbolic approach to program generation (Java methods). The authors argue that large language models of code (e.g., GPT-Neo, Codex) often fail to produce semantically valid long-form code such as full method bodies, generating basic errors such as uninitialized variables, type mismatches, and invalid method calls. The key thesis of the paper is that static program analysis provides semantic relationships “for free” that are otherwise very hard for neural networks to infer. The paper is self-contained but assumes knowledge of formal language compilation and neural models to fully understand the approach.

The models built, Neurosymbolic Attribute Grammars (NSGs), extend context-free grammars with attributes derived from static analysis, such as symbol tables, type information, and scoping. During generation, the neural model conditions its choices not only on syntactic context but also on these semantic attributes (“weak supervision”). This hybrid system improves the model’s ability to respect language rules while still benefiting from statistical learning.

The system was evaluated on the task of generating Java method bodies. Training used 1.57 million Java methods, with a grammar supporting a subset of Java. The NSG model itself had 63 million parameters, which is modest compared to billion-parameter transformers like GPT-Neo and Codex. NSGs substantially outperform these larger baselines on static checks (ensuring no undeclared variables, type safety, initialization, etc.) and fidelity measures (similarity of generated code to ground truth, Abstract Syntax Tree (AST) structure, execution paths). For example, NSGs achieved 86% of generated methods passing all static checks, compared to ~65% for GPT-Neo and ~68% for CodeGPT. On fidelity metrics, NSGs nearly doubled the performance of transformers, showing they not only generate valid code but also code that more closely matches intended behavior.

This work illustrates the power of neuro-symbolic methods in generating programming languages where semantics matter deeply; unlike natural language, code is governed by strict syntactic and semantic rules. Verification and generation of digital systems, e.g., (System)Verilog, obviously benefit from such techniques.

Also Read:

Cadence’s Strategic Leap: Acquiring Hexagon’s Design & Engineering Business

Cocotb for Verification. Innovation in Verification

A Big Step Forward to Limit AI Power Demand


Analog Bits Steps into the Spotlight at TSMC OIP

Analog Bits Steps into the Spotlight at TSMC OIP
by Mike Gianfagna on 09-29-2025 at 10:00 am


The TSMC Open Innovation Platform (OIP) Ecosystem Forum kicked off on September 24 in Santa Clara, CA. This is the event where TSMC recognizes and promotes the vast ecosystem the company has created. After watching this effort grow over the years, I feel that there is nothing the group can’t accomplish thanks to the alignment and leadership provided by TSMC. Some ecosystem members are new and are finding their place in the organization. Others are familiar names who have provided consistent excellence over the years. Analog Bits is one of the companies in this latter category. Let’s examine what happens when Analog Bits steps into the spotlight at TSMC OIP.

What Was Announced, Demonstrated and Discussed

Analog Bits always arrives at industry events like this with exciting news about new IP and industry collaboration. At TSMC OIP, the company announced its newest LDO, power supply droop detectors, and embedded clock LC PLLs on the TSMC N3P process. Clocking, high-accuracy PVT, and droop detectors were also announced on the TSMC N2P process.

Here is a bit of information about these fully integrated IP offerings:

  • The scalable LDO (low drop-out) regulator macro addresses typical SoC power supply and other voltage regulator needs.
  • The droop detector macro addresses SoC power supply and other voltage droop monitoring needs. It includes an internal bandgap style voltage reference circuit which is used as a trimmed reference to compare the sampled voltage against.
  • The PVT sensor is a highly integrated macro for monitoring process, voltage, and temperature on chip, allowing very high precision even in untrimmed usage. The device consumes very little power in operational mode, and only leakage power once the temperature measurement is complete.
  • The PLL addresses a large portfolio of applications, ranging from simple clock de-skew and non-integer clock multiplication to programmable clock synthesis for multi-clock generation.

These announcements were also backed up with live demonstrations in the Analog Bits booth at the show. The demos included:

  • High accuracy PVT sensors, high performance clocks, droop detectors, and more on the TSMC N2P process
  • Programmable LDO, droop detector, high accuracy sensors, low jitter LC PLL and more on the TSMC N3P process
  • Automotive grade pinless high accuracy PVT, pinless PLL, PCIe SERDES on the TSMC N5A process
Analog Bits booth at TSMC OIP

Analog Bits also participated in the technical program at OIP with two joint papers. 

One, with Socionext, was titled “Pinless PLL, PVT Sensor and Power Supply Spike Detectors for Datacenter, AI and Automotive Applications”.

The other, with Cerebras, was titled “On-Die Power Management for SoCs and Chiplet” and was presented at the virtual event.

While discussing Analog Bits’ new intelligent energy and power management strategy, Mahesh Tirupattur, CEO at Analog Bits, commented:

“Whether you are designing advanced datacenters, AI/ML applications, or automotive SoC’s, managing power is no longer an afterthought, it has to be done right at the architectural phase or deal with the consequences of not doing so. We have collaborated with TSMC and trailblazed on our IP development with advanced customers to pre-qualify novel power management IP’s such as LDO, droop detectors, and high-accuracy sensors along with our sophisticated PLL’s for low jitter. We welcome customers and partners to see our latest demos at the Analog Bits booth during this year’s TSMC OIP event.”

Recognition

TSMC also recognizes outstanding achievement by its ecosystem partners with a series of awards that are announced at the show. For the second year in a row, Analog Bits received the 2025 OIP Partner of the Year Award from TSMC in the Analog IP category for enabling customer designs with a broad portfolio of IP to accelerate design creation. This is quite an accomplishment. Pictured at the right is Mahesh Tirupattur receiving the award at OIP.

Mahesh also created a short video for the TSMC event. In that video, he discusses the significance of the collaboration with TSMC, not just in 2025 but over the past two decades. He talks about the age of the AI explosion and Analog Bits’ focus on delivering safe, reliable, observable, and efficient power. He talks about the benefits of delivering advanced low-power mixed-signal IP on TSMC’s 2nm and 3nm technologies. It’s a great discussion, and you can now view it here.

To Learn More

You can learn more about what Analog Bits is doing around the industry on SemiWiki here. You can begin exploring the billions of IP solutions Analog Bits has delivered on the company’s website here. The TSMC Open Innovation Platform Ecosystem Forum will be held in other locations around the world, and Analog Bits will be attending those events as well. You can learn more about this important industry event here. And that’s what happens when Analog Bits steps into the spotlight at TSMC OIP.


Synopsys Collaborates with TSMC to Enable Advanced 2D and 3D Design Solutions

Synopsys Collaborates with TSMC to Enable Advanced 2D and 3D Design Solutions
by Daniel Nenni on 09-29-2025 at 6:00 am


Synopsys has deepened its collaboration with TSMC, certifying the Ansys portfolio of simulation and analysis tools for TSMC’s cutting-edge manufacturing processes, including N3C, N3P, N2P, and A16. This partnership empowers chip designers to perform precise final checks on designs targeting applications in AI acceleration, high-speed communications, and advanced computing. Additionally, the companies have developed an AI-assisted design flow for TSMC’s Compact Universal Photonic Engine (COUPE™) platform, streamlining photonic design and enhancing efficiency.

Multiphysics and AI-Driven Design Innovations

Synopsys and TSMC are advancing multiphysics analysis for complex, hierarchical 3DIC designs. The multiphysics flow integrates tools like Ansys RedHawk-SC™, Ansys RedHawk-SC Electrothermal™, and Synopsys 3DIC Compiler™ to enable thermal-aware and voltage-aware timing analysis. This approach accelerates convergence for large-scale 3DIC designs, addressing challenges in thermal management and signal integrity critical for high-performance chips.

For TSMC’s COUPE platform, Synopsys leverages AI-driven tools like Ansys optiSLang® and Ansys Zemax OpticStudio® to optimize optical coupling systems. These tools, combined with Ansys Lumerical FDTD™ for photonic inverse design, allow engineers to create custom components, such as grating couplers, while reducing design cycle times and improving design quality through sensitivity analysis. This AI-assisted workflow is transformative for photonic applications, enabling faster development of high-speed communication interfaces.

Certifications for Advanced Process Technologies

The collaboration includes certifications for key Synopsys tools across TSMC’s advanced nodes. Ansys RedHawk-SC and Ansys Totem™ are certified for power integrity verification on TSMC’s N3C, N3P, N2P, and A16™ processes, ensuring reliable chip performance. Ansys HFSS-IC Pro™, designed for electromagnetic modeling, is certified for TSMC’s N5 and N3P processes, supporting system-on-chip electromagnetic extraction. These certifications enable designers to meet stringent requirements for AI, high-performance computing (HPC), 5G/6G, and automotive electronics.

Additionally, Ansys PathFinder-SC™ is certified for TSMC’s N2P process, offering electrostatic discharge current density (ESD CD) and point-to-point (P2P) checking. This tool enhances chip resilience against electrical overstress, accelerating early-stage design validation and improving product durability, particularly for complex 3DIC and multi-die systems. Synopsys is also working with TSMC to develop a photonic design kit for the A14 process, expected in late 2025, further expanding support for photonic applications.

Industry Impact and Strategic Partnership

This collaboration underscores Synopsys’ leadership in providing design solutions for next-generation technologies.

“Synopsys provides a broad range of design solutions to help semiconductor and system designers tackle the most advanced and innovative products for AI enablement, data center, telecommunications, and more,” said John Lee, vice president and general manager of the semiconductor, electronics, and optics business unit at Synopsys. “Our strong and continuous partnership with TSMC has been a key factor in maintaining our position at the forefront of technology while providing consistent value to our shared customers.”

“TSMC’s advanced process, photonics, and packaging innovations are accelerating the development of high-speed communication interfaces and multi-die chips that are essential for high-performance, energy-efficient AI systems,” said Aveek Sarkar, director of the ecosystem and alliance management division at TSMC. “Our collaboration with OIP ecosystem partners such as Synopsys has delivered an advanced thermal, power and signal integrity analysis flow, along with an AI-driven photonics optimization solution for the next generation of designs.”

Bottom line: By combining Synopsys’ simulation expertise with TSMC’s advanced process technologies, this partnership accelerates the development of robust, high-performance chips, solidifying both companies’ roles in shaping the future of semiconductor design, absolutely.

The full press release is here.

About Synopsys
Synopsys, Inc. (Nasdaq: SNPS) is the leader in engineering solutions from silicon to systems, enabling customers to rapidly innovate AI-powered products. We deliver industry-leading silicon design, IP, simulation and analysis solutions, and design services. We partner closely with our customers across a wide range of industries to maximize their R&D capability and productivity, powering innovation today that ignites the ingenuity of tomorrow. Learn more at www.synopsys.com.

Also Read:

Synopsys Announces Expanding AI Capabilities and EDA AI Leadership

The Rise, Fall, and Rebirth of In-Circuit Emulation (Part 1 of 2)

448G: Ready or not, here it comes!


Via Multipatterning Regardless of Wavelength as High-NA EUV Lithography Becomes Too Stochastic

Via Multipatterning Regardless of Wavelength as High-NA EUV Lithography Becomes Too Stochastic
by Fred Chen on 09-28-2025 at 10:00 am


For the so-called “2nm” node or beyond, the minimum metal pitch is expected to be 20 nm or even less, while at the same time the contacted gate pitch is being pushed to 40 nm [1]. Therefore, we expect via connections that could be as narrow as 10 nm (Figure 1)! For this reason, it is natural to expect High-NA EUV lithography to be the go-to method for patterning at such small scales. However, High-NA’s resolution benefit is offset by its reduced depth of focus [2,3]. A 20 nm defocus can be expected to reduce the width of a spot projected by a High-NA EUV system by 10%, so the resist cannot be expected to be thicker than 20 nm. This, in turn, reduces the density of absorbed photons: a 20 nm thick chemically amplified resist is expected to absorb only 10% of the incident dose. For an incident dose of 60 mJ/cm2, only 4 photons are absorbed per square nanometer! The Poisson statistics of shot noise entail a 2-sigma fluctuation of 100%! This will be detrimental for edge roughness (Figure 1). The edge stochastics become prohibitive for High-NA EUV.

Figure 1. Stochastic EUV photon absorption (6mJ/cm2 averaged over 80 nm x 40 nm). The illumination in the High-NA EUV system is shown on the left.
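For readers who want to check the photon arithmetic, here is a quick back-of-envelope sketch. The ~92 eV photon energy (from the 13.5 nm EUV wavelength) is an assumed value not stated above; the dose and absorption figures come from the text.

```python
# Back-of-envelope check of the quoted photon statistics.
E_PHOTON_J = 92 * 1.602e-19                  # energy of one ~13.5 nm EUV photon, joules
dose_incident = 60e-3                        # 60 mJ/cm^2 expressed in J/cm^2
dose_absorbed = 0.10 * dose_incident         # ~10% absorbed in a 20 nm resist
NM2_PER_CM2 = 1e14                           # 1 cm^2 = 1e14 nm^2
photons_per_nm2 = dose_absorbed / NM2_PER_CM2 / E_PHOTON_J
two_sigma = 2 * photons_per_nm2 ** 0.5 / photons_per_nm2   # Poisson: sigma = sqrt(N)
print(f"{photons_per_nm2:.1f} photons/nm^2, 2-sigma = {two_sigma:.0%}")
# ~4 photons/nm^2 -> 2*sqrt(4)/4 = 100% relative fluctuation, as stated above
```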

In fact, even with a 0.33 NA EUV system, for 44 nm center-to-center vias or smaller, stochastic edge placement error (EPE) is projected to cause violation of a 5 nm total EPE spec [4]. Therefore, 2nm via patterning is expected to involve serious multipatterning, regardless of whether EUV is used or not.

Figure 2 shows a representative layout for via connections to gates and diffusion areas, where the minimum center-to-center via spacing is 40 nm and the track pitch is 20 nm. Due to the 40 nm minimum center-to-center spacing constraint, two masks are needed for the gate contacts, while four masks are needed for the source-drain contacts. Note that these numbers are the same for ArF immersion or EUV: two vias separated by 40 nm will require two separate masks whether using DUV or EUV.

Figure 2. A representative layout for via connections to gate and diffusion areas for contacted gate pitch of 40 nm and track pitch of 20 nm. Different number labels on the vias indicate different masks used for that layer.

For the BEOL, we expect fully self-aligned vias to be used [5]. We can also expect vias to be placed on a crossed diagonal grid [6]. When these arrangements are combined, the number of multipatterning masks can be minimized [7]. Note that multipatterning is still necessary even with EUV as the expected center-to-center distances are still too small (Figure 3). One mask would be needed to overlay a diagonal line grid that blocks half of the crosspoint locations where the two adjacent metal layers overlap (Figure 4). The pitch of this diagonal line grid will require self-aligned quadruple patterning (SAQP) with ArF immersion lithography, and the stochastic defect density cliff below 36 nm pitch [8,9] would also likely force EUV to use self-aligned double patterning (SADP). From the remaining locations, two ArF immersion masks or one EUV mask would be used to select which ones would be kept for forming vias (Figure 5). Note that this would leave a diagonal line signature on the vias (Figure 6).

Figure 3. The center-to-center distances above are too small to allow single exposure patterning.

Figure 4. Diagonal line block mask blocks half of the crosspoint locations where the two adjacent metal layers overlap. The blocked overlap locations are in red.

Figure 5. After the diagonal line block grid is in place, a keep mask or masks would select which metal layer locations would be kept for forming vias.

Figure 6. A diagonal line signature is left on the formed vias.

If we note the possible center-to-center distances on the crossed diagonal grid, we can see that relying on the basic repeated litho-etch (LE) approach to multipatterning will lead to up to three masks for EUV, and up to four masks for DUV, making use of an underlying 80 nm x 80 nm pitch tiling (Figure 7). The DUV LE4 approach would definitely be more cost-efficient than EUV LE3 [10]. Hence, any approach to make the requisite multipatterning more efficient, such as the diagonal line grid approach above or even directed self-assembly (DSA) [11], would help ensure getting to 20 nm track pitch and 40 nm contacted gate pitch.

Figure 7. Brute force repeated LE approach could require up to three EUV masks or four DUV masks.

References

[1] I. Cutress (TechTechPotato), How We Get Down to 0.2nm CPUs and GPUs.

[2] A. Burov, A. V. Pret, R. Gronheid, “Depth of focus in high-NA EUV lithography: a simulation study,” Proc. SPIE 12293, 122930V (2022).

[3] F. Chen, High-NA Hard Sell: EUV Multipatterning Practices Revealed, Depth of Focus Not Mentioned.

[4] W. Gao et al., “Simulation investigation of enabling technologies for EUV single exposure of via patterns in 3nm logic technology,” Proc. SPIE 11323, 113231L (2020).

[5] V. Vashishtha, L. T. Clark, “ASAP5: A predictive PDK for the 5 nm node,” Microel. J. 126, 105481 (2022).

[6] Y-C. Hsiao, W. M. Chan, K-H. Hsieh, US Patent 9530727, assigned to TSMC; S-W. Peng, C-M. Hsiao, C-H. Chang, J-T. Tzeng, US Patent Application US20230387002; F. Chen, Routing and Patterning Simplification with a Diagonal Via Grid.

[7] F. Chen, Multipatterning Reduction with Gridded Cuts and Vias; F. Chen, Exploring Grid-Assisted Multipatterning Scenarios for 10A-14A Nodes.

[8] Y-P. Tsai et al., “Study of EUV stochastic defect on wafer yield,” Proc. SPIE 12954, 1295404 (2024).

[9] Y. Li, Q. Wu, Y. Zhao, “A Simulation Study for Typical Design Rule Patterns and Stochastic Printing Failures in a 5 nm Logic Process with EUV Lithography,” CSTIC 2020.

[10] L-Å. Ragnarsson et al., “The Environmental Impact of CMOS Logic Technologies,” EDTM 2022; E. Vidal-Russell, “Curvilinear masks extend lithography options for advanced node memory roadmaps,” J. Micro/Nanopattern. Mater. Metrol. 23, 041504 (2024).

[11] Z. Wu et al., “Quadruple-hole multiplication by directed self-assembly of block copolymer,” Proc. SPIE 13423, 134231O (2024).


CEO Interview with Jiadi Zhu of CDimension 

CEO Interview with Jiadi Zhu of CDimension 
by Daniel Nenni on 09-28-2025 at 8:00 am


Jiadi Zhu is the CEO and founder of CDimension, a company rethinking chip design to shape the next generation of computing. Under his leadership, CDimension is creating the next generation of building blocks for chips, starting with materials and scaling up to full systems that can power everything from today’s AI and advanced computing to the quantum breakthroughs of tomorrow.

With a Ph.D. in Electrical Engineering from MIT and over a decade of work at the frontier of 2D materials and 3D integration, Jiadi has been widely recognized for his originality in device design and novel semiconductor materials. His research has been published in top journals like Nature Nanotechnology and presented at leading conferences including IEEE’s International Electron Devices Meeting.

Tell us about your company?

CDimension is pioneering a bottom-up strategy for the future of computing, beginning at the foundation of the chip stack: materials. We develop and supply next-generation two-dimensional (2D) semiconductors and insulators, enabling breakthroughs in both semiconductor circuits and quantum computing.

On the semiconductor side, our atomically thin 2D semiconductors significantly reduce power consumption and help overcome the global power wall in computing. On the quantum side, our single-crystalline 2D materials reduce noise at the source, enabling longer coherence times, higher fidelities, and scalable integration of quantum qubits.

Backed by more than 20 patents and a growing base of industrial and academic customers, our roadmap delivers wafer-scale materials today and pilot-scale quantum chips within 18 months. With this foundation, CDimension is positioned as the chip provider of the quantum era, building the material and device layer that will power tomorrow’s computing revolution.

What problem is CDimension solving?

The core issue we are addressing is that silicon has hit its limits. Put simply, chips are getting more power-hungry and harder to integrate. Most of the industry’s fixes we’ve seen up to now are top-down, things like reconfiguring GPUs, but those only deliver incremental gains, not the 100x or even 1000x leap that future applications in AI and quantum will require over the next decade.

That’s why we believe the solution has to start at the bottom, with the materials themselves. For decades, 2D materials were stuck in the lab as tiny flakes with high defects and no scalability. We’ve broken that barrier by producing wafer-scale, high-quality semiconductors and insulators that customers can actually build on today.

I like to think of it this way: trying to build the future of chips with silicon alone is like trying to make a book out of bricks. It worked for a long time, but now the bricks are too bulky to stack any further. Our 2D materials are like pages: ultra-thin, smooth, and stackable. From there, you can build the books, or integrated circuits, and eventually even the writing inside, which is applications like quantum computing.

What application areas are your strongest?

Currently, our technology is making the biggest difference in quantum. While the field is gaining momentum, progress has been stalled by one critical bottleneck: noise. These random disturbances cause errors that prevent systems from scaling. Existing approaches struggle to fix it because they rely on imperfect materials or lack robustness at the materials level.

Our wafer-scale, single-crystalline 2D insulators reduce noise at the source. Instead of trying to scale by simply adding more qubits, we’re improving each individual qubit by lowering noise and extending coherence time. That allows more qubits to be connected reliably and makes them more robust to the surrounding environment, which is the critical step toward practical, large-scale quantum computing.

What keeps your customers up at night?

What keeps them up at night is the realization that the conventional approaches, such as silicon circuits and oxide dielectrics, can’t solve these problems on their own. Even the best design tweaks or architectural optimizations only squeeze out marginal gains, while demand from AI, data centers, and quantum continues to grow exponentially. Without a new foundation, they risk hitting hard limits: power budgets that data centers can’t sustain and quantum devices that can’t scale beyond prototypes.

That’s why CDimension resonates with them. We deliver wafer-scale, defect-free 2D materials that represent a fundamental shift. For semiconductors, that means atomically thin transistors that slash power use. For quantum, it means ultra-smooth, single-crystalline insulators that suppress noise at the source. Together, these advances open the path from today’s bottlenecks to tomorrow’s scalable, high-performance systems.

What does the competitive landscape look like and how do you differentiate?

The competitive landscape is crowded at the system and algorithm layers of computing: everyone from hyperscalers to startups is working on GPUs, AI accelerators, or new quantum modalities. But very few are tackling the materials bottleneck head-on, because 2D materials were long assumed to be stuck in the lab: fragile, defect-prone flakes with no path to scale.

CDimension is different. We’ve broken through that barrier, delivering wafer-scale, defect-free semiconductors and insulators, protected by more than 20 patents and developed by a team that combines world-class materials science with deep semiconductor engineering. This foundation lets us uniquely address the toughest pain points in the industry (power, noise, and robustness) at their source.

In short, while others focus on optimizing the top of the stack, CDimension is building the bottom layer, the material and device foundation that makes those optimizations truly scalable.

What new features/technology are you working on?

Our primary focus right now is quantum hardware. We’ve begun supplying single-crystalline 2D insulators that directly reduce noise in quantum devices, a critical step toward scalable, error-corrected quantum systems. These ultra-smooth interfaces extend coherence, improve fidelities, and enable more robust coupling, pushing quantum closer to real-world integration. Looking ahead, our goal is hybrid semiconductor–quantum systems within 2–3 years, with scalable quantum hardware arriving much sooner than most expect.

In parallel, we’re expanding a full suite of 2D materials (semiconductors, insulators, and conductors), all designed to integrate seamlessly with existing silicon workflows. Our first commercial release, announced recently, was ultra-thin MoS₂ grown directly on silicon wafers using a proprietary low-temperature process. These monolayers demonstrated up to 1,000x improvement in transistor-level energy efficiency compared to silicon and are already being sampled by customers across academia and industry. From MoS₂ to n-type, p-type, metallic, and insulating films at wafer scale, our platform is building the materials backbone for vertically integrated chips that unify compute, memory, and power in a single architecture.

How do customers normally engage with your company?

Today, customers are already purchasing our 2D materials for their own device research. Universities like Carnegie Mellon, Duke, and UC San Diego are currently using our wafers to bypass material synthesis and focus directly on building and testing new devices. Industry R&D teams are doing the same, evaluating our wafers for integration into semiconductor and quantum workflows.

But our offering is not just about supplying wafers; it’s about building the foundation for the next era of computing. As customers themselves scale, we aim to be a true technical partner, providing equipment, supporting integration, and eventually working together on full system solutions.

Also Read:

CEO Interview with Howard Pakosh of TekStart

CEO Interview with Barun Kar of Upscale AI

CEO Interview with Adam Khan of Diamond Quanta


Podcast EP308: How Clockwork Optimizes AI Clusters with Dan Zheng

Podcast EP308: How Clockwork Optimizes AI Clusters with Dan Zheng
by Daniel Nenni on 09-26-2025 at 10:00 am

Daniel is joined by Dan Zheng, VP of Partnerships and Operations at Clockwork. Dan was the General Manager for Product and Partnerships at Urban Engines which was acquired by Google in 2016. He has also held roles at Stanford University and Google.

Dan explores the challenges of operating massive AI hardware infrastructure at scale with Daniel. It turns out it can be quite difficult to operate modern GPU clusters efficiently. Communication bottlenecks within clusters and between clusters, stalled pipelines, network issues, and memory issues can all contribute to the problem. Debugging these issues can be difficult and Dan explains that re-starts from prior checkpoints can happen many times in large AI clusters and each of these events can waste many thousands of GPU hours.

Dan also describes Clockwork’s FleetIQ Platform and how this technology addresses these situations by providing nanosecond-accurate visibility correlated across the stack. The result is more efficient and productive AI clusters, allowing far more work to be accomplished. This provides more AI capability with the same hardware, essentially democratizing access to AI.

Contact Clockwork

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview with Howard Pakosh of TekStart

CEO Interview with Howard Pakosh of TekStart
by Daniel Nenni on 09-26-2025 at 6:00 am


Howard Pakosh is a serial entrepreneur and angel investor. Mr. Pakosh is also Founder & CEO of the TekStart Group, a Toronto-based boutique incubator focusing on Fractional-C business development support, as well as developing, promoting and licensing technology into markets such as blockchain, Internet-of-Things (IoT) and Semiconductor Intellectual Property (SIP). TekStart is currently an early-stage investor with Piera Systems (CleanTech), Acrylic Robotics (Robotics), Low Power Futures  (Semiconductors), ChipStart (Semiconductors), and Freedom Laser (Digital Health).

Mr. Pakosh has been involved in all phases of semiconductor development for over 30 years and was instrumental in the delivery of the first commercially available USB subsystem at his first IP startups, Xentec and Elliptic Technologies Inc. (both sold to Synopsys Inc.). Other ventures he’s recently led include the development of Micron’s Hybrid Memory Cube controller and the development of the most power-efficient crypto-processing ASICs for the SHA-256 and SCRYPT algorithms.

Tell us about your company.

When I started TekStart® in 1998, the mission was clear: give bold ideas the resources and leadership they need to become thriving businesses. The semiconductor field has always been high-stakes, demanding both creativity and flawless execution. Over time, TekStart has shifted from a commercialization partner to a true venture builder, now concentrating heavily on semiconductors and AI. Our purpose hasn’t changed. We exist to help innovators succeed. What has changed are our methods, which have adapted to an industry that’s become more global, more competitive, and far more complex.

What new features or technology are you working on?

What I’m excited to share is a breakthrough we have achieved through our ChipStart® business unit. With Newport by ChipStart, we’ve proven we’re not only enabling innovation but driving it ourselves. Achieving up to 65 TOPS of performance at under 2 watts is a leap forward, unlocking a new level of performance-per-watt that opens doors to applications once thought impossible.

What problems are you solving?

The semiconductor industry faces three defining challenges: fragile supply chains, the demand for radical energy efficiency, and the relentless race to market. Newport by ChipStart is built to meet these challenges head-on. Instead of designs tied to exotic nodes, we enable resilient architectures that keep innovation moving, even in uncertain times. Instead of incremental power gains, we push for performance that redefines efficiency by delivering more capability per watt. Instead of waiting on the pace of new fabs, we help innovators leap from concept to production silicon faster than ever. Newport isn’t just solving today’s problems. It’s shaping the future of how chips get built.

What application areas are your strongest?

We see the greatest impact in Edge devices that demand real-time intelligence without relying on the cloud. Security and surveillance systems, for example, need to analyze video on-site to detect threats instantly, without the latency of sending data off-premise. In agriculture, sensors and vision systems powered by AI can monitor crops, optimize water use, and detect early signs of disease, helping farmers boost yields sustainably. AR/VR wearables require high-performance AI that runs efficiently in small, battery-constrained form factors, enabling immersive experiences without bulky hardware. And in industrial automation, factories are increasingly reliant upon AI-driven systems to inspect products, predict equipment failures, and streamline processes. These are just a few of the areas where Edge AI is not just useful but transformative, and where Newport by ChipStart is purpose-built to deliver.

What keeps your customers up at night?

The pace of innovation in semiconductors and AI has never been faster, and it’s only accelerating. Our customers worry about launching a product only to find it outdated months later. Staying relevant requires moving from concept to market at unprecedented speed – and doing so without compromising quality or performance. That’s where TekStart, through Newport by ChipStart, makes a real difference. We partner closely with innovators to compress development cycles and deliver silicon that keeps pace with today’s AI-driven world. By helping our partners beat obsolescence, we ensure they stay ahead in markets where timing is everything.

What does the competitive landscape look like and how do you differentiate?

Competition in our space revolves around two unforgiving dimensions: time-to-market and innovation. Both demand relentless execution to stay ahead. We differentiate by combining deep semiconductor expertise with an ecosystem of partners who bring complementary strengths in design, manufacturing, and deployment. Our team has decades of hands-on experience across ASIC design, operations, and AI applications. When combined with our extended network, we’re able to anticipate shifts in technology and deliver solutions that arrive ahead of the curve. This balance of speed and foresight is what keeps our customers competitive and what sets us apart in a crowded landscape.

How do customers normally engage with your company?

We typically engage through close collaboration across the semiconductor supply chain. That means working side-by-side with fab houses, manufacturers, and technology partners to ensure our products integrate seamlessly into their final deliverables. By embedding our solutions at the heart of their systems – whether it’s in smart cameras, connected devices, or industrial machinery – we help our partners to accelerate their own roadmaps. These collaborations go beyond transactions. They’re strategic partnerships designed to align our innovation with their market needs.

Also Read:

TekStart Group Joins Canada’s Semiconductor Council

CEO Interview with Barun Kar of Upscale AI

CEO Interview with Adam Khan of Diamond Quanta


SkyWater Technology Update 2025

SkyWater Technology Update 2025
by Daniel Nenni on 09-25-2025 at 10:00 am


SkyWater Technology, a U.S. based pure-play semiconductor foundry, has made significant strides in 2025 reinforcing its position as a leader in domestic semiconductor manufacturing. Headquartered in Bloomington, Minnesota, SkyWater specializes in advanced innovation engineering and high volume manufacturing of differentiated integrated circuits. The company’s Technology as a Service model streamlines development and production, serving diverse markets including aerospace, defense, automotive, biomedical, industrial, and quantum computing.

A major milestone in 2025 was SkyWater’s acquisition of Infineon Technologies’ 200 mm semiconductor fab in Austin, Texas (Fab 25), completed on June 30. This acquisition added approximately 400,000 wafer starts per year, significantly boosting SkyWater’s capacity. Fab 25 enhances the company’s ability to produce foundational chips for embedded processors, memory, mixed-signal, RF, and power applications. By converting this facility into an open-access foundry, SkyWater strengthens U.S. semiconductor independence, aligning with national security and reshoring trends. The acquisition, funded through a $350 million senior secured revolving credit facility, also added about 1,000 employees to SkyWater’s workforce, bringing the total to approximately 1,700.

On July 29, SkyWater announced a license agreement with Infineon Technologies, granting access to a robust library of silicon-proven mixed-signal design IP. Originally developed by Cypress Semiconductor, this IP is validated for high-volume automotive-grade applications and is integrated into SkyWater’s S130 platform. The portfolio includes ADCs, DACs, power management, timing, and communications modules, enabling customers to design high-reliability mixed-signal systems-on-chip within a secure U.S. supply chain. This move positions SkyWater as a trusted partner for both commercial and defense markets, reducing design risk and accelerating time to market.

SkyWater’s financial performance in 2025 reflects steady progress. The company reported second-quarter results at the upper end of expectations, with trailing 12-month revenue of $290 million as of June 30. However, its Advanced Technology Services (ATS) segment faced near-term softening due to federal budget delays impacting Department of Defense funding. Despite this, SkyWater remains confident in achieving record ATS revenue in 2025, provided funding issues are resolved. The company’s stock price stands at around $10.56, with a market capitalization of $555 million and 48.2 million shares outstanding.

Strategically, SkyWater is capitalizing on emerging technologies. Its collaboration with PsiQuantum to develop silicon photonic chips for utility-scale quantum computing highlights its expertise in cutting-edge applications. Additionally, SkyWater adopted YES RapidCure systems for its M-Series fan-out wafer level packaging (FOWLP) in partnership with Deca Technologies, enhancing prototyping speed and reliability for advanced packaging. These initiatives align with SkyWater’s focus on high-margin, innovative solutions, positioning it as a strategic partner in quantum computing and photonics.

SkyWater’s commitment to U.S.-based manufacturing and its DMEA-accredited Category 1A Trusted Foundry status underscore its role in supporting critical domestic markets. The company’s facilities are certified for aerospace (AS9100), medical (ISO13485), automotive (IATF16949), and environmental (ISO14001) standards, ensuring high-quality production. Despite challenges like funding delays and integration risks from the Fab 25 acquisition, SkyWater’s focus on innovation, strategic partnerships, and capacity expansion positions it for long-term growth. Analysts view SkyWater as a strong investment, with a 24-36 month price target of $20, reflecting confidence in its de-risked business model and alignment with U.S. reshoring trends.

Also Read:

Podcast EP307: An Overview of SkyWater Technology and its Goals with Ross Miller

Rapidus, IBM, and the Billion-Dollar Silicon Sovereignty Bet

TSMC 2025 Update: Riding the AI Wave Amid Global Expansion

Intel’s Commitment to Corporate Responsibility: Driving Innovation and Sustainability


TSMC’s Push for Energy-Efficient AI: Innovations in Logic and Packaging

TSMC’s Push for Energy-Efficient AI: Innovations in Logic and Packaging
by Daniel Nenni on 09-25-2025 at 8:00 am


In his keynote at the TSMC OIP Ecosystem Forum, Dr. LC Lu, TSMC Senior Fellow and Vice President, Research & Development / Design & Technology Platform, highlighted the exponential rise in power demand driven by AI proliferation. AI is embedding itself everywhere, from hyperscale data centers to edge devices, fueling new applications in daily life.

Evolving models, including embodied AI, chain-of-thought reasoning, and agentic systems, demand larger datasets, more complex computations, and extended processing times. This surge has led to AI accelerators consuming 3x more power per package in five years, with deployments scaling 8x in three years, making energy efficiency paramount for sustainable AI growth.

TSMC’s strategy focuses on advanced logic and 3D packaging innovations, coupled with ecosystem collaborations, to tackle this challenge. Starting with logic scaling, TSMC’s roadmap is robust: N2 will enter volume production in the second half of 2025, N2P is slated for next year, A16 with backside power delivery arrives by late 2026, and A14 is progressing smoothly.

Enhancements to N3 and N5 continue to add value. From N7 to A14, speed at iso-power rises 1.8x, while power efficiency improves 4.2x, with each node offering about 30% power reduction over its predecessor. A16’s backside power targets AI and HPC chips with dense networks, yielding 8-10% speed gains or 15-20% power savings versus N2P.
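As a rough sanity check of how per-node gains compound into the quoted 4.2x figure, the sketch below assumes roughly four full node transitions at about 30% power reduction each; the number of transitions is my assumption for illustration, not a TSMC statement.

```python
# Rough consistency check (my arithmetic, not TSMC data): compounding ~30%
# power reduction per full node over an assumed four node transitions.
per_node_power = 0.70            # ~30% power reduction per node at iso-speed
nodes = 4                        # assumed number of full node transitions
remaining = per_node_power ** nodes
print(f"power ratio ~{remaining:.2f}, efficiency gain ~{1 / remaining:.1f}x")
# -> power ratio ~0.24, efficiency gain ~4.2x, consistent with the keynote figure
```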

N2 Nanoflex DTCO optimizes designs for dual high-speed and low-power cells, achieving 15% speed boosts or 25-30% power reductions. Foundation IP innovations further enhance efficiency. Optimized transmission gate flip-flops cut power by 10% with minimal speed (2%) and area (6%) trade-offs, sometimes outperforming state gate variants.

Dual-rail SRAM with turbo/nominal modes delivers 10% higher efficiency and 150mV lower Vmin, with area penalties optimized away. Compute-in-memory stands out: TSMC’s digital CIM-based Deep Learning Accelerator offers 4.5x TOPS/W and 7.8x TOPS/mm² over traditional 4nm DLAs, scaling from 22nm to 3nm and beyond. TSMC invites partnerships for further CIM advancements.

AI-driven design tools amplify these gains. Synopsys’ DSO.ai is the leader, with reinforcement learning for PPA optimization improving power efficiency by 5% in APR flows and 2% in metal stacks, for a total of 7%. For analog designs, integrations with TSMC APIs yield 20% efficiency boosts and denser layouts. AI assistants accelerate analysis 5-10x via natural language queries for power distribution insights.

Shifting to 3D packaging, TSMC’s 3DFabric includes SoIC for silicon stacking, InFO for mobile/HPC chiplets, CoWoS for logic-HBM integration, and SoW for wafer-scale AI systems. Energy-efficient communication sees 2.5D CoWoS improving 1.6x as microbump pitches shrink from 45µm to 25µm. 3D SoIC boosts efficiency 6.7x over 2.5D, though with smaller integration areas (1x reticle vs. 9.5x). Die-to-die IPs, aligned with the UCIe standard, are available from partners like Alphawave and Synopsys.

HBM integration advances: HBM4 on TSMC’s N12 logic base die provides 1.5x bandwidth and efficiency over HBM3e DRAM dies. N3P custom bases reduce voltage from 1.1V to 0.75V. Silicon photonics via co-packaged optics offers 5-10x efficiency, 10-20x lower latency, and compact forms versus pluggables. AI optimizations from Synopsys/ANSYS enhance this by 1.2x through co-design.

Decoupling capacitance innovations using Ultra High-Performance Metal-Insulator-Metal plus Embedded Deep Trench Capacitor enable 1.5x power density without integrity loss, modeled by Synopsys/ANSYS tools. EDA-AI automates EDTC insertion (10x productivity) and substrate routing (100x, with optimal signal integrity).

Bottom line: Moore’s Law is alive and well. Logic scaling delivers 4.2x efficiency from N7 to A14, CIM adds 4.5x, and IP/design innovations contribute 7-20%. Packaging yields 6.7x from 2.5D to 3D, 5-10x from photonics, and 1.5-2x from HBM and decoupling capacitor advances, with AI boosting productivity 10-100x.

TSMC honored partners with the 2025 OIP Awards for contributions in A14/A16 infrastructure, multi-die solutions, AI design, RF migration, IP, 3D Fabric, and cloud services. It is all about the ecosystem, absolutely.

Exponential AI power needs demand such innovations. TSMC’s collaborations drive 5-10x gains fostering efficient, productive AI ecosystems. Looking ahead, deeper partnerships will unlock even more iterations for sustainable AI advancement.

Also Read:

MediaTek Develops Chip Utilizing TSMC’s 2nm Process, Achieving Milestones in Performance and Power Efficiency

TSMC’s 2024 Sustainability Report: Pioneering a Greener Semiconductor Future

TSMC 2025 Update: Riding the AI Wave Amid Global Expansion


Semiconductor Equipment Spending Healthy

Semiconductor Equipment Spending Healthy
by Bill Jewell on 09-24-2025 at 4:00 pm


Global spending on semiconductor manufacturing equipment totaled $33.07 billion in the 2nd quarter of 2025, according to SEMI and SEAJ. 2Q 2025 spending was up 23% from 2Q 2024. China had the largest spending at $11.36 billion, 34% of the total. However, China spending in 2Q 2025 was down 7% from 2Q 2024. Taiwan had the second largest amount and experienced the fastest growth, with 2Q 2025 spending $8.77 billion, up 125% from 2Q 2024. TSMC was the major driver of the increase in Taiwan, with its capital expenditures (CapEx) up 62% in the first half of 2025 versus the first half of 2024. South Korea spending was the third largest at $5.91 billion, up 31% from a year earlier.

North America showed the fastest growth in semiconductor equipment spending in 2024, with 4Q 2024 spending of $4.98 billion, up 163% from $1.89 billion in 1Q 2024. However, North America spending in 1Q 2025 was $2.93 billion, down 41% from 4Q 2024. 2Q 2025 spending was down again at $2.76 billion. The spending drop can be attributed to delays in planned wafer fabs in the U.S. Intel has delayed completion of its wafer fab in New Albany, Ohio, until 2031 from its initial plan of 2025. Groundbreaking on Micron Technology’s wafer fab in Clay, New York, has been delayed until late 2025 from its original target of June 2024. Samsung reportedly delayed initial production at its new wafer fab in Taylor, Texas, to 2027 from an original goal of 2024.

Semiconductor equipment spending in Japan in 2Q 2025 was $2.68 billion, up 66% from 2Q 2024. Europe spending in 2Q 2025 was $0.72 billion, down 23% from a year earlier. Spending in the rest of the world (ROW) was $0.87 billion, down 28%.

The outlook for total semiconductor capital expenditures (CapEx) in 2025 remains essentially the same as our Semiconductor Intelligence estimates published in March 2025. We still project 2025 CapEx of $160 billion, up 3% from $155 billion in 2024. The outlook for 2026 CapEx is mixed. Intel expects CapEx to be lower in 2026 than its expected $18 billion in 2025. Micron Technology reported $13.8 billion in CapEx for its fiscal year ended in August 2025 and plans higher spending in fiscal year 2026. Texas Instruments projects 2026 CapEx of $2 billion to $5 billion compared to $5 billion in 2025. The company with the largest CapEx, TSMC, projects a range of $38 billion to $42 billion in 2025. TSMC has not provided CapEx estimates for 2026, but investment bank Needham and Company predicts TSMC will increase CapEx to $45 billion in 2026 and $50 billion in 2027.

The U.S. CHIPS and Science Act was passed in 2022 to boost semiconductor manufacturing in the U.S. As reported by IEEE, most of the $30 billion proposed in the CHIPS Act was awarded in the two months after President Trump’s election in November 2024 and before his inauguration in January 2025. The Trump administration wants to revise the CHIPS Act but has not offered specific plans. In August, the U.S. government made an $8.9 billion investment in Intel for a 9.9% stake in the company. $5.7 billion of the investment came from grants approved but not yet awarded to Intel under the CHIPS Act. The remaining $3.2 billion in funding came from the Secure Enclave program which was awarded to Intel in September 2024. A contributor to Forbes questions the wisdom of the Intel investment.

U.S. Commerce Secretary Howard Lutnick is reportedly considering the U.S. government taking shares in other companies which have received money under the CHIPS Act. Thus, the Trump administration seems to be changing the terms of the CHIPS Act which was approved by Congress in 2022. Without any approval from Congress, the Trump administration is apparently taking back grant money and using it for equity investments.

Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry – manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Bill Jewell
Semiconductor Intelligence, LLC
billjewell@sc-iq.com

Also Read:

Semiconductors Still Strong in 2025

U.S. Imports Shifting

Electronics Up, Smartphones down