
Formal Verification Best Practices

by Daniel Payne on 03-17-2026 at 10:00 am


How do I know when my hardware design is correct and meets all of its specifications? For many years the answer was simple: simulate as much as you can in the time the schedule allows, then hope for the best when silicon arrives for testing. Formal verification offers a complementary method of ensuring that a hardware design meets its specifications: a mathematical technique that proves the design functions properly under all possible cases. Nicolae Tusinschi of Siemens EDA wrote a white paper on this topic, so this blog shares what I learned about formal verification.

Simulation only exercises the sequential inputs you provide, in the hope of coming close to covering all of the states in a design. Formal verification is an exhaustive approach: it analyzes all possible states and input combinations, finding any potential violations of the intended behavior.

For formal verification, engineers write assertions, assumptions, and cover properties in languages such as SystemVerilog Assertions (SVA) or the Property Specification Language (PSL), depending on preference. An assertion states the expected behavior of the design. Assumptions (constraints) limit the formal analysis to valid input sequences and states. Cover properties report how completely the design has been verified. These pieces direct the formal verification methodology, showing that the design will work as specified under all conditions.
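
These three property types can be illustrated with a short SVA sketch. This is a generic example with hypothetical signal names (req, ack), not code from the white paper:

```systemverilog
// Hypothetical handshake checker: req must be acknowledged within 3 cycles.
module handshake_props (input logic clk, rst_n, req, ack);

  // Assertion: expected design behavior the formal tool must prove.
  assert_req_ack: assert property (@(posedge clk) disable iff (!rst_n)
    req |-> ##[1:3] ack);

  // Assumption (constraint): restrict the analysis to legal stimulus,
  // here requiring req to stay high until it is acknowledged.
  assume_req_hold: assume property (@(posedge clk) disable iff (!rst_n)
    req && !ack |=> req);

  // Cover property: confirm an interesting scenario is reachable.
  cover_back_to_back: cover property (@(posedge clk) disable iff (!rst_n)
    ack ##1 req);

endmodule
```

Bound to the design under test, the assertion is proven, the assumption shapes the legal input space, and the cover property confirms the analysis actually reaches the scenario of interest.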

Pre-built formal applications make formal techniques usable without years of formal experience. One formal app checks for clock domain crossing issues, identifying synchronization problems without any specialized knowledge. Control logic, with its limited state space, was an early target for formal verification, yet newer formal tools can also verify data paths with their much larger space of data values and operations. Siemens offers a full suite of formal verification solutions in Questa One Static and Formal Verification (SFV).

Formal verification does have limits in terms of memory usage and total state space, so on large designs it is recommended to apply formal selectively to critical components and rely on simulation for the rest.

Formal analysis can get stuck and report inconclusive results if the design is highly complex or the assertions are beyond the tool’s capacity to compute in one run. A large state space, deep sequential depth, or property complexity can all limit formal analysis. Bounded proofs check an assertion only within a certain number of clock cycles, producing results in a more feasible amount of time. In addition, formal tools help identify limits by reporting the “cone of influence,” the logic that affects each property being verified. Questa One SFV shows the logic cone of influence, listing assumptions and signals, allowing you to address the complexity and perhaps remove or add assumptions.

For best results it’s recommended that assertions be written simply and with short sequential depth. The white paper provides decomposition examples for modular partitioning across multiple sub-modules. Counters create large sequential depth but can be abstracted by replacing them with smaller counters or a non-deterministic model. Large memories can create an enormous number of state bits, so you can either black-box the memory model or reduce the memory size.

As an example of abstracting memory, consider a memory with 128 entries and a 64-bit data width. Every address except address 100 can be abstracted with the netlist cutpoint command below, reducing the state bits from 8,192 to just 64 and improving the formal runtime.

# Cut every memory entry except address 100, leaving only
# one 64-bit entry (64 state bits) visible to formal analysis.
for {set i 0} {$i < 128} {incr i} {
    if {$i != 100} {
        netlist cutpoint memory_instance.mem[$i]
    }
}

Summary

This white paper on formal verification shares many examples of where and how to use the technology most effectively. Best practices include applying formal strategically, combining formal with simulation, iteratively refining where formal is used, and documenting the assertions, assumptions, and abstractions for other team members. Formal verification gives you a mathematical proof that your design is correct, instead of a design that is probably correct. Safety-critical and high-reliability designs benefit greatly from the higher level of assurance formal verification provides.

The larger and more complex the design, the greater the role formal verification plays. Formal apps cut down the learning curve and produce verification results faster.

Read the entire 19-page white paper here.

Related Blogs


The First Real RISC-V AI Laptop

by Jonah McLeod on 03-17-2026 at 6:00 am


At a workshop in Boston on February 27, something subtle but important happened. Developers sat down in front of a RISC-V laptop, installed Fedora, and ran a local large language model. No simulation. No dev board tethered to a monitor. A laptop.

For more than a decade, RISC-V advocates have promised that the open instruction set would eventually reach mainstream computing devices. Until now the reality has mostly been evaluation boards, embedded systems, and research platforms. The ROMA II laptop changes that equation. Developers can treat it like a normal PC—boot it, install Linux, run software, try AI. The Boston event, part of World RISC-V Days and co-sponsored by DeepComputing, Red Hat, and RISC-V International, was less a product launch than a proving ground. Attendees worked directly with the hardware, tuned the operating system, and pushed the machine hard enough to reveal what works and what still doesn’t. In any ecosystem, a platform becomes real the moment developers start breaking it.

The machine itself is built around the SpacemiT K1, a RISC-V system-on-chip aimed at edge AI and general computing. It isn’t trying to compete with Apple’s M-series or Qualcomm’s new AI PC processors; the ambition is different. This is an open-ISA developer machine, designed to explore what an AI laptop built around RISC-V actually looks like. The architecture combines three compute domains: an eight-core 64-bit CPU complex clocked around 2 GHz; a 256-bit implementation of the RISC-V Vector Extension (RVV 1.0); and a fixed-function neural processor called the AI Fusion Engine delivering roughly two tera-operations per second.

The scalar cores run the operating system and application logic, the vector engine handles the messy middle ground of AI workloads—quantization, dequantization, normalization, and data reshaping—while the NPU accelerates the dense matrix multiplications that dominate transformer inference. Readers unfamiliar with RVV can find a practical introduction in Dr. Thang Tran’s RISC-V Vector Primer on GitHub (https://github.com/simplex-micro/riscv-vector-primer). Memory comes from LPDDR4X, up to sixteen gigabytes, paired with NVMe storage, all packaged inside a Framework-compatible modular chassis. It is very clearly a developer’s laptop.

The Boston workshop centered on Fedora Linux, and that choice was deliberate. Red Hat has been quietly treating RISC-V as a serious upstream architecture target, and the event exposed how far that effort has progressed. Participants booted Fedora on the ROMA II hardware, examined kernel support, checked package coverage, and explored the gaps that still need attention. For the first time, a mainstream Linux distribution ran interactively on a RISC-V laptop in a public developer workshop. A few years ago that alone would have been notable; what came next mattered even more.

The demonstration shifted quickly from operating systems to AI. Developers loaded compact language models—roughly one to three billion parameters—and ran inference locally. Tokens appeared in real time. Quantization settings changed. Thermal behavior became visible. The point wasn’t to prove that RISC-V could compete with GPU servers; the goal was simpler: show that local AI actually works on the platform. Several patterns emerged almost immediately. The NPU proved essential; CPU-only inference slows dramatically once models move beyond trivial size. The vector engine quietly handled much of the surrounding workload—quantization, KV-cache updates, normalization, reshaping—exactly the kind of glue logic modern AI systems require. The execution model looked familiar: CPU orchestrates, NPU performs the heavy math, vector units handle the data transformations in between.
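
The "glue" operations mentioned above are element-wise transforms well suited to a vector engine. As a rough, generic illustration (a simple symmetric int8 scheme, not SpacemiT's actual kernels):

```python
# Symmetric int8 quantization/dequantization: the element-wise "glue"
# work a vector engine handles around the NPU's matrix math.
# Generic sketch for illustration, not SpacemiT's implementation.

def quantize_int8(values):
    """Map floats to int8 codes with a single per-tensor scale."""
    max_abs = max(abs(v) for v in values) or 1.0
    scale = max_abs / 127.0
    return [max(-128, min(127, round(v / scale))) for v in values], scale

def dequantize_int8(codes, scale):
    """Recover approximate floats from int8 codes."""
    return [c * scale for c in codes]

if __name__ == "__main__":
    codes, scale = quantize_int8([0.5, -1.0, 0.25, 0.75])
    print(codes)
    print([round(v, 3) for v in dequantize_int8(codes, scale)])
```

On hardware, loops like these map naturally onto vector instructions that process many elements per cycle, which is why the vector engine absorbs this work while the NPU focuses on matrix multiplication.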

The real constraint turned out to be memory bandwidth. LPDDR4X limits throughput once models approach roughly three billion parameters, which is one reason DeepComputing positions ROMA II as a developer platform rather than a consumer AI laptop. Even so, the system proved stable under sustained load. Developers ran inference long enough to observe predictable thermal throttling behavior, stable kernel drivers, and no crashes or hangs. For a first-generation RISC-V laptop platform, that level of stability matters more than benchmark numbers.

The machine already demonstrates several things the ecosystem has been waiting for: it runs Fedora natively, executes real LLM workloads locally, and operates within a fully open instruction-set ecosystem. The modular Framework chassis makes it attractive for engineers working on kernels, drivers, and machine-learning software. At the same time, its limits are obvious. Two TOPS of NPU performance supports small models but not larger seven-billion-parameter networks; CPU performance sits in the mid-range compared with modern laptop processors; memory bandwidth constrains scaling; the GPU contributes little to machine-learning workloads for now. ROMA II is not a consumer AI laptop—it is a developer workstation for the RISC-V ecosystem.

Still, the Boston workshop signals something broader. For years, discussions about RISC-V laptops lived mostly in presentations and roadmaps. Here developers were installing Linux, compiling software, and running AI on real hardware. That combination changes the conversation. When engineers can treat a platform like a normal computer—boot it, modify it, push it until it breaks—the architecture stops being a research topic and becomes an engineering target.

DeepComputing’s roadmap already points toward the next step. The upcoming DC-ROMA AI PC moves to an ESWIN dual-die system-on-chip with eight SiFive P550 cores, roughly forty TOPS of NPU performance, and thirty-two to sixty-four gigabytes of LPDDR5 memory, alongside a custom vector processing cluster and compatibility with the Framework Laptop 13 chassis. That level of compute should support four-to-seven-billion-parameter models comfortably. Seen in that light, ROMA II is less an endpoint than a bridge.

What happened in Boston may look small from the outside—a room full of developers installing Linux and running a language model—but these moments are how ecosystems turn. A laptop boots, software runs, developers start experimenting. At that point the architecture stops being hypothetical, and RISC-V personal computing starts to look real.

Also Read:

The Evolution of RISC-V and the Role of Andes Technology in Building a Global Ecosystem

The Launch of RISC-V Now! A New Chapter in Open Computing

Pushing the Packed SIMD Extension Over the Line: An Update on the Progress of Key RISC-V Extension


AI-Driven Automation in Semiconductor Design: The Fuse EDA AI Agent

by Daniel Nenni on 03-16-2026 at 1:30 pm

The semiconductor industry is experiencing unprecedented growth in complexity as advanced process nodes, heterogeneous integration, and AI-driven workloads demand increasingly sophisticated chip designs. At the same time, semiconductor companies face rising design costs, increasing engineering workloads, and a shrinking talent pool. To address these challenges, Siemens has introduced the Fuse EDA AI Agent, an agentic artificial intelligence system designed to automate and optimize electronic design automation (EDA) workflows. This platform represents a major step toward AI-native semiconductor design by enabling end-to-end automation across the entire chip development lifecycle.

One of the key drivers behind the development of the Fuse EDA AI Agent is the rapid escalation in design complexity for modern system-on-chip (SoC) devices. As semiconductor process nodes shrink from 28 nm to advanced nodes such as 3 nm and below, the number of engineering hours required for design and verification increases significantly. The cost of developing a leading-edge SoC can exceed $300 million, making productivity improvements essential for maintaining innovation and competitiveness. Additionally, workforce shortages in the semiconductor industry further increase pressure on design teams.

Artificial intelligence has emerged as a promising solution to improve design efficiency and productivity. According to industry projections, AI-powered EDA tools could deliver more than a 50% productivity boost for chip designers by automating repetitive tasks, accelerating analysis, and enabling smarter decision-making throughout the design process.

The Fuse EDA AI Agent builds on this concept by introducing agentic automation that can plan, orchestrate, and execute complex design workflows.

Traditional EDA workflows involve many steps, including data preparation, tool configuration, simulation, verification, and reporting. These tasks often require engineers to manually coordinate multiple software tools, which can significantly slow development cycles. The Fuse EDA AI Agent addresses this limitation by integrating AI agents capable of managing these processes automatically. These agents can analyze design data, launch simulation tools, validate results, and generate reports without continuous human intervention. By automating these tasks, engineers can focus on higher-level design innovation rather than repetitive operational activities.
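
The plan-execute-validate loop described above can be sketched abstractly. Everything here is a hypothetical illustration of agentic orchestration in general, not the Fuse EDA AI Agent's actual API:

```python
# Hypothetical agentic loop: run each workflow step through a tool,
# validate the result, and retry failures. Illustrative only; not
# the Fuse EDA AI Agent API.

def run_agentic_flow(steps, run_tool, validate, max_retries=2):
    """Execute workflow steps, retrying a failed tool run before reporting."""
    report = []
    for step in steps:
        for attempt in range(max_retries + 1):
            result = run_tool(step)
            if validate(step, result):
                report.append((step, "pass", attempt))
                break
        else:  # no attempt validated
            report.append((step, "fail", max_retries))
    return report

if __name__ == "__main__":
    flow = ["prepare_data", "configure", "simulate", "verify", "report"]
    summary = run_agentic_flow(
        flow,
        run_tool=lambda step: f"{step}:ok",         # stand-in for launching a tool
        validate=lambda step, r: r.endswith("ok"),  # stand-in for checking results
    )
    print(summary)
```

The value of the agentic approach is that the plan, tool invocations, and validation checks are produced and adapted by AI agents rather than hand-coded, but the control flow they automate resembles this loop.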

The architecture of the Fuse EDA AI system is built on several core technological pillars. These include agent-native workflows, multimodal data management, flexible deployment, granular access control, and multiple integration points for design tools and development environments. Together, these components enable the platform to support complex semiconductor design environments while maintaining high levels of security and scalability.

A key feature of the system is its multimodal EDA data lake, which aggregates large volumes of design data from various sources. Semiconductor design workflows generate diverse data formats such as netlists, layout files, simulation logs, and waveform data. The AI system is capable of parsing and analyzing these formats using specialized domain knowledge trained on semiconductor design workflows. This capability allows the AI agents to interpret design information accurately and generate actionable insights.

Another major innovation of the Fuse platform is its integration with existing Siemens EDA tools, including Calibre, Questa, Tessent, Aprisa, and Xpedition. The system can also interface with third-party development tools through standardized APIs. This open architecture ensures that engineers can adopt AI automation without replacing their existing toolchains. By integrating seamlessly with established EDA environments, Fuse enhances productivity while preserving established design methodologies.

The Fuse EDA AI Agent also introduces agentic workflows, in which multiple AI agents collaborate to complete design tasks. Instead of performing isolated operations, these agents can plan tasks, execute tool operations, analyze results, and iterate on design improvements. Over time, the system can deploy parallel teams of AI agents to address multiple design challenges simultaneously. This distributed AI approach allows semiconductor companies to scale their design processes and reduce overall development time.

Another critical component of the system is its reliance on high-performance computing infrastructure. GPU-accelerated hardware and advanced AI models enable faster simulation and analysis, reducing runtimes that previously required weeks to just hours or minutes. This acceleration significantly shortens design cycles and allows engineers to explore more design alternatives during development.

Ultimately, the Fuse EDA AI Agent aims to deliver three primary benefits: improved design productivity, higher design quality, and an open development ecosystem. By automating complex workflows and leveraging domain-specific AI intelligence, the platform helps engineers produce more reliable designs while reducing time-to-market. At the same time, its open architecture enables collaboration between EDA vendors, foundries, and semiconductor companies, creating a more integrated design ecosystem.

Bottom line: The Fuse EDA AI Agent represents a significant evolution in electronic design automation. By combining agentic AI, domain-specific knowledge, and high-performance computing, the platform transforms how semiconductor devices are designed and verified. As chip complexity continues to increase and AI-driven applications demand more advanced hardware, solutions like the Fuse EDA AI Agent will play a crucial role in enabling the next generation of semiconductor innovation.

 Siemens launches Fuse EDA AI Agent | Siemens

Also Read:

Siemens Reveals Agentic Questa

Functional Safety Analysis of Electronic Systems

Perforce and Siemens Collaborate on 3DIC Design at the Chiplet Summit


TSMC Technology Symposium 2026: Advancing the Future of Semiconductor Innovation

by Daniel Nenni on 03-16-2026 at 10:00 am


One of my favorite times of the year is coming (sailing season), and so is my favorite event of the year: the company I most respect will host the best international semiconductor networking event, starting here in Silicon Valley.

The 32nd annual TSMC Technology Symposium represents one of the most influential events in the global semiconductor industry. Organized annually, the symposium brings together semiconductor designers, technology partners, researchers, and industry leaders to discuss the latest advancements in chip manufacturing, packaging technologies, and system integration. The 2026 symposium continues this tradition by highlighting major developments in advanced semiconductor nodes, AI computing, and system-level innovations that will shape the future of electronics.

To me this really is a collaboration victory lap inside the semiconductor ecosystem acknowledging the amazing products we as semiconductor professionals have brought to life. World Changing Technology, and if I may say, World Saving Technology that allows us to live the lives we live today, absolutely.

The event will be held as part of TSMC’s global symposium series, beginning at my favorite location, Santa Clara, California, and followed by additional sessions in Asia and Europe. These events provide customers and technology partners with updates on TSMC’s semiconductor roadmap and opportunities to collaborate on next-generation chip designs. The symposium focuses on both process technology improvements and the ecosystem required to support modern integrated circuit development.

One of the central themes of the 2026 Technology Symposium will be the rapid growth of artificial intelligence and HPC applications. AI workloads demand extremely powerful processors capable of handling massive data processing and machine learning tasks. TSMC emphasizes how advanced semiconductor manufacturing nodes enable higher transistor densities, improved performance, and lower energy consumption, critical requirements for AI data centers, cloud computing infrastructure, and edge devices. The symposium will demonstrate how TSMC’s technologies are designed to support these increasingly complex workloads.

Another important focus is the advancement of 2nm-class and angstrom-era semiconductor technologies. TSMC has been preparing for the transition from FinFET to angstrom-scale processes with advanced packaging, representing the next stage of semiconductor scaling. A notable technology in this roadmap is the A16 process, which is expected to enter production in the second half of 2026. This node introduces innovations such as nanosheet transistor structures and backside power delivery, known as Super Power Rail. By delivering power from the backside of the chip rather than the front, this architecture improves signal routing efficiency and supports the high current requirements of advanced processors used in AI and high-performance computing systems.

The symposium will also highlight the importance of system-level innovation, not just transistor scaling. Modern semiconductor performance improvements increasingly rely on advanced packaging technologies, heterogeneous integration, and chiplet-based architectures. Instead of building a single large monolithic chip, designers can combine multiple specialized chiplets in one package to achieve higher performance and flexibility. TSMC’s advanced packaging solutions enable this integration while maintaining high bandwidth communication between chip components.

Another significant aspect of the event is the emphasis on the TSMC ecosystem. Semiconductor manufacturing requires collaboration between many companies, including EDA vendors, IP providers, and system developers. The Technology Symposium allows these partners to demonstrate how their tools and technologies work together with TSMC’s process nodes. In addition, the event often features an Innovation Zone, where startups and emerging companies showcase new semiconductor technologies and design solutions.

The broader semiconductor market context will also influence discussions at the symposium. Demand for advanced chips has increased dramatically due to the growth of AI, data centers, and high-performance computing systems. TSMC has responded by rapidly expanding its manufacturing capacity and investing heavily in new fabrication facilities worldwide. These investments are intended to ensure that the company can meet the rising demand for advanced nodes while maintaining its leadership in semiconductor manufacturing.

Bottom line: The 2026 TSMC Technology Symposium highlights the rapid evolution of semiconductor technology and the critical role that advanced manufacturing plays in enabling future computing systems. From breakthroughs in angstrom-scale process nodes to innovations in packaging and AI computing, the event will demonstrate how TSMC continues to push the boundaries of chip design and production. As computing demands continue to grow, the technologies presented at the symposium will play a vital role in shaping the next generation of electronic devices, data centers, and intelligent systems throughout the world.

I hope to see you there!

Also Read:

Global 2nm Supply Crunch: TSMC Leads as Intel 18A, Samsung, and Rapidus Race to Compete

TSMC Process Simplification for Advanced Nodes

TSMC and Cadence Strengthen Partnership to Enable Next-Generation AI and HPC Silicon


Synopsys Explores AI/ML Impact on Mask Synthesis at SPIE 2026

by Mike Gianfagna on 03-16-2026 at 6:00 am


The SPIE Advanced Lithography + Patterning Symposium recently concluded. This is a popular event where leading researchers gather. Challenges such as optical and EUV lithography, patterning technologies, metrology, and process integration for semiconductor manufacturing and adjacent applications are all covered. This was the 50th anniversary event and it was held in San Jose.

Synopsys had a major presence at the event, but the company went a step further by holding a special Lithography VIP Symposium coinciding with the show. Synopsys and its industry partners gave several excellent presentations on EUV mask making and computational lithography. More on that in a moment. The event concluded with a spirited panel discussion that explored how much of AI/ML for mask making is real today and what the practical impact could be. I was honored to host the panel, AI/ML in Mask Synthesis: Hype vs. Reality for Manufacturing. Let’s review how Synopsys explores AI/ML impact on mask synthesis at SPIE 2026.

The Panel

The panel was composed of senior executives from photomask operations, wafer fabs, and EDA. Together, these folks represent a substantial cross-section of the supply chain for advanced mask making. The panelists were:

Representing photomask

  • Dr. Kent Nakagawa, Technology Marketing Director, Tekscend Photomask US Inc.
  • Dr. Arvind Sundaramurthy, Technology Development and Yield Manager, Intel Mask Operations 

Representing wafer fab

  • Dr. Hyung-Joon Chu, Technical Vice President, Foundry OPC Samsung Electronics Device Solutions Division
  • Dr. Dan J. Dechene, Director of Technology Readiness & Digital Transformation, IBM
  • Dr. Seung-Hune Yang, Master, VP of Technology, Optical Proximity Correction Samsung Electronics

Representing EDA for manufacturing

  • Dr. Larry Melvin, Senior Director of Technical Product Management, Synopsys

The panelists are shown below.

This is a formidable group of highly technical and very smart people. When we were done with the introductions, I resisted the temptation to say, is there a doctor in the house? The comments these panelists made over the course of about an hour taught me a lot and gave me great hope for the future.

The Discussion

To kick things off, I asked, What are the most valuable applications in mask solutions and design enablement that AI and GPUs can unlock? I specifically referenced GPUs in the question. Advanced hardware is the key to making all AI relevant in the real world and I wanted to introduce that reference early.

Kent kicked off the panel with a discussion of the exploding complexity of mask requirements, not just at high NA EUV, but also standard EUV and leading-edge  immersion technologies. The new structures that are needed for advanced AI create this challenge. He went on to say that these requirements are often unique to each customer design, so the complexity is driven by designs and not fab processes. Managing all this to deliver high precision masks requires a new approach, and that’s where AI and special purpose hardware will be needed to move forward.

Arvind went next, and he focused on how GPUs are used in the mask shop to enable the required, highly complex processes such as optical simulation for mask defect prediction. The challenges here include data complexity of course, but also data consistency. He said it wasn’t possible to imagine dealing with these problems just a few years ago. GPUs have been instrumental in paving the way forward.

As Hyung-Joon spoke, a pattern began to emerge. He also focused on the critical requirements of complexity management. He explained that when he onboards new staff members, he tells them that OPC (optical proximity correction) really stands for optimization, prediction and correction. He went on to discuss some of the substantial challenges advanced technology presents. He felt AI and GPUs hold the key to deal with these challenges. He also discussed functional AI (e.g., things like resist and etch models) and agentic AI (to automate the process).

He felt that today a solid base of functional AI is most important. He mentioned the significant challenges posed by changes such as moving from conventional OPC to advanced inverse lithography technology (ILT) and dealing with stochastic vs. deterministic models. Agentic AI will add efficiency later.

Dan observed a different aspect of the problem. He discussed the move to 3D design and the substantial challenges required to tame 3D metrology. AI and GPUs again were cited as the way forward. Dan also brought in the importance of collaboration across the supply chain. He pointed out that every company represented on the panel had a piece of the budget required to solve this problem. If the supply chain could collaborate to understand the details of the 3D stack, all involved would benefit and stay in business for a very long time.

Seung-Hune focused on the difficulties of delays in production cycle time. Items such as reticle issues can take a long time (months) to correct. So, using AI and GPUs to increase the maturity of the design would have a significant impact.

Larry stepped back and characterized the problem in a fundamental way. He focused on the requirements of model accuracy in the sub-nanometer range. This physically represents two crystal lattice lengths of silicon.  Looking closer, we’re attempting to control many atoms on a layer, all going to the same place at the same time thousands of times per hour. To collect, process and analyze the massive amount of data required to achieve this can only be done with advanced AI algorithms running on the most advanced hardware.  He went on to point out that all this information must be shared up and down the supply chain from design to manufacturing. This is the only way to achieve enough understanding of the whole process to make it work.

I provided all the details of these responses to paint a picture of the overall mood of the panel: one of substantial reliance on advanced AI and the associated GPU hardware to continue moving forward. That largely settles the question of hype vs. reality for AI/ML in mask synthesis. The panel agreed the technology is real and increasingly necessary, particularly at the leading edge.

My second question examined collaboration aspects: What role do partnerships between EDA vendors, fabs, and equipment suppliers play in accelerating AI/ML innovation for mask solutions and design enablement?

The response from all panelists was quite consistent here. The overall sentiment was that complexity drives the need for more collaboration, making partnerships critical. Recall that the group had already focused on the need for end-to-end analysis of data, which is only achievable with substantial effort across the supply chain. A more direct way to frame it: what are the pain points associated with AI, and how can we verify what we are doing? That is, what are we actually getting out of the AI?

The problem of highly sensitive fab data (think defect density) was also brought up. There was a genuine focus on how to minimize this problem. That is, how to make sure AI models are trained with accurate data. Otherwise, the usefulness of these models is quite limited. Think garbage in, garbage out. Having lived in semiconductors and EDA for many years, the tone of this discussion was quite uplifting for me. This group truly believed that better collaboration is a must to tame the substantial problems before us. I can tell you it wasn’t always like this.

I’ll conclude with the overall response to my last question: How will AI/ML impact mask and lithography workflows over the next five years?

While panelists approached the problem from different parts of the value chain, there was strong alignment on direction and priorities.  Impact always starts at the leading edge. There is an overall conservative attitude in this group. The stakes are too high to do it any other way. This means the leading edge will see benefit in the next five years, but broader impact will likely take longer. In terms of the technology, the feeling was that generative AI will enable better understanding of the models and result in a wider impact, and so that will lead the way. Agentic AI will face larger deployment challenges and will come later.

It was a genuine pleasure to lead this panel discussion. I believe we all came away with an optimistic view of the future: one where the impact of AI, and the importance of collaboration, are understood and valued. Below is a photo of the panel and our executive host from Synopsys, Dr. Kostas Adam (far left).

The Rest of the Session

The Synopsys Lithography VIP Symposium also contained several excellent technical presentations. Here is a summary:

  • Enabling the Future: How GPUs are reshaping computational lithography. Michael Lam, Senior Director of Modeling at Synopsys.
  • Advances in EUV Mask Manufacturing. Arvind Sundaramurthy, Head of Integration and Yield at Intel.
  • The Dimensional Explosion: Navigating Super-Linear Computational Complexity in Semiconductor Scaling. Ryoung-Han Kim, Litho Program Director at imec.
  • Automating EDA with AI – Are agents going to replace engineers? Thomas Andersen, VP Engineering, AI and Innovation at Synopsys.

To Learn More

Synopsys offered many presentations at the SPIE Advanced Lithography + Patterning Symposium. You can see a summary of those presentations here. And that’s how Synopsys explores AI/ML impact on mask synthesis at SPIE 2026.

Also Read:

Agentic AI and the Future of Engineering

Ravi Subramanian on Trends that are Shaping AI at Synopsys

Efficient Bump and TSV Planning for Multi-Die Chip Designs


Unraveling Dose Reduction in Metal Oxide Resists via Post-Exposure Bake Environment

Unraveling Dose Reduction in Metal Oxide Resists via Post-Exposure Bake Environment
by Daniel Nenni on 03-15-2026 at 4:00 pm


In the realm of extreme ultraviolet (EUV) lithography, metal oxide resists (MORs) have emerged as promising candidates for advanced semiconductor patterning. However, their stability poses challenges, particularly post-exposure interactions with the clean-room environment, such as humidity and airborne molecular contaminants (AMCs). Researchers at imec, led by Ivan Pollentier, Fabian Holzmeier, Hyo Seon Suh, and Kevin Dorney, have developed a novel platform called BEFORCE to probe these effects. Presented at SPIE Advanced Lithography + Patterning in February 2026, their work unveils a dose reduction strategy based on optimizing the atmospheric conditions during post-exposure delay (PED) and post-exposure bake (PEB).

BEFORCE integrates a bake and EUV system with Fourier-transform infrared (FTIR) spectroscopy and outgas measurements, enabling precise control over environmental variables. This setup allows evaluation of MOR in controlled atmospheres, addressing stability concerns that arise mostly after EUV exposure. The platform’s design facilitates experiments where gases like nitrogen (N2), carbon dioxide (CO2), and clean air (CA) are mixed with controlled relative humidity (RH%) and oxygen (O2) levels via mass flow controllers (MFCs). Initial findings from imec’s press release on February 25, 2026, highlight BEFORCE’s potential to enhance MOR performance.

A key focus is enhancing EUV dose response through PED/PEB environments. Dose-to-gel (D2G), a metric of photo-speed, serves as the primary indicator. Experiments show that oxygen concentration significantly influences condensation and dose requirements. In atmospheres with less oxygen than standard air (21% O2), condensation is minimal, but increasing O2 to 50% yields a 25-30% reduction in D2G. This suggests oxygen’s role is not saturated at nominal levels; higher concentrations accelerate the photochemical reactions leading to gelation, thus lowering the required EUV dose.
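To make the reported effect concrete, here is a small illustrative calculation of what a 25-30% D2G reduction means in absolute dose terms. The 30 mJ/cm² baseline is a hypothetical number chosen for illustration only; the study reports relative reductions, not this baseline.

```python
# Illustrative arithmetic only: translating a 25-30% dose-to-gel (D2G)
# reduction into absolute EUV doses, for a HYPOTHETICAL baseline dose.

def reduced_dose(baseline_mj_cm2: float, reduction_frac: float) -> float:
    """Required dose after a fractional reduction in D2G."""
    return baseline_mj_cm2 * (1.0 - reduction_frac)

baseline = 30.0  # mJ/cm^2 -- hypothetical, not from the imec study
low_end = reduced_dose(baseline, 0.25)   # 25% reduction
high_end = reduced_dose(baseline, 0.30)  # 30% reduction

print(f"D2G drops from {baseline} to {high_end:.1f}-{low_end:.1f} mJ/cm^2")
```

Since EUV scanner throughput scales roughly inversely with dose, a reduction of this size translates fairly directly into wafer-throughput gains.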

Humidity’s impact is nuanced and interdependent with oxygen. At low O2 levels, higher humidity improves photo-speed, mainly by aiding condensation in oxygen-scarce environments. Graphs from the study depict D2G decreasing sharply with rising humidity under low O2, but the effect plateaus or reverses in high-O2 settings. For instance, at 5% RH, increasing O2 from 0% to 50% reduces D2G by up to 30%. Conversely, in low-O2 conditions, humidity drives a steeper drop in D2G, indicating it compensates for oxygen’s absence in promoting resist cross-linking.

To disentangle PED and PEB contributions, the team conducted separate environment tests. Using a model MOR, they varied conditions: PED in air (21% O2, 45% RH) followed by PEB in vacuum, or vice versa. Results reveal that PEB atmosphere dominates condensation. PEB in air promotes significant film thickness changes indicative of condensation, while vacuum PEB suppresses it, regardless of PED conditions. Preliminary data with 120-second PED/PEB cycles underscore this: vacuum PEB yields higher D2G (slower photo-speed), but air PEB enhances sensitivity. This implies chemical transformations during baking are more sensitive to ambient gases than during delay.

Further inter-relations emerge with PEB temperature and time. At ~5% RH, oxygen trends hold across temperatures, but in zero-O2 environments, photo-speed remains stable, suggesting temperature independence without oxygen. With O2 present, longer PEB times significantly boost photo-speed, hinting at kinetic chemical effects. Kevin Dorney’s related talk (SPIE 13983-50) explores these origins, proposing mechanisms like ligand exchanges in MOR structures (e.g., OH to other groups).

The study opens avenues for co-optimization: tuning O2, humidity, temperature, and time could reduce doses by 25-30%, improving throughput in EUV lithography. For commercial MORs, humidity aids condensation without strong oxygen dependence, aligning with model resists but showing subtler effects.

Bottom Line: imec’s BEFORCE demonstrates that PEB environment is pivotal for MOR dose reduction. By elevating O2 and modulating humidity, manufacturers can enhance sensitivity without compromising stability. Acknowledgments go to Intel and resist suppliers for materials. Funded by the EU’s Chips Joint Undertaking and partners like Belgium and France, this research paves the way for efficient, environmentally tuned EUV processes, potentially revolutionizing high-volume semiconductor fabrication.


Also Read:

TSMC Process Simplification for Advanced Nodes

CEO Interview with Dr. Heinz Kaiser of Schott

Kirin 9030 Hints at SMIC’s Possible Paths Toward >300 MTr/mm2 Without EUV


CEO Interview with Dr. Mohammad Rastegari of Elastix.AI

CEO Interview with Dr. Mohammad Rastegari of Elastix.AI
by Daniel Nenni on 03-15-2026 at 2:00 pm


Mohammad Rastegari is a prominent AI researcher and entrepreneur currently serving as the CEO and Co-Founder of Elastix.AI. Based in the Greater Seattle Area, he also holds the position of Affiliate Assistant Professor at the University of Washington’s Electrical & Computer Engineering Department. His professional background includes high-level leadership roles as a Distinguished AI Scientist at Meta and a Principal AI/ML Manager at Apple. Previously, he was a Research Scientist at the Allen Institute for AI and the Co-founder and CTO of Xnor.ai, which was acquired by Apple in 2020.

Tell us about your company?

Elastix.AI is building a new class of AI inference infrastructure designed to dramatically improve the efficiency, scalability, and adaptability of large-scale model deployment. We combine advanced model optimization with reconfigurable hardware—primarily FPGAs—to deliver high-performance inference without the cost, rigidity, and power constraints of traditional GPU-based systems.

Our mission is to make AI infrastructure fundamentally more efficient and future-proof, enabling organizations to deploy and evolve models at scale without being locked into a single hardware generation.

What problems are you solving?

AI inference is becoming the dominant cost driver in large-scale AI deployments, and current infrastructure is not built for it.

Today’s GPU-centric approach is:
• Expensive (both capex and opex)
• Power-hungry
• Rigid (requires new silicon for every major model shift)

We address these challenges by:
• Reducing cost per inference by up to 10x
• Improving power efficiency by up to 5x
• Enabling hardware reconfiguration as models evolve

This fundamentally changes the economics of deploying LLMs and other generative AI systems at scale.
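To give a feel for what "up to 10x lower cost per inference" means at scale, here is a back-of-the-envelope sketch. The request rate and per-inference price are hypothetical illustration values, not Elastix.AI figures.

```python
# Back-of-the-envelope inference economics under the claimed improvement
# (up to 10x lower cost per inference). All baseline numbers below are
# HYPOTHETICAL, chosen only to illustrate the scale of the savings.

def yearly_cost(requests_per_sec: float, cost_per_1k_inferences: float) -> float:
    """Total serving cost per year for a steady request rate."""
    inferences_per_year = requests_per_sec * 60 * 60 * 24 * 365
    return inferences_per_year / 1000 * cost_per_1k_inferences

baseline = yearly_cost(requests_per_sec=500, cost_per_1k_inferences=0.02)
improved = yearly_cost(requests_per_sec=500, cost_per_1k_inferences=0.02 / 10)

print(f"GPU baseline:  ${baseline:,.0f}/year")
print(f"10x reduction: ${improved:,.0f}/year")
print(f"Savings:       ${baseline - improved:,.0f}/year")
```

Even at this modest request rate, a 10x cost reduction moves annual serving cost from hundreds of thousands of dollars to tens of thousands, which is why inference economics dominate the conversation.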

What application areas are your strongest?

Our strongest focus is large-scale AI inference, particularly:
• Large Language Models (LLMs)
• Generative AI (text, code, multimodal)
• Enterprise AI copilots
• Real-time and latency-sensitive inference workloads

We are especially strong in environments where cost, power, and scalability constraints make GPU-only solutions unsustainable.

What keeps your customers up at night?

Our customers are facing a structural problem:
• Exploding inference costs as usage scales
• Power and data center constraints limiting growth
• Hardware lock-in to GPU vendors
• Rapid model evolution that outpaces hardware refresh cycles

They are asking a fundamental question:

How do we scale AI economically without rebuilding infrastructure every 12–18 months?

What does the competitive landscape look like and how do you differentiate?

The landscape is dominated by GPU vendors and a growing set of ASIC-based accelerators.
• GPUs offer flexibility but are expensive and power-inefficient for inference at scale
• ASICs improve efficiency but are rigid and take years to develop

Elastix.AI sits in a unique position:
• We leverage existing, deployable FPGA infrastructure
• We provide software-hardware-ML co-optimization rather than just silicon
• We enable post-deployment adaptability, not just point-in-time optimization

Our key differentiation is reconfigurability at scale—we can adapt to new models, architectures, and optimizations without requiring new hardware.

What new features/technology are you working on?

We are advancing several areas:
• Next-generation ML optimization techniques tailored for reconfigurable hardware
• Dynamic model-to-hardware mapping, enabling real-time adaptation to workload changes
• Inference orchestration across heterogeneous infrastructure
• Turnkey deployment platforms that abstract hardware complexity from customers

Our goal is to make high-efficiency AI infrastructure as easy to consume as cloud GPUs—but significantly more efficient.

How do customers normally engage with your company?

Customers can engage with us in three ways:
1. Token-as-a-Service
They can sign up directly to buy tokens from the inference services we operate.
2. Software Subscription
They can buy the hardware and sign up for a monthly or yearly subscription to our inference software.
3. Leasing or Purchase
They can purchase or lease a prebuilt rack of our inference infrastructure, fully enabled and ready to generate tokens.

We also partner closely with:
• Cloud providers
• Data center operators
• AI model companies

This allows us to integrate into existing ecosystems while delivering immediate value.

CONTACT ELASTIX.AI

Also Read:

CEO Interview with Jerome Paye of TAU Systems

CEO Interview with Juniyali Nauriyal of Photonect

CEO Interview with Aftkhar Aslam of yieldWerx



Tesla and Samsung Relationship Update

Tesla and Samsung Relationship Update
by Daniel Nenni on 03-15-2026 at 8:00 am


The majority of my 40+ year career has been spent managing the relationship between leading-edge semiconductor design and manufacture, working with just about every commercial foundry and top customer in one way or another. It’s my thing—it fascinates me. I’m also a fan of disruption, and the latest disruptions the semiconductor industry has been experiencing are fiercely entertaining.

Which brings us to the Tesla and Samsung relationship update.

In 2025, Tesla and Samsung Electronics announced a deep strategic partnership centered on manufacturing Tesla-designed AI chips used for self-driving vehicles, robotics, and AI infrastructure. The partnership is expected to evolve across multiple hardware generations and already includes one of the largest semiconductor supply agreements ever signed by an automaker. Below is a breakdown of the relationship covering its history, the technical structure of the chips, the manufacturing arrangements, and the strategic reasons behind the partnership.

Key Takeaways

The Tesla–Samsung relationship is a strategic semiconductor alliance built around three core elements:

  • Fabless model: Tesla designs AI chips; Samsung manufactures them.
  • Massive manufacturing deal: A $16.5 billion multiyear contract centered on Tesla’s AI6 chips.
  • AI ecosystem support: Chips produced by Samsung power Tesla’s vehicles, robots, and AI data centers.

The partnership also reflects a broader trend: automakers becoming major semiconductor designers as vehicles increasingly rely on AI and custom compute hardware.

Now let’s look at the relationship from the perspective of a working semiconductor professional—based on observations, experiences, and opinions from inside the industry.

Samsung Foundry Background

Samsung truly became a major force in the foundry business through its early relationship with Apple. Early Apple iProducts were manufactured in partnership with Samsung Foundry.

Samsung Foundry operates as an IDM-style foundry, unlike pure-play foundries such as TSMC or GlobalFoundries. Some of the Samsung content in the iProduct BoMs came from Samsung directly; here I am speaking only about the SoC that Apple designs and has manufactured at a foundry.

Years ago, I served as Director of Foundry Relationships at an IP company, which meant spending a significant amount of time in South Korea, Taiwan, and China. Even though the companies were in the same market segment, doing business with different companies in different countries was a big challenge, especially compared to working with a U.S.-based foundry.

From my personal experience with Samsung Foundry, I can say I worked with some extremely intelligent engineers developing bleeding-edge technology at a breakneck pace. The teams worked incredibly hard and achieved a reasonable amount of market success.

However, there was one consistent challenge: transparency. Samsung Foundry often struggled with openness when issues arose—particularly if those issues might bring embarrassment to the company. Yield challenges are a classic example.

Now mix that culture with Elon Musk’s operating style and you can probably imagine the potential for a highly combustible situation—which is where we are today. Among many colleagues I’ve spoken with, very few believed this partnership had a strong probability of long-term success.

Delays and Industry Reality

Delays with Tesla’s AI6 chip have begun surfacing, and Samsung is—unsurprisingly—being blamed, as every foundry tends to be when a chip misses its schedule. From what I hear, however, the delays are occurring on both sides, so the finger-pointing continues.

This dynamic is nothing new in the semiconductor world. Chip programs are extraordinarily complex, and when schedules slip, the tension between design teams and manufacturing teams can escalate quickly.

As a result, Elon Musk is now talking about Mega Fabs.

The Mega Fab Idea

The “Mega Fab” concept refers to a proposed large-scale semiconductor manufacturing facility dedicated to producing AI chips for Tesla. Musk has suggested that Tesla may eventually build its own advanced chip fabrication plant to support the enormous computing demand required for autonomous driving, robotics, and AI training.

By the way, those of us inside the semiconductor industry would simply call that an IDM mega fab.

And who is the most dominant semiconductor IDM in the history of the industry?

Intel Foundry

If Elon Musk does not work with Lip-Bu Tan and Intel on a project like this, I would give the effort a very small chance of success—absolutely.

Come on, Elon. The Silicon Heartland project in Ohio is ready for mega fabs. America is ready to lead semiconductor manufacturing again.

Let’s get this done before you see something shiny and wander away.

I will write more about a Tesla and Intel relationship next.

Also Read:
Musk says Tesla’s mega AI chip fab project to launch in seven days
Tesla AI6 chip delayed ~6 months as Samsung 2nm production slips

Podcast EP335: The Far Reaching Impact of UCIe with Dr. Debendra Das Sharma

Podcast EP335: The Far Reaching Impact of UCIe with Dr. Debendra Das Sharma
by Daniel Nenni on 03-13-2026 at 10:00 am

Daniel is joined by Dr. Debendra Das Sharma, a Senior Fellow and Chief I/O architect in the Data Platforms and Artificial Intelligence Group at Intel. He is a member of the National Academy of Engineering (NAE), a Fellow of the IEEE, and a Fellow of the International Academy of AI Sciences. He is a leading expert on I/O subsystem and interface architecture. He co-invented the chiplet interconnect standard UCIe and is the chair of the UCIe consortium.

Dan and Debendra explore the evolution of the UCIe standard and the work of the UCIe consortium. In a very broad and informative discussion Debendra describes the history and impact of UCIe. He details the specification’s evolution from 1.0 to 3.0, explaining the interoperability each version enabled. He explores how UCIe defines key target metrics for chiplets. He also explains how to achieve a plug and play environment. Dan also explores how UCIe is being deployed in the ecosystem with Debendra as well as how the standard is evolving to meet future needs.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Agentic AI and the Future of Engineering

Agentic AI and the Future of Engineering
by Daniel Nenni on 03-13-2026 at 6:00 am


Sassine Ghazi Synopsys Converge Keynote

Agentic AI emerges in this Synopsys Converge keynote not as a futuristic add-on, but as a practical response to the growing complexity of engineering. In the speaker’s view, the traditional way of designing chips, systems, and intelligent products is no longer sufficient for the era of physical AI. Engineers are now dealing with software-defined systems, advanced silicon, multi-physics constraints, verification challenges, and ever-shorter development cycles. In that environment, agentic AI becomes essential because it helps “re-engineer engineering” itself. Rather than replacing engineers, it is presented as a new layer of intelligence that works alongside them, extending human capability and allowing organizations to handle more ambitious projects with the same, limited engineering resources.

A key tension in the speech is the mixture of excitement and fear surrounding agentic AI. At the user and engineering level, many people worry about how this technology may change their jobs. That concern is understandable, because engineering has long depended on deep human expertise, judgment, and careful iteration. At the same time, the speaker stresses that management and organizational leaders are enthusiastic because they see agentic AI as a productivity multiplier. Companies today are often limited not by ideas, but by the number of engineers available and the amount of time required to turn complex ideas into working products. In that sense, agentic AI is framed less as a threat than as a force multiplier: it can help teams do more, move faster, and explore more design possibilities than would be possible through manual effort alone.

One of the most important parts of the keynote is the framework of autonomy levels from L1 to L5. This structure shows that Synopsys is not treating agentic AI as a vague concept, but as a staged engineering roadmap. At L1, there are co-pilots, which assist users across different parts of the design flow. These tools help engineers interact with software more efficiently and automate limited actions. At L2, the system moves to task agents, where each agent is responsible for a specific task. Here the human engineer still acts as the orchestrator, assigning work to multiple specialized agents. At L3, multi-agent workflows appear, meaning that groups of agents can coordinate with one another to complete broader processes. By L4, the vision becomes much more ambitious: a cognitive layer dynamically orchestrates multiple agents with contextual awareness, allowing the system to reason across tasks instead of simply executing isolated commands. L5 remains an early pathfinding stage, but it represents the long-term vision of far greater autonomy in engineering workflows.
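The L1-L5 staging described above maps naturally onto a simple ordered structure. The sketch below is an illustrative summary of the keynote's framework, not a Synopsys API; the names and the orchestration cutoff are my own shorthand.

```python
# Illustrative encoding of the L1-L5 autonomy framework from the keynote.
# Names and helper logic are this author's shorthand, not a Synopsys API.
from enum import IntEnum

class AutonomyLevel(IntEnum):
    L1_COPILOT = 1      # co-pilots assisting across the design flow
    L2_TASK_AGENT = 2   # one agent per task; the engineer orchestrates
    L3_MULTI_AGENT = 3  # agent groups coordinate on broader workflows
    L4_COGNITIVE = 4    # cognitive layer orchestrates agents with context
    L5_PATHFINDING = 5  # early pathfinding toward far greater autonomy

def human_is_orchestrator(level: AutonomyLevel) -> bool:
    """Through L2 the engineer assigns work to agents directly; from L3
    upward, coordination increasingly shifts to the agent system itself."""
    return level <= AutonomyLevel.L2_TASK_AGENT
```

Framing the levels as an ordered enum makes the key transition explicit: the boundary between L2 and L3 is where orchestration starts migrating from the human to the system.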

What makes this model especially compelling is that it preserves a central role for the human engineer. The keynote explicitly argues that human engineers become more, not less, important in this environment. Their role shifts upward: instead of performing every task manually, they guide, supervise, and accelerate innovation through collaboration with agent systems. This reflects a broader change in knowledge work. The engineer of the future may spend less time on repetitive implementation details and more time defining intent, setting constraints, evaluating outcomes, and steering exploration. Agentic AI, then, is not just about automation; it is about elevating engineering work to a more strategic and creative level.

The concrete example given in the keynote is the path from specification to RTL. This workflow involves many steps: architectural specification, RTL design, test creation, test planning, formal verification, static verification, coverage analysis, and debugging. Traditionally, these are labor-intensive tasks that often require repeated iterations across different teams and tools. In the agentic model, each of these can be assigned to a dedicated task agent. A higher-level reasoning agent then orchestrates those specialists, ensuring that the outputs align and that the resulting RTL is of sufficient quality for the next design phase. This example shows why agentic AI matters: engineering is not usually blocked by one giant problem, but by the coordination of many interdependent smaller problems. Agentic AI addresses that coordination challenge directly.
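The spec-to-RTL pattern above can be sketched as a pipeline of specialist task agents driven by a coordinating layer. This is a minimal illustration of the orchestration idea only; the agent names and string-passing interface are hypothetical stand-ins, not how any Synopsys tool is implemented.

```python
# Minimal sketch of the orchestration pattern: specialist task agents for
# each step, coordinated by a higher-level loop. All names and interfaces
# here are hypothetical illustrations, not real tool APIs.
from typing import Callable

TaskAgent = Callable[[str], str]  # consumes an artifact, produces the next

def rtl_design(spec: str) -> str:
    return f"rtl({spec})"

def formal_verification(rtl: str) -> str:
    return f"formal-ok({rtl})"

def coverage_analysis(rtl: str) -> str:
    return f"coverage-ok({rtl})"

def orchestrate(spec: str, pipeline: list[TaskAgent]) -> str:
    """Stand-in for the reasoning agent: run each specialist in order,
    feeding every agent's output to the next."""
    artifact = spec
    for agent in pipeline:
        artifact = agent(artifact)
    return artifact

result = orchestrate("adder_spec", [rtl_design, formal_verification, coverage_analysis])
print(result)  # → coverage-ok(formal-ok(rtl(adder_spec)))
```

Even this toy version shows the core point: the value is in coordinating many interdependent steps, and the coordinating layer is where contextual reasoning about their outputs would live.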

Another significant idea in the speech is optionality. Synopsys does not present its agentic platform as a closed black box. Customers may bring their own agents, their own data, and their own infrastructure, whether on-premises or in the cloud. This matters because engineering organizations have different security requirements, workflows, and intellectual property concerns. By allowing customers to plug their own systems into the Synopsys stack, the company acknowledges that the future of agentic AI will likely be open, modular, and multi-model rather than standardized around a single monolithic system.

Bottom line: The keynote connects agentic AI to a broader transformation in science and engineering. In a recorded conversation with Microsoft CEO Satya Nadella, Sassine suggests that the next frontier is not merely natural-language assistance, but systems that can plan, execute, and verify complex engineering tasks using deep domain knowledge. The future will depend on combining general-purpose language models with specialized physics and design models. That vision is especially powerful in fields like EDA, where the tools, feedback loops, and verification frameworks already exist to support highly structured automation. In this sense, agentic AI is not just a productivity tool. It is the beginning of a new engineering paradigm, where human expertise and intelligent agents work together to build the increasingly complex systems of the future.

Also Read:

Efficient Bump and TSV Planning for Multi-Die Chip Designs

Reducing Risk Early: Multi-Die Design Feasibility Exploration

Building the Interconnect Foundation: Bump and TSV Planning for Multi-Die Systems