A Perfect Storm for EUV Lithography
by Fred Chen on 04-03-2025 at 6:00 am

Electron blur, stochastics, and now polarization are all becoming stronger influences in EUV lithography as pitch continues to shrink

As EUV lithography continues to evolve, targeting smaller and smaller pitches, new physical limitations continue to emerge as formidable obstacles. While stochastic effects have long been recognized as a critical challenge [1,2], and electron blur has more recently been considered in depth [3], polarization effects [4,5] are now a growing concern in image degradation. As the industry moves beyond the 2nm node, these influences create a perfect storm that threatens the quality of EUV-printed features. Loss of contrast from blur and polarization makes it more likely for stochastic fluctuations to cross the printing threshold [3].

Figure 1 shows the combined effects of polarization, blur, and stochastics for 18 nm pitch as expected on a 0.55 NA EUV lithography system. Dipole-induced fading [6] is ignored as a relatively minor effect. There is a 14% loss of contrast if unpolarized light is assumed [5], but electron blur has a more significant impact (~50% loss of contrast) in aggravating stochastic electron behavior in the image. The total loss of contrast is obtained by multiplying the contrast reduction from polarization by the contrast reduction from electron blur.
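
As a quick sanity check of how these factors compound, here is a minimal sketch using the figures quoted above, reading "multiplying the contrast reductions" as multiplying the remaining-contrast factors (the 14% and ~50% values come from the article; the exact numbers are approximate):

```python
# Combined contrast loss for the 18 nm pitch example above.
# Figures from the article: 14% contrast loss from unpolarized
# light [5], ~50% loss from electron blur [3].
polarization_loss = 0.14   # unpolarized vs. TE-polarized
blur_loss = 0.50           # approximate loss from electron blur

# Per the article, the total contrast is the product of the
# individual remaining-contrast factors.
remaining_contrast = (1 - polarization_loss) * (1 - blur_loss)
total_loss = 1 - remaining_contrast

print(f"Remaining contrast: {remaining_contrast:.0%}")  # ~43%
print(f"Total contrast loss: {total_loss:.0%}")         # ~57%
```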

Figure 1. 9 nm half-pitch image as projected by a 0.55 NA, 13.5 nm wavelength EUV lithography system. No dipole-induced image fading [6] is included. The assumed electron blur is shown on the right. The stochastic electron density plot in the center assumes unpolarized light (50% TE, 50% TM) [5]. A 20 nm thick metal oxide resist (20/µm absorption) was assumed.

The edge “roughness” is severe enough to count as defective. The probability of a stochastic fluctuation crossing the printing threshold is not negligible. As pitch decreases, we should expect this to grow worse, due to the more severe impact of electron blur [3] as well as the loss of contrast for unpolarized light [4,5] (Figure 2).

Figure 2. Reduction of image contrast worsens with smaller pitch. The stochastic fluctuations in electron density also grow correspondingly more severe. Aside from pitch, the same assumptions were used as in Figure 1.

Note that even for the 14 nm pitch case, the 23% loss of contrast from going from TE-polarized to unpolarized is still less than the loss of contrast from electron blur (~60%). As pitch continues to decrease, the polarization contribution will grow, along with the increasing impact from blur. As noted in the examples considered above, although polarization is recognized within the lithography community as a growing concern, the contrast reduction from electron blur is still more significant. Therefore, we must expect any useful analysis of EUV feature printability and stochastic image fluctuations to include a realistic electron blur model.

References

[1] P. de Bisschop, “Stochastic effects in EUV lithography: random, local CD variability, and printing failures,” J. Micro/Nanolith. MEMS MOEMS 16, 041013 (2017).

[2] F. Chen, Stochastic Effects Blur the Resolution Limit of EUV Lithography.

[3] F. Chen, A Realistic Electron Blur Function Shape for EUV Resist Modeling.

[4] F. Chen, The Significance of Polarization in EUV Lithography.

[5] H. J. Levinson, “High-NA EUV lithography: current status and outlook for the future,” Jpn. J. Appl. Phys. 61, SD0803 (2022).

[6] T. A. Brunner, J. G. Santaclara, G. Bottiglieri, C. Anderson, P. Naulleau, “EUV dark field lithography: extreme resolution by blocking 0th order,” Proc. SPIE 11609, 1160906 (2021).


Also Read:

Variable Cell Height Track Pitch Scaling Beyond Lithography

A Realistic Electron Blur Function Shape for EUV Resist Modeling

Powering the Future: How Engineered Substrates and Material Innovation Drive the Semiconductor Revolution

Rethinking Multipatterning for 2nm Node


Podcast EP280: A Broad View of the Impact and Implications of Industrial Policy with Economist Ian Fletcher
by Daniel Nenni on 04-02-2025 at 10:00 am

Dan is joined by economist Ian Fletcher. Ian is on the Coalition for a Prosperous America Advisory Board. He is the author of Free Trade Doesn’t Work, coauthor of The Conservative Case against Free Trade, and author of the new book Industrial Policy for the United States: Winning the Competition for Good Jobs and High-Value Industries. He has been senior economist at the Coalition, a research fellow at the US Business and Industry Council, an economist in private practice, and an IT consultant.

In this far-reaching and insightful discussion, Dan explores with Ian the history, impact, and future implications of the industrial policies of the US and other nations around the world. Ian explains the beginnings of industrial policy efforts in the US and the impact these programs have had across a wide range of technologies and industries. Ian provides his views on what has worked and what needs refocusing to achieve the desired results.

Through a series of historical and potential future scenarios Ian illustrates the complexity of industrial policy and the substantial impacts it has had on the world around us.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Big Picture PSS and Perspec Deployment
by Bernard Murphy on 04-02-2025 at 6:00 am

I met Moshik Rubin (Sr. Group Director, Product Marketing and BizDev in the System Verification Group at Cadence) at DVCon to talk about PSS (the Portable Stimulus Standard) and Perspec, Cadence’s platform to support PSS. This was the big-picture view I was hoping for, following the more detailed views from earlier talks.

The standard and supporting tools can do many things, but all technologies have compelling sweet spots: something you probably couldn’t do any other way. Moshik provided some big-picture answers in what Advantest and Qualcomm are doing today. Both have built bridges between testing objectives, in one case for hardware/software integration, in the other between pre- and post-silicon testing. Each provides a clear answer to the question: “where is PSS the only reasonable solution?”

Qualcomm automating hardware/software integration testing

Most hardware these days is memory-mapped: software interacts with embedded hardware functions (video, audio, AI, etc.) through memory-mapped registers. A register has an address in the memory map along with a variety of properties; software interacts with the hardware by writing/reading this address. This interface definition is the critical bridge between hardware and software and must be validated thoroughly.

I remember many years ago system AEs wrote apps to generate these definitions as header files and macros, together with documentation to guide driver/firmware developers. As the design evolved, they would update the app to reflect changes. This worked well, but the bridge was manually built and maintained. As the number of registers and properties on those registers grew, opportunities for mistakes also grew. (One of my favorites, should a flag be implemented as “read” or “clear on read”? Clear on read seems an easy and fast choice but can hide some difficult bugs.)
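
For flavor, here is a minimal, hypothetical sketch of the kind of generator those apps amounted to; the register names, properties, and addresses below are invented for illustration:

```python
# Hypothetical single-source register spec of the kind a system AE
# might have maintained by hand: name, offset, access property.
REGISTERS = [
    ("CTRL",   0x00, "read-write"),
    ("STATUS", 0x04, "read-only"),
    ("IRQ",    0x08, "clear-on-read"),  # the property that can hide bugs
]

def emit_c_header(base_addr: int) -> str:
    """Generate a C header with address macros from the spec."""
    lines = ["/* Auto-generated -- do not edit by hand. */"]
    for name, offset, access in REGISTERS:
        lines.append(
            f"#define REG_{name}_ADDR 0x{base_addr + offset:08X}  /* {access} */"
        )
    return "\n".join(lines)

print(emit_c_header(base_addr=0x4000_0000))
```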

Qualcomm chose to automate this testing through a single source of truth flow based on PSS and Perspec. They first develop PSS descriptions of use-case scenarios and leaf-level (atomic) behaviors, abstracted from detailed implementation, then develop test realizations (mapping the PSS level to target test engine) for each target. These are a native mode (C running on the host processor interacting with the rest of the SoC), a UVM mode which can interact directly with a UVM testbench, and a firmware reference mode which generates documentation to be used by driver/software developers. As the design evolves, the PSS definition is updated (intentionally, or to fix bugs exposed in regression testing), and all these levels are updated in sync.
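
In spirit (this is not PSS syntax, just a hypothetical Python sketch of the single-source-of-truth idea), one abstract scenario rendered to two of the targets described above might look like:

```python
# Hypothetical single source of truth: one abstract scenario made of
# atomic actions, rendered per target. Real flows express this in PSS
# and let Perspec generate the realizations.
SCENARIO = [("dma_copy", {"src": "DDR", "dst": "SRAM"}),
            ("check_irq", {"line": 3})]

def render_native_c(scenario) -> str:
    """Render the scenario as a C test for the host processor."""
    lines = ["void test(void) {"]
    for name, args in scenario:
        arg_list = ", ".join(f"{k}_{v}" for k, v in args.items())
        lines.append(f"    {name}({arg_list});  /* symbolic constants */")
    lines.append("}")
    return "\n".join(lines)

def render_docs(scenario) -> str:
    """Render the same scenario as firmware reference documentation."""
    return "\n".join(f"Step {i + 1}: {name} with {args}"
                     for i, (name, args) in enumerate(scenario))

print(render_native_c(SCENARIO))
print(render_docs(SCENARIO))
```

Because both realizations derive from one description, a fix to the scenario propagates to every target in sync, which is the point of the flow described above.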

Incidentally, I know as I’m sure Qualcomm knows that there are already tools to build register descriptions, header files, and test suites. I see Qualcomm’s approach as complementary. They need PSS suites to test across the vertical range of design applications and to define synthetic tests which must probe system-level behaviors not fully comprehended in register descriptions. Seems like an opportunity for those register tools to integrate in some way with this PSS direction.

This is a big step forward from the ad-hoc support I remember.

Advantest automating pre-/post-silicon testing

Advantest showed a demo of their flow at DVCon, apparently very well attended. Connecting pre- and post-silicon testing seems to be a hot button for a lot of folks. Historically it has been difficult to automate a bridge between these domains. Pre-silicon verification could generate flat files of test vectors that could be run on an ATE tester or in a bench setup, but that was always cumbersome and limited. Now Cadence (and others) have worked with Advantest to directly use the PSS developed in pre-silicon testing for post-silicon validation. The Advantest solution (SiConic) unifies pre-silicon and post-silicon testing in an automated and versatile environment by connecting the device functional interfaces (USB, PCIe, ETH) to external interfaces such as JTAG, SPI, UART, and I2C, enabling rich PSS content to execute directly against silicon. That’s a major advance for post-silicon testing, now moving beyond post-silicon exercisers in the complexity of tests that can be run, and in helping isolate root causes of failures.

I should add one more important point. It seems tedious these days to say that development cycles are being squeezed hard, but for the hyperscalers and other big system vendors this has never been more true. They are tied to market and Wall Street cycles, requiring that they deliver new advances each year. That puts huge pressure on all in-house development, on test development as much as design development. Anywhere design teams can find canned, proven content, they will snatch it up. In test they are looking for more test libraries, VIP, and system VIP. Perspec is supported by extensive content for Arm, RISC-V, and x86 platforms, including System VIP building blocks for system testbench generation, traffic generation, performance analysis, and scoreboarding.

You can learn more about Cadence Perspec HERE.

Also Read:

Metamorphic Test in AMS. Innovation in Verification

Compute and Communications Perspectives on Automotive Trends

Bug Hunting in Multi Core Processors. Innovation in Verification


Elon Musk Given CHIPS Act & AI Oversight – Mulls Relocation of Taiwanese Fabs
by Robert Maire on 04-01-2025 at 8:00 am

– Trump gives CHIPS Act & AI oversight to DOGE/Musk-“Tech Support”
– CHIPS Act to switch from incentive based to tariff/punitive based
– Musk to be responsible for US AI policy & security- Will rule ChatGPT
– Talks underway to relocate Taiwan fabs & staff to avoid China threat

Donald Trump gives CHIPS Act & AI problem to his “tech support” guru, Musk

It was announced by the White House late Monday that the Trump administration is moving responsibility for the much-maligned and disliked CHIPS Act out from under the Commerce Department and placing it under DOGE.

As part of the reorganization, all computer and AI security and policy, most of which was under NIST, which is also part of the Department of Commerce, will also move to be under the control of DOGE.

This will give Musk effective control of Sam Altman and ChatGPT and all things AI, which he has long sought by various means, including attempted purchase and lawsuits.

It seems that this is a huge reward by Trump for Musk’s support and loyalty, as well as Musk being the point man for cutting government spending.

Trump said, “I can think of nobody, in the world, better suited than Elon to take on these highly complex problems with computer chips and artificial intelligence.” He went on to say, “Elon will turn the money losing, stupid CHIPS Act into a tariff driven, money making, American job making thing of beauty,” and further said, “Nobody knows more about artificial intelligence than Elon and his Tesler cars with computer ‘brains’ that can drive themselves.”

In discussing the transfer of CHIPS Act & AI to DOGE from the Department of Commerce, White House press secretary Leavitt pointed to Musk’s very strong technology background versus Commerce Department head Lutnick’s primarily financial acumen.

A potential solution to the Taiwan issue as well?

In prior discussions, Musk had commented that the primary reason for China wanting to regain Taiwan was for China to get the critically important semiconductor manufacturing located there. It has been reported by several sources that Musk is putting together a potential plan to move most of the critical, advanced, semiconductor manufacturing out of Taiwan thereby reducing China’s desire to retake the island.

The plan would entail moving the most advanced fabs in Taiwan first, followed by the less capable fabs later. This would obviously be a huge undertaking but would likely be much less costly than a full scale war over Taiwan between the US and China.

Much of the equipment could be moved into already planned fabs in Arizona & Ohio etc. New fab shells would take one to two years to build to house the moved equipment.

Perhaps the bigger issue is where to house all the Taiwanese engineers and their families who would move along with the equipment & fabs. Estimates are that over 300,000 people would eventually have to relocate to the US. The administration would likely make room for them through the far larger number of illegal immigrants expected to be deported, a process already underway.

Make Greenland Green again!

Trump’s interest in Greenland may have a lot more to it than meets the eye. Greenland is rich in rare earth elements, critical to the electronics industry. Greenland is not really green but rather ice covered, with hundreds of miles of glaciers amid cold climates. Greenland has plenty of hydroelectric power and water, coincidentally what AI data centers and semiconductor fabs need most. In fact, semiconductor fabs and power-hungry data centers would be perfect in a place that has excess water, electricity, and, perhaps most importantly, low temperatures to cool those overheated facilities. The heat from those data centers and fabs would likely melt much of the ice cover in Greenland, thereby producing more needed water. In the end, the added heating could help turn Greenland “greener” from its current arctic facade (so much for global warming concerns). Indeed, Greenland might be an alternative place to move some relocated Taiwanese fabs and their engineers; they would just have to acclimate to the colder environment.

TaiwanTechTransfer working group & signal chat

There is a secret Signal chat group that is overseeing the semiconductor technology transfer out of Taiwan and Elon Musk is the moderator of the group. Here is your private secret invite link:

TaiwanTechTransfer Secret Signal Chat

Remember, it’s top secret; don’t share it with anyone!

Merger of Global Foundries & UMC makes for a GLUM foundry

It has been reported in various news sources that Global Foundries and UMC are discussing a merger, with the combined entity to be renamed and trading under the ticker symbol GLUM. At roughly 6% market share each, the combined entity would hold about 12%, surpassing Samsung’s roughly 10% market share in foundry. However, both foundries produce primarily middling to trailing-edge devices that are under attack from quickly growing China fab capacity, so the name GLUM is appropriate given the future prospects of the market they serve.

Happy April Fools Day!!!!!!

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies.

We have been covering the space longer and been involved with more transactions than any other financial professional in the space.

We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

About Semiwatch

Semiconductor Advisors provides this subscription based research newsletter, Semiwatch, about the semiconductor and semiconductor equipment industries. We also provide custom research and expert consulting services for both investors and industry participants on a wide range of topics from financial to technology and tactical to strategic projects. Please contact us for these services as well as for a subscription to Semiwatch.

Visit Our Website

Also Read:

CHIPS Act dies because employees are fired – NIST CHIPS people are probationary

Trump whacking CHIPS Act? When you hold the checkbook, you make up the new rules

AMAT- In line QTR – poor guide as China Chops hit home- China mkt share loss?


Evolution of Memory Test and Repair: From Silicon Design to AI-Driven Architectures
by Kalar Rajendiran on 04-01-2025 at 6:00 am

Memory testing in the early days of computing was a relatively straightforward process. Designers relied on simple, deterministic approaches to verify the functionality of memory modules. However, as memory density increased and systems became more complex, the likelihood of faults also rose. With advancements in memory technologies, more sophisticated testing strategies emerged. Error correction codes were introduced, and self-repair strategies were developed alongside increasingly automated methods.

I spoke with Pawini Mahajan, Sr. Staff Product Manager, Memory Test & Repair Solutions at Synopsys, to discuss the evolution of memory test and repair. The discussion also touched on the impact of AI-driven workloads, the challenges introduced by modern memory technologies, and the features and approaches needed for effective solutions.

The Evolution of Memory Test and Repair

Historically, memory test and repair techniques were designed for simpler architectures. As semiconductor technology advanced, memory densities increased, requiring more sophisticated test and error correction mechanisms. A notable advancement was the introduction of Built-In Self-Test (BIST) techniques, which allowed memory systems to test themselves autonomously. BIST mechanisms integrated test patterns into the memory’s design, enabling self-diagnostics during operation. This capability reduced the need for external testing and provided a more robust way to identify memory faults before they caused significant system failures.
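
To make “integrated test patterns” concrete, below is a software model of a classic March-style memory test, the kind of pattern sequence BIST engines implement as hardware state machines next to the array (a minimal sketch; the article does not say which algorithms any particular BIST implementation uses):

```python
# Simplified software model of a March C- style memory test.

def march_test(mem: list[int]) -> bool:
    """Run a March C- element sequence; return True if memory is clean."""
    n = len(mem)
    up, down = range(n), range(n - 1, -1, -1)

    for a in up:                      # M0: ascending, write 0
        mem[a] = 0
    for a in up:                      # M1: read 0, write 1
        if mem[a] != 0: return False
        mem[a] = 1
    for a in up:                      # M2: read 1, write 0
        if mem[a] != 1: return False
        mem[a] = 0
    for a in down:                    # M3: descending, read 0, write 1
        if mem[a] != 0: return False
        mem[a] = 1
    for a in down:                    # M4: read 1, write 0
        if mem[a] != 1: return False
        mem[a] = 0
    for a in up:                      # M5: final read 0
        if mem[a] != 0: return False
    return True

print(march_test([0] * 1024))  # True for a fault-free memory model
```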

A major breakthrough came with the introduction of the Synopsys STAR Memory System™ (SMS), which revolutionized memory test and repair technology in the industry. Yervant Zorian, the Chief Architect and Fellow at Synopsys, was the visionary behind this breakthrough. SMS provides integrated BIST, error correction, redundancy allocation, and self-repair functionalities, making it a game-changer for embedded memory solutions. Unlike earlier solutions, SMS offers continuous monitoring, identifying potential problems before they escalate into system failures. If a fault is detected, SMS can automatically apply redundant memory mappings or other repair strategies to ensure system functionality without requiring a restart or manual repair. This innovation significantly improves manufacturing yield, in-field reliability, and system-level performance.

Synopsys STAR Memory System (SMS)

The Synopsys SMS is a comprehensive, silicon-proven test, repair, and diagnostics solution designed for both Synopsys and third-party memories. It incorporates a test wrapper around each memory instance, enabling controlled access during test mode. These wrappers connect to the SMS processor, which handles test execution, failure diagnosis, and redundancy analysis. The system automates the integration of test and repair IP at the RTL level, ensuring correct connectivity through automated test-bench verification. The SMS processor interfaces with the SMS server via the IEEE 1500 standard, utilizing a TAP controller for test access and scheduling. Additionally, SMS generates tester-ready patterns in STIL, WGL, or SVF formats and features advanced diagnostics that allow SoC designers and test engineers to pinpoint the exact physical location of failing bitcells. Furthermore, SMS provides interactive silicon debugging capabilities in a lab setup without requiring a production tester, streamlining the debugging process and accelerating time-to-market.

Challenges Introduced by Modern Technologies

The rapid advancement of computing, particularly with the rise of AI-driven workloads, has significantly reshaped the landscape of memory test and repair. Traditional memory architectures are being replaced with high-performance solutions such as High Bandwidth Memory (HBM) and Compute High Bandwidth Memory (cHBM) to meet the demands of modern applications. However, these advancements introduce new challenges in defect detection, repair strategies, and real-time optimization.

Modern technologies, such as Gate-All-Around (GAA) and multi-die systems, have also significantly increased the complexity of memory design and testing. These advanced architectures, with densely packed memory configurations, heighten the risk of faults, making traditional testing methods insufficient. New fault types introduced by these technologies are difficult for conventional algorithms to detect. Additionally, multi-die architectures complicate fault isolation and repair, and the scale of memory required for AI/ML workloads, such as HBM and cHBM, makes comprehensive testing increasingly difficult. As systems operate under heavy workloads, particularly in cloud or edge environments, traditional offline diagnostics are inadequate. There is a growing need for in-field diagnostics and self-repair capabilities to minimize downtime and ensure continuous performance. Moreover, the reuse of design IPs for faster time-to-market creates challenges in ensuring compatibility and reliability within new memory configurations.

Approaches to Address These Challenges

To tackle these challenges, modern memory systems must employ several advanced features. On-chip memory diagnostic (OCMD) capabilities enable real-time fault monitoring without external testers, which is particularly useful for AI/ML applications. For multi-die systems, SMS pattern diagnosis and debug capabilities help address the complexities of interconnected chiplets. Additionally, quality-of-results (QoR) optimization ensures high-performance memory for demanding workloads. Flexible repair hierarchies allow for efficient, targeted repairs without disrupting the entire system, and special testing methods are used to address defects in abutted designs, which maximize silicon area. Native IEEE 1687 support ensures seamless integration of testing across all components, while APB integration allows for comprehensive diagnostics across different memory hierarchy levels. Finally, features such as configurable e-fuse drivers and support for specialized memory types, such as banked memory, enable flexible adjustments to enhance performance and fault tolerance. Together, these solutions ensure that memory systems can meet the performance and reliability needs of next-generation computing.

The Future of Memory Test and Repair: AI-Integrated Systems

As memory demands continue to grow, the need for real-time fault detection, repair, and optimization will only become more critical. The SMS’s ability to seamlessly integrate with modern AI workloads ensures that it will continue to evolve and meet the needs of cutting-edge systems. The next phase will involve integrating AI and ML even further into memory management, enabling intelligent fault prediction, self-optimizing repair strategies, and enhanced performance monitoring. These advancements will ensure that future memory architectures can sustain the increasing computational demands of AI-driven applications.

Watch for Synopsys announcements later this year extending and expanding their SMS solutions to address AI-driven workloads.

To learn more about the Synopsys STAR Memory System solution, click here.

Also Read:

DVCon 2025: AI and the Future of Verification Take Center Stage

Synopsys Expands Hardware-Assisted Verification Portfolio to Address Growing Chip Complexity

How Synopsys Enables Gen AI on the Edge


CEO Interview with Matthew Stephens of Impact Nano
by Daniel Nenni on 03-31-2025 at 10:00 am

Matthew Stephens, co-founder and CEO of Impact Nano, brings over 20 years of experience commercializing advanced materials. Prior to co-founding Impact Nano, Matt was VP Sales and Products at Air Liquide Advanced Materials and held C-level leadership roles at Voltaix and Metem. Matt has a Ph.D. in Chemistry from the University of Wisconsin and an MBA from INSEAD. He started his career as an industrial research scientist in the Boston area and is a co-inventor of over a dozen U.S. patents.

Tell us about your company.

Impact Nano is a leading tier 2, North American supplier of advanced materials to the semiconductor industry. We develop and manufacture a range of products used in the most advanced chip manufacturing processes, including EUV photoresists and ALD precursors. Our products enable faster computer chips with higher storage density and lower power consumption.

Our expertise in ligand, organometallic, silicon and fluorine chemistries, and our ability to safely and sustainably scale-up production of ultra-high purity materials, allow us to support critical innovations in the semiconductor and other high-tech industries. Other applications for our products include nanometer films and coatings for the electronics and automotive industries, energy storage applications, and pharmaceuticals.

To expand these capabilities, we recently created Impact Chemistry, an independent subsidiary for research and development and kilo-scale production in Kingston, Ontario. Impact Chemistry specializes in product development and custom synthesis services for leading companies in the semiconductor industry. The development team has strong expertise in organometallic, inorganic and materials chemistry, safely synthesizing challenging precursors, and developing tailored processes for its customers.

Impact Chemistry’s and Impact Nano’s capabilities and offerings are highly complementary. Our shared focus on the customer positions us to support all their needs through any stage of the product lifecycle from bench-scale R&D work through to larger volume manufacturing.

What problems are you solving?

  • Materials Innovation: True innovation requires experience and expertise that can be challenging for companies to resource internally. We discover and develop new materials that enable advancements in semiconductor performance and efficiency.
  • Scale-up: Innovation is only the first step. Our ability to take products from discovery to bench-top to industrial scale production allows for atomic-level control and chemical fingerprinting by design.
  • Manufacturing: Manufacturing options in North America for these specialized materials are limited. Our fully equipped and qualified manufacturing facility in Orange, MA, allows for large scale production of these materials for our clients.
  • Sustainability: We’re committed to pursuing materials advancements that enable the green energy transition, reduce the energy demands of computing, and help to decrease the environmental impact of semiconductor manufacturing processes.

What application areas are your strongest?

Our expertise and product portfolio have positioned us as leading suppliers of the materials for several key end-use applications in the semiconductor industry including:

  • EUV photoresist materials
  • ALD/CVD precursors for Si, wide band gap, and neuromorphic devices
  • Etchants for 3D architectures

Impact Nano has expertise in chemical synthesis and characterization, equipment design and fabrication, process development, chemical packaging, and chemical manufacturing operations. We are ISO-9001 certified.

What keeps your customers up at night?

  • Achieving breakthroughs in semiconductor materials performance. Chipmakers and equipment manufacturers require new innovative materials to reduce power consumption, increase performance, or reduce area cost. They need to find suppliers who can address the material challenges of creating chip features at the nanometer scale.
  • Access to scale-up and manufacturing capabilities. Great innovations are not valuable if they cannot be scaled up and manufactured at high volumes.
  • Supply chain reliability. Reliable, ethical, more sustainable supply chains are critical to the industry. Traditional sources of the required materials are often no longer viable for political or environmental reasons. Impact Nano is located in the US and Canada.

What does the competitive landscape look like and how do you differentiate?

The ecosystem of semiconductor materials suppliers exhibits a tiered structure.  Tier 1 suppliers are typically multinational companies that offer a broad array of products, many of which they source from tier 2 suppliers.  Tier 2 suppliers typically possess chemical expertise or equipment, but rarely possess applications insight, scale-up engineering expertise, or the quality mindset required to support atomic level control of thin film deposition.

In contrast, Impact Nano was founded by semiconductor materials supplier veterans who have commercialized several dozen thin film deposition materials and etchants from the lab to HVM in semiconductor fabs. Embedded in the DNA of Impact Nano are the safety and quality mindsets required to safely scale up and automate materials synthesis and purification technologies to serve semiconductor applications.

Our combination of deep expertise in synthetic and analytical chemistry, combined with our scale-up and automation capabilities for large-volume manufacturing give us the ability to provide customers with control of the chemical fingerprint of a material at all scales of manufacture.

Impact Nano has demonstrated the ability to manufacture materials at scales ranging from a few grams to over 700 tons per year for demanding semiconductor applications including silicon epitaxy.

What new features/technology are you working on?

  • We are currently working with clients to scale and manufacture a wide range of innovative materials. These include innovative ALD precursors, coating formulations, advanced catalysts, and upstream pharmaceutical reagents.
  • Scale-up and automation are key strengths of Impact Nano. We possess in-house fabrication capabilities, including welding, as well as instrumentation and control expertise that enable us to scale up chemistry in half of the time typically required.

How do customers normally engage with your company?

  • By contacting our experienced sales team: Our experienced sales professionals are readily available to discuss customer needs and provide tailored solutions.
  • Through Tier 1 suppliers. Some end user customers who are eager to control supply chains might ask their local distributor, Tier 1 supplier, or semiconductor equipment partner to work with Impact Nano to manufacture a critical material.
  • Meeting us at industry events and trade shows: We actively participate in industry events and trade shows, showcasing our latest innovations and connecting with clients and partners.
  • Engaging us in research and development projects: We often engage in collaborative research projects with clients. An innovative customer project with Impact Chemistry is best viewed as an extension of the client’s R&D team.
  • Visiting our websites: impact-nano.com and www.impact-chemistry.com

Also Read:

CEO Interview with Jonathan Klamkin of Aeluma

CEO Interview with Brad Booth of NLM Photonics

CEO Interview with Jonas Sundqvist of AlixLabs


An Important Advance in Analog Verification
by Bernard Murphy on 03-31-2025 at 6:00 am

Innovation in analog design moves slowly, not from lack of desire for better methods from designers or lack of effort and ideas from design tech innovators, but simply because the space is so challenging. Continuous time and signals, and variances in ambient/process characteristics represent a multi-dimensional space across which a designer must prove stability and performance of a design. An abstraction seems like the obvious answer but a workable solution has not been easy to find. Mach42 have an ML-based answer which may just be the breakthrough designers have been looking for.

Why Analog Design is Hard

The reference engine for verifying design correctness is SPICE, which is very accurate but also slow to execute. Recent advances can accelerate performance with some compromise in accuracy, but simulator speed is only part of the problem. While an analog design has relatively few active components, simulation must be run across wide ranges of process parameters and ambient conditions to validate behavior. Sampled simulations across these ranges, in the form of Monte-Carlo (MC) analysis or more recently Scaled Sigma Sampling (SSS), are the state of the art. These demand massive compute resources or run times to handle multi-dimensional sampling grids in which process parameters, voltages, currents, RLCs, temperatures, etc. can range between min, max, and nominal values.
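
To see why the grid explodes, here is a minimal sketch of multi-dimensional MC sampling; the parameter names and ranges are invented for illustration, and a real flow would dispatch a SPICE run per sample rather than just print one:

```python
import random

# Hypothetical (min, max) ranges for a few of the many dimensions a
# real MC/SSS run sweeps simultaneously; real flows sample foundry
# process models, not simple uniform ranges.
PARAMS = {
    "vdd_V": (0.70, 0.80),
    "temp_C": (-40.0, 125.0),
    "vth_V": (0.28, 0.32),
    "r_load_ohm": (900.0, 1100.0),
}

def sample_corner(rng: random.Random) -> dict[str, float]:
    """Draw one Monte-Carlo sample from the parameter space."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAMS.items()}

rng = random.Random(42)
samples = [sample_corner(rng) for _ in range(10_000)]
# Each sample would be one full SPICE run. Even a coarse exhaustive
# sweep of min/nominal/max costs 3**len(PARAMS) runs, and the exponent
# grows with every parameter added, hence the massive compute demands.
print(samples[0])
```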

That kind of overhead might be mandatory for tape out signoff where SPICE accuracy is essential but can be a real drag on productivity during design exploration, limiting experimentation to an uncomfortable fit between program schedules and massive MC/SSS runtimes. A better approach would be a model abstraction good enough to support fast iteration, while still allowing for full SPICE confirmation as needed.

Mach42 Discovery Platform

Mach42’s Discovery Platform builds a surrogate model using ML methods, harvesting existing simulation results together with additional runs to drive training of the AI architecture. After initial training on available simulation runs, or on SPICE runs across a starter grid, Mach42 point to a couple of important innovations in their approach. These include active learning to enhance accuracy around regions of the model with high variance, and a reconfigurable neural net architecture to guide the model to 90% accuracy, or to allow a user to push harder for higher accuracy. I’m told that training takes no more than a few hours to an overnight run.
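
As a rough illustration of the active-learning idea only (not Mach42’s proprietary architecture), here is a minimal sketch that uses disagreement across a bootstrap ensemble of simple surrogates as the variance signal for choosing the next simulator query; the response function and all settings are invented:

```python
import numpy as np

def expensive_sim(x: np.ndarray) -> np.ndarray:
    """Stand-in for a SPICE run (an invented toy response)."""
    return np.sin(3 * x) + 0.3 * x**2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=20)   # initial "starter grid" of runs
y = expensive_sim(X)

for _ in range(5):                # active-learning rounds
    # Train an ensemble of simple surrogates on bootstrap resamples;
    # their disagreement approximates model variance.
    coeffs = []
    for _ in range(8):
        idx = rng.integers(0, len(X), size=len(X))
        coeffs.append(np.polynomial.polynomial.polyfit(X[idx], y[idx], 7))
    candidates = rng.uniform(-2, 2, size=256)
    preds = np.stack([np.polynomial.polynomial.polyval(candidates, c)
                      for c in coeffs])
    variance = preds.var(axis=0)
    x_new = candidates[variance.argmax()]  # query where models disagree most
    X = np.append(X, x_new)                # add the new "SPICE" result
    y = np.append(y, expensive_sim(x_new))

print(f"{len(X)} simulator calls after active learning")
```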

The 90% level is a reminder that this platform aims at fast exploration with good but not perfect accuracy. It’s a fast emulator to accelerate discovery across design options, with the expectation that final confirmation will return to signoff-accuracy SPICE. That said, 90% is the same level promised by FastSPICE, but the Discovery Platform offers much faster model performance (its models don’t need to re-simulate).

This performance matters for more than just a fast abstract model. Training refinement can also find out-of-spec conditions in key performance metrics: GBW, gain, CMRR, etc. Further, the model can be invaluable in system-level testing, incorporating package- and board-level parasitics while the analog design is still in development, not just for the basics but to check for potential problems such as V/I levels, power, and ringing. That seems to me a pretty important capability for verifying compliance with system-level expectations early on.

Bijan Kiani, CEO of Mach42 (previously VP of marketing at Synopsys and CEO of InCA), drew an interesting comparison with PrimeTime (PT). Before such tools, simulators had to be used for timing analysis. Now, no one would dream of using anything but PrimeTime or similar STA tools. Mach42’s models can elevate analog verification to a similar level.

Status and Looking Forward

Mach42 are building on ML technology they already have in production in a very different domain (nuclear fusion), so they had a running start in this analog application. They tell me that the Discovery Platform is already well into active evaluations with multiple customers. Mach42 also have a Connections partnership with Cadence on Spectre. In fact you can register to review a related video here.

This all looks very promising to me. Also promising is that the company is developing Verilog-A model generation in this flow, which will naturally be great for AMS designers but also points to the possibility of developing RNM models that could be used in digital verification, notably with hardware accelerators. This would be a major advance, since I hear that developing such models is still a hurdle for analog design teams. An automated way to jump over that hurdle could open the floodgates to extensive AMS testing across the analog-digital divide!

You can learn more about Mach42 HERE.

Also Read:

CEO Interview: Bijan Kiani of Mach42


Semiconductor CapEx Down in 2024 up in 2025
by Bill Jewell on 03-30-2025 at 8:00 am

Semiconductor Intelligence (SC-IQ) estimates semiconductor capital expenditures (CapEx) in 2024 were $155 billion, down 5% from $164 billion in 2023. Our forecast for 2025 is $160 billion, up 3%. The increase in 2025 is primarily driven by two companies. TSMC, the largest foundry company, plans between $38 billion and $42 billion in 2025 CapEx. Using the midpoint, this is an increase of $10 billion or 34%. Micron Technology projects CapEx of $14 billion for its 2025 fiscal year ending in August, up $6 billion or 73% from the previous fiscal year. Excluding these two companies, 2025 total semiconductor CapEx would decrease $12 billion or 10% from 2024. Two of the three companies with the largest CapEx plan significant cuts in 2025 with Intel down 20% and Samsung down 11%.
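
The ex-TSMC/ex-Micron arithmetic can be reconstructed from the stated increases; a quick check (the 2024 baselines of roughly $30 billion for TSMC and $8 billion for Micron are implied by the stated increases, not quoted directly):

```python
# All figures in $ billions, taken from the article; the 2024
# baselines for TSMC and Micron are implied by the stated increases.
total_2024, total_2025 = 155, 160
tsmc_2025 = (38 + 42) / 2          # midpoint of guidance: 40
tsmc_2024 = tsmc_2025 - 10         # "+$10 billion or 34%" implies ~30
micron_2025 = 14
micron_2024 = micron_2025 - 6      # "+$6 billion or 73%" implies ~8

ex_2024 = total_2024 - tsmc_2024 - micron_2024   # ~117
ex_2025 = total_2025 - tsmc_2025 - micron_2025   # ~106
change = ex_2025 - ex_2024
print(f"Ex-TSMC/Micron change: {change:+.0f}B ({change / ex_2024:+.0%})")
# Roughly -11B / -10%, consistent with the quoted "$12 billion or 10%"
```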

Semiconductor CapEx is dominated by three companies, which accounted for 57% of the total in 2024: Samsung, TSMC, and Intel. Samsung is responsible for 61% of total memory CapEx, and TSMC spends 69% of foundry CapEx. Among Integrated Device Manufacturers (IDMs), Intel accounted for 45% of CapEx. The foundry CapEx total is based on pure-play foundries; both Samsung and Intel also have CapEx for foundry services.

The U.S. CHIPS Act was designed to increase semiconductor manufacturing in the U.S. According to the Semiconductor Industry Association (SIA), $32 billion in grants and $6 billion in loans have been announced under the CHIPS Act to 32 companies for 48 projects. The largest CHIPS investments are:

  • Intel: $7.8 billion, new/upgraded wafer fabs & packaging facility (Arizona, Ohio, New Mexico, Oregon)
  • TSMC: $6.6 billion, new wafer fabs (Arizona)
  • Micron Technology: $6.2 billion, new wafer fabs (Idaho, New York, Virginia)
  • Samsung: $4.7 billion, new/upgraded wafer fabs (Texas)
  • Texas Instruments: $1.6 billion, new wafer fabs (Texas, Utah)
  • GlobalFoundries: $1.6 billion, new/upgraded wafer fabs (New York, Vermont)

Since the latest CHIPS funding was announced, Intel said last month it will delay the initial opening of its planned wafer fabs in Ohio from 2027 to 2030. The Ohio fabs account for $1.5 billion of Intel’s $7.8 billion CHIPS funding. TSMC, however, announced this month it will spend an additional $100 billion on wafer fabs in the U.S. on top of the $65 billion already announced. The Trump administration has voiced its opposition to the CHIPS Act and requested the U.S. Congress to end it. If the CHIPS Act is repealed, the fate of announced CHIPS investments is uncertain.

We at Semiconductor Intelligence believe the CHIPS Act did not necessarily increase overall semiconductor CapEx. Companies plan their wafer fabs based on current and expected demand. The CHIPS Act likely influenced the location of some wafer fabs. TSMC currently has five 300 mm wafer fabs, four in Taiwan and one in China. TSMC plans to build a total of six new fabs in the U.S. and one in Germany. Samsung already had a major wafer fab in Texas, so it is uncertain if the CHIPS Act influenced its decision to build new fabs in Texas. The major U.S.-based semiconductor manufacturers (Intel, Micron, and TI) generally locate their wafer fabs in the U.S. Intel has most of its fab capacity in the U.S. but also has 300 mm fabs in Israel and Ireland. Micron has built its wafer fabs in the U.S., but through company acquisitions has fabs in Taiwan, Singapore and Japan. Texas Instruments has built all its 300 mm fabs in the U.S.

Political pressures may also affect fab location decisions. The Trump administration is considering a 25% or higher tariff on semiconductor imports to the U.S. However, tariffs on U.S. imports of semiconductors will affect companies with U.S. wafer fabs. Most of the final assembly and test of semiconductors is done outside of the U.S. According to SEMI, less than 10% of worldwide assembly and test facilities are in the U.S. The U.S. imported $63 billion of semiconductors in 2024. $28 billion, or 44%, of these imports were from three countries which have no significant wafer fab capacity but are major locations of assembly and test facilities: Malaysia, Thailand and Vietnam. SEMI estimates China has about 25% of total assembly and test facilities but only accounted for $2 billion, or 3%, of U.S. semiconductor imports. The China number is low because most semiconductors made in China are used in electronic equipment made in China. Thus, tariffs on U.S. semiconductor imports would likely hurt U.S. based companies and other companies with U.S. wafer fabs more than they would hurt China.

The global outlook for the semiconductor industry in 2025 is uncertain. The U.S. has implemented several tariff increases on certain imports and is considering more. Other countries have either raised or are considering raising tariffs on goods imported from the U.S. in retaliation. The tariffs will increase prices for final consumers and thus will likely decrease demand. The tariffs may not be placed directly on semiconductors but will have a major impact on the industry if applied to goods with high semiconductor content.

Also Read:

Cutting Through the Fog: Hype versus Reality in Emerging Technologies

Accellera at DVCon 2025 Updates and Behavioral Coverage

CHIPS Act dies because employees are fired – NIST CHIPS people are probationary


Podcast EP279: Guy Gozlan on how proteanTecs is Revolutionizing Real-Time ML Testing
by Daniel Nenni on 03-28-2025 at 10:00 am

Dan is joined by Guy Gozlan, proteanTecs director of machine learning and algorithms, overseeing research, implementation, and infrastructure of machine learning solutions. Prior to proteanTecs, he was a project lead at Apple, focusing on ATE optimizations using embedded software and machine learning, and before that worked in embedded software engineering at Mellanox.

In this informative discussion, Guy explains how the unique proteanTecs embedded agent technology is applied to chip testing. Guy explains that as complexity in devices rises, test time and cost also rise, creating trade-offs. If the tests aren’t robust, yield will suffer, creating more challenges. Yet the mission-critical nature of many new designs also demands the highest quality and reliability, further stressing test requirements. And multi-chip packaging adds additional complications with the lack of visibility into individual devices and interconnects.

Dan explores with Guy how proteanTecs’ solution effectively addresses these challenges with deep-data analytics. By measuring and predicting chip behavior in advance, the company enables a shift-left test strategy to catch errors early, reducing costs and improving the reliability of devices. A combination of the company’s embedded agents, IP, cloud-based analytics, and sophisticated machine learning (ML) models creates an end-to-end solution that can be applied in real time, under real-world conditions, to continuously improve the effectiveness of testing and final device quality.

To learn more about this strategy, read the white paper: Cut Defects Not Yield: Outlier Detection with ML Precision

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview with Dr Greg Law of Undo
by Daniel Nenni on 03-28-2025 at 6:00 am

Greg Law is a C++ debugging expert, well-known conference speaker, and the founder of Undo. Greg has over 20 years’ experience in the software industry working for the pioneering British computer firm Acorn, as well as NexWave and Solarflare.

Determined to build a tool to ease the pain of debugging complex software, he started his company Undo from his garden shed. Now the company is established as the time travel debugging company for Linux. He lives in Cambridge, UK with his wife and two children; and in his spare time, he likes to code and create free C/C++ debugging tutorial videos to share on Undo’s free resource center: https://undo.io/resources/gdb-watchpoint 

Tell us about your company

Undo provides advanced debugging technology that helps engineers solve the most complex software issues in semiconductor design, EDA tools, networking, and other at-scale mission-critical environments. Our solutions are trusted by engineers at top technology companies worldwide to accelerate debugging — enabling them to boost engineering productivity and get to market faster.

What problems are you solving?

Most of the world’s software is not really understood by anyone. It goes wrong, and no one knows why. Often people don’t even know why or how it works!

At Undo, we allow software engineers to see exactly what their code did and why, letting them easily root-cause even the most difficult issues. Our technology records program execution in full detail, enabling developers to replay and analyze exactly how an issue occurred, eliminating guesswork and dramatically accelerating root-cause analysis. By making debugging deterministic and shareable, we also improve collaboration between and within engineering teams, reducing miscommunication and time wasted on reproducing issues.

What application areas are your strongest?

Any industry dealing with millions of lines of mission-critical code — where a single bug can cost millions — benefits from Undo’s ability to provide precise, replayable debugging insights. Undo is strongest in industries where software complexity, reliability, and debugging efficiency are critical. Our technology is widely used in:

  • EDA / Computational Software – Engineers rely on Undo to debug intricate design and verification tools, ensuring semiconductor development stays on schedule and resolving customer issues faster.
  • Semiconductor design – Undo enables semiconductor companies to debug complex multithreaded applications efficiently.
  • Databases – Our time travel debugging solution helps engineers building complex multithreaded data management systems to resolve hard-to-reproduce issues.
  • Networking – We assist in diagnosing failures in networking operating systems as well as routers/switches, where intermittent issues and concurrency bugs are notoriously difficult to debug.
  • Financial technology – Undo is used in trading platforms and risk management systems, where milliseconds matter and reliability is paramount.

What keeps your customers up at night?

Our customers sleep soundly! But before they become a customer, they face some serious challenges that keep them up at night:

  • Development bottlenecks – Their engineering teams are stuck in a debugging tarpit, spending days or weeks diagnosing elusive issues instead of shipping new features.
  • Missed deadlines – Product releases slip because debugging complex systems is slow and unpredictable.
  • Production failures – Bugs escape into production, causing costly downtime, reputational damage, and support escalations.
  • Incomplete coverage – In design and verification, incomplete modelling and insufficient test coverage increase the risk that a fabricated chip won’t perform as expected under real workloads. A mistake at this stage can be a multi-million-dollar disaster — or worse, require a complete respin.

Undo removes the guesswork, enabling teams to diagnose issues quickly and confidently — so they can focus on delivering high-performance, reliable software and hardware on schedule.

What does the competitive landscape look like and how do you differentiate?

Our main competition remains old-fashioned printf debugging, maybe a bit of GDB. Most engineers are still using tools and techniques that require them to guess what happened, recompile, rerun, and hope the bug reappears. With Undo, unlike printf-based debugging, engineers can ask questions about their program’s behavior without recompiling and rerunning.

Compared to GDB, Undo tells you about the past: exactly what happened, past tense. There are a few open-source projects trying to offer similar capabilities, but they don’t scale to the size or complexity of the systems our customers work on. Time travel debugging at enterprise scale is a hard problem. We’ve spent over a decade making it reliable, fast, and usable for real-world software teams.

What new features/technology are you working on?

A lot! One interesting thing is generating waveform views (e.g., a VCD file) from a recording. This is particularly valuable for silicon engineers using SystemC or writing C++ models. It lets them analyse software behavior in the familiar, signal-level style they’re used to from RTL simulation.

One of SystemC’s big advantages is that you can compile and run your model like regular C++, without needing a heavyweight simulator. Undo builds on that: you keep the simplicity and speed of native execution, without giving up waveforms or the power of time travel debugging. It’s the best of both worlds.
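
For the curious, here is a minimal sketch of the VCD-emission side of such a flow; the signals and their history are invented, and in the real flow the values would come from an Undo recording rather than a hard-coded list:

```python
# Minimal VCD (Value Change Dump) writer. The recorded "trace" here
# is invented; in practice the values would come from a recording of
# a SystemC/C++ model.
trace = [  # (time_ns, clk, req) -- hypothetical signal history
    (0, 0, 0), (5, 1, 0), (10, 0, 1), (15, 1, 1), (20, 0, 0),
]

with open("model.vcd", "w") as f:
    f.write("$timescale 1ns $end\n")
    f.write("$scope module top $end\n")
    f.write("$var wire 1 ! clk $end\n")   # '!' and '@' are VCD id codes
    f.write("$var wire 1 @ req $end\n")
    f.write("$upscope $end\n$enddefinitions $end\n")
    last_clk, last_req = None, None
    for t, clk, req in trace:
        f.write(f"#{t}\n")                # timestamp
        if clk != last_clk:
            f.write(f"{clk}!\n")          # value change on clk
        if req != last_req:
            f.write(f"{req}@\n")          # value change on req
        last_clk, last_req = clk, req

print("wrote model.vcd, viewable in GTKWave and similar tools")
```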

How do customers normally engage with your company?

Our customers typically engage with Undo by testing it on real-world debugging challenges. A common approach is to take a past issue — one that was exceptionally painful to diagnose — back out the fix, and then re-run the debugging process using Undo. This allows them to directly compare the traditional approach with Undo’s time travel debugging, highlighting the drastic reduction in time and effort required to find the root cause.

Once they see how much easier debugging can be, they apply Undo to an unsolved, high-priority issue. The ability to instantly replay program execution and see exactly what happened — without relying on logs or guesswork — proves so effective that teams quickly adopt Undo as a standard debugging tool across their organization.

Request a Demo

Also Read:

CEO Interview with Jonathan Klamkin of Aeluma

CEO Interview with Brad Booth of NLM Photonics

CEO Interview with Jonas Sundqvist of AlixLabs