Even HBM Isn’t Fast Enough All the Time
by Jonah McLeod on 04-07-2025 at 6:00 am

Why Latency-Tolerant Architectures Matter in the Age of AI Supercomputing

High Bandwidth Memory (HBM) has become the defining enabler of modern AI accelerators. From NVIDIA’s GB200 Ultra to AMD’s MI400, every new AI chip boasts faster and larger stacks of HBM, pushing memory bandwidth into the terabytes-per-second range. But beneath the impressive specs lies a less obvious truth: even HBM isn’t fast enough all the time. And for AI hardware designers, that insight could be the key to unlocking real performance.

The Hidden Bottleneck: Latency vs Bandwidth

HBM solves one side of the memory problem—bandwidth. It enables thousands of parallel cores to retrieve data from memory without overwhelming traditional buses. However, bandwidth is not the same as latency.

Even with terabytes per second of bandwidth available, individual memory transactions can still suffer from delays. A single miss in a load queue might cost dozens of clock cycles. The irregular access patterns typical of attention layers or sparse matrix operations often disrupt predictive mechanisms like prefetching. In many systems, memory is shared across multiple compute tiles or chiplets, introducing coordination and queuing delays that HBM can’t eliminate. And despite the vertically stacked nature of HBM, DRAM row conflicts and scheduling contention still occur.

In aggregate, these latency events create performance cliffs. While the memory system may be technically fast, it’s not always fast enough in the precise moment a compute engine needs data—leading to idle cycles in the very units that make these chips valuable.
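
To see how modest per-transaction delays erode utilization even when bandwidth is plentiful, consider a rough back-of-the-envelope model. The miss rate and latencies below are illustrative assumptions, not measurements of any particular chip:

```python
# Back-of-the-envelope: how occasional long-latency misses erode vector utilization.
# All numbers are illustrative assumptions, not measurements of any specific device.

hit_latency_cycles = 4      # assumed latency when data is already staged near the core
miss_latency_cycles = 60    # assumed load-queue miss penalty ("dozens of clock cycles")
miss_rate = 0.05            # assume 1 in 20 accesses misses the prefetch/load queue

avg_latency = (1 - miss_rate) * hit_latency_cycles + miss_rate * miss_latency_cycles
print(f"Average load latency: {avg_latency:.1f} cycles")   # ~6.8 cycles

# In a simple in-order model where the vector unit stalls for the full load latency,
# relative throughput is roughly the ratio of the ideal latency to the average latency.
utilization = hit_latency_cycles / avg_latency
print(f"Approximate vector utilization: {utilization:.0%}")  # ~59%
```

Even a 5% miss rate, under these assumptions, leaves the vector unit idle for roughly 40% of its cycles, which is the "performance cliff" described above.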

Vector Cores Don’t Like to Wait

AI processors, particularly those optimized for vector and matrix computation, are deeply dependent on synchronized data flow. When a delay occurs—whether due to memory access, register unavailability, or data hazards—entire vector lanes can stall. A brief delay in data arrival can halt hundreds or even thousands of operations in flight.

This reality turns latency into a silent killer of performance. While increasing HBM bandwidth can help, it’s not sufficient. What today’s architectures truly need is a way to tolerate latency—not merely race ahead of it.

The Case for Latency-Tolerant Microarchitecture

Simplex Micro, a patent-rich startup based in Austin, has taken on this challenge head-on. Its suite of granted patents focuses on latency-aware instruction scheduling and pipeline recovery, offering mechanisms to keep compute engines productive even when data delivery lags.

Among their innovations is a time-aware register scoreboard, which tracks expected load latencies and schedules operations accordingly, avoiding data hazards before they occur. Another key invention enables zero-overhead instruction replay, allowing instructions delayed by memory access to reissue cleanly and resume without pipeline disruption. Additionally, Simplex has introduced loop-level out-of-order execution, enabling independent loop iterations to proceed as soon as their data dependencies are met, rather than being held back by artificial ordering constraints.
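
As a rough illustration of the scoreboard idea (a conceptual sketch only, not Simplex Micro's patented implementation), the issue logic can be modeled as a table of register-ready cycles that is consulted before dispatch. The latencies and instruction mix below are assumptions for illustration:

```python
# Minimal, illustrative model of a time-aware register scoreboard.
# This is a conceptual sketch, not Simplex Micro's design; all latencies
# and the toy instruction format are assumptions for illustration.

class TimeAwareScoreboard:
    def __init__(self):
        self.ready_cycle = {}  # register name -> cycle at which its value is ready

    def can_issue(self, srcs, now):
        """Issue only if every source register is (or will be) ready this cycle."""
        return all(self.ready_cycle.get(r, 0) <= now for r in srcs)

    def issue(self, dst, latency, now):
        """Record when the destination register's result will be available."""
        self.ready_cycle[dst] = now + latency

sb = TimeAwareScoreboard()
cycle = 0
# (dst, srcs, latency): a long-latency load feeds a vector add,
# while an independent multiply is free to slip ahead of the stalled add.
program = [("v1", [], 40),      # vload v1      (assumed 40-cycle latency)
           ("v2", ["v1"], 4),   # vadd  v2, v1  (must wait for v1)
           ("v3", [], 4)]       # vmul  v3      (independent)

pending = list(program)
while pending:
    issued = [op for op in pending if sb.can_issue(op[1], cycle)]
    for dst, srcs, lat in issued:
        sb.issue(dst, lat, cycle)
        print(f"cycle {cycle:3d}: issue op writing {dst}")
        pending.remove((dst, srcs, lat))
    cycle += 1  # a real scheduler would also replay ops whose data arrives late
```

Run as-is, the independent multiply issues in cycle 0 alongside the load, while the dependent add is held until the cycle the load's data is known to arrive, which is the essence of scheduling around latency rather than reacting to it.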

Together, these technologies form a microarchitectural toolkit that keeps vector units fed and active—even in the face of real-world memory unpredictability.

Why It Matters for Hyperscalers

The implications of this design philosophy are especially relevant for companies building custom AI silicon—like Google’s TPU, Meta’s MTIA, and Amazon’s Trainium. While NVIDIA has pushed the envelope on HBM capacity and packaging, many hyperscalers face stricter constraints around power, die area, and system cost. For them, scaling up memory may not be a sustainable strategy.

This makes latency-tolerant architecture not just a performance booster, but a practical necessity. By improving memory utilization and compute efficiency, these innovations allow hyperscalers to extract more performance from each HBM stack, enhance power efficiency, and maintain competitiveness without massive increases in silicon cost or thermal overhead.

The Future: Smarter, Not Just Bigger

As AI workloads continue to grow in complexity and scale, the industry is rightly investing in higher-performance memory systems. But it’s increasingly clear that raw memory bandwidth alone won’t solve everything. The real competitive edge will come from architectural intelligence—the ability to keep vector engines productive even when memory stalls occur.

Latency-tolerant compute design is the missing link between cutting-edge memory technology and real-world performance. And in the race toward efficient, scalable AI infrastructure, the winners will be those who optimize smarter—not just build bigger.

Also Read:

RISC-V’s Privileged Spec and Architectural Advances Achieve Security Parity with Proprietary ISAs

Harnessing Modular Vector Processing for Scalable, Power-Efficient AI Acceleration

An Open-Source Approach to Developing a RISC-V Chip with XiangShan and Mulan PSL v2


Podcast EP281: A Master Class in the Evolving Ethernet Standard with Jon Ames of Synopsys
by Daniel Nenni on 04-04-2025 at 10:00 am

Dan is joined by Jon Ames, principal product manager for the Synopsys Ethernet IP portfolio. Jon has been working in the communications industry since 1988 and has led engineering and marketing activities from the early days of switched Ethernet to the latest data center and high-performance computing Ethernet technologies.

Dan explores the history of the Ethernet standard with Jon, who provides an excellent overview of how the standard has evolved to deliver the high-performance, low latency and low power capabilities demanded by contemporary AI-based systems. Jon explains the enduring compatibility of the standard, allowing multiple generations to coexist in legacy as well as new systems.

Jon spends some time explaining the impact the Ultra Ethernet standard is having on advanced systems in terms of capabilities such as network utilization and latency. How current and future Ethernet standards will impact the industry is also explored. The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview with Cyril Sagonero of Keysom
by Daniel Nenni on 04-04-2025 at 6:00 am

Cyril Sagonero is the CEO and co-founder of Keysom, a deeptech company focused on RISC-V custom processors. In 2019, he founded Keysom with Luca Testa to address inefficiencies in off-the-shelf processors, developing tailored solutions for various industries. Under his leadership, the company secured €4 million in funding in September 2024 to advance its technology and expand internationally. Previously, he co-founded Koncepto, specializing in hardware and software development, and worked as a lecturer and pedagogical manager at ESTEI in Bordeaux, focusing on electronics and embedded systems.

Tell us about your company.
Keysom is a French startup funded in 2022, specializing in the architecture of processor cores based on the RISC-V Instruction Set Architecture (ISA).

Our mission is to empower companies by providing RISC-V IP with a no-code architectural exploration tool that automatically customizes processors tailored to specific application requirements. This approach ensures optimal power, performance, and area (PPA) trade-offs, enabling industries to create processors without requiring in-depth technical expertise.

With the support of organizations like Alpha-RLH, ADI, Unitec, Région Nouvelle-Aquitaine, and BPIFrance, Keysom is committed to advancing processor design autonomy and efficiency within the semiconductor industry.

What problems are you solving?
At Keysom, we address the growing demand for customized processor architectures in an era where performance efficiency, power consumption, and cost optimization are critical.

Traditional off-the-shelf processors often come with unnecessary features that increase area usage, power consumption, and cost—a significant issue for industries needing embedded systems and edge computing solutions. We solve this by offering application-specific RISC-V processor designs that precisely align with our clients’ performance and power requirements.

Our no-code architectural exploration platform enables semiconductor companies to automatically generate optimized processor cores without requiring deep hardware design expertise. This solution accelerates time-to-market, reduces engineering costs, and improves energy efficiency—key challenges in markets like IoT, AI, automation, and robotics.

By leveraging dynamic reconfigurability and custom instruction sets, we empower companies to create processors that are uniquely tailored to their applications, unlocking better performance-per-watt and cost-effectiveness than traditional solutions.
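
Conceptually, this kind of automated architectural exploration amounts to searching a space of core configurations and scoring each candidate on PPA. The sketch below is purely illustrative, with invented knobs and toy cost models; it does not represent Keysom's Core Explorer or its internal models:

```python
# Purely illustrative sketch of PPA-driven architecture exploration.
# Configuration knobs and cost models are invented for illustration only.
from itertools import product

pipeline_depths = [3, 5]
has_multiplier  = [False, True]
icache_kb       = [4, 16]

def score(depth, mul, cache):
    perf  = depth * 1.2 + (2.0 if mul else 0.0) + cache * 0.05   # toy performance model
    power = depth * 0.8 + (1.5 if mul else 0.0) + cache * 0.10   # toy power model
    area  = depth * 0.5 + (1.0 if mul else 0.0) + cache * 0.20   # toy area model
    return perf / (power * area)  # simple performance-per-watt-per-area figure of merit

best = max(product(pipeline_depths, has_multiplier, icache_kb),
           key=lambda cfg: score(*cfg))
print("best configuration (depth, multiplier, I-cache KB):", best)
```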

What application areas are your strongest?
Our strongest application areas are IoT, Edge AI, and critical industrial systems.

In IoT, our customizable RISC-V processors deliver ultra-low power consumption and optimized performance for smart sensors, connected devices, and autonomous systems.

For Edge AI, we provide tailored architectures that accelerate AI inference tasks directly at the edge, enabling real-time decision-making with minimal latency and high energy efficiency.

In critical industrial systems, such as robotics, automation, and embedded control systems, our dynamic processor designs ensure high reliability, real-time performance, and long product lifecycles—essential requirements for mission-critical applications.

What keeps your customers up at night?
Our customers are primarily focused on achieving the right balance of performance, cost, and optimization. They need processors that deliver high performance while also being energy-efficient and cost-effective. However, they also seek tools that are open, easy to use, and capable of supporting customization without a steep learning curve.

The RISC-V open standard is crucial to them, as it enables innovation and provides the flexibility to design processors tailored to their specific needs without being locked into proprietary solutions.

At the same time, customers are increasingly looking for ways to reduce dependency on large EDA providers. By providing a no-code platform for designing RISC-V-based processors, we offer a path to greater autonomy and the ability to innovate without heavy reliance on expensive, complex design tools. This gives them more control over their designs, improving both agility and cost-effectiveness.

What does the competitive landscape look like and how do you differentiate?
The competitive landscape in the semiconductor and RISC-V industries is extremely challenging. We are a French company, and we compete on a global scale, facing some of the largest and most established players in our industry.

However, we have several unique advantages that help us differentiate ourselves. Firstly, we benefit from the strong support of European investments, which provide us with the resources to accelerate our growth and scale innovation.

Additionally, our technology and IP Core offer immediate value to our customers. By providing customizable RISC-V solutions that are directly aligned with application-specific needs, we enable faster time-to-market, better power performance, and cost efficiency—qualities that set us apart in a highly competitive environment.

What new features/technology are you working on?
We are constantly working on advancing our technology to meet the evolving needs of our customers. Currently, we are focusing on developing even more optimized 32-bit cores, which will offer better performance and efficiency for a wide range of applications.

Additionally, we are excited about our upcoming Edge AI accelerator, which is specifically designed to handle the growing demand for real-time AI inference at the edge. This product will enable our customers to deploy AI models directly in distributed environments, with low latency and low power consumption.

Lastly, a significant area of research for us is reconfigurable architectures. We’re exploring how to create architectures that can dynamically adapt to different workloads and requirements, offering greater flexibility and optimization for diverse applications. This innovation will allow us to provide more customizable and adaptive solutions to our customers in industries like IoT, AI, and industrial automation.

How do customers normally engage with your company?
Customers typically engage with us through multiple channels. We are actively building a network of sales representatives across Europe, the USA, and Asia to better serve our global customer base.

We also meet our customers in person at major industry events, such as Embedded World 2025 in Germany.

In addition, we believe in sharing knowledge and insights with the broader community, so we actively publish and share our expertise to engage with professionals and experts in our field.

Finally, to provide hands-on experience with our solutions, we offer a free trial of our EDA Cloud Keysom Core Explorer. This gives potential customers the opportunity to explore our platform and see firsthand how it can benefit their projects. For immediate access, feel free to contact me directly, and I’ll be happy to assist.

Contact Keysom

Also Read:

CEO Interview with Matthew Stephens of Impact Nano

CEO Interview with Dr Greg Law of Undo

CEO Interview with Jonathan Klamkin of Aeluma


A Perfect Storm for EUV Lithography
by Fred Chen on 04-03-2025 at 6:00 am

Electron blur, stochastics, and now polarization are all becoming stronger influences in EUV lithography as pitch continues to shrink

As EUV lithography continues to evolve, targeting smaller and smaller pitches, new physical limitations continue to emerge as formidable obstacles. While stochastic effects have long been recognized as a critical challenge [1,2], and electron blur has more recently been considered in depth [3], polarization effects [4,5] are now becoming a growing concern in image degradation. As the industry moves beyond the 2nm node, these influences create a perfect storm that threatens the quality of EUV-printed features. Loss of contrast from blur and polarization makes it more likely for stochastic fluctuations to cross the printing threshold [3].

Figure 1 shows the combined effects of polarization, blur, and stochastics for 18 nm pitch as expected on a 0.55 NA EUV lithography system. Dipole-induced fading [6] is ignored as a relatively minor effect. There is a 14% loss of contrast if unpolarized light is assumed [5], but electron blur has a more significant impact (~50% loss of contrast) in aggravating stochastic electron behavior in the image. The total loss of contrast is obtained by multiplying the contrast reduction from polarization by the contrast reduction from electron blur.
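
Using the figures quoted above (a 14% loss from unpolarized illumination and roughly 50% from electron blur, the latter being an approximate value), the multiplication works out as:

```latex
C_{\text{total}} \;=\; C_{\text{pol}} \times C_{\text{blur}}
\;\approx\; (1 - 0.14)(1 - 0.50) \;=\; 0.86 \times 0.50 \;\approx\; 0.43
```

That is, roughly a 57% total loss of contrast relative to an ideal TE-polarized, blur-free image.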

Figure 1. 9 nm half-pitch image as projected by a 0.55 NA, 13.5 nm wavelength EUV lithography system. No dipole-induced image fading [6] is included. The assumed electron blur is shown on the right. The stochastic electron density plot in the center assumes unpolarized light (50% TE, 50% TM) [5]. A 20 nm thick metal oxide resist (20/µm absorption) was assumed.

The edge “roughness” is severe enough to count as being defective. The probability of a stochastic fluctuation crossing the printing threshold is not negligible. As pitch decreases, we should expect this to grow worse, due to the more severe impact of electron blur [3] as well as the loss of contrast for unpolarized light [4,5] (Figure 2).

Figure 2. Reduction of image contrast worsens with smaller pitch. The stochastic fluctuations in electron density also grow correspondingly more severe. Aside from pitch, the same assumptions were used as in Figure 1.

Note that even for the 14 nm pitch case, the 23% loss of contrast from going from TE-polarized to unpolarized is still less than the loss of contrast from electron blur (~60%). As pitch continues to decrease, the polarization contribution will grow, along with the increasing impact from blur. As noted in the examples considered above, although polarization is recognized within the lithography community as a growing concern, the contrast reduction from electron blur is still more significant. Therefore, we must expect any useful analysis of EUV feature printability and stochastic image fluctuations to include a realistic electron blur model.

References

[1] P. de Bisschop, “Stochastic effects in EUV lithography: random, local CD variability, and printing failures,” J. Micro/Nanolith. MEMS MOEMS 16, 041013 (2017).

[2] F. Chen, Stochastic Effects Blur the Resolution Limit of EUV Lithography.

[3] F. Chen, A Realistic Electron Blur Function Shape for EUV Resist Modeling.

[4] F. Chen, The Significance of Polarization in EUV Lithography.

[5] H. J. Levinson, “High-NA EUV lithography: current status and outlook for the future,” Jpn. J. Appl. Phys. 61 SD0803 (2022).

[6] T. A. Brunner, J. G. Santaclara, G. Bottiglieri, C. Anderson, P. Naulleau, “EUV dark field lithography: extreme resolution by blocking 0th order,” Proc. SPIE 11609, 1160906 (2021).

Also Read:

Variable Cell Height Track Pitch Scaling Beyond Lithography

A Realistic Electron Blur Function Shape for EUV Resist Modeling

Powering the Future: How Engineered Substrates and Material Innovation Drive the Semiconductor Revolution

Rethinking Multipatterning for 2nm Node


Podcast EP280: A Broad View of the Impact and Implications of Industrial Policy with Economist Ian Fletcher
by Daniel Nenni on 04-02-2025 at 10:00 am

Dan is joined by economist Ian Fletcher. Ian is on the Coalition for a Prosperous America Advisory Board. He is the author of Free Trade Doesn’t Work, coauthor of The Conservative Case against Free Trade, and author of the new book Industrial Policy for the United States: Winning the Competition for Good Jobs and High-Value Industries. He has been senior economist at the Coalition, a research fellow at the US Business and Industry Council, an economist in private practice, and an IT consultant.

In this far-reaching and insightful discussion, Dan explores the history, impact and future implications of the industrial policies of the US and other nations around the world with Ian. Ian explains the beginnings of industrial policy efforts in the US and the impact these programs have had across a wide range of technologies and industries. Ian provides his views of what has worked and what needs re-focus to achieve the desired results.

Through a series of historical and potential future scenarios Ian illustrates the complexity of industrial policy and the substantial impacts it has had on the world around us.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Big Picture PSS and Perspec Deployment
by Bernard Murphy on 04-02-2025 at 6:00 am

I met Moshik Rubin (Sr. Group Director, Product Marketing and BizDev in the System Verification Group at Cadence) at DVCon to talk about PSS (the Portable Stimulus Standard) and Perspec, Cadence’s platform to support PSS. This was the big-picture view I was hoping for, following the more down-in-the-details views from earlier talks.

The standard and supporting tools can do many things, but every technology has a compelling sweet spot, something you probably couldn’t do any other way. Moshik provided some big-picture answers in terms of what Advantest and Qualcomm are doing today. Both have built bridges between testing objectives, in one case for hardware/software integration, in the other between pre- and post-silicon testing. Each provides a clear answer to the question: “where is PSS the only reasonable solution?”

Qualcomm automating hardware/software integration testing

Software interacts with embedded hardware functions (video, audio, AI, etc.) through memory-mapped registers, the norm for most hardware these days. A register has an address in the memory map along with a variety of properties; software interacts with the hardware by writing/reading this address. This interface definition is the critical bridge between hardware and software and must be validated thoroughly.

I remember many years ago system AEs wrote apps to generate these definitions as header files and macros, together with documentation to guide driver/firmware developers. As the design evolved, they would update the app to reflect changes. This worked well, but the bridge was manually built and maintained. As the number of registers and properties on those registers grew, opportunities for mistakes also grew. (One of my favorites, should a flag be implemented as “read” or “clear on read”? Clear on read seems an easy and fast choice but can hide some difficult bugs.)
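
A small illustrative model of why the “read” versus “clear on read” choice matters; the register class, flag semantics, and polling scenario below are generic illustrations, not any specific IP or driver:

```python
# Illustrative model of a status flag implemented as "read" vs "clear on read".
# The register and the polling scenario are hypothetical, for illustration only.

class StatusRegister:
    def __init__(self, clear_on_read):
        self.clear_on_read = clear_on_read
        self.flag = 0

    def hw_set(self):          # hardware raises the flag (e.g. "data ready")
        self.flag = 1

    def sw_read(self):         # software reads the memory-mapped address
        value = self.flag
        if self.clear_on_read:
            self.flag = 0      # side effect: the act of reading clears the event
        return value

reg = StatusRegister(clear_on_read=True)
reg.hw_set()
debugger_peek = reg.sw_read()   # a stray read (debugger, second driver thread)...
driver_poll   = reg.sw_read()   # ...and the driver now sees 0 and misses the event
print(debugger_peek, driver_poll)  # 1 0 -> exactly the kind of hard-to-find bug noted above
```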

Qualcomm chose to automate this testing through a single source of truth flow based on PSS and Perspec. They first develop PSS descriptions of use-case scenarios and leaf-level (atomic) behaviors, abstracted from detailed implementation, then develop test realizations (mapping the PSS level to target test engine) for each target. These are a native mode (C running on the host processor interacting with the rest of the SoC), a UVM mode which can interact directly with a UVM testbench, and a firmware reference mode which generates documentation to be used by driver/software developers. As the design evolves, the PSS definition is updated (intentionally, or to fix bugs exposed in regression testing), and all these levels are updated in sync.
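
The “single source of truth” idea can be sketched abstractly as one scenario description rendered into several target realizations. The snippet below is a conceptual Python illustration only; it is not PSS syntax or Perspec output, and the block and action names are invented:

```python
# Conceptual illustration of one abstract scenario driving multiple test realizations.
# Not PSS syntax or Perspec output; names and targets are invented for illustration.

scenario = {  # abstract, implementation-independent description of a use case
    "name": "video_decode_then_dma",
    "steps": [("video", "start_decode"), ("dma", "copy_frame"), ("video", "check_status")],
}

def to_native_c(s):      # realization 1: C running on the host processor
    return "\n".join(f"{blk}_{act}();" for blk, act in s["steps"])

def to_uvm_sequence(s):  # realization 2: stimulus items for a UVM testbench
    return [f"do_item({blk}_{act})" for blk, act in s["steps"]]

def to_firmware_doc(s):  # realization 3: reference documentation for driver developers
    return [f"Step {i+1}: {blk}.{act}" for i, (blk, act) in enumerate(s["steps"])]

print(to_native_c(scenario))
print(to_uvm_sequence(scenario))
print("\n".join(to_firmware_doc(scenario)))
# Edit the scenario once and every realization regenerates in sync.
```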

Incidentally, I know as I’m sure Qualcomm knows that there are already tools to build register descriptions, header files, and test suites. I see Qualcomm’s approach as complementary. They need PSS suites to test across the vertical range of design applications and to define synthetic tests which must probe system-level behaviors not fully comprehended in register descriptions. Seems like an opportunity for those register tools to integrate in some way with this PSS direction.

This is a big step forward from the ad-hoc support I remember.

Advantest automating pre-/post-silicon testing

Advantest showed a demo of their flow at DVCon, apparently very well attended. Connecting pre- and post-silicon testing seems to be a hot button for a lot of folks. Historically it has been difficult to automate a bridge between these domains. Pre-silicon verification could generate flat files of test vectors that could be run on an ATE tester or in a bench setup, but that was always cumbersome and limited. Now Cadence (and others) have worked with Advantest to directly use the PSS developed in pre-silicon testing for post-silicon validation. The Advantest solution (SiConic) unifies pre-silicon and post-silicon in an automated and versatile environment by connecting the device functional interfaces (USB, PCIe, ETH) to external interfaces such as JTAG, SPI, UART, and I2C, enabling rich PSS content to execute directly against silicon. That’s a major advance for post-silicon testing, now going beyond post-silicon exercisers in the complexity of tests that can be run and in helping to isolate root causes of failures.

I should add one more important point. It seems tedious these days to say that development cycles are being squeezed hard, but for the hyperscalers and other big system vendors this has never been more true. They are tied to market and Wall Street cycles, requiring that they deliver new advances each year. That puts huge pressure on all in-house development, on test development as much as design development. Anywhere design teams can find canned, proven content, they are going to snatch it up. In test they are looking for more test libraries, VIP, and system VIP. Perspec is supported by extensive content for Arm, RISC-V, and x86 platforms, including System VIP building blocks for system testbench generation, traffic generation, performance analysis and scoreboarding.

You can learn more about Cadence Perspec HERE.

Also Read:

Metamorphic Test in AMS. Innovation in Verification

Compute and Communications Perspectives on Automotive Trends

Bug Hunting in Multi Core Processors. Innovation in Verification


Elon Musk Given CHIPS Act & AI Oversight – Mulls Relocation of Taiwanese Fabs
by Robert Maire on 04-01-2025 at 8:00 am

– Trump gives CHIPS Act & AI oversight to DOGE/Musk-“Tech Support”
– CHIPS Act to switch from incentive based to tariff/punitive based
– Musk to be responsible for US AI policy & security- Will rule ChatGPT
– Talks underway to relocate Taiwan fabs & staff to avoid China threat

Donald Trump gives CHIPS Act & AI problem to his “tech support” guru, Musk

It was announced by the White House late Monday that the Trump administration is moving responsibility for the much maligned and disliked CHIPS Act out from under the Commerce Department to be under DOGE’s responsibility.

As part of the reorganization, all computer and AI security and policy, most of which was under NIST, which is also part of the Department of Commerce, will also move to be under the control of DOGE.

This will give Musk effective control of Robert Altman and ChatGPT and all things AI which he has long sought by various means including purchase and lawsuits.

It seems that this is a huge reward by Trump for Musk’s support and loyalty to Trump, as well as Musk being the point man for cutting government spending.

Trump said, “I can think of nobody, in the world, better suited than Elon to take on these highly complex problems with computer chips and artificial intelligence.” He went on to say, “Elon will turn the money losing, stupid CHIPS Act into a tariff driven, money making, American job making thing of beauty,” and further, “Nobody knows more about artificial intelligence than Elon and his Tesler cars with computer ‘brains’ that can drive themselves.”

In discussing the transfer of CHIPS Act & AI to DOGE from the Department of Commerce, White House press secretary Leavitt pointed to Musk’s very strong technology background versus Commerce Department head Lutnick’s primarily financial acumen.

A potential solution to the Taiwan issue as well?

In prior discussions, Musk had commented that the primary reason for China wanting to regain Taiwan was for China to get the critically important semiconductor manufacturing located there. It has been reported by several sources that Musk is putting together a potential plan to move most of the critical, advanced, semiconductor manufacturing out of Taiwan thereby reducing China’s desire to retake the island.

The plan would entail moving the most advanced fabs in Taiwan first, followed by the less capable fabs later. This would obviously be a huge undertaking but would likely be much less costly than a full scale war over Taiwan between the US and China.

Much of the equipment could be moved into already planned fabs in Arizona & Ohio etc. New fab shells would take one to two years to build to house the moved equipment.

Perhaps the bigger issue is where to house all the Taiwanese engineers and their families that would move along with the equipment & fabs. Estimates are that over 300,000 people would have to eventually emigrate to the US. The administration would likely make room for them, offset by the far larger number of illegal immigrants expected to be deported, a process already underway.

Make Greenland Green again!

Trump’s interest in Greenland may have a lot more to it than meets the eye. Greenland is rich in rare earth elements, critical to the electronics industry. Greenland is not really green but rather ice covered, with hundreds of miles of glaciers amid cold climates. Greenland has plenty of hydroelectric power and water, coincidentally what AI data centers and semiconductor fabs need most. In fact, semiconductor fabs and power hungry data centers would be perfect in a place that has excess water, electricity, and perhaps most importantly low temperatures to cool those power hungry, overheated facilities. The heat from those data centers and fabs would likely melt much of the ice cover in Greenland, thereby producing more needed water. In the end, the added heating could help turn Greenland “greener” from its current arctic facade (so much for global warming concerns). Indeed, Greenland might be an alternative place to move some relocated Taiwanese fabs and their engineers; they would just have to acclimate to the colder environment.

TaiwanTechTransfer working group & signal chat

There is a secret Signal chat group that is overseeing the semiconductor technology transfer out of Taiwan and Elon Musk is the moderator of the group. Here is your private secret invite link:

TaiwanTechTransfer Secret Signal Chat

Remember, it’s top secret; don’t share it with anyone!

Merger of Global Foundries & UMC makes for a GLUM foundry

It has been reported in various news sources that Global Foundries and UMC are discussing a merger with the combined entity to be renamed and trading under the ticker symbol GLUM. The combined market share of 6% each would make for a total of 12% market share thereby surpassing Samsung’s roughly 10% market share in foundry. However both foundries produce primarily middling to trailing edge devices that are under attack from the quickly growing China fab capacity thus the name GLUM is appropriate given the future prospects of the market they serve.

Happy April Fools Day!!!!!!

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies.

We have been covering the space longer and been involved with more transactions than any other financial professional in the space.

We provide research, consulting and advisory services on strategic and financial matters to both industry participants and investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

About Semiwatch

Semiconductor Advisors provides this subscription based research newsletter, Semiwatch, about the semiconductor and semiconductor equipment industries. We also provide custom research and expert consulting services for both investors and industry participants on a wide range of topics from financial to technology and tactical to strategic projects. Please contact us for these services as well as for a subscription to Semiwatch.

Visit Our Website

Also Read:

CHIPS Act dies because employees are fired – NIST CHIPS people are probationary

Trump whacking CHIPS Act? When you hold the checkbook, you make up the new rules

AMAT- In line QTR – poor guide as China Chops hit home- China mkt share loss?


CEO Interview with Matthew Stephens of Impact Nano
by Daniel Nenni on 03-31-2025 at 10:00 am

Matthew Stephens, co-founder and CEO of Impact Nano, brings over 20 years of experience commercializing advanced materials. Prior to co-founding Impact Nano, Matt was VP Sales and Products at Air Liquide Advanced Materials and held C-level leadership roles at Voltaix and Metem. Matt has a Ph.D. in Chemistry from the University of Wisconsin and an MBA from INSEAD. He started his career as an industrial research scientist in the Boston area and is a co-inventor of over a dozen U.S. patents.

Tell us about your company.

Impact Nano is a leading Tier 2 North American supplier of advanced materials to the semiconductor industry. We develop and manufacture a range of products that are used in the most advanced chip manufacturing processes, including EUV photoresists and ALD precursors. Our products enable faster, higher-storage-density computer chips with lower power consumption.

Our expertise in ligand, organometallic, silicon and fluorine chemistries, and our ability to safely and sustainably scale-up production of ultra-high purity materials, allow us to support critical innovations in the semiconductor and other high-tech industries. Other applications for our products include nanometer films and coatings for the electronics and automotive industries, energy storage applications, and pharmaceuticals.

To expand these capabilities, we recently created Impact Chemistry, an independent subsidiary for research and development and kilo-scale production in Kingston, Ontario. Impact Chemistry specializes in product development and custom synthesis services for leading companies in the semiconductor industry. The development team has strong expertise in organometallic, inorganic and materials chemistry, safely synthesizing challenging precursors, and developing tailored processes for its customers.

Impact Chemistry’s and Impact Nano’s capabilities and offerings are highly complementary. Our shared focus on the customer positions us to support all their needs through any stage of the product lifecycle from bench-scale R&D work through to larger volume manufacturing.

What problems are you solving?
  • Materials Innovation: True innovation requires experience and expertise that can be challenging for companies to resource internally. We discover and develop new materials that enable advancements in semiconductor performance and efficiency.
  • Scale-up: Innovation is only the first step. Our ability to take products from discovery to bench-top to industrial scale production allows for atomic-level control and chemical fingerprinting by design.
  • Manufacturing: Manufacturing options in North America for these specialized materials are limited. Our fully equipped and qualified manufacturing facility in Orange, MA, allows for large scale production of these materials for our clients.
  • Sustainability: We’re committed to pursuing materials advancements that enable the green energy transition, reduce the energy demands of computing, and help to decrease the environmental impact of semiconductor manufacturing processes.

What application areas are your strongest?

Our expertise and product portfolio have positioned us as leading suppliers of the materials for several key end-use applications in the semiconductor industry including:

  • EUV photoresist materials
  • ALD/CVD precursors for Si, wide band gap, and neuromorphic devices
  • Etchants for 3D architectures

Impact Nano has expertise in chemical synthesis and characterization, equipment design and fabrication, process development, chemical packaging, and chemical manufacturing operations. We are ISO 9001 certified.

What keeps your customers up at night?

  • Achieving breakthroughs in semiconductor materials performance. Chipmakers and equipment manufacturers require new innovative materials to reduce power consumption, increase performance, or reduce area cost. They need to find suppliers who can address the material challenges of creating chip features at the nanometer scale.
  • Access to scale-up and manufacturing capabilities. Great innovations are not valuable if they cannot be scaled up and manufactured at high volumes.
  • Supply chain reliability. Reliable, ethical, more sustainable supply chains are critical to the industry. Traditional sources of the required materials are often no longer viable for political or environmental reasons. Impact Nano is located in the US and Canada.

What does the competitive landscape look like and how do you differentiate?

The ecosystem of semiconductor materials suppliers exhibits a tiered structure.  Tier 1 suppliers are typically multinational companies that offer a broad array of products, many of which they source from tier 2 suppliers.  Tier 2 suppliers typically possess chemical expertise or equipment, but rarely possess applications insight, scale-up engineering expertise, or the quality mindset required to support atomic level control of thin film deposition.

In contrast, Impact Nano was founded by semiconductor materials supplier veterans who have commercialized several dozen thin film deposition materials and etchants from the lab to HVM in semiconductor fabs. Embedded in the DNA of Impact Nano are the safety and quality mindsets required to safely scale up and automate materials synthesis and purification technologies to serve semiconductor applications.

Our deep expertise in synthetic and analytical chemistry, combined with our scale-up and automation capabilities for large-volume manufacturing, gives us the ability to provide customers with control of the chemical fingerprint of a material at all scales of manufacture.

Impact Nano has demonstrated the ability to manufacture materials at scales ranging from a few grams to over 700 tons per year for demanding semiconductor applications including silicon epitaxy.

What new features/technology are you working on?

  • We are currently working with clients to scale and manufacture a wide range of innovative materials. These include innovative ALD precursors, coating formulations, advanced catalysts, and upstream pharmaceutical reagents.
  • Scale-up and automation are key strengths of Impact Nano. We possess in-house fabrication capabilities, including welding, as well as instrumentation and control expertise that enable us to scale up chemistry in half of the time typically required.

How do customers normally engage with your company?

  • By contacting our experienced sales team: Our experienced sales professionals are readily available to discuss customer needs and provide tailored solutions.
  • Through Tier 1 suppliers. Some end user customers who are eager to control supply chains might ask their local distributor, Tier 1 supplier, or semiconductor equipment partner to work with Impact Nano to manufacture a critical material.
  • Meeting us at industry events and trade shows: We actively participate in industry events and trade shows, showcasing our latest innovations and connecting with clients and partners.
  • Engaging us in research and development projects: We often engage in collaborative research projects with clients. An innovative customer project with Impact Chemistry is best viewed as an extension of the client’s R&D team.
  • Visiting our websites: impact-nano.com and www.impact-chemistry.com

Also Read:

CEO Interview with Jonathan Klamkin of Aeluma

CEO Interview with Brad Booth of NLM Photonics

CEO Interview with Jonas Sundqvist of AlixLabs


An Important Advance in Analog Verification
by Bernard Murphy on 03-31-2025 at 6:00 am

Innovation in analog design moves slowly, not from lack of desire for better methods from designers or lack of effort and ideas from design tech innovators, but simply because the space is so challenging. Continuous time and signals, and variances in ambient/process characteristics represent a multi-dimensional space across which a designer must prove stability and performance of a design. An abstraction seems like the obvious answer but a workable solution has not been easy to find. Mach42 have an ML-based answer which may just be the breakthrough designers have been looking for.

Why Analog Design is Hard

The reference engine for verifying design correctness is SPICE, which is very accurate but also slow to execute. Recent advances can accelerate performance with some compromise in accuracy, but simulator speed is only part of the problem. While an analog design has relatively few active components, simulation must be run across wide ranges of process parameters and ambient conditions to validate behavior. Sampled simulations across these ranges, in the form of Monte-Carlo (MC) analysis or more recently Scaled Sigma Sampling (SSS), are the state of the art. These demand massive compute resources or run times to handle multi-dimensional sampling grids in which process parameters, voltages, currents, RLCs, temperatures, etc. can range between min, max, and nominal values.

That kind of overhead might be mandatory for tape out signoff where SPICE accuracy is essential but can be a real drag on productivity during design exploration, limiting experimentation to an uncomfortable fit between program schedules and massive MC/SSS runtimes. A better approach would be a model abstraction good enough to support fast iteration, while still allowing for full SPICE confirmation as needed.

Mach42 Discovery Platform

Mach42’s Discovery Platform builds a surrogate model using ML methods, harvesting existing simulation results together with additional runs to drive training of the AI architecture. After initial training on available simulation runs, or SPICE runs across a starter grid, Mach42 point to a couple of important innovations in their approach. These include active learning to enhance accuracy around regions in the model with high variance, and a reconfigurable neural net architecture to guide the model to 90% accuracy, or to allow a user to push harder for higher accuracy. I’m told that training takes no more than a few hours to an overnight run.
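
A heavily simplified sketch of the surrogate-plus-active-learning idea appears below. It uses a generic Gaussian-process regressor and a toy stand-in for a SPICE run; Mach42's actual platform uses its own reconfigurable neural-net architecture and real simulation data, so treat this as an illustration of the general technique only:

```python
# Simplified sketch of surrogate modeling with variance-driven active learning.
# Illustrative only: the "circuit" is a toy function and the GP is a stand-in
# for Mach42's proprietary reconfigurable neural-net architecture.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def spice_like_sim(x):
    # Stand-in for a slow SPICE run: gain of a toy amplifier vs. a process parameter.
    return 20.0 * np.tanh(3.0 * (x - 0.5)) + 0.1 * np.sin(25 * x)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(8, 1))          # initial "existing simulation results"
y = spice_like_sim(X).ravel()

model = GaussianProcessRegressor()
grid = np.linspace(0, 1, 200).reshape(-1, 1)
for _ in range(10):                          # active-learning refinement loop
    model.fit(X, y)
    _, std = model.predict(grid, return_std=True)
    x_new = grid[np.argmax(std)]             # sample where the surrogate is least certain
    X = np.vstack([X, [x_new]])
    y = np.append(y, spice_like_sim(x_new))

model.fit(X, y)
_, std = model.predict(grid, return_std=True)
print(f"max remaining model uncertainty: {std.max():.3f}")
```

Once trained, evaluating the surrogate is nearly free compared with a full MC or SSS sweep, which is what makes fast design exploration practical before returning to SPICE for signoff.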

The 90% level is a reminder that this platform aims at fast exploration with good but not perfect accuracy. It’s a fast emulator to accelerate discovery across design options, with an expectation that final confirmation will return to signoff-accuracy SPICE. That said, 90% is the same level promised by FastSPICE (over 90% accuracy), but Discovery Platform delivers it with much faster model performance (their models don’t need to re-simulate).

This performance is important for more than just getting a fast abstract model. Training refinement can also find out-of-spec conditions in key performance metrics: GBW, Gain, CMRR, etc. Further, this model can be invaluable for system-level testing, incorporating package and board level parasitics while the analog design is still in development, not just for the basics but to check for potential problems such as V/I levels, power, and ringing. That seems to me a pretty important capability to verify compliance with system-level expectations early on.

Bijan Kiani, CEO of Mach42 (previously VP of marketing at Synopsys and CEO of InCA), drew an interesting comparison with PrimeTime (PT). Before such tools, simulators had to be used for timing analysis. Now, no one would dream of using anything but PrimeTime or similar STA tools. Mach42’s models can elevate analog verification to a similar level.

Status and Looking Forward

Mach42 are building on ML technology they already have in production in a very different domain (nuclear fusion), so they had a running start in this analog application. They tell me that the Discovery Platform is already well into active evaluations with multiple customers. Mach42 also have a Connections partnership with Cadence on Spectre. In fact you can register to review a related video here.

This all looks very promising to me. Also promising is that the company is in development to build Verilog-A models in this flow. Which will be great naturally for AMS designers but also points to a possibility to develop RNM models that could be used in digital verification, notably with hardware accelerators. This would be a major advance since I hear that developing such models is still a hurdle for analog design teams. An automated way to jump over that hurdle could open the floodgates to extensive AMS testing across the analog-digital divide!

You can learn more about Mach42 HERE.

Also Read:

CEO Interview: Bijan Kiani of Mach42


Semiconductor CapEx Down in 2024 up in 2025
by Bill Jewell on 03-30-2025 at 8:00 am

Semiconductor Intelligence (SC-IQ) estimates semiconductor capital expenditures (CapEx) in 2024 were $155 billion, down 5% from $164 billion in 2023. Our forecast for 2025 is $160 billion, up 3%. The increase in 2025 is primarily driven by two companies. TSMC, the largest foundry company, plans between $38 billion and $42 billion in 2025 CapEx. Using the midpoint, this is an increase of $10 billion or 34%. Micron Technology projects CapEx of $14 billion for its 2025 fiscal year ending in August, up $6 billion or 73% from the previous fiscal year. Excluding these two companies, 2025 total semiconductor CapEx would decrease $12 billion or 10% from 2024. Two of the three companies with the largest CapEx plan significant cuts in 2025, with Intel down 20% and Samsung down 11%.
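
A quick arithmetic check of the "excluding these two companies" figure, using only the rounded numbers quoted above (so the result differs slightly from the more precise underlying data):

```python
# Rough check of the CapEx deltas using the rounded figures in the text.
total_2024, total_2025 = 155, 160                  # $B, SC-IQ estimate and forecast
tsmc_2025 = (38 + 42) / 2                          # midpoint of TSMC's guidance -> $40B
tsmc_2024 = tsmc_2025 - 10                         # "an increase of $10 billion"
micron_2025, micron_2024 = 14, 14 - 6              # fiscal-year figures from the text

rest_2024 = total_2024 - tsmc_2024 - micron_2024   # ~117
rest_2025 = total_2025 - tsmc_2025 - micron_2025   # ~106
print(f"Rest of industry: {rest_2024:.0f} -> {rest_2025:.0f} $B "
      f"({(rest_2025 / rest_2024 - 1):+.0%})")     # roughly a 10% decline, as stated
```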

Semiconductor CapEx is dominated by three companies which accounted for 57% of the total in 2024: Samsung, TSMC, and Intel. As illustrated below, Samsung is responsible for 61% of total memory CapEx. TSMC spends 69% of foundry CapEx. Among Integrated Device Manufacturers (IDMs), Intel accounted for 45% of CapEx. The foundry CapEx total is based on pure-play foundries. Both Samsung and Intel also have CapEx for foundry services.

The U.S. CHIPS Act was designed to increase semiconductor manufacturing in the U.S. According to the Semiconductor Industry Association (SIA), the CHIPS Act has announced $32 billion in grants and $6 billion in loans to 32 companies for 48 projects. The largest CHIPS investments are:

Company | Investment | Purpose | Locations
Intel | $7.8 billion | New/upgraded wafer fabs & packaging facility | Arizona, Ohio, New Mexico, Oregon
TSMC | $6.6 billion | New wafer fabs | Arizona
Micron Technology | $6.2 billion | New wafer fabs | Idaho, New York, Virginia
Samsung | $4.7 billion | New/upgraded wafer fabs | Texas
Texas Instruments | $1.6 billion | New wafer fabs | Texas, Utah
GlobalFoundries | $1.6 billion | New/upgraded wafer fabs | New York, Vermont

Since the latest CHIPS funding, Intel announced last month it will delay the initial opening of its planned wafer fabs in Ohio from 2027 to 2030. The Ohio fabs account for $1.5 billion of Intel’s $7.8 billion CHIPS funding. TSMC, however, announced this month it will spend an additional $100 billion on wafer fabs in the U.S. on top of the $65 billion already announced. The Trump administration has voiced its opposition to the CHIPS Act and requested the U.S. Congress to end it. If the CHIPS Act is repealed, the fate of announced CHIPS investments is uncertain.

We at Semiconductor Intelligence believe the CHIPS Act did not necessarily increase overall semiconductor CapEx. Companies plan their wafer fabs based on current and expected demand. The CHIPS Act likely influenced the location of some wafer fabs. TSMC currently has five 300 mm wafer fabs, four in Taiwan and one in China. TSMC plans to build a total of six new fabs in the U.S. and one in Germany. Samsung already had a major wafer fab in Texas, so it is uncertain if the CHIPS Act influenced its decision to build new fabs in Texas. The major U.S.-based semiconductor manufacturers (Intel, Micron, and TI) generally locate their wafer fabs in the U.S. Intel has most of its fab capacity in the U.S. but also has 300 mm fabs in Israel and Ireland. Micron has built its wafer fabs in the U.S., but through company acquisitions has fabs in Taiwan, Singapore and Japan. Texas Instruments has built all its 300 mm fabs in the U.S.

Political pressures may also affect fab location decisions. The Trump administration is considering a 25% or higher tariff on semiconductor imports to the U.S. However, tariffs on U.S. imports of semiconductors will affect companies with U.S. wafer fabs. Most of the final assembly and test of semiconductors is done outside of the U.S. According to SEMI, less than 10% of worldwide assembly and test facilities are in the U.S. The U.S. imported $63 billion of semiconductors in 2024. $28 billion, or 44%, of these imports were from three countries which have no significant wafer fab capacity but are major locations of assembly and test facilities: Malaysia, Thailand and Vietnam. SEMI estimates China has about 25% of total assembly and test facilities but only accounted for $2 billion, or 3%, of U.S. semiconductor imports. The China number is low because most semiconductors made in China are used in electronic equipment made in China. Thus, tariffs on U.S. semiconductor imports would likely hurt U.S. based companies and other companies with U.S. wafer fabs more than they would hurt China.

The global outlook for the semiconductor industry in 2025 is uncertain. The U.S. has implemented several tariff increases on certain imports and it’s considering more. Other countries have either raised or are considering raising tariffs on goods imported from the U.S. in retaliation. The tariffs will increase prices for the final consumers and thus will likely decrease demand. The tariffs may not be placed directly on semiconductors but will have a major impact on the industry if applied to goods with high semiconductor content.

Also Read:

Cutting Through the Fog: Hype versus Reality in Emerging Technologies

Accellera at DVCon 2025 Updates and Behavioral Coverage

CHIPS Act dies because employees are fired – NIST CHIPS people are probationary