Elon Musk Given CHIPS Act & AI Oversight – Mulls Relocation of Taiwanese Fabs

by Robert Maire on 04-01-2025 at 8:00 am

– Trump gives CHIPS Act & AI oversight to DOGE/Musk "Tech Support"
– CHIPS Act to switch from incentive-based to tariff/punitive-based
– Musk to be responsible for US AI policy & security, will rule ChatGPT
– Talks underway to relocate Taiwan fabs & staff to avoid China threat

Donald Trump gives CHIPS Act & AI problem to his “tech support” guru, Musk

The White House announced late Monday that the Trump administration is moving responsibility for the much-maligned and disliked CHIPS Act out of the Commerce Department and placing it under DOGE.

As part of the reorganization, all computer and AI security and policy, most of which sat under NIST (also part of the Department of Commerce), will also move under the control of DOGE.

This will give Musk effective control of Sam Altman and ChatGPT and all things AI, which he has long sought by various means, including purchase attempts and lawsuits.

It seems that this is a huge reward from Trump for Musk's support and loyalty, as well as for Musk's role as the point man for cutting government spending.

Trump said, "I can think of nobody, in the world, better suited than Elon to take on these highly complex problems with computer chips and artificial intelligence." He went on to say, "Elon will turn the money losing, stupid CHIPS Act into a tariff driven, money making, American job making thing of beauty," and further, "Nobody knows more about artificial intelligence than Elon and his Tesler cars with computer 'brains' that can drive themselves."

In discussing the transfer of CHIPS Act & AI to DOGE from the Department of Commerce, White House press secretary Leavitt pointed to Musk’s very strong technology background versus Commerce Department head Lutnick’s primarily financial acumen.

A potential solution to the Taiwan issue as well?

In prior discussions, Musk had commented that the primary reason for China wanting to regain Taiwan was for China to get the critically important semiconductor manufacturing located there. It has been reported by several sources that Musk is putting together a potential plan to move most of the critical, advanced, semiconductor manufacturing out of Taiwan thereby reducing China’s desire to retake the island.

The plan would entail moving the most advanced fabs in Taiwan first, followed by the less capable fabs later. This would obviously be a huge undertaking but would likely be much less costly than a full scale war over Taiwan between the US and China.

Much of the equipment could be moved into already planned fabs in Arizona, Ohio, and elsewhere. New fab shells would take one to two years to build to house the relocated equipment.

Perhaps the bigger issue is where to house all the Taiwanese engineers and their families who would move along with the equipment and fabs. Estimates are that over 300,000 people would eventually have to emigrate to the US. The administration would likely make room for them through the deportation of a far larger number of illegal immigrants, which is already underway.

Make Greenland Green again!

Trump's interest in Greenland may have a lot more to it than meets the eye. Greenland is rich in rare earth elements, critical to the electronics industry. Greenland is not really green but rather ice covered, with hundreds of miles of glaciers amid cold climates. Greenland has plenty of hydroelectric power and water, coincidentally what AI data centers and semiconductor fabs need most. In fact, semiconductor fabs and power-hungry data centers would be perfect in a place that has excess water, electricity, and perhaps most importantly low temperatures to cool those power-hungry, overheated facilities. The heat from those data centers and fabs would likely melt much of the ice cover in Greenland, thereby producing more needed water. In the end, the added heating could help turn Greenland "greener" from its current arctic facade (so much for global warming concerns). Indeed, Greenland might be an alternative place to move some relocated Taiwanese fabs and their engineers; they would just have to acclimate to the colder environment.

TaiwanTechTransfer working group & signal chat

There is a secret Signal chat group overseeing the semiconductor technology transfer out of Taiwan, and Elon Musk is the moderator of the group. Here is your private secret invite link:

TaiwanTechTransfer Secret Signal Chat

Remember, it's top secret, so don't share it with anyone!

Merger of Global Foundries & UMC makes for a GLUM foundry

It has been reported in various news sources that Global Foundries and UMC are discussing a merger, with the combined entity to be renamed and trading under the ticker symbol GLUM. The combined market share of 6% each would make for a total of 12% market share, thereby surpassing Samsung's roughly 10% market share in foundry. However, both foundries produce primarily middling to trailing-edge devices that are under attack from quickly growing China fab capacity. Thus the name GLUM is appropriate given the future prospects of the market they serve.

Happy April Fools' Day!

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies.

We have been covering the space longer and been involved with more transactions than any other financial professional in the space.

We provide research, consulting and advisory services on strategic and financial matters to both industry participants and investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

About Semiwatch

Semiconductor Advisors provides this subscription based research newsletter, Semiwatch, about the semiconductor and semiconductor equipment industries. We also provide custom research and expert consulting services for both investors and industry participants on a wide range of topics from financial to technology and tactical to strategic projects. Please contact us for these services as well as for a subscription to Semiwatch.

Visit Our Website

Also Read:

CHIPS Act dies because employees are fired – NIST CHIPS people are probationary

Trump whacking CHIPS Act? When you hold the checkbook, you make up the new rules

AMAT- In line QTR – poor guide as China Chops hit home- China mkt share loss?


Evolution of Memory Test and Repair: From Silicon Design to AI-Driven Architectures

by Kalar Rajendiran on 04-01-2025 at 6:00 am

STAR Memory System (SMS) Solution

Memory testing in the early days of computing was a relatively straightforward process. Designers relied on simple, deterministic approaches to verify the functionality of memory modules. However, as memory density increased and systems became more complex, the likelihood of faults also rose. With advancements in memory technologies, more sophisticated testing strategies emerged. Error correction codes were introduced, and self-repair strategies were developed alongside increasingly automated methods.

I spoke with Pawini Mahajan, Sr. Staff Product Manager, Memory Test & Repair Solutions at Synopsys to discuss the evolution of memory test and repair. The discussion also touched on the impact of AI-driven workloads, the challenges introduced by modern memory technologies, and the features and approaches needed for effective solutions.

The Evolution of Memory Test and Repair

Historically, memory test and repair techniques were designed for simpler architectures. As semiconductor technology advanced, memory densities increased, requiring more sophisticated test and error correction mechanisms. A notable advancement was the introduction of Built-In Self-Test (BIST) techniques, which allowed memory systems to test themselves autonomously. BIST mechanisms integrated test patterns into the memory’s design, enabling self-diagnostics during operation. This capability reduced the need for external testing and provided a more robust way to identify memory faults before they caused significant system failures.
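BIST engines typically implement "march" algorithms that walk the address space writing and reading complementary patterns. As a purely illustrative sketch (a toy software model, not Synopsys BIST RTL), the classic March C- sequence can be expressed as:

```python
# Toy memory model with an optional injectable stuck-at fault (illustrative only).
class Memory:
    def __init__(self, size, stuck_at=None):
        self.cells = [0] * size
        self.stuck_at = stuck_at  # (address, stuck_value) or None

    def write(self, addr, val):
        self.cells[addr] = val

    def read(self, addr):
        if self.stuck_at and addr == self.stuck_at[0]:
            return self.stuck_at[1]  # faulty cell always returns the stuck value
        return self.cells[addr]

def march_c_minus(mem, size):
    """Run March C-; return True if the memory passes, False on a detected fault."""
    up, down = range(size), range(size - 1, -1, -1)
    for a in up:                                  # element 1: w0
        mem.write(a, 0)
    for a in up:                                  # element 2: r0, w1 (ascending)
        if mem.read(a) != 0: return False
        mem.write(a, 1)
    for a in up:                                  # element 3: r1, w0 (ascending)
        if mem.read(a) != 1: return False
        mem.write(a, 0)
    for a in down:                                # element 4: r0, w1 (descending)
        if mem.read(a) != 0: return False
        mem.write(a, 1)
    for a in down:                                # element 5: r1, w0 (descending)
        if mem.read(a) != 1: return False
        mem.write(a, 0)
    for a in up:                                  # element 6: r0
        if mem.read(a) != 0: return False
    return True
```

A fault-free memory passes every element, while a cell stuck at either value is caught by one of the read phases.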

A major breakthrough came with the introduction of the Synopsys STAR Memory System™ (SMS), which revolutionized memory test and repair technology in the industry. Yervant Zorian, the Chief Architect and Fellow at Synopsys, was the visionary behind this breakthrough. SMS provides integrated BIST, error correction, redundancy allocation, and self-repair functionalities, making it a game-changer for embedded memory solutions. Unlike earlier solutions, SMS offers continuous monitoring, identifying potential problems before they escalate into system failures. If a fault is detected, SMS can automatically apply redundant memory mappings or other repair strategies to ensure system functionality without requiring a restart or manual repair. This innovation significantly improves manufacturing yield, in-field reliability, and system-level performance.
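The redundancy-allocation idea, remapping a failing row onto a spare so the system keeps working without a restart, can be illustrated with a toy model (hypothetical code, not the actual SMS implementation):

```python
# Illustrative row-redundancy repair: a logical row that fails test is
# transparently remapped to a spare physical row.
class RepairableMemory:
    def __init__(self, rows, cols, spare_rows):
        self.data = [[0] * cols for _ in range(rows + spare_rows)]
        self.spares = list(range(rows, rows + spare_rows))  # free spare rows
        self.remap = {}                                     # failing row -> spare row

    def _phys(self, row):
        return self.remap.get(row, row)  # apply any repair mapping

    def write(self, row, col, val):
        self.data[self._phys(row)][col] = val

    def read(self, row, col):
        return self.data[self._phys(row)][col]

    def repair_row(self, row):
        """Allocate a spare for a failing row; False if spares are exhausted."""
        if not self.spares:
            return False
        self.remap[row] = self.spares.pop(0)
        return True
```

After `repair_row`, reads and writes to the failing logical row land on the spare, so software above the memory never sees the fault.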

Synopsys STAR Memory System (SMS)

The Synopsys SMS is a comprehensive, silicon-proven test, repair, and diagnostics solution designed for both Synopsys and third-party memories. It incorporates a test wrapper around each memory instance, enabling controlled access during test mode. These wrappers connect to the SMS processor, which handles test execution, failure diagnosis, and redundancy analysis. The system automates the integration of test and repair IP at the RTL level, ensuring correct connectivity through automated test-bench verification. The SMS processor interfaces with the SMS server via the IEEE 1500 standard, utilizing a TAP controller for test access and scheduling. Additionally, SMS generates tester-ready patterns in STIL, WGL, or SVF formats and features advanced diagnostics that allow SoC designers and test engineers to pinpoint the exact physical location of failing bitcells. Furthermore, SMS provides interactive silicon debugging capabilities in a lab setup without requiring a production tester, streamlining the debugging process and accelerating time-to-market.

Challenges Introduced by Modern Technologies

The rapid advancement of computing, particularly with the rise of AI-driven workloads, has significantly reshaped the landscape of memory test and repair. Traditional memory architectures are being replaced with high-performance solutions such as High Bandwidth Memory (HBM) and Compute High Bandwidth Memory (cHBM) to meet the demands of modern applications. However, these advancements introduce new challenges in defect detection, repair strategies, and real-time optimization.

Modern technologies, such as Gate-All-Around (GAA) and multi-die systems, have also significantly increased the complexity of memory design and testing. These advanced architectures, with densely packed memory configurations, heighten the risk of faults, making traditional testing methods insufficient. New fault types introduced by these technologies are difficult for conventional algorithms to detect. Additionally, multi-die architectures complicate fault isolation and repair, and the scale of memory required for AI/ML workloads, such as HBM and cHBM, makes comprehensive testing increasingly difficult. As systems operate under heavy workloads, particularly in cloud or edge environments, traditional offline diagnostics are inadequate. There is a growing need for in-field diagnostics and self-repair capabilities to minimize downtime and ensure continuous performance. Moreover, the reuse of design IPs for faster time-to-market creates challenges in ensuring compatibility and reliability within new memory configurations.

Approaches to Address These Challenges

To tackle these challenges, modern memory systems must employ several advanced features. On-chip memory diagnostic (OCMD) capabilities enable real-time fault monitoring without external testers, which is particularly useful for AI/ML applications. For multi-die systems, SMS pattern diagnosis and debug capabilities help address the complexities of interconnected chiplets. Additionally, quality-of-results (QoR) optimization ensures high-performance memory for demanding workloads. Flexible repair hierarchies allow for efficient, targeted repairs without disrupting the entire system, and special testing methods are used to address defects in abutted designs, which maximize silicon area. Native IEEE 1687 support ensures seamless integration of testing across all components, while APB integration allows for comprehensive diagnostics across different memory hierarchy levels. Finally, features such as configurable e-fuse drivers and support for specialized memory types, such as banked memory, enable flexible adjustments to enhance performance and fault tolerance. Together, these solutions ensure that memory systems can meet the performance and reliability needs of next-generation computing.

The Future of Memory Test and Repair: AI-Integrated Systems

As memory demands continue to grow, the need for real-time fault detection, repair, and optimization will only become more critical. The SMS’s ability to seamlessly integrate with modern AI workloads ensures that it will continue to evolve and meet the needs of cutting-edge systems. The next phase will involve integrating AI and ML even further into memory management, enabling intelligent fault prediction, self-optimizing repair strategies, and enhanced performance monitoring. These advancements will ensure that future memory architectures can sustain the increasing computational demands of AI-driven applications.

Watch for Synopsys announcements later this year extending their SMS solutions to address AI-driven workloads.

To learn more about Synopsys STAR Memory System solution, click here.

Also Read:

DVCon 2025: AI and the Future of Verification Take Center Stage

Synopsys Expands Hardware-Assisted Verification Portfolio to Address Growing Chip Complexity

How Synopsys Enables Gen AI on the Edge


CEO Interview with Matthew Stephens of Impact Nano

by Daniel Nenni on 03-31-2025 at 10:00 am

Matthew Stephens, co-founder and CEO of Impact Nano, brings over 20 years of experience commercializing advanced materials. Prior to co-founding Impact Nano, Matt was VP Sales and Products at Air Liquide Advanced Materials and held C-level leadership roles at Voltaix and Metem. Matt has a Ph.D. in Chemistry from the University of Wisconsin and an MBA from INSEAD. He started his career as an industrial research scientist in the Boston area and is a co-inventor of over a dozen U.S. patents.

Tell us about your company.

Impact Nano is a leading tier 2, North American supplier of advanced materials to the semiconductor industry. We develop and manufacture a range of products used in the most advanced chip manufacturing processes, including EUV photoresists and ALD precursors. Our products enable faster, higher-storage-density computer chips with lower power consumption.

Our expertise in ligand, organometallic, silicon and fluorine chemistries, and our ability to safely and sustainably scale-up production of ultra-high purity materials, allow us to support critical innovations in the semiconductor and other high-tech industries. Other applications for our products include nanometer films and coatings for the electronics and automotive industries, energy storage applications, and pharmaceuticals.

To expand these capabilities, we recently created Impact Chemistry, an independent subsidiary for research and development and kilo-scale production in Kingston, Ontario. Impact Chemistry specializes in product development and custom synthesis services for leading companies in the semiconductor industry. The development team has strong expertise in organometallic, inorganic and materials chemistry, safely synthesizing challenging precursors, and developing tailored processes for its customers.

Impact Chemistry’s and Impact Nano’s capabilities and offerings are highly complementary. Our shared focus on the customer positions us to support all their needs through any stage of the product lifecycle from bench-scale R&D work through to larger volume manufacturing.

What problems are you solving?
  • Materials Innovation: True innovation requires experience and expertise that can be challenging for companies to resource internally. We discover and develop new materials that enable advancements in semiconductor performance and efficiency.
  • Scale-up: Innovation is only the first step. Our ability to take products from discovery to bench-top to industrial scale production allows for atomic-level control and chemical fingerprinting by design.
  • Manufacturing: Manufacturing options in North America for these specialized materials are limited. Our fully equipped and qualified manufacturing facility in Orange, MA, allows for large scale production of these materials for our clients.
  • Sustainability: We’re committed to pursuing materials advancements that enable the green energy transition, reduce the energy demands of computing, and help to decrease the environmental impact of semiconductor manufacturing processes.

What application areas are your strongest?

Our expertise and product portfolio have positioned us as leading suppliers of the materials for several key end-use applications in the semiconductor industry including:

  • EUV photoresist materials
  • ALD/CVD precursors for Si, wide band gap, and neuromorphic devices
  • Etchants for 3D architectures

Impact Nano has expertise in chemical synthesis and characterization, equipment design and fabrication, process development, chemical packaging, and chemical manufacturing operations. We are ISO 9001 certified.

What keeps your customers up at night?

  • Achieving breakthroughs in semiconductor materials performance. Chipmakers and equipment manufacturers require new innovative materials to reduce power consumption, increase performance, or reduce area cost. They need to find suppliers who can address the material challenges of creating chip features at the nanometer scale.
  • Access to scale-up and manufacturing capabilities. Great innovations are not valuable if they cannot be scaled up and manufactured at high volumes.
  • Supply chain reliability. Reliable, ethical, more sustainable supply chains are critical to the industry. Traditional sources of the required materials are often no longer viable for political or environmental reasons. Impact Nano is located in the US and Canada.

What does the competitive landscape look like and how do you differentiate?

The ecosystem of semiconductor materials suppliers exhibits a tiered structure.  Tier 1 suppliers are typically multinational companies that offer a broad array of products, many of which they source from tier 2 suppliers.  Tier 2 suppliers typically possess chemical expertise or equipment, but rarely possess applications insight, scale-up engineering expertise, or the quality mindset required to support atomic level control of thin film deposition.

In contrast, Impact Nano was founded by semiconductor materials supplier veterans who have commercialized several dozen thin film deposition materials and etchants from the lab to HVM in semiconductor fabs. Embedded in the DNA of Impact Nano are the safety and quality mindsets required to safely scale up and automate materials synthesis and purification technologies to serve semiconductor applications.

Our combination of deep expertise in synthetic and analytical chemistry, combined with our scale-up and automation capabilities for large-volume manufacturing give us the ability to provide customers with control of the chemical fingerprint of a material at all scales of manufacture.

Impact Nano has demonstrated the ability to manufacture materials at scales ranging from a few grams to over 700 tons per year for demanding semiconductor applications including silicon epitaxy.

What new features/technology are you working on?

  • We are currently working with clients to scale and manufacture a wide range of innovative materials. These include innovative ALD precursors, coating formulations, advanced catalysts, and upstream pharmaceutical reagents.
  • Scale-up and automation are key strengths of Impact Nano. We possess in-house fabrication capabilities, including welding, as well as instrumentation and control expertise that enable us to scale up chemistry in half of the time typically required.

How do customers normally engage with your company?

  • By contacting our experienced sales team: Our experienced sales professionals are readily available to discuss customer needs and provide tailored solutions.
  • Through Tier 1 suppliers. Some end user customers who are eager to control supply chains might ask their local distributor, Tier 1 supplier, or semiconductor equipment partner to work with Impact Nano to manufacture a critical material.
  • Meeting us at industry events and trade shows: We actively participate in industry events and trade shows, showcasing our latest innovations and connecting with clients and partners.
  • Engaging us in research and development projects: We often engage in collaborative research projects with clients. An innovative customer project with Impact Chemistry is best viewed as an extension of the client’s R&D team.
  • Visiting our websites: impact-nano.com and www.impact-chemistry.com

Also Read:

CEO Interview with Jonathan Klamkin of Aeluma

CEO Interview with Brad Booth of NLM Photonics

CEO Interview with Jonas Sundqvist of AlixLabs


An Important Advance in Analog Verification

by Bernard Murphy on 03-31-2025 at 6:00 am

Innovation in analog design moves slowly, not from lack of desire for better methods from designers or lack of effort and ideas from design tech innovators, but simply because the space is so challenging. Continuous time and signals, and variances in ambient/process characteristics represent a multi-dimensional space across which a designer must prove stability and performance of a design. An abstraction seems like the obvious answer but a workable solution has not been easy to find. Mach42 have an ML-based answer which may just be the breakthrough designers have been looking for.

Why Analog Design is Hard

The reference engine for verifying design correctness is SPICE, which is very accurate but also slow to execute. Recent advances can accelerate performance with some compromise in accuracy, but simulator speed is only part of the problem. While an analog design has relatively few active components, simulation must be run across wide ranges of process parameters and ambient conditions to validate behavior. Sampled simulations across these ranges, in the form of Monte-Carlo (MC) analysis or, more recently, Scaled Sigma Sampling (SSS), are the state of the art. These demand massive compute resources or run times to handle multi-dimensional sampling grids in which process parameters, voltages, currents, RLCs, temperatures, etc. can range between min, max and nominal values.
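To make the cost concrete, a Monte-Carlo run samples process and ambient parameters and re-evaluates the circuit at every sample point. Here is a minimal Python sketch in which a made-up gain model stands in for a SPICE run; all parameter spreads, the metric, and the spec are illustrative assumptions, not any real PDK:

```python
import random

def gain(vth, temp, vdd):
    """Hypothetical amplifier gain model standing in for an expensive SPICE run."""
    return 60 + 40 * (vdd - 1.8) - 0.02 * temp - 50 * (vth - 0.45)

def mc_yield(circuit_metric, spec, n=5000, seed=1):
    """Toy Monte-Carlo yield estimate: sample parameters, count in-spec results."""
    rng = random.Random(seed)
    passes = 0
    for _ in range(n):
        vth  = rng.gauss(0.45, 0.02)    # threshold voltage (V), assumed spread
        temp = rng.uniform(-40, 125)    # operating temperature (C)
        vdd  = rng.uniform(1.62, 1.98)  # supply, +/-10% around 1.8 V
        if circuit_metric(vth, temp, vdd) >= spec:
            passes += 1
    return passes / n
```

Even this toy needs thousands of metric evaluations for one yield number; with real SPICE each evaluation is a full simulation, which is exactly the compute burden described above.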

That kind of overhead might be mandatory for tape out signoff where SPICE accuracy is essential but can be a real drag on productivity during design exploration, limiting experimentation to an uncomfortable fit between program schedules and massive MC/SSS runtimes. A better approach would be a model abstraction good enough to support fast iteration, while still allowing for full SPICE confirmation as needed.

Mach42 Discovery Platform

Mach42's Discovery Platform builds a surrogate model using ML methods, harvesting existing simulation results together with additional runs to drive the training of the AI architecture. After initial training on available simulation runs, or SPICE runs across a starter grid, Mach42 point to a couple of important innovations in their approach. These include active learning to enhance accuracy around regions of the model with high variance, and a reconfigurable neural net architecture that can guide the model to 90% accuracy, or allow a user to push harder for higher accuracy. I'm told that training takes no more than a few hours to an overnight run.
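The active-learning idea, concentrating new SPICE queries where the surrogate is most uncertain, can be sketched in a few lines. This toy version is my own illustration, not Mach42's implementation: it estimates uncertainty as the disagreement between two surrogates fit on random halves of the training data, and queries the (stand-in) simulator where they disagree most:

```python
import math, random

def spice(x):                      # stand-in for an expensive SPICE run
    return math.sin(3 * x) + 0.3 * x

def interp(samples, x):            # piecewise-linear surrogate over (x, y) pairs
    pts = sorted(samples)
    if x <= pts[0][0]: return pts[0][1]
    if x >= pts[-1][0]: return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

def active_learning(n_rounds=10, seed=0):
    rng = random.Random(seed)
    train = [(x, spice(x)) for x in (0.0, 0.5, 1.0)]     # initial starter grid
    candidates = [i / 50 for i in range(51)]
    for _ in range(n_rounds):
        # two surrogates on random subsets; their disagreement ~ model variance
        half = rng.sample(train, max(2, len(train) // 2))
        rest = [p for p in train if p not in half] or train
        # spend the next SPICE run where the surrogates disagree most
        x_star = max(candidates, key=lambda x: abs(interp(half, x) - interp(rest, x)))
        train.append((x_star, spice(x_star)))
    return train
```

The loop spends its simulation budget on high-variance regions instead of a uniform grid, which is the essence of the approach described above (real systems replace the linear interpolant with a neural net and work in many dimensions).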

The 90% level is a reminder that this platform aims at fast exploration with good but not perfect accuracy. It's a fast emulator to accelerate discovery across design options, with the expectation that final confirmation will return to signoff-accuracy SPICE. That said, 90% is roughly the accuracy level promised by FastSPICE, while the Discovery Platform delivers much faster model performance (its models don't need to re-simulate).

This performance is important not only for getting a fast abstract model. Training refinement can also find out-of-spec conditions in key performance metrics: GBW, gain, CMRR, etc. Further, this model can be invaluable for system-level testing, incorporating package and board-level parasitics while the analog design is still in development, not just for the basics but to check for potential problems such as V/I levels, power, and ringing. That seems to me a pretty important capability for verifying compliance with system-level expectations early on.

Bijan Kiani, CEO of Mach42 (previously VP of marketing at Synopsys and CEO of InCA), drew an interesting comparison with PrimeTime (PT). Before such tools, simulators had to be used for timing analysis. Now, no one would dream of using anything but PrimeTime or similar STA tools. Mach42's models can elevate analog verification to a similar level.

Status and Looking Forward

Mach42 are building on ML technology they already have in production in a very different domain (nuclear fusion), so they had a running start in this analog application. They tell me that the Discovery Platform is already well into active evaluations with multiple customers. Mach42 also have a Connections partnership with Cadence on Spectre. In fact you can register to review a related video here.

This all looks very promising to me. Also promising is that the company is developing Verilog-A model generation in this flow, which will naturally be great for AMS designers but also points to the possibility of developing RNM models that could be used in digital verification, notably with hardware accelerators. This would be a major advance since I hear that developing such models is still a hurdle for analog design teams. An automated way to jump over that hurdle could open the floodgates to extensive AMS testing across the analog-digital divide!

You can learn more about Mach42 HERE.

Also Read:

CEO Interview: Bijan Kiani of Mach42


Semiconductor CapEx Down in 2024 up in 2025

by Bill Jewell on 03-30-2025 at 8:00 am

Semiconductor Intelligence (SC-IQ) estimates semiconductor capital expenditures (CapEx) in 2024 were $155 billion, down 5% from $164 billion in 2023. Our forecast for 2025 is $160 billion, up 3%. The increase in 2025 is primarily driven by two companies. TSMC, the largest foundry company, plans between $38 billion and $42 billion in 2025 CapEx. Using the midpoint, this is an increase of $10 billion, or 34%. Micron Technology projects CapEx of $14 billion for its 2025 fiscal year ending in August, up $6 billion or 73% from the previous fiscal year. Excluding these two companies, 2025 total semiconductor CapEx would decrease $12 billion, or 10%, from 2024. Two of the three companies with the largest CapEx plan significant cuts in 2025, with Intel down 20% and Samsung down 11%.
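The year-over-year percentages above follow directly from the dollar figures; a quick sanity check in Python (dollar amounts in $B, taken from the article):

```python
capex = {2023: 164, 2024: 155, 2025: 160}  # SC-IQ total semiconductor CapEx, $B

def pct_change(old, new):
    """Year-over-year change, rounded to the nearest whole percent."""
    return round(100 * (new - old) / old)

print(pct_change(capex[2023], capex[2024]))  # -5  (2024 down 5%)
print(pct_change(capex[2024], capex[2025]))  # 3   (2025 up 3%)

tsmc_2025_mid = (38 + 42) / 2  # midpoint of TSMC's 2025 guidance, $B
```

The -5% and +3% figures check out against the stated totals.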

Semiconductor CapEx is dominated by three companies which accounted for 57% of the total in 2024: Samsung, TSMC, and Intel. As illustrated below, Samsung is responsible for 61% of total memory CapEx. TSMC spends 69% of foundry CapEx. Among Integrated Device Manufacturers (IDMs), Intel accounted for 45% of CapEx. The foundry CapEx total is based on pure-play foundries. Both Samsung and Intel also have CapEx for foundry services.

The U.S. CHIPS Act was designed to increase semiconductor manufacturing in the U.S. According to the Semiconductor Industry Association (SIA), the CHIPS Act has announced $32 billion in grants and $6 billion in loans to 32 companies for 48 projects. The largest CHIPS investments are:

Company | Investment | Purpose | Locations
Intel | $7.8 billion | New/upgraded wafer fabs & packaging facility | Arizona, Ohio, New Mexico, Oregon
TSMC | $6.6 billion | New wafer fabs | Arizona
Micron Technology | $6.2 billion | New wafer fabs | Idaho, New York, Virginia
Samsung | $4.7 billion | New/upgraded wafer fabs | Texas
Texas Instruments | $1.6 billion | New wafer fabs | Texas, Utah
GlobalFoundries | $1.6 billion | New/upgraded wafer fabs | New York, Vermont

Since the latest CHIPS funding, Intel announced last month it will delay the initial opening of its planned wafer fabs in Ohio from 2027 to 2030. The Ohio fabs account for $1.5 billion of Intel’s $7.8 billion CHIPS funding. TSMC, however, announced this month it will spend an additional $100 billion on wafer fabs in the U.S. on top of the $65 billion already announced. The Trump administration has voiced its opposition to the CHIPS Act and requested the U.S. Congress to end it. If the CHIPS Act is repealed, the fate of announced CHIPS investments is uncertain.

We at Semiconductor Intelligence believe the CHIPS Act did not necessarily increase overall semiconductor CapEx. Companies plan their wafer fabs based on current and expected demand. The CHIPS Act likely influenced the location of some wafer fabs. TSMC currently has five 300 mm wafer fabs, four in Taiwan and one in China. TSMC plans to build a total of six new fabs in the U.S. and one in Germany. Samsung already had a major wafer fab in Texas, so it is uncertain if the CHIPS Act influenced its decision to build new fabs in Texas. The major U.S.-based semiconductor manufacturers (Intel, Micron, and TI) generally locate their wafer fabs in the U.S. Intel has most of its fab capacity in the U.S. but also has 300 mm fabs in Israel and Ireland. Micron has built its wafer fabs in the U.S., but through company acquisitions has fabs in Taiwan, Singapore and Japan. Texas Instruments has built all its 300 mm fabs in the U.S.

Political pressures may also affect fab location decisions. The Trump administration is considering a 25% or higher tariff on semiconductor imports to the U.S. However, tariffs on U.S. imports of semiconductors will affect companies with U.S. wafer fabs. Most of the final assembly and test of semiconductors is done outside of the U.S. According to SEMI, less than 10% of worldwide assembly and test facilities are in the U.S. The U.S. imported $63 billion of semiconductors in 2024. $28 billion, or 44%, of these imports were from three countries which have no significant wafer fab capacity but are major locations of assembly and test facilities: Malaysia, Thailand and Vietnam. SEMI estimates China has about 25% of total assembly and test facilities but only accounted for $2 billion, or 3%, of U.S. semiconductor imports. The China number is low because most semiconductors made in China are used in electronic equipment made in China. Thus, tariffs on U.S. semiconductor imports would likely hurt U.S. based companies and other companies with U.S. wafer fabs more than they would hurt China.

The global outlook for the semiconductor industry in 2025 is uncertain. The U.S. has implemented several tariff increases on certain imports and is considering more. Other countries have either raised or are considering raising tariffs on goods imported from the U.S. in retaliation. The tariffs will increase prices for final consumers and thus will likely decrease demand. The tariffs may not be placed directly on semiconductors but will have a major impact on the industry if applied to goods with high semiconductor content.

Also Read:

Cutting Through the Fog: Hype versus Reality in Emerging Technologies

Accellera at DVCon 2025 Updates and Behavioral Coverage

CHIPS Act dies because employees are fired – NIST CHIPS people are probationary


Podcast EP279: Guy Gozlan on how proteanTecs is Revolutionizing Real-Time ML Testing

by Daniel Nenni on 03-28-2025 at 10:00 am

Dan is joined by Guy Gozlan, proteanTecs director of machine learning and algorithms, overseeing research, implementation, and infrastructure of machine learning solutions. Prior to proteanTecs he was a project lead at Apple, focusing on ATE optimizations using embedded software and machine learning, and before that worked in embedded software engineering at Mellanox.

In this informative discussion, Guy explains how the unique proteanTecs embedded agent technology is applied to chip testing. As complexity in devices rises, test time and cost also rise, creating trade-offs. If the tests aren’t robust, yield suffers, creating more challenges. Yet the mission-critical nature of many new designs also demands the highest quality and reliability, further stressing test requirements. And multi-chip packaging adds complications through the lack of visibility into individual devices and interconnects.

Dan explores with Guy how proteanTecs’ solution effectively addresses these challenges with deep-data analytics. By measuring and predicting chip behavior in advance, the company enables a shift-left test strategy that catches errors early, reducing costs and improving device reliability. A combination of the company’s embedded agents, IP, cloud-based analytics and sophisticated machine learning (ML) models creates an end-to-end solution that can be applied in real time, under real-world conditions, to continuously improve test effectiveness and final device quality.

To learn more about this strategy, read the white paper: Cut Defects Not Yield: Outlier Detection with ML Precision

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview with Dr Greg Law of Undo

by Daniel Nenni on 03-28-2025 at 6:00 am


Greg Law is a C++ debugging expert, well-known conference speaker, and the founder of Undo. Greg has over 20 years’ experience in the software industry working for the pioneering British computer firm Acorn, as well as NexWave and Solarflare.

Determined to build a tool to ease the pain of debugging complex software, he started his company Undo from his garden shed. Now the company is established as the time travel debugging company for Linux. He lives in Cambridge, UK with his wife and two children; and in his spare time, he likes to code and create free C/C++ debugging tutorial videos to share on Undo’s free resource center: https://undo.io/resources/gdb-watchpoint 

Tell us about your company

Undo provides advanced debugging technology that helps engineers solve the most complex software issues in semiconductor design, EDA tools, networking, and other at-scale mission-critical environments. Our solutions are trusted by engineers at top technology companies worldwide to accelerate debugging — enabling them to boost engineering productivity and get to market faster.

What problems are you solving?

Most of the world’s software is not really understood by anyone. It goes wrong, and no one knows why. Often people don’t even know why or how it works when it does!

At Undo we let software engineers see exactly what their code did and why, so they can root-cause even the most difficult issues. Our technology records program execution in full detail, enabling developers to replay and analyze exactly how an issue occurred — eliminating guesswork and dramatically accelerating root-cause analysis. By making debugging deterministic and shareable, we also improve collaboration between and within engineering teams, reducing time wasted on reproducing issues and miscommunication.

What application areas are your strongest?

Any industry dealing with millions of lines of mission-critical code — where a single bug can cost millions — benefits from Undo’s ability to provide precise, replayable debugging insights. Undo is strongest in industries where software complexity, reliability, and debugging efficiency are critical. Our technology is widely used in:

  • EDA / Computational Software – Engineers rely on Undo to debug intricate design and verification tools, ensuring semiconductor development stays on schedule and resolving customer issues faster.
  • Semiconductor design – Undo enables semiconductor companies to debug complex multithreaded applications efficiently.
  • Databases – Our time travel debugging solution helps engineers building complex multithreaded data management systems to resolve hard-to-reproduce issues.
  • Networking – We assist in diagnosing failures in networking operating systems as well as routers/switches, where intermittent issues and concurrency bugs are notoriously difficult to debug.
  • Financial technology – Undo is used in trading platforms and risk management systems, where milliseconds matter and reliability is paramount.

What keeps your customers up at night?

Our customers sleep soundly! But before they become a customer, they face some serious challenges that keep them up at night:

  • Development bottlenecks – Their engineering teams are stuck in a debugging tarpit, spending days or weeks diagnosing elusive issues instead of shipping new features.
  • Missed deadlines – Product releases slip because debugging complex systems is slow and unpredictable.
  • Production failures – Bugs escape into production, causing costly downtime, reputational damage, and support escalations.
  • Incomplete coverage – In design and verification, incomplete modelling and insufficient test coverage increase the risk that a fabricated chip won’t perform as expected under real workloads. A mistake at this stage can be a multi-million-dollar disaster — or worse, require a complete respin.

Undo removes the guesswork, enabling teams to diagnose issues quickly and confidently — so they can focus on delivering high-performance, reliable software and hardware on schedule.

What does the competitive landscape look like and how do you differentiate?

Our main competition remains old-fashioned printf debugging, maybe with a bit of GDB. Most engineers are still using tools and techniques that require them to guess what happened, recompile, rerun, and hope the bug reappears. Unlike with printf-based debugging, Undo lets engineers ask questions about their program’s behavior without recompiling and rerunning.

Compared to GDB, Undo tells you exactly what happened in the past – past tense. There are a few open-source projects trying to offer similar capabilities, but they don’t scale to the size or complexity of the systems our customers work on. Time travel debugging at enterprise scale is a hard problem. We’ve spent over a decade making it reliable, fast, and usable for real-world software teams.
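The record-and-replay idea behind time travel debugging can be sketched in a few lines. This is a toy Python illustration of the concept only, not Undo’s engine, which records native execution at instruction level; the function and variable names are invented:

```python
# Toy record-and-replay: snapshot program state at each step so that
# any earlier point in the execution can be inspected after the fact.

def run_recorded(steps, state):
    trace = [dict(state)]            # snapshot of the initial state
    for step in steps:
        step(state)
        trace.append(dict(state))    # snapshot after each step
    return trace

def increment(state):
    state["x"] += 1

def double(state):
    state["x"] *= 2                  # suppose this step hides the bug

trace = run_recorded([increment, double, increment], {"x": 1})

# "Travel back" through the recording: find the first step at which
# the invariant x <= 3 was broken, without re-running the program.
for t, snapshot in enumerate(trace):
    if snapshot["x"] > 3:
        print(f"x first exceeded 3 after step {t}: {snapshot}")
        break
```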

What new features/technology are you working on?

A lot! One interesting thing is generating waveform views (e.g. a VCD file) from a recording. This is particularly valuable for silicon engineers using SystemC or writing C++ models. It lets them analyse software behavior in the familiar, signal-level style they’re used to from RTL simulation.

One of SystemC’s big advantages is that you can compile and run your model like regular C++, without needing a heavyweight simulator. Undo builds on that: you keep the simplicity and speed of native execution, without giving up waveforms or the power of time travel debugging. It’s the best of both worlds.
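As a rough sketch of the idea, here is what dumping a recorded signal history into VCD format could look like. This illustrates the VCD file format itself, not Undo’s implementation; the `write_vcd` helper and the sample signals are invented:

```python
# Minimal VCD writer: emit a recorded history of 1-bit signals in the
# standard Value Change Dump format that waveform viewers understand.

def write_vcd(history, timescale="1ns"):
    """history: dict of signal name -> list of (time, value) changes."""
    # VCD identifies each signal with a short printable-character ID.
    ids = {name: chr(33 + i) for i, name in enumerate(history)}
    lines = [f"$timescale {timescale} $end", "$scope module top $end"]
    for name, ident in ids.items():
        lines.append(f"$var wire 1 {ident} {name} $end")
    lines += ["$upscope $end", "$enddefinitions $end"]
    # Merge all value changes into a single time-ordered stream.
    changes = sorted((t, ids[name], v)
                     for name, samples in history.items()
                     for t, v in samples)
    last_time = None
    for t, ident, v in changes:
        if t != last_time:
            lines.append(f"#{t}")    # timestamp marker
            last_time = t
        lines.append(f"{v}{ident}")  # new value for that signal
    return "\n".join(lines)

vcd = write_vcd({"clk": [(0, 0), (5, 1), (10, 0)],
                 "valid": [(0, 0), (10, 1)]})
print(vcd)
```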

How do customers normally engage with your company?

Our customers typically engage with Undo by testing it on real-world debugging challenges. A common approach is to take a past issue — one that was exceptionally painful to diagnose — back out the fix, and then re-run the debugging process using Undo. This allows them to directly compare the traditional approach with Undo’s time travel debugging, highlighting the drastic reduction in time and effort required to find the root cause.

Once they see how much easier debugging can be, they apply Undo to an unsolved, high-priority issue. The ability to instantly replay program execution and see exactly what happened — without relying on logs or guesswork — proves so effective that teams quickly adopt Undo as a standard debugging tool across their organization.

Request a Demo

Also Read:

CEO Interview with Jonathan Klamkin of Aeluma

CEO Interview with Brad Booth of NLM Photonics

CEO Interview with Jonas Sundqvist of AlixLabs


Upcoming Webinar: Accelerating Semiconductor Design with Generative AI and High-Level Abstraction

by Daniel Nenni on 03-27-2025 at 10:00 am


We have been hearing so much lately about the power of AI and the potential of technologies like agentic AI to address the productivity gap and the complexity of today’s and tomorrow’s semiconductor designs. Yet the semiconductor industry has so far been slow to adopt generative and agentic AI for RTL design code. There have been many reasons for this hesitation, such as concerns about the quantity and source of RTL-based training data, plus the verification, quality and reliability of AI-generated code, which are critical to the success of a project. However, to stay competitive, the industry must embrace AI-driven hardware design to lower costs, expand accessibility, improve productivity and drive innovation.

Register for the replay

A new EDA startup, Rise Design Automation (RDA), has developed a solution that enables the use of generative AI for design, verification and exploration. It overcomes many of these objections and, coupled with the creativity of a human in the loop, dramatically improves productivity to deliver high-quality RTL that is both verifiable and implementable.

RDA, in partnership with SemiWiki, will host a live webinar, Accelerating Semiconductor Design with Generative AI and High-Level Abstraction, where you can learn more about this solution and have an opportunity to ask questions directly of the technical experts. In this webinar you will learn how Rise uses a unique combination of raised design abstraction, a comprehensive high-level toolchain and a seamlessly integrated generative AI solution to deliver high-quality RTL and architectural innovation in a fraction of the time.

These three technologies are the perfect combination and complement each other. Once the design abstraction is raised beyond RTL, the massive amount of high-level code (C, C++, Python, etc.) that existing LLMs have been trained on suddenly becomes a very effective training set for generating quality high-level code. This overcomes the questions and concerns about RTL-based training data. Built with industry-first high-level agents and easily deployable with pretrained language models, the Rise AI solution translates natural-language intent into human-readable, modifiable, and verifiable high-level design code—reducing manual effort and accelerating adoption.

Rather than relying solely on AI for Quality of Results (QoR), Rise augments human expertise with a comprehensive high-level toolchain for design, verification, debug, and architectural exploration to generate highly optimized RTL code. Raising design abstraction has been proven over many years to dramatically improve the productivity and quality of both design and verification, but has not seen widespread adoption due to multiple factors: the adoption/learning curve, lack of expertise within a project, uncertainty about consistently matching the QoR of hand-coded RTL, verification questions, and so on. Generative AI with specialized high-level tool and language knowledge, complementing human creativity and expertise with both assistants and coding and optimization agents, can help overcome these challenges, like having a high-level design expert with you at all times. Additionally, Rise has added support for untimed and loosely-timed SystemVerilog to the existing HLS languages of C++ and SystemC, so that RTL designers and project teams can choose the language that best fits their expertise and adoption comfort level.

This webinar is designed for both engineers and project managers alike. Attendees will gain insights into practical applications of AI-driven design methodologies and how AI can be incorporated into the design process without compromising verification rigor. As SystemVerilog is new as a high-level language, the webinar will dive into a technical explanation of exactly what it looks like and how it works, along with the features of the high-level toolchain and how RTL and verification engineers can use it. With that foundation, it will then explain the details of the generative AI solution and how it is built and works to assist, generate, optimize and explore. The webinar will conclude with a live demonstration of the high-level toolchain running on a real design, with code walk-through, simulation results, etc., followed by the AI solution interacting with both the design and the toolchain to assist, code-complete, optimize and explore various PPA solutions. There will be plenty of time for interactive Q&A directly with the technical team.

Register for the replay
Also Read:

CEO Interview: Badru Agarwala of Rise Design Automation

An Imaginative Approach to AI-based Design


Vision-Language Models (VLM) – the next big thing in AI?

by Daniel Nenni on 03-27-2025 at 6:00 am


AI has changed a lot in the last ten years. In 2012, convolutional neural networks (CNNs) were the state of the art for computer vision. Then around 2020 vision transformers (ViTs) redefined machine learning. Now, Vision-Language Models (VLMs) are changing the game again—blending image and text understanding to power everything from autonomous vehicles to robotics to AI-driven assistants. You’ve probably heard of the biggest ones, like CLIP and DALL-E, even if you don’t know the term VLM.

Here’s the problem: most AI hardware isn’t built for this shift. The bulk of what is shipping in applications like ADAS is still focused on CNNs, never mind transformers. VLMs? Nope.

Fixed-function Neural Processing Units (NPUs), designed for yesterday’s vision models, can’t efficiently handle VLMs’ mix of scalar, vector, and tensor operations. These models need more than just brute-force matrix math. They require:

  • Efficient memory access – AI performance often bottlenecks at data movement, not computation.
  • Programmable compute – Transformers rely on attention mechanisms, softmax, and other operations that traditional NPUs struggle with.
  • Scalability – AI models evolve too fast for rigid architectures to keep up.
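As a concrete example of this mix of operations, the softmax at the heart of transformer attention interleaves element-wise vector math with scalar reductions, a pattern that pure matrix engines handle poorly. A minimal Python sketch:

```python
# Softmax mixes vector (element-wise) ops with scalar reductions --
# exactly the kind of workload a matrix-only accelerator struggles with.
import math

def softmax(scores):
    m = max(scores)                           # scalar reduction (for stability)
    exps = [math.exp(s - m) for s in scores]  # element-wise vector op
    total = sum(exps)                         # another scalar reduction
    return [e / total for e in exps]          # element-wise normalization

# Attention scores for three tokens, converted to weights summing to 1.
weights = softmax([2.0, 1.0, 0.1])
print([round(w, 3) for w in weights])
```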

AI needs to be freely programmable. Semidynamics provides a transparent, programmable solution based on the RISC-V ISA, with all the flexibility that provides.

Instead of forcing AI into one-size-fits-all accelerators, you need architectures that let you build processors better suited to your AI workload. Semidynamics’ All-In-One approach delivers all the tensor, vector and CPU functionality required in a flexible and configurable solution. Instead of locking into fixed designs, a fully configurable RISC-V processor from Semidynamics can evolve with AI models—making it ideal for workloads that demand compute designed for AI, not the other way around.

VLMs aren’t just about crunching numbers. They require a mix of vector, scalar, and matrix processing. Semidynamics’ RISC-V-based All-In-One compute element can:

  • Process transformers efficiently—handling matrix operations and nonlinear attention mechanisms.
  • Execute complex AI logic efficiently—without unnecessary compute overhead.
  • Scale with new AI models—adapting as workloads evolve.

Instead of being limited by what a classic NPU can do, our processors are built for the job. Crucially, they fix AI’s biggest bottleneck: memory bandwidth. Ask anyone working in AI acceleration—memory is the real problem, not raw compute power. If your processor spends more time waiting for data than processing it, you’re losing efficiency.

That’s why Semidynamics’ Gazzillion™ memory subsystem is a game-changer:

  • Reduces memory bottlenecks – Feeds data-hungry AI models with high efficiency.
  • Smarter memory access – copes with slow, external DRAM by hiding its latency.
  • Dynamic prefetching – Minimizes stalls in large-scale AI inference.
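A back-of-the-envelope model shows why hiding memory latency matters. The numbers below are purely illustrative, not Semidynamics benchmarks:

```python
# Illustrative model: total time to process N tiles of work, with and
# without overlapping memory fetches and compute (all numbers invented).

def runtime_ns(tiles, compute_ns, fetch_ns, prefetch):
    if prefetch:
        # Prefetching overlaps each fetch with the previous tile's
        # compute; only the very first fetch cannot be hidden.
        return fetch_ns + tiles * max(compute_ns, fetch_ns)
    # Without prefetch, every tile stalls for its full fetch latency.
    return tiles * (compute_ns + fetch_ns)

naive = runtime_ns(1000, compute_ns=100, fetch_ns=80, prefetch=False)
hidden = runtime_ns(1000, compute_ns=100, fetch_ns=80, prefetch=True)
print(f"no prefetch: {naive} ns, with prefetch: {hidden} ns")
```

In this toy model, hiding an 80 ns fetch behind 100 ns of compute removes the stall entirely, cutting runtime by roughly 44%.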

For AI workloads, data movement efficiency can be as important as FLOPS. If your hardware isn’t optimized for both, you’re leaving performance on the table.

AI shouldn’t be held back by hardware limitations. That’s why RISC-V processors like our All-In-One designs are the future. And yet most RISC-V IP vendors are struggling to deliver the comprehensive range of IP needed to build VLM-capable NPUs. Semidynamics is the only provider of fully configurable RISC-V IP with advanced vector processing and memory bandwidth optimization—giving AI companies the power to build hardware that keeps up with AI’s evolution.

If your AI models are evolving, why is your processor staying the same? The AI race won’t be won by companies using generic processors. Custom compute is the edge AI companies need.

Want to build an AI processor that’s made for the future? Get in touch with Semidynamics today.

Also Read:

2025 Outlook with Volker Politz of Semidynamics

Semidynamics: A Single-Software-Stack, Configurable and Customizable RISC-V Solution

Gazzillion Misses – Making the Memory Wall Irrelevant


CEO Interview with Jonathan Klamkin of Aeluma

by Daniel Nenni on 03-26-2025 at 10:00 am


Jonathan Klamkin, Ph.D. is founder and CEO of Aeluma, Inc. (ALMU). He is a Professor at the University of California Santa Barbara and has previously worked at Boston University, Scuola Superiore Sant’Anna, MIT Lincoln Laboratory, and BinOptics Corp. (a laser diode manufacturer that was acquired by Macom in 2015). He is the recipient of numerous awards including the NASA Young Faculty Award, the DARPA Young Faculty Award, and the DARPA Director’s Fellowship. He has published more than 230 papers, holds more than 30 issued and pending patents, and has delivered more than 120 invited presentations to industry, government and the academic community. Dr. Klamkin has nearly 25 years of experience in integrated photonics, compound semiconductors, and silicon photonics. He and team members have grown Aeluma from its conception into a transformative semiconductor company with a U.S.-based operation capable of producing high-performance chips at scale.

Tell us about your company.

At Aeluma, we are redefining semiconductor technology by integrating high-performance materials with scalable silicon manufacturing. Our goal is to bridge the gap between compound semiconductors and high-volume production, enabling AI, quantum computing, defense and aerospace, 3D sensing and next-generation communication applications. Traditionally, these high-performance semiconductors have been limited to low-volume, niche markets, but Aeluma’s proprietary approach allows us to scale this cutting-edge technology for mass-market adoption.

We have built a U.S.-based semiconductor platform, leveraging both internal R&D and foundry partnerships, to develop and commercialize next-generation chips. With strategic collaborations with NASA, DARPA, DOE, and the Navy, we are accelerating the development of AI-driven photonics, quantum dot lasers, optical interconnect solutions and high sensitivity detectors.

What problems are you solving?

As AI, quantum computing, high-performance computing (HPC), and sensing systems evolve, the demand for higher-speed, lower-power, and more scalable semiconductor solutions is growing rapidly. Traditional semiconductor architectures struggle to meet these demands, particularly in areas like AI acceleration, high-speed optical interconnects, quantum networking, and 3D sensing. Aeluma solves this by integrating compound semiconductors with large-diameter substrates (e.g., 200 mm and 300 mm), enabling mass production of photonic and electronic devices that significantly outperform existing solutions. By bringing monolithically integrated light sources to silicon photonics, we are eliminating a key bottleneck in AI and high-performance computing, improving speed, efficiency, and scalability beyond the limitations of conventional semiconductor technology.

What application areas are your strongest?

Aeluma’s technology is making a transformative impact in AI infrastructure, defense, quantum computing, and next-generation sensing. In AI and HPC, our quantum dot laser technology and high-speed optical interconnects enable ultra-fast, low-power data transfer, solving the bandwidth and power challenges facing next-generation AI accelerators and cloud infrastructure. In defense and aerospace, we work with NASA, DARPA, and the Navy to advance high-sensitivity sensing, quantum networking, and next-generation communications. These solutions are critical for autonomous systems, secure satellite communications, and precision navigation systems. In quantum computing, our silicon-integrated photonic materials are paving the way for scalable quantum networking and next-gen optical processors, essential for unlocking the next era of computational power. Additionally, our technology is driving advancements in mobile, AR/VR, and automotive lidar, where precision, performance, and scalability are paramount.

What keeps your customers up at night?

The biggest challenge for our customers is scaling AI and high-performance computing without hitting power, speed, and latency bottlenecks. As AI models grow, data centers are pushing the limits of existing semiconductor technology. Customers are looking for breakthroughs in chip architecture to maintain performance and efficiency as AI, quantum computing, and 6G networks continue to scale. For 3D sensing, customers desire low-cost and scalable approaches that are also eye safe. Another major concern is supply chain resilience. The semiconductor industry has seen significant disruptions, and companies are looking for reliable, scalable solutions with a strong U.S.-based supply chain. Aeluma is positioned to address both performance challenges and supply chain reliability, making next-gen AI and quantum computing infrastructure more scalable and accessible.

What does the competitive landscape look like and how do you differentiate?

The semiconductor industry is evolving rapidly, with NVIDIA, Intel, and Broadcom investing heavily in AI acceleration and optical networking. However, traditional chip architectures were not designed for the demands of modern AI and quantum computing. While some competitors are focused on incremental improvements, Aeluma is delivering fundamental advancements in semiconductor technology. Our differentiation comes from monolithic integration of quantum dot lasers with silicon photonics, which enables faster, more efficient AI acceleration, optical interconnects, and quantum networking. Our scalable U.S.-based manufacturing approach also sets us apart, allowing us to deliver breakthrough performance while maintaining cost efficiency at scale.

What new features/technology are you working on?

We are at the forefront of AI acceleration, quantum networking, and high-speed optical data transfer. Some of our key innovations include advancing the integration of quantum dot lasers with silicon photonics, enabling high-speed, low-power optical interconnects that are essential for next-generation AI accelerators, cloud data centers, and HPC systems. Additionally, we are developing advanced SWIR (shortwave infrared) photodetectors for defense and aerospace, energy, mobile, AR/VR, and automotive applications, providing high-sensitivity imaging and sensing for facial identification, 3D imaging, and autonomous systems, and communications. Our work in next-gen optical computing solutions is also driving breakthroughs in photonics-based AI acceleration and quantum processing, addressing the speed and power limitations of traditional semiconductors. These innovations position Aeluma at the forefront of semiconductor evolution, shaping the future of AI, quantum computing, and HPC.

How do customers normally engage with your company?

Aeluma partners with leading AI, defense and aerospace, and semiconductor companies, collaborating to integrate high-performance photonics and semiconductor solutions into their next-generation platforms. We engage with AI and HPC leaders to optimize optical interconnect solutions for next-gen AI accelerators, helping them achieve faster processing speeds with lower power consumption. Our strategic partnerships with various government agencies and the DOD support the development of high-sensitivity imaging, quantum networking, and autonomous systems. Additionally, we work closely with semiconductor manufacturers and foundries to scale high-performance semiconductors for mass-market adoption. Whether through joint development programs, direct technology licensing, or research collaborations, our customers engage with us to accelerate their technology roadmaps, improve system performance, and bring cutting-edge semiconductor innovations to market faster.

How do you see semiconductor technology evolving in the future, and what role will Aeluma play in that transformation?

Semiconductor technology is undergoing a fundamental shift, driven by rapid growth in AI, quantum computing, and HPC. Traditional silicon-based architectures are reaching their physical limits for higher processing speeds, lower power consumption, and greater data throughput. The future of semiconductors will be defined by advanced materials, integrated photonics, and large-scale heterogeneous integration, enabling faster, more efficient computing at scale.

Aeluma is positioned at the forefront of this transformation with a breakthrough semiconductor platform that integrates compound semiconductor materials with large-diameter silicon wafers. This approach eliminates performance bottlenecks in AI and quantum computing by providing performance at scale and at low cost. Aeluma’s large-diameter wafer capability and ISO 9001-certified operation allow us to produce high-speed, energy-efficient optical interconnect technologies that will be critical for next-generation AI accelerators, data centers, and quantum networks.

The market opportunity is massive. Global semiconductor sales are projected to reach $1 trillion as early as 2030, according to analysts at the Semicon West trade show in July 2024, including Needham & Co.’s Charles Shi and Gartner’s Gaurav Gupta, who suggests the milestone could occur closer to 2031 or 2032. Meanwhile, the silicon photonics market is expected to grow to approximately $8 billion by 2030, as reported by Grand View Research.

By bringing advanced photonics and compound semiconductors into mainstream semiconductor production, Aeluma is enabling the next era of computing, where speed, efficiency, and scalability define success. Our partnerships with government agencies and commercial customers further reinforce our leadership in shaping the future of AI-driven semiconductor technology.

Also Read:

CEO Interview with Brad Booth of NLM Photonics

CEO Interview with Jonas Sundqvist of AlixLabs

2025 Outlook with James Cannings of QPT Limited