Semiconductor CapEx Down in 2024, Up in 2025
by Bill Jewell on 03-30-2025 at 8:00 am

[Chart: semiconductor CapEx by company, March 2025]

Semiconductor Intelligence (SC-IQ) estimates semiconductor capital expenditures (CapEx) in 2024 were $155 billion, down 5% from $164 billion in 2023. Our forecast for 2025 is $160 billion, up 3%. The increase in 2025 is primarily driven by two companies. TSMC, the largest foundry, plans between $38 billion and $42 billion in 2025 CapEx. Using the midpoint of $40 billion, this is an increase of $10 billion, or 34%. Micron Technology projects CapEx of $14 billion for its fiscal 2025 ending in August, up $6 billion, or 73%, from the previous fiscal year. Excluding these two companies, total 2025 semiconductor CapEx would decrease $12 billion, or 10%, from 2024. Two of the three companies with the largest CapEx plan significant cuts in 2025, with Intel down 20% and Samsung down 11%.
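
As a quick arithmetic check on these figures, here is a minimal Python sketch that recomputes the year-over-year changes; the 2024 baselines for TSMC (about $29.8 billion) and Micron (about $8.1 billion) are inferred from the stated deltas rather than quoted from the article.

  # Recompute the year-over-year CapEx changes quoted above. The 2024
  # baselines for TSMC and Micron are inferred from the stated deltas
  # (assumption), not quoted directly.
  def yoy(new, old):
      delta = new - old
      return round(delta, 1), round(100 * delta / old)

  tsmc_2025 = (38 + 42) / 2       # midpoint of TSMC's $38B-$42B plan
  print(yoy(tsmc_2025, 29.8))     # -> (10.2, 34)
  print(yoy(14, 8.1))             # Micron FY2025 vs FY2024 -> (5.9, 73)
  print(yoy(160, 155))            # industry total -> (5.0, 3)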

Semiconductor CapEx is dominated by three companies, which together accounted for 57% of the total in 2024: Samsung, TSMC, and Intel. As illustrated below, Samsung is responsible for 61% of total memory CapEx, and TSMC accounts for 69% of foundry CapEx. Among Integrated Device Manufacturers (IDMs), Intel accounted for 45% of CapEx. The foundry CapEx total is based on pure-play foundries; both Samsung and Intel also have CapEx for foundry services.

The U.S. CHIPS Act was designed to increase semiconductor manufacturing in the U.S. According to the Semiconductor Industry Association (SIA), $32 billion in grants and $6 billion in loans have been announced under the CHIPS Act to 32 companies for 48 projects. The largest CHIPS investments are:

Company | Investment | Purpose | Locations
Intel | $7.8 billion | New/upgraded wafer fabs & packaging facility | Arizona, Ohio, New Mexico, Oregon
TSMC | $6.6 billion | New wafer fabs | Arizona
Micron Technology | $6.2 billion | New wafer fabs | Idaho, New York, Virginia
Samsung | $4.7 billion | New/upgraded wafer fabs | Texas
Texas Instruments | $1.6 billion | New wafer fabs | Texas, Utah
GlobalFoundries | $1.6 billion | New/upgraded wafer fabs | New York, Vermont

Since the latest CHIPS funding was announced, Intel said last month that it will delay the initial opening of its planned wafer fabs in Ohio from 2027 to 2030. The Ohio fabs account for $1.5 billion of Intel’s $7.8 billion in CHIPS funding. TSMC, however, announced this month that it will spend an additional $100 billion on wafer fabs in the U.S. on top of the $65 billion already announced. The Trump administration has voiced its opposition to the CHIPS Act and has asked the U.S. Congress to end it. If the CHIPS Act is repealed, the fate of the announced CHIPS investments is uncertain.

We at Semiconductor Intelligence believe the CHIPS Act did not necessarily increase overall semiconductor CapEx. Companies plan their wafer fabs based on current and expected demand. The CHIPS Act likely influenced the location of some wafer fabs. TSMC currently has five 300 mm wafer fabs, four in Taiwan and one in China. TSMC plans to build a total of six new fabs in the U.S. and one in Germany. Samsung already had a major wafer fab in Texas, so it is uncertain if the CHIPS Act influenced its decision to build new fabs in Texas. The major U.S.-based semiconductor manufacturers (Intel, Micron, and TI) generally locate their wafer fabs in the U.S. Intel has most of its fab capacity in the U.S. but also has 300 mm fabs in Israel and Ireland. Micron has built its wafer fabs in the U.S., but through company acquisitions has fabs in Taiwan, Singapore and Japan. Texas Instruments has built all its 300 mm fabs in the U.S.

Political pressures may also affect fab location decisions. The Trump administration is considering a 25% or higher tariff on semiconductor imports to the U.S. However, tariffs on U.S. semiconductor imports would also affect companies with U.S. wafer fabs, since most final assembly and test of semiconductors is done outside the U.S.: wafers fabricated in the U.S. are typically shipped abroad for assembly and test and then re-imported as finished devices. According to SEMI, less than 10% of worldwide assembly and test facilities are in the U.S. The U.S. imported $63 billion of semiconductors in 2024. $28 billion, or 44%, of these imports were from three countries which have no significant wafer fab capacity but are major locations of assembly and test facilities: Malaysia, Thailand, and Vietnam. SEMI estimates China has about 25% of total assembly and test facilities but accounted for only $2 billion, or 3%, of U.S. semiconductor imports. The China number is low because most semiconductors made in China are used in electronic equipment made in China. Thus, tariffs on U.S. semiconductor imports would likely hurt U.S.-based companies and other companies with U.S. wafer fabs more than they would hurt China.

The global outlook for the semiconductor industry in 2025 is uncertain. The U.S. has implemented several tariff increases on certain imports and is considering more. Other countries have raised or are considering raising tariffs on goods imported from the U.S. in retaliation. Tariffs will increase prices for final consumers and thus will likely decrease demand. Even if tariffs are not placed directly on semiconductors, they will have a major impact on the industry if applied to goods with high semiconductor content.

Also Read:

Cutting Through the Fog: Hype versus Reality in Emerging Technologies

Accellera at DVCon 2025 Updates and Behavioral Coverage

CHIPS Act dies because employees are fired – NIST CHIPS people are probationary


Podcast EP279: Guy Gozlan on how proteanTecs is Revolutionizing Real-Time ML Testing
by Daniel Nenni on 03-28-2025 at 10:00 am

Dan is joined by Guy Gozlan, proteanTecs director of machine learning and algorithms, where he oversees the research, implementation, and infrastructure of machine learning solutions. Prior to proteanTecs, he was a project lead at Apple focusing on ATE optimizations using embedded software and machine learning, and before that he worked on embedded software engineering at Mellanox.

In this informative discussion, Guy explains how the unique proteanTecs embedded agent technology is applied to chip testing. As device complexity rises, test time and cost also rise, creating trade-offs. If the tests aren’t robust, yield will suffer, creating more challenges. Yet the mission-critical nature of many new designs also demands the highest quality and reliability, further stressing test requirements. And multi-chip packaging adds complications, given the lack of visibility into individual devices and interconnects.

Dan explores with Guy how proteanTecs’ solution addresses these challenges with deep-data analytics. By measuring and predicting chip behavior, the company enables a shift-left test strategy that catches errors early, reducing costs and improving device reliability. A combination of the company’s embedded agents and IP, cloud-based analytics, and sophisticated machine learning (ML) models creates an end-to-end solution that can be applied in real time, under real-world conditions, to continuously improve test effectiveness and final device quality.
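
As a toy illustration of the outlier-detection idea behind this kind of deep-data test analytics (a hedged sketch, not proteanTecs’ actual models), the snippet below flags dies whose parametric readings deviate strongly from the population, using a robust modified z-score:

  import statistics

  def flag_outliers(measurements, threshold=3.5):
      """Flag dies whose reading deviates strongly from the population median.
      Uses the modified z-score based on median absolute deviation (MAD)."""
      med = statistics.median(measurements)
      mad = statistics.median(abs(m - med) for m in measurements) or 1e-12
      return [i for i, m in enumerate(measurements)
              if abs(0.6745 * (m - med) / mad) > threshold]

  # e.g. ring-oscillator frequencies (MHz) from 8 dies; die 5 drifts low
  print(flag_outliers([812, 809, 815, 811, 810, 744, 813, 808]))  # -> [5]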

To learn more about this strategy, read the white paper: Cut Defects Not Yield: Outlier Detection with ML Precision

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview with Dr Greg Law of Undo
by Daniel Nenni on 03-28-2025 at 6:00 am

Greg Photo 200x200

Greg Law is a C++ debugging expert, well-known conference speaker, and the founder of Undo. Greg has over 20 years of experience in the software industry, working for the pioneering British computer firm Acorn as well as NexWave and Solarflare.

Determined to build a tool to ease the pain of debugging complex software, he started his company Undo from his garden shed. Now the company is established as the time travel debugging company for Linux. He lives in Cambridge, UK with his wife and two children; and in his spare time, he likes to code and create free C/C++ debugging tutorial videos to share on Undo’s free resource center: https://undo.io/resources/gdb-watchpoint 

Tell us about your company

Undo provides advanced debugging technology that helps engineers solve the most complex software issues in semiconductor design, EDA tools, networking, and other at-scale mission-critical environments. Our solutions are trusted by engineers at top technology companies worldwide to accelerate debugging — enabling them to boost engineering productivity and get to market faster.

What problems are you solving?

Most of the world’s software is not really understood by anyone. It goes wrong, and no one knows why. Often people don’t even know how or why it works!

At Undo, we allow software engineers to see exactly what their code did and why, so they can root-cause even the most difficult issues. Our technology records program execution in full detail, enabling developers to replay and analyze exactly how an issue occurred, eliminating guesswork and dramatically accelerating root-cause analysis. By making debugging deterministic and shareable, we also improve collaboration within and between engineering teams, reducing time wasted on reproducing issues and on miscommunication.

What application areas are your strongest?

Any industry dealing with millions of lines of mission-critical code — where a single bug can cost millions — benefits from Undo’s ability to provide precise, replayable debugging insights. Undo is strongest in industries where software complexity, reliability, and debugging efficiency are critical. Our technology is widely used in:

  • EDA / Computational Software – Engineers rely on Undo to debug intricate design and verification tools, ensuring semiconductor development stays on schedule and resolving customer issues faster.
  • Semiconductor design – Undo enables semiconductor companies to debug complex multithreaded applications efficiently.
  • Databases – Our time travel debugging solution helps engineers building complex multithreaded data management systems to resolve hard-to-reproduce issues.
  • Networking – We assist in diagnosing failures in networking operating systems as well as routers/switches, where intermittent issues and concurrency bugs are notoriously difficult to debug.
  • Financial technology – Undo is used in trading platforms and risk management systems, where milliseconds matter and reliability is paramount.

What keeps your customers up at night?

Our customers sleep soundly! But before they become a customer, they face some serious challenges that keep them up at night:

  • Development bottlenecks – Their engineering teams are stuck in a debugging tarpit, spending days or weeks diagnosing elusive issues instead of shipping new features.
  • Missed deadlines – Product releases slip because debugging complex systems is slow and unpredictable.
  • Production failures – Bugs escape into production, causing costly downtime, reputational damage, and support escalations.
  • Incomplete coverage – In design and verification, incomplete modelling and insufficient test coverage increase the risk that a fabricated chip won’t perform as expected under real workloads. A mistake at this stage can be a multi-million-dollar disaster — or worse, require a complete respin.

Undo removes the guesswork, enabling teams to diagnose issues quickly and confidently — so they can focus on delivering high-performance, reliable software and hardware on schedule.

What does the competitive landscape look like and how do you differentiate?

Our main competition remains old-fashioned printf debugging, maybe with a bit of GDB. Most engineers are still using tools and techniques that require them to guess what happened, recompile, rerun, and hope the bug reappears. With Undo, unlike printf-based debugging, engineers can ask questions about their program’s behavior without recompiling and rerunning.

Compared to GDB, Undo tells you about the past: exactly what happened, past tense. There are a few open-source projects trying to offer similar capabilities, but they don’t scale to the size or complexity of the systems our customers work on. Time travel debugging at enterprise scale is a hard problem. We’ve spent over a decade making it reliable, fast, and usable for real-world software teams.

What new features/technology are you working on?

A lot! One interesting example is generating waveform views (e.g., a VCD file) from a recording. This is particularly valuable for silicon engineers using SystemC or writing C++ models. It lets them analyse software behavior in the familiar, signal-level style they’re used to from RTL simulation.

One of SystemC’s big advantages is that you can compile and run your model like regular C++, without needing a heavyweight simulator. Undo builds on that: you keep the simplicity and speed of native execution, without giving up waveforms or the power of time travel debugging. It’s the best of both worlds.
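
To make the waveform idea concrete, here is a minimal, hypothetical Python sketch that dumps sampled values of one signal to a VCD file. It only illustrates the kind of export a recording could drive; it is not Undo’s implementation.

  # Minimal VCD writer: illustrative only, not Undo's actual mechanism.
  def write_vcd(path, signal_name, samples, timescale="1 ns"):
      """samples: list of (time, value) pairs for one 8-bit signal."""
      with open(path, "w") as f:
          f.write(f"$timescale {timescale} $end\n")
          f.write(f"$var wire 8 ! {signal_name} $end\n")
          f.write("$enddefinitions $end\n")
          for t, value in samples:
              f.write(f"#{t}\n")
              f.write(f"b{value:08b} !\n")   # 8-bit binary value for id '!'

  # e.g. a counter observed at three points in a recording
  write_vcd("trace.vcd", "counter", [(0, 0), (10, 1), (20, 2)])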

How do customers normally engage with your company?

Our customers typically engage with Undo by testing it on real-world debugging challenges. A common approach is to take a past issue — one that was exceptionally painful to diagnose — back out the fix, and then re-run the debugging process using Undo. This allows them to directly compare the traditional approach with Undo’s time travel debugging, highlighting the drastic reduction in time and effort required to find the root cause.

Once they see how much easier debugging can be, they apply Undo to an unsolved, high-priority issue. The ability to instantly replay program execution and see exactly what happened — without relying on logs or guesswork — proves so effective that teams quickly adopt Undo as a standard debugging tool across their organization.

Request a Demo

Also Read:

CEO Interview with Jonathan Klamkin of Aeluma

CEO Interview with Brad Booth of NLM Photonics

CEO Interview with Jonas Sundqvist of AlixLabs


Upcoming Webinar: Accelerating Semiconductor Design with Generative AI and High-Level Abstraction
by Daniel Nenni on 03-27-2025 at 10:00 am

We have been hearing so much lately about the power of AI and the potential of technologies like agentic AI to address the productivity gap and the complexity of semiconductor designs of today and tomorrow. So far, however, the semiconductor industry has been slow to adopt generative and agentic AI for RTL design code. There have been many reasons for this hesitation, such as concerns about the quantity and source of RTL-based training data, plus the verification, quality, and reliability of AI-generated code, which is so critical to the success of a project. However, to stay competitive, the industry must embrace AI-driven hardware design to lower costs, expand accessibility, improve productivity, and drive innovation.

Register for the replay

A new EDA startup, Rise Design Automation (RDA), has developed a solution that enables the use of generative AI for design, verification, and exploration. It overcomes many of these objections and, coupled with the creativity of a human-in-the-loop, dramatically improves productivity to deliver high-quality RTL that is both verifiable and implementable.

RDA, in partnership with SemiWiki, will host a live webinar, Accelerating Semiconductor Design with Generative AI and High-Level Abstraction, where you can learn more about this solution and ask questions directly of the technical experts. In this webinar you will learn how Rise uses a unique combination of raised design abstraction, a comprehensive high-level toolchain, and a seamlessly integrated generative AI solution to deliver high-quality RTL and architectural innovation in a fraction of the time.

These three technologies complement each other well. Once the design abstraction is raised beyond RTL, the massive amount of high-level code (C, C++, Python, etc.) that existing LLMs have been trained on becomes a very effective training set for generating quality high-level code. This overcomes the questions and concerns around RTL-based training data. Built with industry-first high-level agents and easily deployable with pretrained language models, the Rise AI solution translates natural-language intent into human-readable, modifiable, and verifiable high-level design code, reducing manual effort and accelerating adoption.

Rather than relying solely on AI for Quality of Results (QoR), Rise augments human expertise with a comprehensive high-level toolchain for design, verification, debug, and architectural exploration to generate highly optimized RTL code. Raising design abstraction has been proven over many years to dramatically improve the productivity and quality of both design and verification, but it has not seen widespread adoption due to multiple factors. These often include the adoption/learning curve, lack of expertise on a project, knowing how to consistently achieve the needed QoR compared to hand-coded RTL, and verification questions. Generative AI with specialized high-level tool and language knowledge, complementing human creativity and expertise with both assistants and coding and optimization agents, can help overcome these challenges; it is like having a high-level design expert with you at all times. Additionally, Rise has added support for untimed and loosely timed SystemVerilog to the existing HLS languages of C++ and SystemC, so that RTL designers and project teams can choose the language that best fits their expertise and adoption comfort level.

This webinar is designed for engineers and project managers alike. Attendees will gain insights into practical applications of AI-driven design methodologies and how AI can be incorporated into the design process without compromising verification rigor. As SystemVerilog is new as a high-level language, the webinar will dive into a technical explanation of exactly what it looks like and how it works, along with the features of the high-level toolchain and how RTL and verification engineers could use it. With that foundation, it will explain the details of the generative AI solution and how it is built and works to assist, generate, optimize, and explore. The webinar will conclude with a live demonstration of the high-level toolchain running on a real design, with a code walk-through, simulation results, etc., followed by the AI solution interacting with both the design and the toolchain to assist, code-complete, optimize, and explore various PPA solutions. There will be plenty of time for interactive Q&A directly with the technical team.

Register for the replay
Also Read:

CEO Interview: Badru Agarwala of Rise Design Automation

An Imaginative Approach to AI-based Design


Vision-Language Models (VLM) – the next big thing in AI?
by Daniel Nenni on 03-27-2025 at 6:00 am

AI has changed a lot in the last ten years. In 2012, convolutional neural networks (CNNs) were the state of the art for computer vision. Then, around 2020, vision transformers (ViTs) redefined machine learning. Now, Vision-Language Models (VLMs) are changing the game again, blending image and text understanding to power everything from autonomous vehicles to robotics to AI-driven assistants. You’ve probably heard of the biggest ones, like CLIP and DALL-E, even if you don’t know the term VLM.

Here’s the problem: most AI hardware isn’t built for this shift. The bulk of what is shipping in applications like ADAS is still focused on CNNs, never mind transformers. VLMs? Nope.

Fixed-function Neural Processing Units (NPUs), designed for yesterday’s vision models, can’t efficiently handle VLMs’ mix of scalar, vector, and tensor operations. These models need more than just brute-force matrix math. They require:

  • Efficient memory access – AI performance often bottlenecks at data movement, not computation.
  • Programmable compute – Transformers rely on attention mechanisms, softmax, and similar operations that traditional NPUs struggle with (see the sketch after this list).
  • Scalability – AI models evolve too fast for rigid architectures to keep up.
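
As a rough illustration of why attention stresses fixed-function NPUs, here is a minimal NumPy sketch of scaled dot-product attention. Note the mix of matrix multiplies, which map well onto tensor units, and the softmax normalization, which is scalar/vector work that fixed MAC arrays handle poorly. The shapes and values are illustrative only.

  import numpy as np

  def attention(Q, K, V):
      """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
      d = Q.shape[-1]
      scores = Q @ K.T / np.sqrt(d)                 # matrix math: tensor units
      scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
      weights = np.exp(scores)                      # exp/normalize: vector work
      weights /= weights.sum(axis=-1, keepdims=True)
      return weights @ V

  Q = np.random.randn(4, 8); K = np.random.randn(4, 8); V = np.random.randn(4, 8)
  print(attention(Q, K, V).shape)                   # (4, 8)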

AI needs to be freely programmable. Semidynamics provides a transparent, programmable solution based on the RISC-V ISA, with all the flexibility that provides.

Instead of forcing AI into one-size-fits-all accelerators, you need architectures that let you build processors better suited to your AI workload. Semidynamics’ All-In-One approach delivers all the tensor, vector and CPU functionality required in a flexible and configurable solution. Instead of locking into fixed designs, a fully configurable RISC-V processor from Semidynamics can evolve with AI models—making it ideal for workloads that demand compute designed for AI, not the other way around.

VLMs aren’t just about crunching numbers. They require a mix of vector, scalar, and matrix processing. Semidynamics’ RISC-V-based All-In-One compute element can:

  • Process transformers efficiently—handling matrix operations and nonlinear attention mechanisms.
  • Execute complex AI logic efficiently—without unnecessary compute overhead.
  • Scale with new AI models—adapting as workloads evolve.

Instead of being limited by what a classic NPU can do, our processors are built for the job. Crucially they are fixing AI’s biggest bottleneck: memory bandwidth. Ask anyone working in AI acceleration—memory is the real problem, not raw compute power. If your processor spends more time waiting for data than processing it, you’re losing efficiency.

That’s why Semidynamics’ Gazzillion™ memory subsystem is a game-changer:

  • Reduces memory bottlenecks – Feeds data-hungry AI models with high efficiency.
  • Smarter memory access – copes with slow, external DRAM by hiding its latency.
  • Dynamic prefetching – Minimizes stalls in large-scale AI inference.

For AI workloads, data movement efficiency can be as important as FLOPS. If your hardware isn’t optimized for both, you’re leaving performance on the table.
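
A back-of-envelope roofline check makes the point; the peak-compute and bandwidth numbers below are illustrative assumptions, not Semidynamics benchmarks.

  # Roofline-style estimate: is a layer compute- or memory-bound?
  # Peak FLOPS and memory bandwidth are illustrative numbers (assumption).
  def layer_time_s(flops, bytes_moved, peak_flops=1e12, mem_bw=50e9):
      compute = flops / peak_flops        # seconds if compute-bound
      memory = bytes_moved / mem_bw       # seconds if memory-bound
      return max(compute, memory), ("memory" if memory > compute else "compute")

  # A GEMV-like attention step: few FLOPs per byte -> memory-bound
  print(layer_time_s(flops=2e9, bytes_moved=1e9))   # (0.02, 'memory')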

AI shouldn’t be held back by hardware limitations. That’s why RISC-V processors like our All-In-One designs are the future. And yet most RISC-V IP vendors are struggling to deliver the comprehensive range of IP needed to build VLM capable NPUs. Semidynamics is the only provider of fully configurable RISC-V IP with advanced vector processing and memory bandwidth optimization—giving AI companies the power to build hardware that keeps up with AI’s evolution.

If your AI models are evolving, why is your processor staying the same? The AI race won’t be won by companies using generic processors. Custom compute is the edge AI companies need.

Want to build an AI processor that’s made for the future? Get in touch with Semidynamics today.

Also Read:

2025 Outlook with Volker Politz of Semidynamics

Semidynamics: A Single-Software-Stack, Configurable and Customizable RISC-V Solution

Gazzillion Misses – Making the Memory Wall Irrelevant


CEO Interview with Jonathan Klamkin of Aeluma
by Daniel Nenni on 03-26-2025 at 10:00 am

Jonathan Klamkin, Ph.D. is founder and CEO of Aeluma, Inc. (ALMU). He is a Professor at the University of California Santa Barbara and has previously worked at Boston University, Scuola Superiore Sant’Anna, MIT Lincoln Laboratory, and BinOptics Corp. (a laser diode manufacturer that was acquired by Macom in 2015). He is the recipient of numerous awards including the NASA Young Faculty Award, the DARPA Young Faculty Award, and the DARPA Director’s Fellowship. He has published more than 230 papers, holds more than 30 issued and pending patents, and has delivered more than 120 invited presentations to industry, government and the academic community. Dr. Klamkin has nearly 25 years of experience in integrated photonics, compound semiconductors, and silicon photonics. He and team members have grown Aeluma from its conception into a transformative semiconductor company with a U.S.-based operation capable of producing high-performance chips at scale.

Tell us about your company.

At Aeluma, we are redefining semiconductor technology by integrating high-performance materials with scalable silicon manufacturing. Our goal is to bridge the gap between compound semiconductors and high-volume production, enabling AI, quantum computing, defense and aerospace, 3D sensing and next-generation communication applications. Traditionally, these high-performance semiconductors have been limited to low-volume, niche markets, but Aeluma’s proprietary approach allows us to scale this cutting-edge technology for mass-market adoption.

We have built a U.S.-based semiconductor platform, leveraging both internal R&D and foundry partnerships, to develop and commercialize next-generation chips. With strategic collaborations with NASA, DARPA, DOE, and the Navy, we are accelerating the development of AI-driven photonics, quantum dot lasers, optical interconnect solutions and high sensitivity detectors.

What problems are you solving?

As AI, quantum computing, high-performance computing (HPC), and sensing systems evolve, the demand for higher-speed, lower-power, and more scalable semiconductor solutions is growing rapidly. Traditional semiconductor architectures struggle to meet these demands, particularly in areas like AI acceleration, high-speed optical interconnects, quantum networking, and 3D sensing. Aeluma solves this by integrating compound semiconductors with large-diameter substrates (e.g., 200 and 300 mm), enabling mass production of photonic and electronic devices that significantly outperform existing solutions. By bringing monolithically integrated light sources to silicon photonics, we are eliminating a key bottleneck in AI and high-performance computing, improving speed, efficiency, and scalability beyond the limitations of conventional semiconductor technology.

What application areas are your strongest?

Aeluma’s technology is making a transformative impact in AI infrastructure, defense, quantum computing, and next-generation sensing. In AI and HPC, our quantum dot laser technology and high-speed optical interconnects enable ultra-fast, low-power data transfer, solving the bandwidth and power challenges facing next-generation AI accelerators and cloud infrastructure. In defense and aerospace, we work with NASA, DARPA, and the Navy to advance high-sensitivity sensing, quantum networking, and next-generation communications. These solutions are critical for autonomous systems, secure satellite communications, and precision navigation systems. In quantum computing, our silicon-integrated photonic materials are paving the way for scalable quantum networking and next-gen optical processors, essential for unlocking the next era of computational power. Additionally, our technology is driving advancements in mobile, AR/VR, and automotive lidar, where precision, performance, and scalability are paramount.

What keeps your customers up at night?

The biggest challenge for our customers is scaling AI and high-performance computing without hitting power, speed, and latency bottlenecks. As AI models grow, data centers are pushing the limits of existing semiconductor technology. Customers are looking for breakthroughs in chip architecture to maintain performance and efficiency as AI, quantum computing, and 6G networks continue to scale. For 3D sensing, customers desire low-cost and scalable approaches that are also eye safe. Another major concern is supply chain resilience. The semiconductor industry has seen significant disruptions, and companies are looking for reliable, scalable solutions with a strong U.S.-based supply chain. Aeluma is positioned to address both performance challenges and supply chain reliability, making next-gen AI and quantum computing infrastructure more scalable and accessible.

What does the competitive landscape look like and how do you differentiate?

The semiconductor industry is evolving rapidly, with NVIDIA, Intel, and Broadcom investing heavily in AI acceleration and optical networking. However, traditional chip architectures were not designed for the demands of modern AI and quantum computing. While some competitors are focused on incremental improvements, Aeluma is delivering fundamental advancements in semiconductor technology. Our differentiation comes from monolithic integration of quantum dot lasers with silicon photonics, which enables faster, more efficient AI acceleration, optical interconnects, and quantum networking. Our scalable U.S.-based manufacturing approach also sets us apart, allowing us to deliver breakthrough performance while maintaining cost efficiency at scale.

What new features/technology are you working on?

We are at the forefront of AI acceleration, quantum networking, and high-speed optical data transfer. Some of our key innovations include advancing the integration of quantum dot lasers with silicon photonics, enabling high-speed, low-power optical interconnects that are essential for next-generation AI accelerators, cloud data centers, and HPC systems. Additionally, we are developing advanced SWIR (shortwave infrared) photodetectors for defense and aerospace, energy, mobile, AR/VR, and automotive applications, providing high-sensitivity imaging and sensing for facial identification, 3D imaging, autonomous systems, and communications. Our work in next-gen optical computing solutions is also driving breakthroughs in photonics-based AI acceleration and quantum processing, addressing the speed and power limitations of traditional semiconductors. These innovations position Aeluma at the leading edge of semiconductor evolution, shaping the future of AI, quantum computing, and HPC.

How do customers normally engage with your company?

Aeluma partners with leading AI, defense and aerospace, and semiconductor companies, collaborating to integrate high-performance photonics and semiconductor solutions into their next-generation platforms. We engage with AI and HPC leaders to optimize optical interconnect solutions for next-gen AI accelerators, helping them achieve faster processing speeds with lower power consumption. Our strategic partnerships with various government agencies and the DOD support the development of high-sensitivity imaging, quantum networking, and autonomous systems. Additionally, we work closely with semiconductor manufacturers and foundries to scale high-performance semiconductors for mass-market adoption. Whether through joint development programs, direct technology licensing, or research collaborations, our customers engage with us to accelerate their technology roadmaps, improve system performance, and bring cutting-edge semiconductor innovations to market faster.

How do you see semiconductor technology evolving in the future, and what role will Aeluma play in that transformation?

Semiconductor technology is undergoing a fundamental shift, driven by rapid growth in AI, quantum computing, and HPC. Traditional silicon-based architectures are reaching their physical limits for higher processing speeds, lower power consumption, and greater data throughput. The future of semiconductors will be defined by advanced materials, integrated photonics, and large-scale heterogeneous integration, enabling faster, more efficient computing at scale.

Aeluma is positioned at the forefront of this transformation with a breakthrough semiconductor platform that integrates compound semiconductor materials with large-diameter silicon wafers. This approach eliminates performance bottlenecks in AI and quantum computing by providing performance at scale and at low cost. Aeluma’s large-diameter wafer capability and ISO 9001-certified operation allow us to produce high-speed, energy-efficient optical interconnect technologies that will be critical for next-generation AI accelerators, data centers, and quantum networks.

The market opportunity is massive. Global semiconductor sales are projected to reach $1 trillion as early as 2030, according to analysts at the Semicon West trade show in July 2024, including Needham & Co.’s Charles Shi and Gartner’s Gaurav Gupta, who suggests the milestone could occur closer to 2031 or 2032. Meanwhile, the silicon photonics market is expected to grow to approximately $8 billion by 2030, as reported by Grand View Research.

By bringing advanced photonics and compound semiconductors into mainstream semiconductor production, Aeluma is enabling the next era of computing, where speed, efficiency, and scalability define success. Our partnerships with government agencies and commercial customers further reinforce our leadership in shaping the future of AI-driven semiconductor technology.

Also Read:

CEO Interview with Brad Booth of NLM Photonics

CEO Interview with Jonas Sundqvist of AlixLabs

2025 Outlook with James Cannings QPT Limited


Metamorphic Test in AMS. Innovation in Verification
by Bernard Murphy on 03-26-2025 at 6:00 am

We have talked about metamorphic testing before. Here is a clever application to testing an AMS subsystem. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and lecturer at Stanford, EE292A) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is System Level Verification of Phase-Locked Loop using Metamorphic Relations and was published in the 2021 DATE conference. The authors are from the University of Bremen and the Johannes Kepler University in Austria. The paper has 5 citations.

A quick recap on metamorphic testing: in some cases it is difficult or impossible to construct a meaningful oracle against which test runs can be compared to find problems. Instead, metamorphic testing compares simulation behavior between two or more different tests for which certain properties of the simulation results are expected to remain the same (or close to the same). Let’s call these properties “invariants”. This method is especially interesting for AMS testing, where oracles inside a circuit are hard to define.

The authors use this approach to test a production PLL – part analog and part digital – by defining invariants that should hold given the structure of that function. Through this testing they were able to find an uncommon but real case in which a production PLL can lock to the wrong frequency.

Paul’s view

Fun paper this month: metamorphic testing of analog circuits. Metamorphic Testing (MT) is testing that doesn’t need a golden reference model for the design being tested. It relies instead on validating that certain relationships hold true between two different executions of a design. A common example is testing a design that implements the sin(x) function. One “metamorphic relation” (MR) for this function is that sin(x) = sin(180-x) for any value of x. So we can write a test that just runs the design with values x and 180-x and checks that the results are always the same. It is a simple concept, with a lot of published work showing how powerful it can be at catching corner-case bugs. Threadmill, which we blogged about last month, can be considered an MT system, since it runs multi-threaded programs many times and checks that the behaviors are identical, to try to catch concurrency-related bugs.
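
A minimal Python sketch of this MR, with a deliberately injected corner-case bug standing in for the design under test (both the bug and the tolerance are illustrative):

  import math, random

  def dut_sin(x_degrees):
      """Design under test: sine with an injected corner-case bug."""
      y = math.sin(math.radians(x_degrees))
      if 30.0 < x_degrees < 31.0:     # injected bug in a narrow input range
          y *= 1.01
      return y

  # Metamorphic relation: sin(x) == sin(180 - x); no golden model needed.
  for _ in range(10_000):
      x = random.uniform(0.0, 180.0)
      if abs(dut_sin(x) - dut_sin(180.0 - x)) > 1e-9:
          print(f"MR violated at x = {x:.3f}")
          break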

This paper applies MT to a commercial PLL and finds a corner-case bug where a small change in the PLL input clock frequency, from 1 MHz to 1.01 MHz, causes the phase-locking feedback loop in the PLL to break down. The paper’s main contribution is a number of clever MRs for PLLs, one of which catches this real silicon bug. This MR states that if the input clock frequency is multiplied by some factor C, then the feedback loop clock frequency inside the PLL must also be multiplied by the same factor C. Another MR states that the locking time should be the same irrespective of the input clock frequency.
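
A sketch of how such a frequency-scaling MR could be checked is below. The simulate_pll() wrapper is hypothetical: assume it runs the AMS simulation and returns the feedback clock frequency once the lock detector asserts; the tolerance is likewise illustrative, not taken from the paper.

  # MR: scaling the input frequency by C must scale the locked feedback
  # frequency by the same C. simulate_pll() is a hypothetical wrapper
  # around the AMS simulation (assumption).
  def check_frequency_scaling_mr(simulate_pll, f_in_hz, factor, tol=1e-3):
      f_base = simulate_pll(f_in_hz)
      f_scaled = simulate_pll(f_in_hz * factor)
      relative_error = abs(f_scaled - factor * f_base) / (factor * f_base)
      return relative_error < tol

  # e.g. the paper's trigger case: 1.00 MHz vs 1.01 MHz (factor C = 1.01)
  # ok = check_frequency_scaling_mr(simulate_pll, 1.00e6, 1.01)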

Short paper, easy read. A good motivator for us all to look at our DV suites and see if we missed writing assertions/properties that check if certain relationships hold across multiple tests, not only within a single test.

Raúl’s view

“Metamorphic” testing focuses on how a system transforms inputs rather than on static input-output pairs. For example, to test a program implementing sin(x), one can use sin(x) = sin(180-x) as a “metamorphic relation”. Instead of checking the expected output for a concrete input, run the program for an input x1 and afterwards for the input x2 = 180-x1, and check that the program gives the same output in both cases; otherwise there is a bug. Metamorphic testing has been used for software; a major advantage of this technique is that no reference model or value is needed.

This month’s paper discusses the application of metamorphic testing to Analog/Mixed-Signal (AMS) systems, specifically focusing on the verification of Phase-Locked Loops (PLLs). The authors from the University of Bremen and Johannes Kepler University identify 8 metamorphic relations for PLLs, for example: the PLL stays in the locked state if the input frequency is varied inside the lock range, and the Lock Detector signal stays on. They applied these to an industrial PLL (from an unnamed industrial partner) coded in SystemC and simulated with COSIDE, and discovered a previously undetected, rare but real case where the PLL could lock to the wrong frequency. The bug was related to a dead-zone effect in the Phase Frequency Detector (PFD) and was resolved by adding a delay element.

The paper is succinct and self-contained and is a pleasure to read. It gives a nice introduction to PLLs and to metamorphic testing of AMS systems, and it shows the potential of metamorphic testing for AMS verification and its ability to uncover hard-to-find bugs.

Also Read:

Compute and Communications Perspectives on Automotive Trends

Bug Hunting in Multi Core Processors. Innovation in Verification

Embracing the Chiplet Journey: The Shift to Chiplet-Based Architectures


Webinar: RF board design flow examples for co-simulating active circuits
by Don Dingee on 03-25-2025 at 10:00 am

In part one of this webinar series, Keysight and Modelithics looked at the use of 3D passive vendor component models supporting highly accurate, automated 3D EM-circuit co-simulation of high-frequency RF board designs. Part two continues the exploration of RF board design flows for simulating active circuits on boards, again with accurate, parameterized Modelithics models with the appropriate versions driving simulations in Keysight EDA Advanced Design System (ADS), RFPro, and Genesys.

Watch the replay now: Accelerate Your Design Flows with Highly Accurate Simulation Models

Design efficiency depends on using purchasable part values in accurate simulations

In this webinar, Keysight features the use of Modelithics model libraries of vendor parts with RFPro in ADS to perform accurate EM-circuit co-simulation of RF boards, and Genesys RF circuit synthesis to automatically pick the optimal discrete purchasable vendor part values to meet performance requirements of circuits built on RF boards.

If you have ever simulated and optimized even a relatively simple circuit on an RF board design, you may have noticed that the optimized component values are not available from any vendor. Designers must search vendor parts catalogs to identify real-life purchasable components, substitute those into the circuit, re-simulate, and either live with the slightly changed results or redesign using different part values. It is a very inefficient workflow.

Genesys’ unique Vendor Parts Synthesis (VPS) utilizes the Modelithics COMPLETE Library for RF circuit synthesis. VPS starts with gradient optimization to obtain the optimal theoretical part values to meet specs, then switches to a discrete grid search for the nearest purchasable real-life vendor part values, and further optimizes over the combinations of upper and lower nearest discrete values to produce the best realizable results. Designers have accurate simulation results and a purchasable bill of materials ready when simulations complete, saving a tremendous amount of manual schematic adjustment.
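
The two-stage idea can be sketched in a few lines of Python. The E12 part grid, the cost function, and the exhaustive neighbor search below are illustrative assumptions, not the actual VPS algorithm.

  import itertools

  E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]

  def purchasable_neighbors(value):
      """Nearest lower/upper stock values around a theoretical value."""
      candidates = sorted(round(v * 10.0 ** e, 3)
                          for e in range(-1, 3) for v in E12)
      lower = max((v for v in candidates if v <= value), default=candidates[0])
      upper = min((v for v in candidates if v >= value), default=candidates[-1])
      return {lower, upper}

  def discrete_refine(theoretical_values, cost):
      """Stage 2: try every combination of nearest stock values, keep the best."""
      grids = [purchasable_neighbors(v) for v in theoretical_values]
      return min(itertools.product(*grids), key=cost)

  # Stage 1 (not shown) would produce theoretical values by gradient
  # optimization, e.g. [3.1, 47.3] for two parts; stage 2 snaps to stock:
  cost = lambda vals: sum((v - t) ** 2 for v, t in zip(vals, [3.1, 47.3]))
  print(discrete_refine([3.1, 47.3], cost))   # -> (3.3, 47.0)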

Keysight ADS users can also employ gradient, followed by discrete optimization to obtain real-life-ready results with the Modelithics COMPLETE Library for Advanced Design System (ADS). Chris DeMartino, Applications Engineer at Modelithics, conveys a simple nonlinear component example – a model for and simulation of an Infineon BAS70 nonlinear Schottky diode in a 2.45 GHz detector circuit. Their measurement-based model of the diode delivers highly accurate simulations, as shown by the match in DC output voltage between simulated (red trace) and measured (blue Xs) results.

DeMartino provides detailed examples with Skyworks diodes and Mini-Circuits LNAs in his discussion on making EM-circuit co-simulation as easy as circuit analysis with ADS and RFPro.

Exploring what is possible with models for nonlinear components

Martin Trossing, Customer Success Manager for EDA at Keysight, builds on an example in ADS used in part one to illustrate 3D component spacing, but in this session emphasizes nonlinear behavior. His demonstration creates accurate simulations of amplifiers (LNA and PA) with EM-circuit co-simulation of the physical layout and Modelithics nonlinear component models.

Comprehensive evaluation of linear and nonlinear stability of amplifiers is easy with the Winslow stability analysis in ADS. One Winslow stability analysis with one schematic (no manual probe setups) replaces 14 separate traditional stability analyses, producing all results simultaneously.

After amplifier instabilities are detected, EM-circuit co-simulation and visualization can troubleshoot the physical locations and frequencies where undesired feedback occurs. This workflow enables amplifier designers to fix issues and eliminate multiple hardware re-spins.

Stepping through component models and RF board simulation

ADS RFPro mesh domain optimization (MDO) makes EM-circuit co-simulation of any chosen RF paths comprising layout traces and circuit models easy. It eliminates tedious traditional “cookie-cutting” of the layout for localized EM simulation, followed by manual connection of EM S-parameter ports to circuit nodes for EM-circuit analysis.

The Modelithics COMPLETE Library family contains over 28,000 passive and active components, accelerating EM-circuit co-simulations in Keysight EDA platforms and smoothly connecting designs to real-world component availability. More details and discussion follow in the RF board design flow webinar, and the replay is available now:

Watch the replay now: Accelerate Your Design Flows with Highly Accurate Simulation Models

Also Read:

Webinar: RF design success hinges on enhanced models and accurate simulation

Chiplets-Based Systems: Keysight’s Role in Design, Testing, and Data Management

Crosstalk, 2kAmp power delivery, PAM4, and LPDDR5 analysis at DesignCon


Ceva-XC21 and Ceva-XC23 DSPs: Advancing Wireless and Edge AI Processing
by Kalar Rajendiran on 03-25-2025 at 6:00 am

Ceva recently unveiled its XC21 and XC23 DSP cores, designed to revolutionize wireless communications and edge AI processing. These new offerings build upon the Ceva-XC20 architecture, delivering unmatched efficiency, scalability, and performance for 5G-Advanced, pre-6G, and smart edge applications. As demand grows for low-power, high-performance DSPs, Ceva’s latest innovations provide future-proof solutions tailored for a broad spectrum of industries.

Architectural Highlights

Both the Ceva-XC21 and Ceva-XC23 leverage the Ceva-XC20 architecture, providing a scalable, multi-threaded processing framework optimized for cost, power, and performance. The Ceva-XC21 offers best-in-class performance per area, ensuring multi-protocol support and LTE/5G compatibility, while the Ceva-XC23 delivers higher processing power to meet the demands of next-generation cellular and satellite communications. Additionally, software compatibility across all Ceva-XC20 DSPs and the legacy XC4500 ensures seamless migration and future scalability.

Highlights of Ceva-XC21 and Ceva-XC23 DSPs

The Ceva-XC21 DSP family introduces three advanced vector DSPs: the XC210, XC211, and XC212, each offering significant improvements in area efficiency, power consumption, and performance. These DSPs are optimized for cost-sensitive and size-constrained applications, such as IoT UE (eRedCap, RedCap, CAT M, CAT1, CAT4) and 5G Non-Terrestrial Networks (NTN) terminals. The Ceva-XC212, in particular, delivers up to 180% of the performance of the XC4500 with a 12% area reduction, making it a high-efficiency solution for 5G-Advanced processing.

On the other hand, the Ceva-XC23 DSP is tailored for high-end applications, including infrastructure (RAN), High Power User Equipment (HPUE), Fixed Wireless Access (FWA) and satellite communications (SATCOM). It boasts a 2.4X performance improvement, AI support, high-precision acceleration, and achieves speeds of up to 1.8GHz on TSMC’s 5nm process. With its ability to handle complex communication workloads, the XC23 has already been licensed by two Tier-1 OEMs for 5G-Advanced and pre-6G deployments.

Future-Proofing

Ceva’s XC21 and XC23 leverage the Software-Defined Radio (SDR) capabilities of the XC20 architecture, allowing seamless adaptation to evolving wireless standards via software updates. Their modular and configurable nature enables customers to tailor DSP performance, ensuring longevity and scalability in an era of rapidly advancing technology. The enhanced AI capabilities also support next-generation AI-driven signal processing and edge computing, making them highly adaptable for future innovations.

Built-In AI and ML Acceleration

The integration of AI and machine learning (ML) capabilities is another standout feature of the XC21 and XC23 processors. Both come with AI capabilities for modem and communications workloads. Tasks such as channel estimation and noise filtering are traditionally handled by DSP algorithms but can be supported more efficiently with the AI capabilities of the XC21 and XC23 processors.

Infrastructure Market Trends

The global RAN market remains strong, generating about USD 35-40 billion annually. The deployment of 5G-Advanced is accelerating, enabling enhanced connectivity and expanded network capabilities. The growth of private and industrial networks is further driving demand for customized private 5G solutions. Meanwhile, research and development for 6G technology is progressing, with commercial deployment expected by 2030. The rising network data traffic, projected to triple by 2030, necessitates advanced spectrum efficiency and infrastructure enhancements. Additionally, spectrum expansion efforts are exploring 7 GHz to 24 GHz frequency bands to accommodate future connectivity needs. The 6G market is forecasted to reach USD 68 billion by 2035, growing at an impressive CAGR of 76.9% from 2030 to 2035.

SATCOM Market – The New Space Race

The SATCOM industry is undergoing a paradigm shift, transitioning from proprietary technologies to 3GPP-compliant 5G NTN. Ceva is at the forefront of this transformation, powering Satellite 5G Base Stations that enable global coverage for consumer and industrial applications. Additionally, Ceva supports OEMs and satellite operators in user terminals, ground gateways, and satellite communication payloads. These solutions are critical as satellite communications become increasingly integrated with terrestrial 5G networks, expanding the reach of wireless connectivity.

Ceva’s Position in the Market

Ceva’s technology spans the entire cellular ecosystem, supporting applications in infrastructure, smartphones, and IoT. The company’s 5G RAN architectures extend across base stations, disaggregated DU/RU, Active Antenna Units (AAU), small cells, vRAN, Open-RAN, backhaul, and fronthaul solutions. Ceva also plays a critical role in the 5G smartphone industry, providing optimized DSP platforms and baseband modem solutions. Furthermore, in cellular IoT, Ceva delivers cutting-edge DSPs for RedCap, eRedCap, Cellular V2X, Industrial IoT, Fixed Wireless Access (FWA), and 5G Satellite connectivity.

Summary

With the Ceva-XC21 and Ceva-XC23 DSP families, Ceva continues to push the boundaries of performance, efficiency, and AI-driven processing in next-generation wireless communication and smart edge applications. By building on the scalable and future-proof Ceva-XC20 architecture, these offerings provide best-in-class solutions for 5G-Advanced, pre-6G, cellular IoT, and SATCOM. As the industry moves toward 6G, Ceva seems well-positioned to drive innovation in connectivity, AI acceleration, and advanced network infrastructure.

To learn more, visit the respective product pages below.

Ceva-XC21 page

Ceva-XC23 page

Also Read:

AI PC momentum building with business adoption anticipated

Bluetooth 6.0 Channel Sounding is Here

Spatial audio concepts targeted for earbuds and soundbars


CEO Interview with Brad Booth of NLM Photonics
by Daniel Nenni on 03-24-2025 at 10:00 am

Brad Booth, CEO of NLM Photonics, is a distinguished technology strategy and development leader, and influential in industry consortia and standardization. Prior to NLM, Booth served at Meta Platforms and Microsoft Azure, where he focused on developing next-generation optical connectivity solutions for Cloud and AI data centers. Previously, he worked at Dell, Intel, and Bell-Northern Research. Booth led the formation of the Ultra Ethernet Consortium, the Ethernet Technology Consortium, the Consortium for On-Board Optics, and the Ethernet Alliance. He is well-known in the networking industry and has received awards for his contributions to the industry and networking standards.

Tell us about your company?

NLM Photonics is working to change the trajectory of the photonics industry using groundbreaking hybrid organic electro-optic (OEO) materials. The photonics industry does not have an analogue to Moore’s Law in the electronics industry: for us, as bandwidth increases, so does power consumption. NLM focuses on shifting that power curve down by up to 50%.

What problems are you solving?

One of the most critical problems today is the power demand of AI data centers. Network power demands for AI data centers can be more than double those of traditional data centers. Photonics accounts for 70 percent of network power consumption, which is almost a third of an AI data center’s total power. NLM’s target is to cut photonics power consumption by up to 50 percent, which will have a significant impact on data center power efficiency.

What application areas are your strongest?

Energy-efficient modulation. Power consumption and modulation frequency are directly impacted by the losses inherent in the modulator. Use an inefficient or high-loss modulator, and you have to correct for that by burning more power. NLM’s energy-efficient modulation has gained traction in the photonics industry for datacom, telecom, and quantum applications, plus in the mmWave industry.

What keeps your customers up at night?

Customers across this industry are concerned about how to stay competitive on bandwidth capabilities while fitting within their power limitations. Whether they’re considering pluggable optics, co-packaged optics, or optical I/O, the challenges are complex. OEO materials, like NLM’s Selerion-HTX, can provide a path to address those limitations by offering increased bandwidth for significantly less power than competing technologies.

What does the competitive landscape look like and how do you differentiate?

Many incumbent technologies in the photonics industry are now being challenged by both inorganic and organic technologies. What I like about NLM’s technology is that we’re agnostic to the photonics platform, and there’s no disruption to the wafer development. And more importantly, NLM’s technology is designed for high thermal stability to make it suitable for high-volume manufacturing.

What new features/technology are you working on?

NLM Photonics continues to develop new materials, processes, and devices to tune performance, improve modulation efficiency, and accelerate the manufacturing process. Our forthcoming additions to the Selerion family of OEO materials will further redefine the boundaries of photonics performance. We look forward to sharing more on those developments in the near future.

How do customers normally engage with your company?

NLM’s customers engage with us directly today. We foresee that model will continue as we work to develop an ecosystem. Over time, our goal is to have our technology be ubiquitous throughout the semiconductor industry, enabling those in the industry to easily access NLM’s technologies for their developments and devices. If you’re a fabricator interested in partnering with us, connect with me on LinkedIn; I’d love to talk with you about the future of photonics.

Also Read:

CEO Interview with Jonas Sundqvist of AlixLabs

2025 Outlook with James Cannings QPT Limited

CEO Interview with Dr. Thang Tran of Simplex Micro