
Alchip’s Leadership in ASIC Innovation: Advancing Toward 2nm Semiconductor Technology

by Daniel Nenni on 04-01-2026 at 10:00 am


Alchip Technologies has recently reported significant progress in the development of advanced 2nm ASICs, positioning itself as a leader in next-generation semiconductor design for AI and high-performance computing (HPC). The announcement highlights Alchip’s efforts to commercialize cutting-edge chip technologies and deliver highly customized silicon solutions for data centers, hyperscalers, and AI infrastructure providers. These developments demonstrate how the company is preparing for the transition to one of the most advanced semiconductor process nodes in the industry.

A key milestone in Alchip’s 2nm strategy is the creation of a dedicated 2nm design platform, which enables customers to develop high-performance ASICs using the latest manufacturing technologies. This platform supports advanced packaging and chiplet integration methods such as 2.5D and 3D integrated circuit technologies, allowing designers to combine a 2nm compute die with input/output (I/O) chiplets produced on mature nodes such as 3nm or 5nm. This approach improves yield, reduces cost, and allows developers to integrate complex computing architectures more efficiently.
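The yield argument for splitting a large die into chiplets can be illustrated with the classic Poisson die-yield model, Y = exp(−D·A). The defect densities and die areas below are hypothetical round numbers, not Alchip or TSMC data; the point is only that a 2nm compute die plus an I/O die on a mature node can out-yield one large monolithic 2nm die:

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Classic Poisson yield model: Y = exp(-D * A)."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Hypothetical defect densities (defects/cm^2), for illustration only.
d_2nm = 0.3   # newer 2nm process, assumed higher defect density
d_5nm = 0.1   # mature 5nm process, assumed lower defect density

# Monolithic: one 8 cm^2 die entirely on 2nm.
monolithic = poisson_yield(d_2nm, 8.0)

# Chiplet: 5 cm^2 compute die on 2nm + 3 cm^2 I/O die on 5nm.
# Simplification: the package works only if both dies are good
# (assembly yield is ignored here).
chiplet = poisson_yield(d_2nm, 5.0) * poisson_yield(d_5nm, 3.0)

print(f"monolithic yield: {monolithic:.1%}")
print(f"chiplet yield:    {chiplet:.1%}")
```

Even though the chiplet package needs both dies to be good, the split wins because the expensive 2nm area shrinks and the I/O area moves to a lower defect density node.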

The transition to 2nm technology represents a major shift in semiconductor architecture. Unlike earlier nodes that relied on FinFET transistor designs, 2nm processes introduce nanosheet gate-all-around (GAA) transistors, which provide better electrostatic control and enable higher transistor density. These improvements allow chips to achieve better performance and power efficiency while continuing the scaling trends predicted by Moore’s Law. For AI workloads and large-scale data centers, these advantages are particularly important because they support faster processing speeds and reduced energy consumption.

Alchip has also successfully completed a 2nm test chip tape-out, which is a crucial step in validating the design methodology and manufacturing process. The test chip includes high-speed SRAM blocks and silicon performance monitors that provide real-time insights into chip behavior. These features allow engineers to evaluate the power, performance, and area (PPA) characteristics of the new process technology and refine the design flow for future customer products.

Another notable aspect of the test chip is the integration of Alchip’s AP-Link-3D input/output interface, which is designed to support advanced chiplet-based architectures and 3D integration technologies. Chiplet designs divide a large system-on-chip into smaller functional blocks that can be manufactured separately and then connected through high-speed interconnects. This method improves flexibility and scalability, allowing designers to combine different process nodes and specialized components in a single package. The success of the 2nm test chip demonstrates that Alchip’s design tools and intellectual property are ready for these emerging packaging approaches.

Developing chips at the 2nm node also presents significant challenges. The smaller transistor dimensions increase power density and thermal management issues, requiring careful floorplanning, power distribution, and cooling strategies. Alchip’s design methodology addresses these challenges by incorporating thermal-aware design techniques and early optimization of placement and routing. By solving these problems earlier in the design flow, the company aims to reduce development time and improve the likelihood of first-pass silicon success.

The company’s 2nm advancements are closely tied to the broader growth of AI and high-performance computing markets. Many hyperscale data center operators and cloud providers are increasingly turning to custom ASICs rather than off-the-shelf graphics processing units (GPUs) to optimize workloads and reduce operational costs. Alchip specializes in providing these custom silicon solutions, enabling companies to design chips tailored specifically for AI training, inference, networking, and other data-intensive applications. As AI systems continue to grow in complexity, demand for specialized ASIC designs built on advanced nodes such as 2nm is expected to increase significantly.

In addition, Alchip’s work on 2nm technology positions the company for future semiconductor generations. The insights gained from its test chips and design platform will help support the transition toward even more advanced nodes, including potential 1.6nm processes and new transistor architectures. By investing early in design methodologies and packaging technologies, Alchip aims to maintain its leadership in high-performance ASIC development.

Bottom line: Alchip’s reported 2nm ASIC developments highlight a major step forward in semiconductor innovation. Through its new design platform, successful test chip tape-out, and focus on advanced packaging and chiplet integration, the company is preparing customers for the next era of AI-driven computing. These efforts reinforce Alchip’s position as a key player in the global race to deliver faster, more efficient, and highly customized silicon solutions for future technology demands.

Alchip will be at the TSMC 2026 Technical Symposium, as will I. You can reach Alchip here. Check out their new website! And of course you can reach me via SemiWiki email if you are a member.

I hope to see you there!

Also Read:

2026 Outlook with Dave Hwang of Alchip

Revolutionizing AI Infrastructure: Alchip and Ayar Labs’ Co-Packaged Optics Breakthrough at TSMC OIP 2025

Alchip’s 3DIC Test Chip: A Leap Forward for AI and HPC Innovation


CapEx Up for Foundry, Memory

by Bill Jewell on 04-01-2026 at 6:00 am


Semiconductor Intelligence estimates total semiconductor industry capital spending (CapEx) was $166 billion in 2025, up 7% from 2024. We estimate 2026 CapEx will be $200 billion, up 20% from 2025. TSMC was the largest spender in 2025 with $40.9 billion in CapEx, 25% of the total. TSMC projects 2026 CapEx will be between $52 billion and $56 billion, an increase of 27% to 37% from 2025. The company cited 5G, AI and high-performance computing (HPC) as drivers of the increase in CapEx. Other foundries are projecting flat to down CapEx in 2026, except for GlobalFoundries with a 70% increase.

On March 21, Elon Musk announced plans for Terrafab, a wafer fab to provide semiconductor devices for Musk’s companies Tesla, SpaceX and xAI. The fab will be built in Austin, Texas, at a cost of $20 billion to $25 billion. When complete, the fab will have a capacity of one million wafer starts per month at a 2nm process node. Tech Insider predicts Terrafab will have initial production in 2028 and full production in 2032. The $25 billion cost spread out over six years is just over $4 billion a year. In 2026, Terrafab will acquire land, build infrastructure and likely begin construction. We estimate Terrafab will spend $3 billion in CapEx in 2026. We are placing Terrafab in the foundry category since its devices will be used by Musk’s companies and not sold on the open market.

Memory companies will account for the largest percentage of CapEx in 2026 at 45%. Samsung announced it will spend over 110 trillion won ($74 billion) in 2026 to “secure leadership in the AI semiconductor era”. We estimate about $34 billion of this investment will go toward R&D and non-semiconductor CapEx, leaving $40 billion for semiconductor CapEx, an increase of 20% from 2025. Micron Technology and SK Hynix each should increase CapEx by over 40% in 2026.

The integrated device manufacturers (IDMs) spent $41.3 billion on CapEx in 2025, down 25% from 2024. IDM CapEx should decline again in 2026 by about 9%. The decline in IDM CapEx is largely due to AI driving market growth. Most of the AI semiconductor market is supplied by memory companies and fabless companies such as Nvidia. Intel CapEx in 2025 was $17.7 billion, down 29% from 2024. Intel expects flat to down CapEx in 2026. For many years, Intel was one of the overall top three spenders along with Samsung and TSMC. Intel was passed by SK Hynix in 2025 and will be passed by Micron Technology in 2026. Texas Instruments will spend between $2 billion and $3 billion in CapEx in 2026, down from $4.6 billion in 2025 as it aligns with market conditions. STMicroelectronics and Infineon Technologies both plan CapEx increases in 2026.

What is the appropriate level of CapEx relative to the semiconductor market? The semiconductor market is notoriously volatile. Over the last forty years, annual change has ranged from 46% growth in 1984 to a 32% decline in 2001. Although the industry has become somewhat less volatile as it has matured, in the last few years it has shown an 8% decrease in 2023 and a 26% increase in 2025. Semiconductor companies need to plan their capacity several years out. It takes about two years to build a new wafer fab and additional time for planning and financing. As a result, the ratio of semiconductor CapEx to the semiconductor market varies greatly, as shown below.

The semiconductor CapEx to market size ratio has varied from a high of 34% to a low of 12%. The five-year-average ratio ranges between 18% and 28%. Over the total period of 1980 to 2025, CapEx was 23% of the semiconductor market. In 2023, the ratio was 31.1%, one of only seven times in the last 45 years it has been over 30%. The five-year-average ratio was 28.2%, only the third time since 1980 it has exceeded 28%. The ratio dropped to 25% in 2024 and 21% in 2025. Our current projection is that the ratio will drop to 19% in 2026 and the five-year-average ratio will drop to 24%. Thus, despite expected 20% CapEx growth in 2026, CapEx does not appear to be growing faster than the semiconductor market. If the semiconductor market continues healthy growth over the next few years, the industry should not face overcapacity.
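As a sanity check on these figures (my own arithmetic, not the author’s data), the market sizes implied by the quoted CapEx-to-market ratios can be backed out directly:

```python
# CapEx and ratio figures quoted in the text ($B and fraction of market).
capex = {2025: 166.0, 2026: 200.0}
ratio = {2025: 0.21, 2026: 0.19}

# Back out the semiconductor market size each ratio implies.
market = {year: capex[year] / ratio[year] for year in capex}
for year, size in market.items():
    print(f"{year}: implied semiconductor market = ${size:.0f}B")

# The implied 2026 market growth, consistent with the "healthy growth"
# the argument assumes.
growth = market[2026] / market[2025] - 1
print(f"implied 2026 market growth = {growth:.0%}")
```

The implied 2025 market of roughly $790 billion lines up with the 26% growth in 2025 mentioned above.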

Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry – manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Bill Jewell
Semiconductor Intelligence, LLC
billjewell@sc-iq.com

Also Read:

AI Drives Strong Semiconductor Market in 2025-2026

AI Bubble?

Semiconductors Up Over 20% in 2025


RISC-V Now! — Where Specification Meets Scale!

by Daniel Nenni on 03-31-2026 at 8:00 am


In forty-plus years as a semiconductor professional I have never seen a semiconductor design ecosystem built as fast and as strong as RISC-V’s. As a result, RISC-V Now! has emerged as a pivotal gathering, a conference with a clear and ambitious mission: to transform the open, modular, and flexible RISC-V ISA from an exciting specification into real products that ship at scale. Unlike many technical conferences that celebrate theoretical advances, academic breakthroughs, and road-map visions, RISC-V Now! is designed as a crucible where ideas are forged into hardware, software, and commercial ecosystems that meet real-world demand. It is here that “spec goes to scale.”

Last year’s event welcomed ~600 semiconductor professionals from 250+ companies, more than 50% of them engineering leaders. Companies represented included Apple, Google, Amazon, Meta, Intel, NVIDIA, Qualcomm, Samsung, TSMC, Synopsys, Cadence, Siemens EDA, Alibaba, and Renesas.

A Forum for Real Products

The central theme of RISC-V Now! is productization. Attendees, including engineers, architects, business leaders, and open-source advocates, come together not just to talk about what RISC-V could enable, but about what it is enabling now. From silicon implementations and SoC designs to compilers, operating systems, and security frameworks, the conference highlights concrete progress across the industry.

By emphasizing real products rather than prototypes or speculative technologies, RISC-V Now! reduces the gap between specification and shipment. Sessions are structured around case studies, deployment stories, and lessons learned from bringing RISC-V solutions to market. This pragmatic focus accelerates commercial maturity by helping participants avoid common pitfalls, leverage proven strategies, and adopt best practices that have already shown success.

Building an Ecosystem

A key challenge for any open standard is building a robust ecosystem — one that encompasses tools, IP blocks, software stacks, developer communities, and end-user markets. For RISC-V, ecosystem development is especially critical because the architecture thrives on extensibility; companies can add custom extensions to differentiate their products. Without shared tooling and interoperability norms, such flexibility could fragment the landscape. RISC-V Now! tackles this challenge head-on by bringing stakeholders together to align on standard extensions, testing frameworks, and compliance suites.

Workshops and working groups at the conference focus on unifying performance benchmarks, enhancing support in major toolchains like LLVM and GCC, and integrating RISC-V into mainstream operating systems such as Linux and real-time OSes for embedded systems. This collaborative environment accelerates ecosystem cohesion, which in turn boosts confidence among developers and buyers alike that RISC-V-based products will be reliable, sustainable, and future-proof.

Scaling for Global Impact

As more companies commit to RISC-V silicon, the need to scale goes beyond technical readiness — it requires scalable supply chains, global partnerships, and business models that can compete with incumbents. RISC-V Now! elevates discussions about commercialization strategies that work in diverse markets, from edge and IoT devices to high-performance computing.

Investor panels and industry keynotes explore how companies are securing funding, navigating IP landscapes, and building go-to-market channels that accelerate adoption. By spotlighting success stories from startups that have shipped RISC-V products, the conference demystifies the path to scale and signals to the broader tech ecosystem that RISC-V is more than an academic curiosity; it is a commercially viable alternative powering real devices in production.

Fostering a Collaborative Culture

Perhaps the most enduring impact of RISC-V Now! is the culture it fosters. The open ethos of RISC-V, where collaboration is not only encouraged but essential, is reinforced throughout the conference. Engineers share code, companies discuss interoperability challenges transparently, and cross-organizational working groups form organically. This culture accelerates innovation in ways that closed, proprietary models struggle to match.

By providing a forum where spec authors, implementers, and product teams converge, RISC-V Now! ensures that technical visions are grounded in practical realities and that practical challenges inform future standard evolution. In essence, it creates a virtuous cycle where the architecture continuously improves while products based on it flourish.

Bottom Line: RISC-V Now! is more than a technical conference; it is a catalyst for transformation in computing architecture. By focusing on tangible products that scale, it bridges the gap between open specification and market success. It builds community, fosters ecosystem maturity, and empowers innovators to take RISC-V from concept to widespread deployment. In an industry hungry for openness, flexibility, and performance, RISC-V Now! is where the future of open hardware is being built, one shipment at a time.

Silicon Valley, USA
DoubleTree By Hilton San Jose
2050 Gateway Place, San Jose, CA, 95110, US
Both Days Are Free To Attend.

I hope to see you there!

Also Read:

The Launch of RISC-V Now! A New Chapter in Open Computing

Pushing the Packed SIMD Extension Over the Line: An Update on the Progress of Key RISC-V Extension

RISC-V: Powering the Era of Intelligent General Computing


Nuclear Power and Design Automation

by Bernard Murphy on 03-31-2026 at 6:00 am

Nuclear reactors

A couple of folks have asked me to write on nuclear power. Nuclear offers additional sources for power generation, a pressing concern thanks to demand from giant data centers. Also, investment by Microsoft, Sam Altman, and others signals their urgency to accelerate past slow-moving utility plans. I have some background in this technology from my graduate education, so I feel comfortable that I’m not entirely winging it, in either fission or fusion reactors. The topic of interest in this forum of course is what this area might have to do with design automation. I’m expanding that brief to include software design and mechanical and fluidic design, in addition to electronics design. I’ll start with a review of the core technologies.

Fission reactors

Classical nuclear has been around for a while, in the big fission reactors which created so much anxiety around safety and nuclear waste. I was in Narita airport (Tokyo area) when the Tohoku earthquake hit in 2011. After we had been evacuated then let back into the arrivals area, everyone was glued to TV screens. All in Japanese of course but I’m pretty sure the coverage included the Fukushima nuclear power plant being swamped by the tsunami.

Concerning of course, but nuclear is very green compared to fossil fuel-based generation, now becoming a very important consideration. As a quick reminder, fission reactors run on a chain-reaction principle: a uranium-235 nucleus absorbs a neutron and splits, releasing two or three more neutrons, each of which can split another nucleus, creating a cascade of energetic neutrons. These neutrons heat up the surrounding liquid, and that energy is converted to steam through a heat exchanger. Steam then drives turbines to create electricity. Very competitive with fossil-fueled generation, with no carbon dioxide waste, though radioactive waste still requires geological disposal.
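The chain-reaction picture is commonly summarized by a single effective multiplication factor k, the average number of neutrons per fission that go on to cause another fission. A quick sketch shows how sharply behavior divides around k = 1 (a controlled reactor holds k at exactly 1):

```python
# Effective multiplication factor k: average neutrons per fission that
# cause another fission. k < 1: the chain dies out; k = 1: steady state
# (critical); k > 1: exponential growth (supercritical).
def neutron_population(k: float, generations: int, n0: float = 1.0) -> float:
    """Relative neutron count after a number of fission generations."""
    return n0 * k ** generations

for k in (0.9, 1.0, 1.1):
    n = neutron_population(k, 50)
    print(f"k = {k}: relative population after 50 generations = {n:.3g}")
```

Even a small deviation from k = 1 compounds quickly, which is why control rods and feedback mechanisms exist to hold k at unity.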

Fission reactor technologies continue to advance. Small Modular Reactors (SMRs) in particular are much easier to scale up. One unit can produce about a third of the power of a traditional reactor, but new units can be added quite quickly (subject to regulatory review) since components can be mass-produced offsite. Regulations are being upgraded to speed review and approval, moving quickly in the UK, with the Rolls-Royce SMR expected to come on-line in the early to mid-2030s at a cost in the range of $2.5B-$4B. Contrast that with $30B for a traditional full-size station. Regulatory processes are also being sped up in the US.

Molten Salt Reactors (MSRs) use liquid salt for heat exchange (conventional reactors and SMRs use water). Salt’s much higher boiling point (~1,500°C) allows MSRs to run much more efficiently than water-cooled systems and avoids the potential for steam explosions under over-pressure conditions. MSRs are still in tech prove-out. China seems to be most advanced, with a first test delivering an estimated 2MW, though the goal is to deliver 100MW by ~2035. The US, Canada and Europe all have projects under development.

Fusion reactors

Radioactive nuclei are naturally unstable, allowing us to tap the energy released by fission to generate electricity. At the other end of the nuclear mass scale, helium-4 (He4) is the most stable of all nuclei. If we fuse together lighter nuclei to create He4, that process will also release energy, ideally more than is required to initiate fusion, in theory with no radioactive waste.

Jamming together smaller positively charged nuclei means overcoming the electrostatic (Coulomb) barrier, on the order of 0.1 MeV, requiring temperatures around 10⁸ K in a plasma of (dissociated) atoms in which you aim to induce fusion. There are multiple technologies in development, all aiming to sustain a high enough temperature in the plasma, at a high enough density, for a long enough time to cross the point where more energy is produced than is put into the process. Commonwealth Fusion is aiming for first plasma in 2027. Helion Energy plans to meet a 2028 deadline to supply 50 MW of fusion-generated power to Microsoft’s data centers. Other technologies (hybrid, pulsed and inertial confinement) are still in R&D.
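The “hot enough, dense enough, long enough” condition is conventionally expressed as the Lawson triple product n·T·τ. A minimal sketch, using the commonly quoted D-T ignition threshold of roughly 3×10²¹ keV·s/m³ and hypothetical plasma parameters (not any specific machine’s):

```python
# Lawson triple-product check for D-T fusion.
# Commonly quoted ignition threshold: n * T * tau >= ~3e21 keV·s/m^3.
LAWSON_DT = 3e21  # keV * s / m^3

def triple_product(density_m3: float, temp_keV: float, confinement_s: float) -> float:
    """n * T * tau for a plasma."""
    return density_m3 * temp_keV * confinement_s

# Hypothetical plasma parameters, assumed for illustration:
# density 1e20 m^-3, temperature 15 keV (~1.7e8 K), confinement 3 s.
ntT = triple_product(1e20, 15.0, 3.0)
print(f"n*T*tau = {ntT:.2e} keV·s/m^3, above threshold: {ntT >= LAWSON_DT}")
```

Each approach trades the three factors differently: tokamaks run at modest density and long confinement, while inertial-confinement schemes use extreme density for nanoseconds.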

Extracting energy is another tricky step. Some approaches, like General Fusion’s, emit high-energy neutrons which are absorbed in a blanket layer around the plasma. These neutrons heat the blanket, and that heat is picked up by a coolant traveling through the blanket, exchanged to create steam, and used to drive turbines, very much like the approach used in fission reactors. Helion Energy instead uses an induction method to extract energy directly from the plasma and (plausibly) claims much higher efficiency than the traditional coolant-to-steam-to-turbine approach. There are also other methods.

There is little info so far on power generation capability. All methods still seem to be working towards first sustainable power.

Where does design automation fit?

Fission reactors

Electronics in containment chambers must be very radiation tolerant, apparently pushing complex designs towards FPGAs rather than processor-based architectures, though sensors and actuators may be built on rad-hard silicon-on-insulator (SOI) with error correction logic.

Digital twin modeling is used to exercise control software against physics models of neutron-based heating, to ensure the control system reacts quickly enough to pump failures across edge cases (to avoid meltdown).
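As a toy illustration of what such a digital twin checks, a lumped thermal model can test whether a given trip delay after a pump failure keeps the core under a limit. All constants here are invented for illustration, not reactor data:

```python
# Lumped thermal "digital twin" sketch: after a coolant-pump failure,
# the core heats at full power until the trip fires, then at residual
# decay-heat level. All constants are invented for illustration.
def core_temp_after_pump_failure(trip_delay_s: float, dt: float = 0.1) -> float:
    temp = 300.0           # coolant temperature (degrees C)
    power = 300.0          # heating before trip (MW), assumed
    decay_heat = 5.0       # residual heating after trip (MW), assumed
    heat_capacity = 50.0   # lumped heat capacity (MJ per degree C)
    t = 0.0
    while t < 60.0:        # simulate one minute with no forced cooling
        heating = power if t < trip_delay_s else decay_heat
        temp += heating / heat_capacity * dt
        t += dt
    return temp

LIMIT_C = 450.0  # hypothetical safety limit
for delay in (1.0, 10.0, 30.0):
    final = core_temp_after_pump_failure(delay)
    print(f"trip after {delay:4.0f}s -> T(60s) = {final:.0f} C, safe: {final < LIMIT_C}")
```

A real digital twin couples far richer neutronics and thermal-hydraulics models, but the question asked is the same: does the control response arrive in time across the edge cases?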

Mechanical and electrical place and route (coolant pipes, electrical conduits) has become very important in SMR design to manage routing through tight spaces while maintaining separation rules for safety.

Thermal analysis, along with stress modeling of the reactor core, is extremely important to assess potential overheating.

Formal verification is a requirement for proving software, also for FPGAs inside the containment vessel.

Fusion reactors

Managing a plasma stream is an immensely complex magnetohydrodynamic (MHD) fluidics problem. Plasma at a hundred million degrees or more cannot touch the sides of the containment vessel, since the plasma would collapse and do untold damage to the vessel. Containment methods depend on electric and/or magnetic fields which must respond incredibly quickly to variations. This is accomplished through high-speed control loops, and also through reinforcement learning to proactively sense and correct for disruptions. (Mach42 is one company I know of in this space.)
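A heavily simplified sketch of such a control loop: a one-dimensional stand-in for the plasma position, with an open-loop instability that a proportional-derivative correction must tame. The gains, growth rate, and dynamics are all invented for illustration and bear no relation to any real machine:

```python
# Toy 1-D feedback loop: "pos" stands in for plasma vertical position.
# Open loop, the position drifts away exponentially; a proportional-
# derivative (PD) correction term models the fast coil response.
# All constants are invented for illustration.
def simulate(kp: float, kd: float, steps: int = 2000, dt: float = 1e-4) -> float:
    pos, vel = 0.01, 0.0      # initial displacement (m) and velocity (m/s)
    growth = 50.0             # open-loop instability growth rate (1/s), assumed
    for _ in range(steps):
        accel = growth * pos - kp * pos - kd * vel  # instability vs. correction
        vel += accel * dt
        pos += vel * dt
    return abs(pos)

print("uncontrolled:", simulate(kp=0.0, kd=0.0))     # displacement grows
print("with PD loop:", simulate(kp=500.0, kd=40.0))  # displacement is damped out
```

Real plasma controllers face thousands of coupled modes at microsecond timescales, which is where the high-speed loops and learned policies mentioned above come in.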

Similar place and route requirements apply here, for cryogenic lines, high voltage lines, and fuel lines (to keep fueling the plasma).

I know that energy extraction in systems directly generating power requires very advanced power electronics. Technologies include wide bandgap semiconductors and inductive coupling techniques. I am not at all expert in this area.

Here also, digital twin modeling is important, for example to model plasma disruption to check if control responds fast enough. And again, formal verification is essential in any “cannot fail” circuits.

In summary: nuclear power as a near-term source of energy will depend on SMRs, with MSRs expected to come on-line somewhat later. Fusion is still further out. Perhaps the investment being pumped into fusion will accelerate prototypes which can sustainably generate energy. Meanwhile there are applications for design automation-like technologies, in mechanical place and route, in safety, and in digital twin modeling. This area will be interesting to watch.

Also Read:

CEO Interview with Charlie Peppiatt of Gooch & Housego

CEO Interview with Jussi-Pekka Penttinen of Vexlum Ltd

Sensors Converge: Where Intelligence Meets the Edge


CEO Interview with Charlie Peppiatt of Gooch & Housego

by Daniel Nenni on 03-30-2026 at 10:00 am


Charlie Peppiatt has served as Chief Executive Officer of Gooch & Housego since September 2022. He joined the company from TT Electronics, where he was Executive Vice President following TT’s acquisition of Stadium Group plc. Prior to that, Charlie served as Chief Executive Officer of Stadium Group from 2013 until its acquisition in 2018.

Earlier in his career, he held senior operational leadership roles at Laird plc, a FTSE 250 electronics company, including Vice President of Global Operations. Over more than three decades in high-technology manufacturing, Charlie has led global businesses supplying advanced electronics and engineered solutions into the medical, telecommunications, industrial, and aerospace & defense sectors.

Tell us about your company.

Gooch & Housego (G&H) is a global photonics engineering and manufacturing company specializing in high-performance optical components, subsystems, and systems. We operate across the full photonics value chain, from materials and crystal growth through precision optics, fiber optics, acousto-optics, and electro-optics, all the way to integrated optical assemblies.

Our technologies sit inside many of the world’s most demanding applications, including semiconductor manufacturing, telecommunications infrastructure, quantum technologies, aerospace and defense systems, and life sciences instrumentation.

What differentiates G&H is our ability to combine deep photonics expertise with vertically integrated manufacturing. Many customers come to us when they need a partner who can move beyond individual optical components and help engineer a complete optical subsystem that performs reliably in real-world environments.

Today we operate across multiple engineering and manufacturing sites in the U.S., U.K., and Europe, with partners in Asia, supporting global customers who rely on photonics to enable next-generation technologies.

What problems are you solving?

Photonics is often the enabling technology behind advances in computing, communications, and sensing. The challenge is that optical systems must deliver extremely high precision and stability while operating in complex environments.

Our role is to help customers solve those engineering challenges.

For example, semiconductor manufacturing tools require optical components and laser control systems that can maintain stability and precision at extreme levels of miniaturization, even down to the nanoscale. In telecommunications infrastructure, reliability is critical because components deployed in subsea networks must operate unattended for decades. In emerging fields like quantum computing and fusion energy research, photonic components must perform with exceptional accuracy and repeatability.

We work closely with customers to engineer optical solutions that meet those requirements while also being manufacturable at scale. That combination of performance and production readiness is where much of the real innovation happens.

What application areas are your strongest?

G&H has strong positions in several high-growth photonics markets.

Semiconductor and advanced manufacturing are key areas for us, where our optical components and systems support laser processing, metrology, and precision instrumentation used in semiconductor fabrication.

Telecommunications is another major sector, particularly in high-reliability fiber optic components used in subsea communication networks. These networks carry most of the global data traffic and require optical components with extremely long operational lifetimes.

We also support life sciences and medical instrumentation, providing precision optics and optical subsystems used in imaging, diagnostics, and analytical equipment.

In addition, we are seeing increasing demand from emerging technologies such as quantum computing, advanced sensing, and nuclear fusion research, where photonics plays a critical enabling role.

What keeps your customers up at night?

For most of our customers, the biggest challenge is balancing performance, reliability, and scalability.

Many photonics solutions work well in laboratory environments but become much harder to deploy reliably in real-world systems or high-volume manufacturing. Optical alignment tolerances, thermal stability, and long-term reliability can all impact system performance.

Customers also face increasing pressure to accelerate development timelines while ensuring that new technologies can scale into production.

That is why they often look for partners who can combine optical design expertise with manufacturing capability. By working collaboratively early in the design process, we can help ensure that optical systems are optimized not only for performance but also for manufacturability and long-term reliability.

What does the competitive landscape look like and how do you differentiate?

Photonics is a diverse ecosystem with many specialized suppliers focused on individual technologies or components.

G&H differentiates itself through the breadth of our photonics capabilities and our ability to integrate them. Because we work across multiple optical technologies, including acousto-optics, electro-optics, fiber optics, and precision optics, we can design solutions that combine these elements into a single integrated system.

Vertical integration is another important differentiator. By controlling critical processes such as crystal growth, optical fabrication, and advanced assembly, we can maintain tight control over quality, performance, and supply chain reliability.

Finally, our engineering culture is built around close collaboration with customers. Many of our most successful projects begin as joint development programs where we work alongside the customer’s engineering teams to solve complex optical challenges.

What new features or technologies are you working on?

We are investing in several areas where photonics will play an increasingly important role.

One is advanced fiber optic technologies that support the growing capacity demands of global communications infrastructure. This includes high-reliability fiber components designed for long-lifetime operation in subsea networks.

Another is photonics solutions for quantum technologies. These systems often require extremely precise optical control, and our expertise in acousto-optic and electro-optic devices is well suited to these applications.

We are also continuing to develop more integrated optical subsystems that combine multiple photonic technologies into compact, robust solutions for demanding environments.

Across all of these areas, our focus is on helping customers move from research and prototype stages into scalable production.

How do customers normally engage with your company?

Most engagements begin with a technical discussion around a specific challenge or application requirement.

In some cases, customers are looking for a specific optical component or assembly. In other instances, they need help designing an optical subsystem or solving a broader system-level problem.

Our engineering teams work closely with customers to understand the application, define the performance requirements, and identify the most effective solution. That collaboration often continues through prototyping, validation, and eventually production.

Because photonics systems are highly application-specific, long-term partnerships are common. Many of our customer relationships span years or even decades as technologies evolve and new programs are developed.

Also Read:

CEO Interview with JP Pentinen of Vexlum

CEO Interview with Moti Margalit of SonicEdge

CEO Interview with Dr. Mohammad Rastegari of Elastix.AI


CEO Interview with Jussi-Pekka Penttinen of Vexlum Ltd

by Daniel Nenni on 03-30-2026 at 6:00 am


Jussi-Pekka Penttinen is the chief executive officer, chief technical officer, and cofounder of Vexlum Ltd, an advanced laser technology company. With more than 15 years of experience, he is a leading researcher in the field of Vertical External Cavity Surface Emitting Lasers (VECSELs) and has successfully commercialized the technology. Vexlum has translated cutting-edge research into products as a fast-growing company, providing an enabling technology for the quantum industry and cutting-edge solutions in other markets.

Tell us about your company.

Vexlum is a manufacturer of advanced semiconductor lasers for high-impact applications, with deep roots in a unique academic collaboration that bridged continents and scientific disciplines. The company’s laser concept emerged from a crucial partnership between a quantum research group at NIST (National Institute of Standards and Technology) in Boulder, Colorado, and a semiconductor and optoelectronics team at Tampere University in Finland. This partnership eventually led to the development of Vexlum’s core technology. This history is directly connected to the foundational work of Nobel laureate David Wineland’s group, whose groundbreaking trapped ion research required the kind of laser capabilities that Vexlum’s technology was designed to deliver.

Looking to the future, Vexlum’s success in the quantum computing industry has made it possible to diversify into high-growth markets such as the semiconductor and medical industries. The extreme precision and stability required for quantum computing serve as a powerful validation of Vexlum’s technology, providing a strong reputation to leverage in other fields. Our lasers have potential applications in semiconductor manufacturing for precision lithography and inspection, as well as in medical treatments in dermatology and ophthalmology. By focusing on providing the most powerful engine for these diverse applications, Vexlum is already being recognized as an advanced laser company that empowers a wide array of human endeavors formerly thought impossible, from scientific discovery and space exploration to everyday health and technology.

What problems are you solving?

The size and cost of available lasers have long been recognized as a bottleneck in advancing quantum technologies, such as trapped-ion or neutral-atom quantum computers. Additionally, the lack of a mature enabling-technology supply chain further slows the scaling of quantum computing.

Laser systems are often bulky and expensive to integrate, requiring significant space. More than 100 different laser wavelengths are needed across all quantum technology implementations, and different applications impose conflicting requirements on size, weight, and performance.

What application areas are your strongest in?

Vexlum’s lasers are an enabling technology for some of the most demanding applications in science and industry. While the company’s roots are in solving the hardest problems of quantum computing, this has also enabled our lasers to be used in the newest optical atomic clocks and in semiconductor manufacturing. We have been particularly strong in scientific applications. Vexlum has delivered hundreds of high-performance, compact, and cost-effective lasers that replace older, more complicated, and expensive technologies used for research and space exploration. This strategy of democratizing access to cutting-edge laser technology is allowing a broader range of institutions and companies to push the boundaries of research and development.

What keeps your customers up at night?

The cost and size of lasers that must operate at an exact wavelength are a big concern. When new ideas and breakthroughs happen in science and industry, the actual implementation is often blocked by a lack of funding or space.

For example, in the space industry, there are challenges in communicating with satellites and pinpointing the location of objects orbiting the Earth due to unpredictable weather and light. To address this, lasers are used not only for the communication itself but also for locating the objects to be communicated with, using a special yellow laser. Currently, the benefits of these adaptive optical correction systems, which rely on large, bulky, and expensive lasers, are limited to large telescopes with the space and budget to operate systems that overcome the imaging blur created by atmospheric air currents. Vexlum’s technology addresses the key challenges of space-to-ground optical links, including turbulent air currents and the slower transfer speeds of radio waves, by eliminating the need for the massive, costly yellow lasers used in ELTs (Extremely Large Telescopes).

By making adaptive optics accessible to smaller telescopes, Vexlum’s approach opens the door to faster delivery of critical information, such as hyperspectral imaging for monitoring wildfires, floods, and ecosystems, as well as more precise tracking of satellites and space debris to enable trajectory corrections and collision avoidance.

What does the competitive landscape look like and how do you differentiate?

We are lucky to be located in Tampere, Finland, the emerging “Silicon Valley” of III–V compound semiconductor laser technology. Our patents, our unique technology based on the foundational work of Nobel laureate David Wineland’s group, a growing number of partnerships in cutting-edge science, and our presence in this hidden hub of compound semiconductor development keep us one step ahead of our competitors. This opportunity has been a long time in the making, built on many decades of research and innovation. We consider ourselves fortunate to be at the right time and place to see and participate in this moment of so many amazing breakthroughs enabled by new photonics advancements.

What new features/technology are you working on?

Vexlum just released its new VXL laser, the next generation in its single-frequency Vertical-External-Cavity Surface-Emitting Laser (VECSEL) portfolio, combining high performance with a compact, robust design.

In addition to being available at virtually any wavelength, the VXL is roughly 10 times smaller than many systems on the market with comparable power. The VXL platform delivers the same high output powers as Vexlum’s VALO platform in a dramatically smaller and more resilient package, bringing quantum-enabling technology within reach of more research and industry applications.

As a vertically integrated laser manufacturer, Vexlum is accelerating development of quantum technologies by providing single-frequency, high-power, low-noise lasers at an industry-leading selection of wavelengths. Along with being some of the most powerful and accurate lasers available for quantum computing applications, the company’s solutions are driving development in quantum sensing and lab-to-field deployment of quantum technologies.

A laser platform that typically comprised rack-mounted components is now reduced to a compact, two-liter system, a more than 20-fold reduction in volume, while improving robustness and accessibility. In addition to removing bottlenecks in scaling quantum technologies, the VXL has dual-use applications in the semiconductor, medical, and defense markets. The VXL has already been deployed in early-access projects by research organizations and universities focusing on quantum computing and quantum sensing technologies.

How do customers normally engage with your company?

Tell us your wavelength and we will custom-make a system for you.

When new discoveries require a specific laser wavelength, we work directly with the researchers or manufacturing team, often from the start of the project, on understanding and jointly developing the complex specification. Then we take those specs back to our factory to grow the custom semiconductor in our reactor and build a laser system designed for their exact application. Because this is science, there are often iterations of a chip or laser to get things exactly right for our customers’ design, but with close coordination, we are proud to say that Vexlum lasers have been part of some amazing advancements and industrial breakthroughs.

Why is it such an important advance in laser technology to be able to make a laser at any wavelength?

In the semiconductor and quantum industries, we have historically been ‘wavelength-locked’ by the physical limitations of material systems like gallium arsenide or indium phosphide. Breaking this barrier with a wavelength-agnostic platform like the VXL is a fundamental shift from building experiments around available tools to building tools around the science.

By delivering high-power, single-frequency performance at any customized wavelength within a compact, two-liter footprint, we are effectively ‘de-risking’ the transition from laboratory proof-of-concept to industrial-scale manufacturing. For quantum computing, this means researchers no longer need room-sized racks of temperamental lasers to manipulate specific atomic transitions; they can now integrate these systems into rugged, field-deployable units. In semiconductor manufacturing, this flexibility allows for high-precision metrology and lithography applications that were previously cost-prohibitive or physically impossible due to space constraints. Ultimately, the impact is a democratization of precision photonics: when you remove the ‘science project’ complexity from the light source, you allow the industry to focus on scaling the solutions that will define the next decade of computing and sensing.

Also Read:

CEO Interview with JP Pentinen of Vexlum

CEO Interview with Moti Margalit of SonicEdge

CEO Interview with Dr. Mohammad Rastegari of Elastix.AI


Sensors Converge: Where Intelligence Meets the Edge

Sensors Converge: Where Intelligence Meets the Edge
by Daniel Nenni on 03-29-2026 at 6:00 pm


The Sensors Converge Conference is one of the premier technical gatherings dedicated to the design, integration, and deployment of sensing technologies across industries. The event brings together engineers, system architects, researchers, and product developers to explore advancements in sensor hardware, edge computing, connectivity, artificial intelligence, and embedded systems. As sensing technologies become foundational to automation, digital transformation, and data-driven decision making, the conference serves as a focal point for examining both emerging innovations and practical implementation challenges.

A major technical theme of Sensors Converge is sensor miniaturization and integration. Advances in MEMS fabrication, system-in-package (SiP) architectures, and heterogeneous integration have enabled multiple sensing modalities—such as temperature, pressure, inertial measurement, and environmental monitoring—to be combined into compact modules. These integrated systems reduce power consumption, lower bill-of-material costs, and simplify deployment in space-constrained applications such as wearable devices, industrial robotics, and medical instruments. Engineers at the conference often discuss trade-offs between accuracy, drift, and calibration complexity when combining sensors into multi-function packages.

Edge intelligence is another key focus area. Traditional sensing systems relied heavily on cloud-based processing, but latency, bandwidth, and privacy constraints have accelerated the adoption of on-device analytics. Microcontrollers and embedded processors now integrate DSP blocks and AI accelerators capable of running lightweight machine learning models. These capabilities allow sensors to perform anomaly detection, predictive maintenance, and classification locally. Technical sessions frequently explore model quantization, TinyML frameworks, and hardware acceleration strategies that optimize inference performance under tight power budgets. The convergence of sensing and intelligence reduces data transmission requirements while enabling real-time responsiveness.
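
The quantization idea mentioned above can be put into a few lines. This is an illustrative sketch, not code from any conference session: a simple symmetric 8-bit scheme that maps float weights onto int8 with a single scale factor, the basic move behind many TinyML deployments.

```python
# Illustrative sketch: symmetric 8-bit post-training quantization of a
# weight vector, the core idea behind many TinyML model-compression flows.

def quantize_int8(weights):
    """Map float weights onto the int8 range [-127, 127] with one scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.003, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Rounding error is bounded by half a quantization step.
assert max(abs(a - b) for a, b in zip(weights, restored)) <= scale / 2 + 1e-12
assert all(-127 <= v <= 127 for v in q)
```

The trade-off the sessions discuss is visible even here: small weights (0.003) collapse to zero, which is why calibration and per-channel scales matter in practice.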

Power management is also a central engineering challenge addressed at the conference. Many sensor nodes operate in battery-powered or energy-harvesting environments, such as remote industrial monitoring or smart agriculture. Designers must balance sampling frequency, communication intervals, and processing workload to maximize operational lifetime. Emerging techniques include duty cycling, ultra-low-power wake-on-event architectures, and hybrid energy harvesting using solar, vibration, or thermal gradients. Discussions often highlight the importance of co-design between sensor hardware and firmware to achieve optimal power efficiency.
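
The duty-cycling trade-off is easy to put into numbers. The sketch below is a back-of-envelope lifetime model; the battery capacity and current figures are illustrative assumptions, not data from any presentation.

```python
# Back-of-envelope battery-life model for a duty-cycled sensor node.
# All capacity and current figures are illustrative assumptions.

def battery_life_days(capacity_mah, sleep_ua, active_ma, active_s, interval_s):
    """Average current under duty cycling, then lifetime in days."""
    duty = active_s / interval_s                      # fraction of time awake
    avg_ma = active_ma * duty + (sleep_ua / 1000.0) * (1 - duty)
    return capacity_mah / avg_ma / 24.0

# Assumed node: ~1000 mAh cell, 2 uA sleep, 10 mA active for 0.1 s per reading.
life_per_minute = battery_life_days(1000, 2, 10, 0.1, 60)    # sample every 60 s
life_per_10min  = battery_life_days(1000, 2, 10, 0.1, 600)   # sample every 600 s

# Lowering the sampling rate extends lifetime, but sub-linearly once the
# sleep current starts to dominate the average draw.
assert life_per_10min > life_per_minute
assert life_per_10min < 10 * life_per_minute
```

This is the co-design point made above: once the active bursts are short enough, further gains come from attacking the sleep current in hardware, not from firmware alone.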

Connectivity technologies form another major pillar of Sensors Converge. Engineers evaluate trade-offs among Bluetooth Low Energy, Wi-Fi, LoRaWAN, NB-IoT, and emerging ultra-wideband solutions. Each communication protocol presents unique benefits in range, throughput, latency, and energy consumption. For example, industrial monitoring applications may prioritize long-range low-power connectivity, while asset tracking systems require precise location accuracy. Conference presentations often include case studies demonstrating how hybrid connectivity strategies combine local mesh networking with cloud gateways for scalable deployments.
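
The protocol trade-offs can be made concrete with a toy selection routine. The range, throughput, and energy figures below are rough assumptions for illustration only; real link budgets depend heavily on deployment conditions.

```python
# Toy protocol-selection sketch. The figures are rough, assumed typical
# values for illustration, not vendor specifications.

PROTOCOLS = {
    # name: (typical range m, typical throughput kbps, relative energy cost)
    "BLE":     (100,      1000,  1),
    "Wi-Fi":   (100,     50000, 10),
    "LoRaWAN": (10000,       5,  1),
    "NB-IoT":  (10000,      60,  3),
}

def pick(min_range_m, min_kbps):
    """Lowest-energy protocol meeting the range and throughput floor."""
    candidates = [(energy, name)
                  for name, (rng, kbps, energy) in PROTOCOLS.items()
                  if rng >= min_range_m and kbps >= min_kbps]
    return min(candidates)[1] if candidates else None

assert pick(5000, 1) == "LoRaWAN"   # long range, tiny payloads
assert pick(50, 10000) == "Wi-Fi"   # short range, high throughput
```

Hybrid deployments fall out of the same logic: a local BLE or mesh hop for dense short-range traffic, feeding a long-range backhaul chosen by the same kind of constraint check.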

Sensor fusion and data reliability are also critical technical topics. Modern applications frequently combine data from multiple sensors to improve accuracy and robustness. For example, combining accelerometer, gyroscope, and magnetometer data enables precise orientation tracking. However, fusion algorithms must address noise, calibration mismatches, and environmental interference. Technical sessions explore Kalman filtering, Bayesian estimation, and machine-learning-based fusion approaches. These methods enhance performance in autonomous systems, robotics, and navigation technologies.
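
A minimal one-dimensional Kalman filter shows the predict-update loop at the heart of these fusion methods. This is a sketch only; production IMU fusion uses multi-state filters with calibration and bias models.

```python
# Minimal 1-D Kalman filter: fuse noisy scalar readings into a smoothed
# estimate. Process and measurement variances are illustrative assumptions.

def kalman_1d(measurements, process_var=1e-4, meas_var=0.25):
    x, p = measurements[0], 1.0        # initial estimate and its variance
    estimates = [x]
    for z in measurements[1:]:
        p += process_var               # predict: uncertainty grows over time
        k = p / (p + meas_var)         # Kalman gain: trust in the measurement
        x += k * (z - x)               # update with the measurement residual
        p *= (1 - k)                   # update shrinks the uncertainty
        estimates.append(x)
    return estimates

readings = [1.2, 0.9, 1.1, 1.4, 1.0, 1.05]
estimates = kalman_1d(readings)

# The filtered sequence has lower spread than the raw readings.
assert max(estimates) - min(estimates) < max(readings) - min(readings)
```

The same structure generalizes to the orientation-tracking case mentioned above by replacing the scalars with state vectors and covariance matrices.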

Security considerations have gained increasing attention as sensor networks expand. Embedded devices are often deployed in physically accessible environments, making them vulnerable to tampering and cyberattacks. Engineers discuss secure boot mechanisms, hardware root-of-trust, encrypted communication, and firmware update strategies. The integration of security at the silicon level is becoming essential to protect data integrity and system reliability. The conference emphasizes designing security features early in the development lifecycle rather than treating them as add-on components.
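
The verify-before-execute flow behind secure boot can be sketched as follows. Real secure boot uses asymmetric signatures anchored in a hardware root of trust; the HMAC here is a stand-in to keep the example self-contained, and the key is a placeholder.

```python
# Sketch of the firmware-integrity idea behind secure boot: verify an image
# against a keyed digest before executing it. HMAC stands in for the
# asymmetric signature a real root of trust would check.
import hashlib
import hmac

DEVICE_KEY = b"provisioned-at-manufacture"   # placeholder key material

def sign_image(image: bytes) -> bytes:
    return hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()

def verify_and_boot(image: bytes, tag: bytes) -> bool:
    # compare_digest avoids timing side channels during verification.
    return hmac.compare_digest(sign_image(image), tag)

firmware = b"\x7fELF...application image..."
tag = sign_image(firmware)
assert verify_and_boot(firmware, tag)             # untampered image boots
assert not verify_and_boot(firmware + b"X", tag)  # tampered image is refused
```

The same check applied to each stage of a boot chain, with the first stage's key fused into silicon, is the "hardware root of trust" pattern discussed at the event.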

Applications showcased at Sensors Converge span multiple industries, including healthcare monitoring, smart cities, automotive systems, industrial automation, and environmental sensing. These use cases illustrate how sensing technologies enable predictive analytics, operational efficiency, and improved safety. For instance, vibration sensors in industrial equipment can detect early signs of mechanical wear, reducing downtime and maintenance costs. Similarly, environmental sensors support air quality monitoring and climate research initiatives.

Bottom line: the Sensors Converge Conference highlights the interdisciplinary nature of modern sensing systems. By addressing hardware innovation, edge intelligence, connectivity, power management, security, and data analytics, the event reflects the evolution of sensors from standalone components into intelligent distributed systems. As industries continue to rely on real-time data and automation, the technologies presented at Sensors Converge will play a central role in shaping the next generation of embedded and connected devices.

REGISTER NOW

Also Read:

Arteris Highlights a Path to Scalable Multi-Die Systems at the Chiplet Summit

Siemens Wins Best in Show Award at Chiplet Summit and Targets Broad 3D IC Design Enablement

Verification Analytics: The New Paradigm with Cogita-PRO at DVCON 2026





Podcast EP337: The Importance of Network Communications to Enable AI Workloads with Abhinav Kothiala

Podcast EP337: The Importance of Network Communications to Enable AI Workloads with Abhinav Kothiala
by Daniel Nenni on 03-27-2026 at 10:00 am

Daniel is joined by Abhinav Kothiala, a principal product manager for the Synopsys Ethernet IP portfolio. He has over 12 years of experience across engineering and product management, spanning SoC design, functional verification, and building wireless connectivity platforms and IoT products. He also holds two patents in circuit design.

Dan discusses the evolution of Ethernet standards with Abhinav, who explains that traditional Ethernet is not well-suited for distributed AI workloads. In this informative discussion, Abhinav describes new and emerging interface protocols better suited to AI environments.

He discusses what is needed to achieve the required scale-up. Abhinav explains that the network must “disappear” by delivering very low latency, deterministic performance. The capabilities of ESUN and UALink are explored in detail. Abhinav also explains what the future will require and the role of the IP provider.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Musk’s Orbital Compute Vision: TERAFAB and the End of the Terrestrial Data Center

Musk’s Orbital Compute Vision: TERAFAB and the End of the Terrestrial Data Center
by Jonah McLeod on 03-27-2026 at 6:00 am


At the TERAFAB launch event in Austin on March 21, Elon Musk made a prediction that would have sounded like science fiction a decade ago—and may still: roughly 80 percent of AI compute will eventually move off-planet.

The argument is straightforward once you accept his premises. Earth-based data centers face three hard constraints—land, cooling, and grid capacity—and all three are getting worse as AI infrastructure demand accelerates. Land requires zoning, permitting, and proximity to fiber and power. Cooling consumes enormous quantities of water or electricity, or both. And grid capacity, particularly clean grid capacity, is increasingly contested.

Space, Musk argues, dissolves all three simultaneously. Satellites don’t need real estate. The vacuum of space is an ideal radiative heat sink—no water, no chillers, no mechanical systems at all. And solar irradiance above the atmosphere runs roughly five times the average output of a ground-based installation—not because the sun shines harder in space, but because a space-based array sees the sun continuously, with no night cycle, no weather, and no atmospheric losses. It is, Musk suggested, basically a free data center—if you can get there.

The obvious objection is launch cost. Getting hardware into orbit remains expensive by any terrestrial comparison. Musk’s counter is that Starship changes the math, and TERAFAB—announced the same evening, in a defunct Austin power plant, with light beams shooting into the sky and the Governor of Texas in the audience—changes it further.

TERAFAB is a $20–25 billion joint venture between Tesla, SpaceX, and xAI, to be built at Giga Texas in Austin, consolidating chip design, lithography, fabrication, memory production, packaging, and testing under one roof—vertical integration no semiconductor company has attempted at this scale, for reasons that will become apparent. The stated production target is chips with an aggregate power draw of one terawatt—roughly fifty times the estimated power consumption of all advanced AI chips currently in production worldwide.

Musk uses power draw as his unit of scale because it is the one metric that translates across wildly different chip architectures, and because it serves his core argument: total US grid capacity runs approximately 0.5 terawatts, making a terawatt of chip power physically impossible to run on Earth. Most of it, he concludes, must go to space. Getting that much compute into orbit means launching roughly 10 million tons per year—approximately 50,000 Starship flights annually, or one every ten minutes. Musk provided no construction or production timeline.
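
The cadence figure checks out arithmetically, assuming a payload on the order of 200 tons per Starship flight (an assumption; stated payload targets vary):

```python
# Checking the launch-cadence arithmetic quoted above. The 200-ton payload
# per flight is an assumption; Starship payload targets vary by source.

TONS_PER_YEAR = 10_000_000
PAYLOAD_TONS = 200
MINUTES_PER_YEAR = 365 * 24 * 60

flights = TONS_PER_YEAR / PAYLOAD_TONS
interval_min = MINUTES_PER_YEAR / flights

assert flights == 50_000
assert 10 <= interval_min <= 11   # roughly one flight every ten minutes
```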

TERAFAB is intended to produce two chip families: AI5, a purpose-stripped inference processor for Tesla vehicles and Optimus robots, with design nearly complete and small-batch production expected later this year; and D3, a space-hardened chip purpose-built for the orbital satellite constellation. Musk has described personal involvement in AI5’s design—the strategic decisions appear to be his; the detailed engineering work is being done by Tesla’s in-house chip team, whose names are not public. The D3 has no disclosed timeline, no foundry assignment, and no published architecture. SpaceX has already filed with the FCC to launch up to one million satellites built around it. The satellites are ready for ordering. The chip is ready for naming.

If launch prices fall to the levels Musk is targeting and TERAFAB delivers at anything approaching its stated capacity, the economics of orbital compute become at least arguable. Space offers effectively unlimited siting, free radiative cooling, and abundant solar power without grid or permitting constraints. In that model, the long-term savings eventually swamp the upfront cost of getting hardware off the ground. The physics are genuine. The execution is another matter.

What Stays on the Ground

Anything with a human or machine waiting on a response. Conversational AI, agentic pipelines, autonomous vehicles, industrial robotics, financial systems, real-time audio and video processing—all require response times that orbital round-trips cannot accommodate. LEO adds 40–80ms of latency before a single computation runs. GEO pushes that past 500ms. For a user waiting on a reply, or a robot waiting on a command, that’s disqualifying. Gravity, it turns out, is not the only thing keeping compute on Earth.
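
A quick propagation calculation, assuming a straight-overhead bent-pipe geometry (a simplification), shows why GEO is disqualifying on physics alone, while the 40–80ms quoted for LEO is dominated by constellation routing and network overhead rather than light travel time:

```python
# Propagation-delay floor for a request/response through a satellite link.
# The signal crosses the ground-to-satellite distance four times: request
# up and down, then response up and down. Straight-overhead geometry is a
# simplifying assumption; real paths and network processing add more.

C_KM_S = 299_792  # speed of light in km/s

def round_trip_ms(altitude_km, legs=4):
    return legs * altitude_km / C_KM_S * 1000

leo_ms = round_trip_ms(550)      # Starlink-class LEO altitude
geo_ms = round_trip_ms(35_786)   # geostationary altitude

assert leo_ms < 10    # physics floor is tiny; routing overhead dominates
assert geo_ms > 450   # GEO is latency-bound by physics alone
```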

What moves to orbit? Training runs and batch workloads. A model training job that takes days doesn’t care about a 60ms round-trip. Neither does batch inference, large-scale data processing, scientific simulation, or pre-generated content rendering. These are the workloads that consume the most power and are hardest to site on Earth—and they are genuinely good candidates for orbital migration, if someone can build the infrastructure to get them there.

The 80 Percent Problem

Here is where Musk’s headline figure deserves scrutiny. Current data on workload composition suggests the orbital-eligible fraction of global data center compute is closer to 20–30 percent—not 80. The gap between those numbers is not a rounding error. It is the entire argument.

According to McKinsey’s December 2025 data center demand model, total global data center demand in 2025 runs approximately 82 GW, with AI training accounting for 23 GW, AI inference 21 GW, and non-AI workloads 38 GW. [McKinsey & Company] Training—the most straightforwardly orbital-eligible workload—represents roughly 28 percent of the total. Add the latency-tolerant fraction of batch processing and non-AI workloads and you might reach 35–40 percent, generously.

The bigger problem is where growth is headed. Inference will account for roughly two-thirds of all AI compute by 2026, up from about one-third in 2023. [Deloitte Insights] And inference is structurally latency-bound. Inference workloads follow user behavior, and real-time responsiveness is key—which is why inference infrastructure needs to be close to population centers. [Edgecore] That requirement doesn’t dissolve with cheaper launch costs. It doesn’t dissolve at all.

McKinsey projects that by 2030, AI inference will represent more than 40 percent of total data center demand, overtaking non-AI workloads by 2029, while training holds steady at just under 30 percent. [McKinsey & Company] The dominant and fastest-growing category of compute is precisely the one most resistant to orbital migration. Musk’s 80 percent assumes a future where most inference migrates off-planet—which would require either a latency breakthrough that does not appear on any roadmap, or a fundamental restructuring of how AI applications are built that nobody has proposed.

None of this invalidates the core insight. Training workloads are insensitive to latency and can tolerate delays of up to 100 milliseconds between adjacent regions, which already allows hyperscalers to site them in remote, power-rich areas where grid capacity, land, and water are more available. [McKinsey & Company] Orbit is simply the logical extreme of that same siting logic. A more defensible claim might be that orbital compute captures 25–35 percent of global data center demand within the next two decades, concentrated in training and scheduled batch workloads. That is still an enormous market. It is just not the one Musk described in Austin.

The Harder Questions

Thermal management in low Earth orbit, radiation hardening at scale, on-orbit servicing, and debris risk remain largely unaddressed in Musk's public presentation. The D3's design philosophy of running hotter to shed radiator mass is elegant engineering thinking. But a chip that hasn't taped out is not a solution to any of those problems yet. And the launch arithmetic is sobering: 50,000 Starship flights a year is not an engineering challenge; it is a category error relative to anything in the current manifest.

What is real: the terrestrial power constraint driving this vision is genuine and worsening. The semiconductor and systems industries have been quietly watching data center power demand outrun grid capacity for years. Musk is the first person with launch infrastructure, chip design capability, and apparent willingness to spend $25 billion making the orbital alternative credible. That is worth taking seriously, even if the specific numbers are not.

In Austin last week, the conversation shifted. Whether or not TERAFAB delivers on its promises, orbital compute is no longer a thought experiment. That much Musk has accomplished—which is, it should be said, more than most people accomplish in a career.

The rest of the scorecard, however, looks like this: Dojo was cancelled, revived, renamed, and partially absorbed into AI6—all within six months. AI5 was “finished” in July 2025, “almost done” in January 2026, and still not taped out in March. The D3 chip that the entire orbital compute vision depends on has no disclosed design, foundry, or timeline. SpaceX has an FCC filing for a million satellites built around a chip that doesn’t exist yet. And TERAFAB itself has no construction timeline and a price tag that isn’t in Tesla’s capital plan.

Standing in front of all of that, Musk announced the next three projects: megawatt satellites, a lunar factory, and an electromagnetic mass driver on the Moon.

He is, as ever, a man who is always three projects ahead of his last unfinished one.

Also Read:

Silicon Insurance: Why eFPGA is Cheaper Than a Respin — and Why It Matters in the Intel 18A Era

Captain America: Can Elon Musk Save America’s Chip Manufacturing Industry?

TSMC Technology Symposium 2026: Advancing the Future of Semiconductor Innovation