
Silicon Catalyst and Microelectronics US 2026

Silicon Catalyst and Microelectronics US 2026
by Daniel Nenni on 04-02-2026 at 10:00 am

Silicon Catalyst Microelectronics US 2026 Conference

The designation of Silicon Catalyst as the exclusive strategic partner for Microelectronics US 2026 represents a significant alignment between a leading semiconductor startup ecosystem and a rapidly growing U.S. microelectronics industry event. This partnership reflects broader trends in semiconductor innovation, including the increasing importance of startup-driven technology development, cross-sector collaboration, and national supply-chain resilience.

Microelectronics US 2026 is scheduled for April 22–23, 2026, at the Palmer Events Center in Austin, Texas. The conference aims to convene senior engineers, technical architects, investors, manufacturing specialists, and policy stakeholders from across the U.S. microelectronics ecosystem. The event is designed to focus on semiconductor design, advanced manufacturing, embedded systems, AI hardware, and supply-chain innovation. By bringing together these stakeholders, the conference seeks to foster technical collaboration and accelerate commercialization pathways for emerging technologies.

The exclusive strategic partnership with Silicon Catalyst enhances this mission. Silicon Catalyst is widely recognized as an accelerator dedicated exclusively to semiconductor startups, providing incubation programs, technical resources, and access to investors and corporate partners. Through its ecosystem, startups gain access to industry advisors, design tools, manufacturing resources, and strategic guidance that can shorten development cycles and reduce the capital barriers typically associated with chip innovation.

Under the partnership, Silicon Catalyst collaborates with IQPC Exhibitions, the event organizer, to strengthen Microelectronics US 2026 as a key commercial and technical platform for the U.S. semiconductor industry. This strategic role positions Silicon Catalyst to influence program development, connect startups with industry stakeholders, and highlight emerging technologies from its accelerator portfolio.

The technical significance of this collaboration lies in the evolving nature of semiconductor innovation. Historically, semiconductor advances were dominated by large IDMs and fabless companies with substantial capital resources. However, the industry is increasingly driven by specialized startups developing domain-specific accelerators, photonics-based processors, chiplets, and heterogeneous integration technologies. These startups often rely on ecosystem partnerships to access Design IP, EDA tools, and manufacturing capacity. By integrating Silicon Catalyst into the conference structure, Microelectronics US 2026 aims to create a platform that supports this new model of distributed innovation.

Another key dimension is workforce and ecosystem development. Microelectronics US 2026 is expected to host more than 3,000 attendees and feature over 150 exhibitors covering chip design, AI, photonics, embedded systems, and power electronics. Such scale provides an opportunity to connect startup founders with suppliers, foundries, packaging companies, and system integrators. This interaction is critical for translating research concepts into manufacturable silicon solutions.

From a technical perspective, Silicon Catalyst’s involvement may emphasize emerging areas such as chiplet-based architectures, AI accelerators, MEMS sensors, and quantum-related semiconductor technologies. These domains require collaboration across multiple disciplines including device physics, packaging engineering, and system architecture. The accelerator’s network of advisors and in-kind partners can help bridge these disciplines, enabling startups to move from concept to tape-out more efficiently.

The partnership also aligns with broader national priorities. The U.S. semiconductor ecosystem has increasingly emphasized domestic innovation capacity, supply-chain resilience, and advanced packaging leadership. Conferences such as Microelectronics US serve as coordination points for academia, startups, and established companies. By leveraging Silicon Catalyst’s startup pipeline, the event can highlight early-stage technologies that may evolve into future production platforms.

In addition, the exclusive nature of the partnership suggests a deeper integration than typical sponsorship arrangements. Silicon Catalyst’s role may include curating startup showcases, facilitating investor meetings, and contributing to technical sessions. This could lead to more practical discussions focused on commercialization challenges, such as design-for-manufacturability, IP reuse, and advanced packaging integration.

REGISTER HERE

Bottom line: The designation of Silicon Catalyst as the exclusive strategic partner of Microelectronics US 2026 underscores the growing importance of startup ecosystems in semiconductor innovation. By combining a dedicated accelerator with a large-scale industry conference, the partnership creates a platform that connects early-stage innovation with manufacturing expertise, investment capital, and system-level integration. This collaborative model reflects the evolving structure of the semiconductor industry, where breakthroughs increasingly emerge from coordinated ecosystems rather than isolated organizations.

Also Read:

Post-Silicon Validating an MMU. Innovation in Verification

Revolutionizing AI Infrastructure: Alchip and Ayar Labs’ Co-Packaged Optics Breakthrough at TSMC OIP 2025

Alchip’s 3DIC Test Chip: A Leap Forward for AI and HPC Innovation


Webinar – How to Reclaim Margin in Advanced Nodes

Webinar – How to Reclaim Margin in Advanced Nodes
by Mike Gianfagna on 04-02-2026 at 6:00 am

Webinar – How to Reclaim Margin in Advanced Nodes

This informative webinar discusses a significant issue that is cropping up for sub-5nm designs. As the graphic above shows, modeling uncertainty at advanced nodes results in excessive guard banding. These guard bands reduce performance and profit. A loss of 25–35% in performance, power, and area (PPA) is discussed, along with the lost profit from paying for advanced-node performance and not being able to take advantage of it.

You will learn a lot about the dimensions of this problem and how to fix it, resulting in improved performance, competitiveness and profit. A replay link is coming but first let’s examine how to reclaim margin in advanced nodes.


The Presenter

Dave Johnson

Dave Johnson is the webinar presenter. Dave works in strategic sales at ClockEdge. Prior to his decades-long career in EDA, Dave was an ASIC engineer specializing in custom IC development. He has worked with many of the largest semiconductor companies around the world to optimize their design flows. He believes deeply that the choice of design methodology matters, significantly impacting a project's success.

Dave is quite knowledgeable on the topic of design margins. He has an easy-to-follow presentation style. You will learn a lot during this short (22-minute) webinar.

The Webinar

Dave begins by describing the margin problem as a silent crisis in advanced-node design. He discusses the widespread use of abstractions to drive the design of ever-larger chips. He describes the "abstraction tax" that results from the gap between the estimates that drive design margins and actual silicon behavior. Dave gets into the details of what drives this "abstraction tax" and what penalties result. He then discusses a new and unique solution that enables design teams to reclaim the wasted margin at advanced nodes so the true value of advanced processes can be realized.

He describes the pessimism wall that exists below 5nm. He goes on to explain that at 3nm, the foundry promises, and design teams expect, a 15-18% performance improvement at constant power, or a 30-34% power reduction at constant frequency.

The Pessimism Wall

He goes on to explain that these gains are vanishing due to the pessimism wall. Now, the primary performance bottleneck is not silicon capability, but an abstraction-based methodology. Margins are now heavily inflated to compensate for methodology uncertainty. For example, clock sign-off guard bands routinely consume 25-35% of the available clock period. This results in over-designing the network by 2.5X. The figure at the right summarizes these points.
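A quick back-of-envelope calculation (a sketch, using only the percentages quoted above) shows why the guard-band numbers matter:

```python
# Illustrative arithmetic for the guard-band figures quoted in the webinar.
# If sign-off guard bands consume a fraction g of the clock period, only
# (1 - g) of the period remains for useful logic, capping the achievable
# frequency at (1 - g) of what the silicon could otherwise support.

def usable_fraction(guard_band: float) -> float:
    """Fraction of the clock period left for logic after guard banding."""
    return 1.0 - guard_band

for g in (0.25, 0.35):
    print(f"guard band {g:.0%} -> usable period {usable_fraction(g):.0%}")
```

Set against the 15-18% node-to-node performance gain mentioned earlier, a 25-35% guard band can more than erase the benefit of moving to the new node.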

Dave then explores the details of the abstraction tax. He discusses the areas that contribute to the problem, including near-threshold voltage sensitivity, power supply-induced jitter, interconnect-dominated clock delay, aging, and local variability and Liberty Variation Format (LVF) residuals. You will learn a lot about the impact all these items have.

Dave then explores the ROI associated with recovering the lost margin due to these effects. Performance, clock tree area, dynamic power and binning yield are all discussed.

An effective solution to these problems offered by ClockEdge is then explored in some detail. Dave explains how the ClockEdge Veridian Engine can deliver full clock SPICE-level analysis overnight for over 100 million gate designs. He explains the significant impact a tool like this can have on advanced node design, allowing the abstraction tax to be removed. Design teams can now access the full capability offered by advanced nodes.

The webinar concludes with a very informative question and answer session.

To Learn More

If you struggle to get all the benefits offered by advanced process nodes due to excessive design margins, you need to watch this webinar. In a short 22 minutes, you will understand the problem much better and learn about a new and effective solution to unlock superior performance and increased profitability.

You can access the webinar replay here. And that’s how to reclaim margin in advanced nodes.

What is the 3nm Pessimism Wall and Why is it An Economic Crisis?

The Risk of Not Optimizing Clock Power

Taming Advanced Node Clock Network Challenges: Jitter


Alchip’s Leadership in ASIC Innovation: Advancing Toward 2nm Semiconductor Technology

Alchip’s Leadership in ASIC Innovation: Advancing Toward 2nm Semiconductor Technology
by Daniel Nenni on 04-01-2026 at 10:00 am

Alchip’s Leadership in ASIC Innovation

Alchip Technologies has recently reported significant progress in the development of advanced 2nm ASICs, positioning itself as a leader in next-generation semiconductor design for AI and HPC. The announcement highlights Alchip’s efforts to commercialize cutting-edge chip technologies and deliver highly customized silicon solutions for data centers, hyperscalers, and AI infrastructure providers. These developments demonstrate how the company is preparing for the transition to one of the most advanced semiconductor process nodes in the industry.

A key milestone in Alchip’s 2nm strategy is the creation of a dedicated 2nm design platform, which enables customers to develop high-performance ASICs using the latest manufacturing technologies. This platform supports advanced packaging and chiplet integration methods such as 2.5D and 3D integrated circuit technologies, allowing designers to combine a 2nm compute die with input/output (I/O) chiplets produced on mature nodes such as 3nm or 5nm. This approach improves yield, reduces cost, and allows developers to integrate complex computing architectures more efficiently.

The transition to 2nm technology represents a major shift in semiconductor architecture. Unlike earlier nodes that relied on FinFET transistor designs, 2nm processes introduce nanosheet or gate-all-around (GAA) transistors, which provide better electrostatic control and enable higher transistor density. These improvements allow chips to achieve better performance and power efficiency while continuing the scaling trends predicted by Moore’s Law. For AI workloads and large-scale data centers, these advantages are particularly important because they support faster processing speeds and reduced energy consumption.

Alchip has also successfully completed a 2nm test chip tape-out, which is a crucial step in validating the design methodology and manufacturing process. The test chip includes high-speed SRAM blocks and silicon performance monitors that provide real-time insights into chip behavior. These features allow engineers to evaluate PPA characteristics of the new process technology and refine the design flow for future customer products.

Another notable aspect of the test chip is the integration of Alchip’s AP-Link-3D input/output interface, which is designed to support advanced chiplet-based architectures and 3D integration technologies. Chiplet designs divide a large system-on-chip into smaller functional blocks that can be manufactured separately and then connected through high-speed interconnects. This method improves flexibility and scalability, allowing designers to combine different process nodes and specialized components in a single package. The success of the 2nm test chip demonstrates that Alchip’s design tools and intellectual property are ready for these emerging packaging approaches.

Developing chips at the 2nm node also presents significant challenges. The smaller transistor dimensions increase power density and thermal management issues, requiring careful floorplanning, power distribution, and cooling strategies. Alchip’s design methodology addresses these challenges by incorporating thermal-aware design techniques and early optimization of placement and routing. By solving these problems earlier in the design flow, the company aims to reduce development time and improve the likelihood of first-pass silicon success.

The company’s 2nm advancements are closely tied to the broader growth of AI and high-performance computing markets. Many hyperscale data center operators and cloud providers are increasingly turning to custom ASICs rather than off-the-shelf graphics processing units (GPUs) to optimize workloads and reduce operational costs. Alchip specializes in providing these custom silicon solutions, enabling companies to design chips tailored specifically for AI training, inference, networking, and other data-intensive applications. As AI systems continue to grow in complexity, demand for specialized ASIC designs built on advanced nodes such as 2nm is expected to increase significantly.

In addition, Alchip’s work on 2nm technology positions the company for future semiconductor generations. The insights gained from its test chips and design platform will help support the transition toward even more advanced nodes, including potential 1.6nm processes and new transistor architectures. By investing early in design methodologies and packaging technologies, Alchip aims to maintain its leadership in high-performance ASIC development.

Bottom line: Alchip’s reported ASIC-leading 2nm developments highlight a major step forward in semiconductor innovation. Through its new design platform, successful test chip tape-out, and focus on advanced packaging and chiplet integration, the company is preparing customers for the next era of AI-driven computing. These efforts reinforce Alchip’s position as a key player in the global race to deliver faster, more efficient, and highly customized silicon solutions for future technology demands.

Alchip will be at the TSMC 2026 Technical Symposium, as will I. You can reach Alchip here. Check out their new website! And of course, you can reach me via SemiWiki email if you are a member.

I hope to see you there!

Also Read:

2026 Outlook with Dave Hwang of Alchip

Revolutionizing AI Infrastructure: Alchip and Ayar Labs’ Co-Packaged Optics Breakthrough at TSMC OIP 2025

Alchip’s 3DIC Test Chip: A Leap Forward for AI and HPC Innovation


CapEx Up for Foundry, Memory

CapEx Up for Foundry, Memory
by Bill Jewell on 04-01-2026 at 6:00 am


Semiconductor Intelligence estimates total semiconductor industry capital spending (CapEx) was $166 billion in 2025, up 7% from 2024. We estimate 2026 CapEx will be $200 billion, up 20% from 2025. TSMC was the largest spender in 2025 with $40.9 billion in CapEx, 25% of the total. TSMC projects 2026 CapEx will be between $52 billion and $56 billion, an increase of 27% to 37% from 2025. The company cited 5G, AI and high-performance computing (HPC) as drivers of the increase in CapEx. Other foundries are projecting flat to down CapEx in 2026, except for GlobalFoundries with a 70% increase.

On March 21, Elon Musk announced plans for Terrafab, a wafer fab to provide semiconductor devices for Musk’s companies Tesla, SpaceX and xAI. The fab will be built in Austin, Texas, at a cost of $20 billion to $25 billion. When complete, the fab will have a capacity of one million wafer starts per month at a 2nm process node. Tech Insider predicts Terrafab will have initial production in 2028 and full production in 2032. The $25 billion cost spread out over six years is just over $4 billion a year. In 2026, Terrafab will acquire land, build infrastructure and likely begin building. We estimate Terrafab will spend $3 billion in capex in 2026. We are placing Terrafab in the foundry category since its devices will be used by Musk’s companies and not sold on the open market.

Memory companies will account for the largest percentage of CapEx in 2026 at 45%. Samsung announced it will spend over 110 trillion won ($74 billion) in 2026 to “secure leadership in the AI semiconductor era”. We estimate about $34 billion of this investment will go toward R&D and non-semiconductor CapEx, leaving $40 billion for semiconductor CapEx, an increase of 20% from 2025. Micron Technology and SK Hynix each should increase CapEx by over 40% in 2026.

The integrated device manufacturers (IDMs) spent $41.3 billion on CapEx in 2025, down 25% from 2024. IDM CapEx should decline again in 2026 by about 9%. The decline in IDM CapEx is largely due to AI driving market growth. Most of the AI semiconductor market is supplied by memory companies and fabless companies such as Nvidia. Intel CapEx in 2025 was $17.7 billion, down 29% from 2024. Intel expects flat to down CapEx in 2026. For many years, Intel was one of the overall top three spenders along with Samsung and TSMC. Intel was passed by SK Hynix in 2025 and will be passed by Micron Technology in 2026. Texas Instruments will spend between $2 billion and $3 billion in CapEx in 2026, down from $4.6 billion in 2025 as it aligns with market conditions. STMicroelectronics and Infineon Technologies both plan CapEx increases in 2026.

What is the appropriate level of CapEx relative to the semiconductor market? The semiconductor market is notoriously volatile. Over the last forty years, annual change has ranged from 46% growth in 1984 to a 32% decline in 2001. Although the industry has become somewhat less volatile as it has matured, in the last few years it has shown an 8% decrease in 2023 and a 26% increase in 2025. Semiconductor companies need to plan their capacity several years out. It takes about two years to build a new wafer fab and additional time for planning and financing. As a result, the ratio of semiconductor CapEx to the semiconductor market varies greatly, as shown below.

The semiconductor CapEx-to-market-size ratio has varied from a high of 34% to a low of 12%. The five-year average ratio ranges between 18% and 28%. Over the full period from 1980 to 2025, CapEx was 23% of the semiconductor market. In 2023, the ratio was 31.1%, one of only seven times in the last 45 years it has been over 30%. The five-year average ratio was 28.2%, only the third time since 1980 it has exceeded 28%. The ratio dropped to 25% in 2024 and 21% in 2025. Our current projection is that the ratio will drop to 19% in 2026 and the five-year average will drop to 24%. Thus, despite expected 20% growth in 2026, CapEx growth does not appear to outpace the growth of the semiconductor market. If the semiconductor market continues healthy growth over the next few years, the industry should not face overcapacity.
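As a sanity check, the ratios above can be inverted to get the market sizes they imply (a sketch; the market figures below are derived from the stated CapEx and ratio numbers, not stated directly in the text):

```python
# Back-of-envelope check of the CapEx-to-market ratios quoted above.
# Implied market size = CapEx / ratio.
capex = {2025: 166e9, 2026: 200e9}   # total semiconductor CapEx, dollars
ratio = {2025: 0.21, 2026: 0.19}     # CapEx as a share of the market

for year in sorted(capex):
    implied_market = capex[year] / ratio[year]
    print(f"{year}: implied semiconductor market ~${implied_market / 1e9:.0f}B")
```

The implied figures show why the ratio falls even as CapEx rises 20%: the market itself is growing faster than capital spending.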

Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry – manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Bill Jewell
Semiconductor Intelligence, LLC
billjewell@sc-iq.com

Also Read:

AI Drives Strong Semiconductor Market in 2025-2026

AI Bubble?

Semiconductors Up Over 20% in 2025


RISC-V Now! — Where Specification Meets Scale!

RISC-V Now! — Where Specification Meets Scale!
by Daniel Nenni on 03-31-2026 at 8:00 am


In forty-plus years as a semiconductor professional, I have never seen a semiconductor design ecosystem grow as fast and as strong as RISC-V’s. As a result, RISC-V Now! has emerged as a pivotal gathering, a conference with a clear and ambitious mission: to transform the open, modular, and flexible RISC-V ISA from an exciting specification into real products that ship at scale. Unlike many technical conferences that celebrate theoretical advances, academic breakthroughs, and road-map visions, RISC-V Now! is designed as a crucible where ideas are forged into hardware, software, and commercial ecosystems that meet real-world demand. It is here that “spec goes to scale.”

Last year’s event welcomed ~600 semiconductor professionals from 250+ companies, more than half of them engineering leaders. Companies represented included Apple, Google, Amazon, Meta, Intel, NVIDIA, Qualcomm, Samsung, TSMC, Synopsys, Cadence, Siemens EDA, Alibaba, and Renesas.

A Forum for Real Products

The central theme of RISC-V Now! is productization. Attendees, including engineers, architects, business leaders, and open-source advocates, come together not just to talk about what RISC-V could enable, but about what it is enabling now. From silicon implementations and SoC designs to compilers, operating systems, and security frameworks, the conference highlights concrete progress across the industry.

By emphasizing real products rather than prototypes or speculative technologies, RISC-V Now! reduces the gap between specification and shipment. Sessions are structured around case studies, deployment stories, and lessons learned from bringing RISC-V solutions to market. This pragmatic focus accelerates commercial maturity by helping participants avoid common pitfalls, leverage proven strategies, and adopt best practices that have already shown success.

Building an Ecosystem

A key challenge for any open standard is building a robust ecosystem — one that encompasses tools, IP blocks, software stacks, developer communities, and end-user markets. For RISC-V, ecosystem development is especially critical because the architecture thrives on extensibility; companies can add custom extensions to differentiate their products. Without shared tooling and interoperability norms, such flexibility could fragment the landscape. RISC-V Now! tackles this challenge head-on by bringing stakeholders together to align on standard extensions, testing frameworks, and compliance suites.

Workshops and working groups at the conference focus on unifying performance benchmarks, enhancing support in major toolchains like LLVM and GCC, and integrating RISC-V into mainstream operating systems such as Linux and real-time OSes for embedded systems. This collaborative environment accelerates ecosystem cohesion, which in turn boosts confidence among developers and buyers alike that RISC-V-based products will be reliable, sustainable, and future-proof.

Scaling for Global Impact

As more companies commit to RISC-V silicon, the need to scale goes beyond technical readiness — it requires scalable supply chains, global partnerships, and business models that can compete with incumbents. RISC-V Now! elevates discussions about commercialization strategies that work in diverse markets, from edge and IoT devices to high-performance computing.

Investor panels and industry keynotes explore how companies are securing funding, navigating IP landscapes, and building go-to-market channels that accelerate adoption. By spotlighting success stories from startups that have shipped RISC-V products, the conference demystifies the path to scale and signals to the broader tech ecosystem that RISC-V is more than an academic curiosity; it’s a commercially viable alternative powering real devices in production.

Fostering a Collaborative Culture

Perhaps the most enduring impact of RISC-V Now! is the culture it fosters. The open ethos of RISC-V, where collaboration is not only encouraged but essential, is reinforced throughout the conference. Engineers share code, companies discuss interoperability challenges transparently, and cross-organizational working groups form organically. This culture accelerates innovation in ways that closed, proprietary models struggle to match.

By providing a forum where spec authors, implementers, and product teams converge, RISC-V Now! ensures that technical visions are grounded in practical realities and that practical challenges inform future standard evolution. In essence, it creates a virtuous cycle where the architecture continuously improves while products based on it flourish.

Bottom Line: RISC-V Now! is more than a technical conference; it is a catalyst for transformation in computing architecture. By focusing on tangible products that scale, it bridges the gap between open specification and market success. It builds community, fosters ecosystem maturity, and empowers innovators to take RISC-V from concept to widespread deployment. In an industry hungry for openness, flexibility, and performance, RISC-V Now! is where the future of open hardware is being built, one shipment at a time.

Silicon Valley, USA
DoubleTree By Hilton San Jose
2050 Gateway Place, San Jose, CA, 95110, US
Both Days Are Free To Attend.

I hope to see you there!

Also Read:

The Launch of RISC-V Now! A New Chapter in Open Computing

Pushing the Packed SIMD Extension Over the Line: An Update on the Progress of Key RISC-V Extension

RISC-V: Powering the Era of Intelligent General Computing


Nuclear Power and Design Automation

Nuclear Power and Design Automation
by Bernard Murphy on 03-31-2026 at 6:00 am

Nuclear reactors

A couple of folks have asked me to write on nuclear power. Nuclear offers additional sources for power generation, a pressing concern thanks to demand from giant data centers. Also, investment by Microsoft, Sam Altman, and others signals their urgency to accelerate past slow-moving utility plans. I have some background in this technology from my graduate education, so I feel comfortable that I’m not entirely winging it, on either fission or fusion reactors. The topic of interest in this forum, of course, is what this area might have to do with design automation. I’m expanding that brief to include software design and mechanical and fluidic design, in addition to electronics design. I’ll start with a review of the core technologies.

Fission reactors

Classical nuclear has been around for a while, in the big fission reactors which created so much anxiety around safety and nuclear waste. I was in Narita airport (Tokyo area) when the Tohoku earthquake hit in 2011. After we had been evacuated then let back into the arrivals area, everyone was glued to TV screens. All in Japanese of course but I’m pretty sure the coverage included the Fukushima nuclear power plant being swamped by the tsunami.

Concerning of course, but nuclear is very green compared to fossil fuel-based generation, now a very important consideration. As a quick reminder, fission reactors run on a chain-reaction principle: a neutron strikes a uranium-235 (U-235) nucleus, which splits and re-emits more neutrons, which strike further nuclei and so on, creating a cascade of energetic neutrons. These neutrons heat the surrounding liquid, and that energy is converted to steam through a heat exchanger. The steam then drives turbines to create electricity. The result is very competitive with fossil-fueled generation and produces no carbon dioxide, though radioactive waste still requires geological disposal.

Fission reactor technologies continue to advance. Small Modular Reactors (SMRs) especially are very easy to scale up. One unit can produce about a third of the power of a traditional reactor, but new units can be added quite quickly (subject to regulatory review) since components can be mass-produced offsite. Regulations are being upgraded to speed review and approval, moving quickly in the UK with Rolls-Royce SMR expected to come on-line in early to mid-2030s at a cost in the range of $2.5B-$4B. Contrast that with $30B for a traditional full-size station. Regulatory processes are also being sped up in the US.

Molten Salt Reactors (MSRs) use liquid salt for heat exchange (conventional reactors use water), with a much higher boiling point (~1500°C) than water, allowing them to run much more efficiently than water-cooled systems and avoiding the potential for steam explosions under over-pressure conditions. MSRs are still in tech prove-out. China seems to be most advanced, with a first test delivering an estimated 2MW, though the goal is to deliver 100MW by ~2035. The US, Canada, and Europe all have projects under development.

Fusion reactors

Radioactive nuclei are naturally unstable, allowing us to tap the energy of their decay, via fission, to generate electricity. At the other end of the nuclear mass scale, helium-4 (He4) is the most stable of all nuclei. If we fuse lighter nuclei together to create He4, that process also releases energy, ideally more than is required to initiate fusion, and in theory with no radioactive waste.

Jamming together smaller positively charged nuclei means overcoming the Coulomb barrier, on the order of 0.1 MeV, requiring temperatures around 10⁸ K in a plasma of (dissociated) atoms in which you aim to induce fusion. There are multiple technologies in development, all aiming to hold a high enough temperature in the plasma, at high enough density, for long enough to cross the point where more energy is produced than is put into the process. Commonwealth Fusion is aiming for first plasma in 2027. Helion Energy plans to meet a 2028 deadline to supply 50 MW of fusion-generated power to Microsoft’s data centers. Other technologies (hybrid, pulsed and inertial confinement) are still in R&D.
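The barrier energy and plasma temperature quoted above can be related through Boltzmann's constant. A naive E = k_B·T for a ~0.1 MeV barrier gives roughly 10^9 K, an order of magnitude above typical fusion plasma temperatures; fusion proceeds around 10^8 K because the high-energy tail of the Maxwell distribution and quantum tunneling do most of the work. A quick sketch:

```python
# Convert an energy scale to an equivalent temperature via E = k_B * T.
K_B_EV_PER_K = 8.617e-5  # Boltzmann constant in eV per kelvin

def temperature_for_energy(energy_ev: float) -> float:
    """Temperature at which the mean thermal energy k_B*T equals energy_ev."""
    return energy_ev / K_B_EV_PER_K

barrier_ev = 0.1e6  # ~0.1 MeV Coulomb barrier
print(f"naive k_B*T = barrier -> T ~ {temperature_for_energy(barrier_ev):.1e} K")
# ~1.2e9 K, versus the ~1e8 K at which real plasmas fuse.
```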

Extracting energy is another tricky step. Some designs, like General Fusion’s, produce plasmas that emit high-energy neutrons, which are absorbed in a blanket layer around the plasma. These neutrons heat the blanket, and that heat is picked up by a coolant traveling through the blanket, then exchanged to create steam and drive turbines, much like the approach used in fission reactors. Helion Energy instead uses an induction method to extract energy directly from the plasma and (plausibly) claims much higher efficiency than the traditional coolant-to-steam-to-turbine approach. There are also other methods.

There is little info so far on power generation capability. All methods still seem to be working towards first sustainable power.

Where does design automation fit?

Fission reactors

Electronics in containment chambers must be very radiation tolerant, apparently pushing complex designs towards FPGAs rather than processor-based architectures, though sensors and actuators may be built on rad-hard SOI with error correction logic.

Digital twin modeling is used to model control software against physics models of neutron-based heating to ensure control always reacts quickly to pump failures across edge cases (to avoid meltdown).

Mechanical/electrical Place and Route (coolant pipes, electric conduits) has become very important in SMR design to manage routing through tight spaces while maintaining separation rules for safety.

Thermal analysis, along with stress modeling of the reactor core, is extremely important for assessing potential overheating.

Formal verification is a requirement for proving safety-critical software, and likewise for FPGAs inside the containment vessel.

Fusion reactors

Managing a plasma stream is an immensely complex magnetohydrodynamic problem. Plasma at a hundred million degrees or more cannot touch the sides of the containment vessel, since the plasma would collapse and do untold damage to the vessel. Containment methods depend on electric and/or magnetic fields which must respond incredibly quickly to variations. This is accomplished through high-speed control loops, and also through reinforcement learning to proactively sense and correct for disruptions. (Mach42 is one company I know of in this space.)
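To make the speed requirement concrete, here is a toy version of such a loop (growth rate, gain, and noise level are invented numbers): an open-loop-unstable displacement is held bounded by a proportional controller running at 10 kHz, and blows up without it:

```python
import random

# Toy fast control loop: vertical plasma displacement z grows
# exponentially when uncontrolled (the open loop is unstable); a
# proportional controller running at 10 kHz holds it bounded despite
# random kicks. All parameters are invented for illustration.

def run_loop(steps=20000, dt=1e-4, growth=100.0, gain=500.0, seed=0):
    rng = random.Random(seed)
    z = 0.001                 # initial displacement, meters
    worst = abs(z)
    for _ in range(steps):
        u = -gain * z                         # coil current command
        kick = rng.gauss(0.0, 0.01)           # disturbance on the plasma
        z += dt * (growth * z + u + kick)     # unstable plant + actuation
        worst = max(worst, abs(z))
    return worst
```

With the controller active the displacement never exceeds its starting value by much; setting the gain to zero lets the instability run away, which is the toy analog of a disruption. Reinforcement-learning controllers aim to do better than this fixed gain by anticipating disturbances.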

Similar place and route requirements apply here, for cryogenic lines, high voltage lines, and fuel lines (to keep fueling the plasma).

I know that energy extraction in systems directly generating power requires very advanced power electronics. Technologies include wide bandgap semiconductors and inductive coupling techniques. I am not at all expert in this area.

Here also, digital twin modeling is important, for example to model plasma disruption to check if control responds fast enough. And again, formal verification is essential in any “cannot fail” circuits.

In summary: nuclear power as a source of energy in the near term will depend on SMRs, with MSRs expected to come on-line somewhat later. Fusion is still further out, though perhaps the investment being pumped into it will accelerate prototypes that can sustainably generate energy. Meantime there are applications for design-automation-like technologies: in mechanical place and route, in safety analysis, and in digital twin modeling. This area will be interesting to watch.

Also Read:

CEO Interview with Charlie Peppiatt of Gooch & Housego

CEO Interview with Jussi-Pekka Penttinen of Vexlum Ltd

Sensors Converge: Where Intelligence Meets the Edge


CEO Interview with Charlie Peppiatt of Gooch & Housego

by Daniel Nenni on 03-30-2026 at 10:00 am

Charlie Peppiatt has served as Chief Executive Officer of Gooch & Housego since September 2022. He joined the company from TT Electronics, where he was Executive Vice President following TT’s acquisition of Stadium Group plc. Prior to that, Charlie served as Chief Executive Officer of Stadium Group from 2013 until its acquisition in 2018.

Earlier in his career, he held senior operational leadership roles at Laird plc, a FTSE 250 electronics company, including Vice President of Global Operations. Over more than three decades in high-technology manufacturing, Charlie has led global businesses supplying advanced electronics and engineered solutions into the medical, telecommunications, industrial, and aerospace & defense sectors.

Tell us about your company.

Gooch & Housego (G&H) is a global photonics engineering and manufacturing company specializing in high-performance optical components, subsystems, and systems. We operate across the full photonics value chain, from materials and crystal growth through precision optics, fiber optics, acousto-optics, and electro-optics, all the way to integrated optical assemblies.

Our technologies sit inside many of the world’s most demanding applications, including semiconductor manufacturing, telecommunications infrastructure, quantum technologies, aerospace and defense systems, and life sciences instrumentation.

What differentiates G&H is our ability to combine deep photonics expertise with vertically integrated manufacturing. Many customers come to us when they need a partner who can move beyond individual optical components and help engineer a complete optical subsystem that performs reliably in real-world environments.

Today we operate across multiple engineering and manufacturing sites in the U.S., U.K., and Europe, with partners in Asia, supporting global customers who rely on photonics to enable next-generation technologies.

What problems are you solving?

Photonics is often the enabling technology behind advances in computing, communications, and sensing. The challenge is that optical systems must deliver extremely high precision and stability while operating in complex environments.

Our role is to help customers solve those engineering challenges.

For example, semiconductor manufacturing tools require optical components and laser control systems that can maintain stability and precision at extreme levels of miniaturization, even down to the nanoscale. In telecommunications infrastructure, reliability is critical because components deployed in subsea networks must operate unattended for decades. In emerging fields like quantum computing and fusion energy research, photonic components must perform with exceptional accuracy and repeatability.

We work closely with customers to engineer optical solutions that meet those requirements while also being manufacturable at scale. That combination of performance and production readiness is where much of the real innovation happens.

What application areas are your strongest?

G&H has strong positions in several high-growth photonics markets.

Semiconductor and advanced manufacturing are key areas for us, where our optical components and systems support laser processing, metrology, and precision instrumentation used in semiconductor fabrication.

Telecommunications is another major sector, particularly in high-reliability fiber optic components used in subsea communication networks. These networks carry most of the global data traffic and require optical components with extremely long operational lifetimes.

We also support life sciences and medical instrumentation, providing precision optics and optical subsystems used in imaging, diagnostics, and analytical equipment.

In addition, we are seeing increasing demand from emerging technologies such as quantum computing, advanced sensing, and nuclear fusion research, where photonics play a critical enabling role.

What keeps your customers up at night?

For most of our customers, the biggest challenge is balancing performance, reliability, and scalability.

Many photonics solutions work well in laboratory environments but become much harder to deploy reliably in real-world systems or high-volume manufacturing. Optical alignment tolerances, thermal stability, and long-term reliability can all impact system performance.

Customers also face increasing pressure to accelerate development timelines while ensuring that new technologies can scale into production.

That is why they often look for partners who can combine optical design expertise with manufacturing capability. By working collaboratively early in the design process, we can help ensure that optical systems are optimized not only for performance but also for manufacturability and long-term reliability.

What does the competitive landscape look like and how do you differentiate?

Photonics is a diverse ecosystem with many specialized suppliers focused on individual technologies or components.

G&H differentiates itself through the breadth of our photonics capabilities and our ability to integrate them. Because we work across multiple optical technologies, including acousto-optics, electro-optics, fiber optics, and precision optics, we can design solutions that combine these elements into a single integrated system.

Vertical integration is another important differentiator. By controlling critical processes such as crystal growth, optical fabrication, and advanced assembly, we can maintain tight control over quality, performance, and supply chain reliability.

Finally, our engineering culture is built around close collaboration with customers. Many of our most successful projects begin as joint development programs where we work alongside the customer’s engineering teams to solve complex optical challenges.

What new features or technologies are you working on?

We are investing in several areas where photonics will play an increasingly important role.

One is advanced fiber optic technologies that support the growing capacity demands of global communications infrastructure. This includes high-reliability fiber components designed for long-lifetime operation in subsea networks.

Another is photonics solutions for quantum technologies. These systems often require extremely precise optical control, and our expertise in acousto-optic and electro-optic devices is well suited to these applications.

We are also continuing to develop more integrated optical subsystems that combine multiple photonic technologies into compact, robust solutions for demanding environments.

Across all of these areas, our focus is on helping customers move from research and prototype stages into scalable production.

How do customers normally engage with your company?

Most engagements begin with a technical discussion around a specific challenge or application requirement.

In some cases, customers are looking for a specific optical component or assembly. In other instances, they need help designing an optical subsystem or solving a broader system-level problem.

Our engineering teams work closely with customers to understand the application, define the performance requirements, and identify the most effective solution. That collaboration often continues through prototyping, validation, and eventually production.

Because photonics systems are highly application-specific, long-term partnerships are common. Many of our customer relationships span years or even decades as technologies evolve and new programs are developed.

Also Read:

CEO Interview with JP Pentinen of Vexlum

CEO Interview with Moti Margalit of SonicEdge

CEO Interview with Dr. Mohammad Rastegari of Elastix.AI


CEO Interview with Jussi-Pekka Penttinen of Vexlum Ltd

by Daniel Nenni on 03-30-2026 at 6:00 am

Jussi-Pekka Penttinen is the chief executive officer, chief technical officer, and cofounder of Vexlum Ltd, an advanced laser technology company. With more than 15 years of experience, he is a leading researcher in the field of Vertical External Cavity Surface Emitting Laser (VECSEL) and successfully commercialized the technology. Vexlum has translated cutting-edge research into products as a fast-growing company, providing an enabling technology for the quantum industry and cutting-edge solutions in other markets.

Tell us about your company.

Vexlum is a manufacturer of advanced semiconductor lasers for high-impact applications, with deep roots in a unique academic collaboration that bridged continents and scientific disciplines. The company’s laser concept emerged from a crucial partnership between a quantum research group at NIST (National Institute of Standards and Technology) in Boulder, Colorado, and a semiconductor and optoelectronics team at Tampere University in Finland. This partnership eventually led to the development of Vexlum’s core technology. This history is directly connected to the foundational work of Nobel laureate David Wineland’s group, whose groundbreaking trapped ion research required the kind of laser capabilities that Vexlum’s technology was designed to deliver.

Looking to the future, Vexlum’s success in the quantum computing industry has made it possible to diversify into high-growth markets like the semiconductor and medical industries. The extreme precision and stability required for quantum computing serve as a powerful validation of Vexlum’s technology, providing a strong reputation to leverage in other fields. Our lasers have potential applications in semiconductor manufacturing for precision lithography and inspection, as well as in medical treatments in dermatology and ophthalmology. By focusing on providing the most powerful engine for these diverse applications, Vexlum is already being recognized as an advanced laser company that empowers a wide array of human endeavors formerly thought to be impossible, from scientific discovery and space exploration to everyday health and technology.

What problems are you solving?

The size and cost of lasers available to meet the needs of quantum technology have long been recognized as a bottleneck in advancing quantum technologies, such as trapped-ion or neutral-atom quantum computers. Additionally, the lack of a mature enabling technology supply chain for quantum technology further slows down the scaling of quantum computing technology.

Laser systems are often bulky and expensive to integrate, requiring significant space. More than 100 different laser wavelengths are needed across all quantum technology implementations, and different applications impose conflicting requirements on size, weight, and performance.

What application areas are your strongest in?

Vexlum’s lasers are an enabling technology for some of the most demanding applications in science and industry. While the company’s roots are in solving the hardest problems of quantum computing, this has also enabled our lasers to be used in the newest optical atomic clocks and in semiconductor manufacturing. We have been particularly strong in scientific applications. Vexlum has delivered hundreds of high-performance, compact, and cost-effective lasers that replace older, more complicated, and expensive technologies used for research and space exploration. This strategy of democratizing access to cutting-edge laser technology is allowing a broader range of institutions and companies to push the boundaries of research and development.

What keeps your customers up at night?

The cost and size of lasers that must operate at an exact wavelength are a big concern. When new ideas and breakthroughs happen in science and industry, the actual implementation is often blocked by lack of funding or space.

For example, in the space industry, there are challenges in communicating with satellites and identifying the exact location of objects orbiting the Earth due to unpredictable weather and light. To fix this, lasers are used not only for the communication itself, but also for properly locating objects that need to be communicated with using a special yellow laser. Currently, the benefits of these adaptive optical correction systems, which use large, bulky, and expensive lasers, are limited to large telescopes with the space and budget to operate systems that overcome imaging fuzziness created by atmospheric air currents. Vexlum’s technology addresses the key challenges of space-to-ground optical links, including turbulent air currents and the slower transfer speeds of radio waves, by eliminating the need for the massive, costly yellow lasers used in ELTs (Extra Large Telescopes).

By making adaptive optics accessible to smaller telescopes, Vexlum’s approach opens the door to faster delivery of critical information, such as hyperspectral imaging for monitoring wildfires, floods, and ecosystems, as well as more precise tracking of satellites and space debris to enable trajectory corrections and collision avoidance.

What does the competitive landscape look like and how do you differentiate?

We are lucky to be located in Tampere, Finland, which is the emerging “Silicon Valley” of III-V laser semiconductor technology. Our patents, our unique technology based on the foundational work of Nobel laureate David Wineland’s group, a growing number of partnerships in cutting-edge science, and our position in this hidden hub of compound semiconductor development all seem to be keeping us one step ahead of our competitors. This opportunity has been a long time in the making, based on many decades of research and innovation. We consider ourselves fortunate to be at the right time and place to see and participate in this moment of so many amazing breakthroughs enabled by new photonics advancements.

What new features/technology are you working on?

Vexlum just released its new VXL laser, the next generation in its single-frequency Vertical-External-Cavity Surface-Emitting Laser (VECSEL) portfolio, combining high performance with a compact, robust design.

In addition to being manufacturable at any wavelength, the laser is 10 times smaller than many systems on the market with similar power qualities. The VXL platform delivers the same high output powers as Vexlum’s VALO platform in a dramatically smaller and more resilient package, bringing quantum-enabling technology within reach of more research and industry applications.

As a vertically integrated laser manufacturer, Vexlum is accelerating development of quantum technologies by providing single-frequency, high-power, low-noise lasers at an industry-leading selection of wavelengths. Along with being some of the most powerful and accurate lasers available for quantum computing applications, the company’s solutions are driving development in quantum sensing and lab-to-field deployment of quantum technologies.

A laser platform that had typically comprised rack-mounted components is now reduced to a compact, two-liter system, a more than 20-fold reduction in volume, while improving robustness and accessibility. In addition to removing bottlenecks in scaling quantum technologies, the VXL has dual-use applications in the semiconductor, medical, or defense markets. The VXL has already been deployed in early-access projects by research organizations and universities, focusing on quantum computing and quantum sensing technologies.

How do customers normally engage with your company?

Tell us your wavelength and we will custom-make a system for you.

When new discoveries require a specific laser wavelength, we work directly with the researchers or manufacturing team, often from the start of the project, on understanding and jointly developing the complex specification. Then we take those specs back to our factory to grow the custom semiconductor in our reactor and build a laser system designed for their exact application. Because this is science, there are often iterations of a chip or laser to get things exactly right for our customers’ design, but with close coordination, we are proud to say that Vexlum lasers have been part of some amazing advancements and industrial breakthroughs.

Why is it such an important advance in laser technology to be able to make a laser that can be made in any wavelength?

In the semiconductor and quantum industries, we have historically been ‘wavelength-locked’ by the physical limitations of material systems like gallium arsenide or indium phosphide. Breaking this barrier with a wavelength-agnostic platform like the VXL is a fundamental shift from building experiments around available tools to building tools around the science.

By delivering high-power, single-frequency performance at any customized wavelength within a compact, two-liter footprint, we are effectively ‘de-risking’ the transition from laboratory proof-of-concept to industrial-scale manufacturing. For quantum computing, this means researchers no longer need room-sized racks of temperamental lasers to manipulate specific atomic transitions; they can now integrate these systems into rugged, field-deployable units. In semiconductor manufacturing, this flexibility allows for high-precision metrology and lithography applications that were previously cost-prohibitive or physically impossible due to space constraints. Ultimately, the impact is a democratization of precision photonics: when you remove the ‘science project’ complexity from the light source, you allow the industry to focus on scaling the solutions that will define the next decade of computing and sensing.

Also Read:

CEO Interview with JP Pentinen of Vexlum

CEO Interview with Moti Margalit of SonicEdge

CEO Interview with Dr. Mohammad Rastegari of Elastix.AI


Sensors Converge: Where Intelligence Meets the Edge

by Daniel Nenni on 03-29-2026 at 6:00 pm

The Sensors Converge Conference is one of the premier technical gatherings dedicated to the design, integration, and deployment of sensing technologies across industries. The event brings together engineers, system architects, researchers, and product developers to explore advancements in sensor hardware, edge computing, connectivity, artificial intelligence, and embedded systems. As sensing technologies become foundational to automation, digital transformation, and data-driven decision making, the conference serves as a focal point for examining both emerging innovations and practical implementation challenges.

A major technical theme of Sensors Converge is sensor miniaturization and integration. Advances in MEMS fabrication, system-in-package (SiP) architectures, and heterogeneous integration have enabled multiple sensing modalities—such as temperature, pressure, inertial measurement, and environmental monitoring—to be combined into compact modules. These integrated systems reduce power consumption, lower bill-of-material costs, and simplify deployment in space-constrained applications such as wearable devices, industrial robotics, and medical instruments. Engineers at the conference often discuss trade-offs between accuracy, drift, and calibration complexity when combining sensors into multi-function packages.

Edge intelligence is another key focus area. Traditional sensing systems relied heavily on cloud-based processing, but latency, bandwidth, and privacy constraints have accelerated the adoption of on-device analytics. Microcontrollers and embedded processors now integrate DSP blocks and AI accelerators capable of running lightweight machine learning models. These capabilities allow sensors to perform anomaly detection, predictive maintenance, and classification locally. Technical sessions frequently explore model quantization, TinyML frameworks, and hardware acceleration strategies that optimize inference performance under tight power budgets. The convergence of sensing and intelligence reduces data transmission requirements while enabling real-time responsiveness.
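A minimal sketch of the quantization step those sessions discuss, symmetric linear int8 quantization of a weight vector (the helper names are mine; real frameworks typically add per-channel scales and zero points):

```python
# Sketch of symmetric post-training int8 quantization, the simplest form
# of what TinyML toolchains apply to model weights: map floats onto int8
# codes with a single scale factor, then dequantize to approximate floats.
# Assumes at least one nonzero weight (otherwise the scale is zero).

def quantize_int8(weights):
    """Map floats to int8 codes plus a scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    codes = [max(-128, min(127, round(w / scale))) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate floats from int8 codes."""
    return [c * scale for c in codes]
```

For a weight vector like [0.5, -1.27, 0.0, 1.0] the codes become [50, -127, 0, 100] with scale 0.01, so storage drops 4x while reconstruction stays within one scale step of the original.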

Power management is also a central engineering challenge addressed at the conference. Many sensor nodes operate in battery-powered or energy-harvesting environments, such as remote industrial monitoring or smart agriculture. Designers must balance sampling frequency, communication intervals, and processing workload to maximize operational lifetime. Emerging techniques include duty cycling, ultra-low-power wake-on-event architectures, and hybrid energy harvesting using solar, vibration, or thermal gradients. Discussions often highlight the importance of co-design between sensor hardware and firmware to achieve optimal power efficiency.
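The duty-cycle arithmetic behind these trade-offs is simple enough to sketch (all currents and the battery capacity are illustrative assumptions):

```python
# Back-of-envelope lifetime estimate for a duty-cycled sensor node:
# average the current draw over one wake/sleep cycle, then divide the
# battery capacity by it. Numbers used in the example are illustrative.

def lifetime_days(capacity_mah, active_ma, sleep_ma, active_s, period_s):
    """Average the current over one duty cycle, then convert hours to days."""
    duty = active_s / period_s
    avg_ma = duty * active_ma + (1.0 - duty) * sleep_ma
    return capacity_mah / avg_ma / 24.0
```

With a 1000 mAh cell, 10 mA active for 1 s out of every 100 s, and a 5 µA sleep current, the node lasts roughly 400 days, versus about 4 days if it never sleeps, which is why wake-on-event architectures matter so much.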

Connectivity technologies form another major pillar of Sensors Converge. Engineers evaluate trade-offs among Bluetooth Low Energy, Wi-Fi, LoRaWAN, NB-IoT, and emerging ultra-wideband solutions. Each communication protocol presents unique benefits in range, throughput, latency, and energy consumption. For example, industrial monitoring applications may prioritize long-range low-power connectivity, while asset tracking systems require precise location accuracy. Conference presentations often include case studies demonstrating how hybrid connectivity strategies combine local mesh networking with cloud gateways for scalable deployments.

Sensor fusion and data reliability are also critical technical topics. Modern applications frequently combine data from multiple sensors to improve accuracy and robustness. For example, combining accelerometer, gyroscope, and magnetometer data enables precise orientation tracking. However, fusion algorithms must address noise, calibration mismatches, and environmental interference. Technical sessions explore Kalman filtering, Bayesian estimation, and machine-learning-based fusion approaches. These methods enhance performance in autonomous systems, robotics, and navigation technologies.
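For a flavor of the filtering involved, here is a minimal one-dimensional Kalman filter (the noise variances are illustrative; real orientation fusion runs a multi-state version of the same predict/update cycle):

```python
# Minimal 1-D Kalman filter: blend a noisy measurement stream with a
# nearly-constant-state model. Variances are illustrative placeholders.

def kalman_1d(measurements, meas_var=4.0, process_var=1e-4):
    x, p = measurements[0], 1.0       # state estimate and its variance
    estimates = [x]
    for z in measurements[1:]:
        p += process_var              # predict: allow slow drift
        k = p / (p + meas_var)        # Kalman gain
        x += k * (z - x)              # update toward the measurement
        p *= 1.0 - k
        estimates.append(x)
    return estimates
```

Fed a stream oscillating between 12 and 8 around a true value of 10, the estimate settles close to 10 while any single raw reading is off by 2, which is the basic payoff of fusing over time.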

Security considerations have gained increasing attention as sensor networks expand. Embedded devices are often deployed in physically accessible environments, making them vulnerable to tampering and cyberattacks. Engineers discuss secure boot mechanisms, hardware root-of-trust, encrypted communication, and firmware update strategies. The integration of security at the silicon level is becoming essential to protect data integrity and system reliability. The conference emphasizes designing security features early in the development lifecycle rather than treating them as add-on components.

Applications showcased at Sensors Converge span multiple industries, including healthcare monitoring, smart cities, automotive systems, industrial automation, and environmental sensing. These use cases illustrate how sensing technologies enable predictive analytics, operational efficiency, and improved safety. For instance, vibration sensors in industrial equipment can detect early signs of mechanical wear, reducing downtime and maintenance costs. Similarly, environmental sensors support air quality monitoring and climate research initiatives.
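The vibration use case above can be sketched as the simplest possible predictive-maintenance check: compare the RMS energy of a vibration window against a baseline learned during healthy operation (the threshold factor is illustrative; fielded systems use spectral features and trend analysis):

```python
import math

# Simplest form of the vibration check described above: flag a machine
# when the RMS of a vibration window drifts well above a baseline
# recorded during healthy operation. The factor of 2 is illustrative.

def rms(samples):
    """Root-mean-square amplitude of a window of vibration samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def is_anomalous(window, baseline_rms, factor=2.0):
    """True when vibration energy exceeds factor x the healthy baseline."""
    return rms(window) > factor * baseline_rms
```

A healthy window at the baseline amplitude passes; a window at five times the baseline trips the check, a crude stand-in for detecting bearing wear before failure.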

Bottom line: the Sensors Converge Conference highlights the interdisciplinary nature of modern sensing systems. By addressing hardware innovation, edge intelligence, connectivity, power management, security, and data analytics, the event reflects the evolution of sensors from standalone components into intelligent distributed systems. As industries continue to rely on real-time data and automation, the technologies presented at Sensors Converge will play a central role in shaping the next generation of embedded and connected devices.

Also Read:

Arteris Highlights a Path to Scalable Multi-Die Systems at the Chiplet Summit

Siemens Wins Best in Show Award at Chiplet Summit and Targets Broad 3D IC Design Enablement

Verification Analytics: The New Paradigm with Cogita-PRO at DVCON 2026

 


CEO Interview with JP Pentinen of Vexlum

CEO Interview with JP Pentinen of Vexlum
by Daniel Nenni on 03-29-2026 at 2:00 pm

Jussi Pekka Penttinen Velxum

Jussi-Pekka Penttinen is the chief executive officer, chief technical officer, and cofounder of Vexlum Ltd, an advanced laser technology company. With more than 15 years of experience, he is a leading researcher in the field of Vertical External Cavity Surface Emitting Laser (VECSEL) and successfully commercialized the technology. Vexlum has translated cutting-edge research into products as a fast-growing company, providing an enabling technology for the quantum industry and cutting-edge solutions in other markets.

Tell us about your company.

Vexlum is a manufacturer of advanced semiconductor lasers for high-impact applications with deep roots

in a unique academic collaboration that bridged continents and scientific disciplines. The company’s laser concept emerged from a crucial partnership between a quantum research group at NIST (National Institute of Standards and Technology) in Boulder, Colorado, and a semiconductor and optoelectronics team at Tampere University in Finland. This partnership eventually led to the development of Vexlum’s core technology. This history is directly connected to the foundational work of Nobel laureate David Wineland’s group, whose groundbreaking trapped ion research required the kind of laser capabilities that Vexlum’s technology was designed to deliver.

Looking to the future, Vexlum’s success in the quantum computing industry has made it possible to diversify into high-growth markets like the semiconductor and medical industries. The extreme precision and stability required for quantum computing serve as a powerful validation of Vexlum’s technology, providing a strong reputation to leverage in other fields. Our lasers have potential applications in semiconductor manufacturing for precision lithography and inspection, as well as in medical treatments in dermatology and ophthalmology.. By focusing on providing the most powerful engine for these diverse applications, Vexlum is already being recognized as an advanced laser company that empowers a wide array of human endeavors formerly thought to be impossible, from scientific discovery and space exploration to everyday health and technology.

What problems are you solving?

The size and cost of lasers available to meet the needs of quantum technology have long been recognized as a bottleneck in advancing quantum technologies, such as trapped-ion or neutral-atom quantum computers. Additionally, the lack of a mature enabling technology supply chain for quantum technology further slows down the scaling of quantum computing technology.

Laser systems are often bulky and expensive to integrate, requiring significant space. More than 100 different laser wavelengths are needed across all quantum technology implementations, and different applications impose conflicting requirements on size, weight, and performance.

What application areas are your strongest in?

Vexlum’s lasers are an enabling technology for some of the most demanding applications in science and industry. While the company’s roots are in solving the hardest problems of quantum computing,  this has also enabled our lasers to be used in the newest optical atomic clocks and in semiconductor manufacturing.  We have been particularly strong in scientific applications. Vexlum has delivered hundreds of high-performance, compact, and cost-effective lasers that replace older, more complicated, and expensive technologies used for research and space exploration. This strategy of democratizing access to cutting-edge laser technology is allowing a broader range of institutions and companies to push the boundaries of research and development.

What keeps your customers up at night?

The cost and size of lasers that must be an exact wavelength a big concern. When new ideas and breakthroughs happen in science and industry, the actual implementation is often blocked by lack of funding or space.

For example, in the space industry, there are challenges in communicating with satellites and identifying the exact location of objects orbiting the Earth due to unpredictable weather and light. To fix this, lasers are used not only for the communication itself, but also for properly locating objects that need to be communicated with using a special yellow laser. Currently, the benefits of these adaptive optical correction systems, which use large, bulky, and expensive lasers, are limited to large telescopes with the space and budget to operate systems that overcome imaging fuzziness created by atmospheric air currents. Vexlum’s technology addresses the key challenges of space-to-ground optical links, including turbulent air currents and the slower transfer speeds of radio waves, by eliminating the need for the massive, costly yellow lasers used in ELTs (Extra Large Telescopes).

By making adaptive optics accessible to smaller telescopes, Vexlum’s approach opens the door to faster delivery of critical information, such as hyperspectral imaging for monitoring wildfires, floods, and ecosystems, as well as more precise tracking of satellites and space debris to enable trajectory corrections and collision avoidance.

What does the competitive landscape look like and how do you differentiate?

We are lucky to be located in Tampere, Finland, which is the emerging “Silicon Valley” of type III / IV laser semiconductor technology. Our patents, unique technology based on the foundational work of Nobel laureate David Wineland’s group, a growing number of partnerships in cutting-edge science, and being in the hidden hub of this specific type of semiconductor development, seem to be keeping us one step ahead of our competitors. This opportunity has been a long time in the making, based on many decades of research and innovation. We consider ourselves to be fortunate that we can be at the right time and place to see and participate in this moment of so many amazing breakthroughs enabled by new photonics advancements.

What new features/technology are you working on?

Vexlum just released its new VXL laser, the next generation in its single-frequency Vertical-External-Cavity Surface-Emitting Laser (VECSEL) portfolio, combining high performance with a compact, robust design.

In addition to being available at any wavelength, this laser is 10 times smaller than many systems on the market with similar output power. The VXL platform delivers the same high output power as Vexlum’s VALO platform in a dramatically smaller and more resilient package, bringing quantum-enabling technology within reach of more research and industry applications.

As a vertically integrated laser manufacturer, Vexlum is accelerating development of quantum technologies by providing single-frequency, high-power, low-noise lasers at an industry-leading selection of wavelengths. Along with being some of the most powerful and accurate lasers available for quantum computing applications, the company’s solutions are driving development in quantum sensing and lab-to-field deployment of quantum technologies.

A laser platform that typically comprised rack-mounted components has now been reduced to a compact, two-liter system, a more than 20-fold reduction in volume, while improving robustness and accessibility. In addition to removing bottlenecks in scaling quantum technologies, the VXL has dual-use applications in the semiconductor, medical, and defense markets. The VXL has already been deployed in early-access projects by research organizations and universities focusing on quantum computing and quantum sensing technologies.
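As a rough sanity check on the “more than 20-fold reduction in volume” claim, the arithmetic works out if the legacy rack-mounted system filled something like a full-depth 4U 19-inch chassis. The chassis dimensions below are illustrative assumptions, not Vexlum figures:

```python
# Back-of-the-envelope check of the ">20-fold volume reduction" claim.
# Assumed legacy footprint (hypothetical): a full-depth 4U 19-inch chassis.
RACK_WIDTH_CM = 48.3    # standard 19-inch rack width
RACK_DEPTH_CM = 60.0    # assumed full chassis depth
RACK_HEIGHT_CM = 17.78  # 4U = 4 x 44.45 mm

legacy_volume_l = (RACK_WIDTH_CM * RACK_DEPTH_CM * RACK_HEIGHT_CM) / 1000.0
vxl_volume_l = 2.0      # compact two-liter VXL system, per the article

reduction = legacy_volume_l / vxl_volume_l
print(f"Legacy: {legacy_volume_l:.1f} L, VXL: {vxl_volume_l} L, "
      f"reduction: {reduction:.1f}x")
```

Under these assumptions the legacy chassis comes to roughly 50 liters, which is consistent with a greater-than-20-fold shrink to a two-liter package.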

How do customers normally engage with your company?

Tell us your wavelength and we will custom-make a system for you.

When new discoveries require a specific laser wavelength, we work directly with the research or manufacturing team, often from the start of the project, to understand and jointly develop the complex specification. We then take those specs back to our factory to grow the custom semiconductor in our reactor and build a laser system designed for the exact application. Because this is science, it often takes a few iterations of a chip or laser to get things exactly right for a customer’s design, but with close coordination we are proud to say that Vexlum lasers have been part of some amazing advancements and industrial breakthroughs.

Why is it such an important advance in laser technology to be able to make a laser at any wavelength?

In the semiconductor and quantum industries, we have historically been ‘wavelength-locked’ by the physical limitations of material systems like gallium arsenide or indium phosphide. Breaking this barrier with a wavelength-agnostic platform like the VXL is a fundamental shift from building experiments around available tools to building tools around the science.

By delivering high-power, single-frequency performance at any customized wavelength within a compact, two-liter footprint, we are effectively ‘de-risking’ the transition from laboratory proof-of-concept to industrial-scale manufacturing. For quantum computing, this means researchers no longer need room-sized racks of temperamental lasers to manipulate specific atomic transitions; they can now integrate these systems into rugged, field-deployable units. In semiconductor manufacturing, this flexibility allows for high-precision metrology and lithography applications that were previously cost-prohibitive or physically impossible due to space constraints. Ultimately, the impact is a democratization of precision photonics: when you remove the ‘science project’ complexity from the light source, you allow the industry to focus on scaling the solutions that will define the next decade of computing and sensing.

CONTACT VEXLUM

Also Read:

CEO Interview with Moti Margalit of SonicEdge

CEO Interview with Dr. Mohammad Rastegari of Elastix.AI

CEO Interview with Jerome Paye of TAU Systems