
An Upper Bound on Effective Quantum Computation?

by Bernard Murphy on 04-07-2026 at 6:00 am


You may think that quantum theory is fully understood but that view is not quite right. There remain open questions around the uncertainty principle, wave-particle duality, measurement collapse, and harmonizing quantum mechanics and gravitation. These concerns may seem very abstract and irrelevant to everyday applications but together promote a lingering sense that we still don’t fully understand quantum mechanics. Efforts to correct our understanding have been with us for over 100 years, starting with Einstein, Born and many others.

One such proposal in a recent paper could have rather dramatic consequences for quantum computing. The method used to address gaps in our understanding suggests that there might be a theoretical upper bound to the number of qubits that can be usefully superposed and/or entangled at any one time. If true, industrial cryptography may never be cracked by a quantum computer. The paper is available from the Proceedings of the National Academy of Sciences, though this is an easier read. Below is my attempt to abstract the key ideas. Apologies up-front – this is a geeky blog.

Clarification

The paper doesn’t suggest a limit on the number of qubits we can stuff into a quantum computer. That number might be practically bounded but so far is not theoretically bounded. The limit proposed in this paper is on how large a set of interdependent qubits is possible in a quantum algorithm.

Superposition/entanglement is table stakes for any useful quantum algorithm. You may be able to compute with a larger number of qubits, but not any faster than a classical computer if you don’t use superposition or entanglement. Interdependence between qubits is fundamental to quantum advantage.

Rethinking space

There is a widely held (though not universal) view in physics that to unify quantum theory and general relativity we must switch from continuous representations of space to a view in which space is not arbitrarily divisible. There are physical limits (the Planck length) on continuity, and uncertainty and wave-particle duality start to look more reasonable in discrete space. Further, in quantum computing an information-theoretic view of qubit states is appealing, following Shannon, and this too works most comfortably in discrete space.

An N-qubit vector should in principle be able to address any arbitrary state in the full possible state space of that vector. Remember that qubit states are slightly more involved than regular bit states. A bit can be 0 or 1. A qubit can be α|0> + β|1>, where |0> and |1> are “pure” quantum states like spin-up and spin-down, and α, β are complex amplitudes such that |α|² + |β|² = 1. In continuous-space quantum theory this formulation can represent any possible state in the full state space of a qubit, and similarly an N-qubit vector can represent any state in that N-qubit space. However, if space is discretized in some manner, this guarantee can no longer be provided according to the paper, and there is an upper limit to how many states can be addressed by an N-qubit vector.
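To make the notation concrete, here is a minimal Python sketch of a qubit state and the normalization constraint. The particular amplitude values are arbitrary illustrations, not anything from the paper:

```python
# A single qubit state a|0> + b|1>, with complex amplitudes a and b.
# (The values below are arbitrary, chosen only so that |a|^2 + |b|^2 = 1.)
a = (1 + 1j) / 2   # amplitude of |0>
b = (1 - 1j) / 2   # amplitude of |1>

# Normalization: |a|^2 + |b|^2 must equal 1.
norm = abs(a) ** 2 + abs(b) ** 2
print(norm)  # ~1.0, up to floating-point rounding

# An N-qubit register is a vector of 2^N complex amplitudes,
# which is why the state space grows exponentially with N.
N = 3
register = [0j] * (2 ** N)
register[0] = 1 + 0j   # the basis state |000>
print(len(register))   # 8
```

The exponential length of `register` is exactly the state space the article refers to below.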

Since space is discretized, each qubit’s α and β will have a finite range of possible values. Extending to an N-qubit vector, there will similarly be a finite bound on how many states can be encoded within the state space. Less obviously, there will be in-principle “reachable” states in the state space which fail to meet the discretization rules, and are therefore undefined/unreachable. As you add qubits, the state space expands exponentially, as do the unreachable states, and quantum advantage begins to tail off. The author suggests that the absolute upper limit N for which a qubit vector could effectively address only legal states is 1,000 qubits, and that practical limits could be even lower.
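As a toy illustration only (this is not the paper’s construction), restrict real amplitudes to rationals with a fixed denominator q and count which pairs (α, β) satisfy the normalization constraint exactly. Most grid points fail, which is the flavor of the “unreachable states” argument:

```python
from fractions import Fraction

def legal_real_pairs(q):
    """Count real amplitude pairs (a, b), with a, b = k/q for -q <= k <= q,
    that satisfy a^2 + b^2 == 1 exactly (exact rational arithmetic)."""
    legal = 0
    total = 0
    for i in range(-q, q + 1):
        for j in range(-q, q + 1):
            a = Fraction(i, q)
            b = Fraction(j, q)
            total += 1
            if a * a + b * b == 1:
                legal += 1
    return legal, total

# For q = 5 the only exact solutions are (+-1, 0), (0, +-1),
# (+-3/5, +-4/5) and (+-4/5, +-3/5): 12 legal points out of 121.
print(legal_real_pairs(5))  # (12, 121)
```

Even in this crude one-qubit, real-amplitude picture, legal states are a thin subset of the grid; for an N-qubit vector the mismatch compounds exponentially.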

This 1,000-qubit limit is well below any qubit count I have seen suggested (>10⁴) for Shor or beyond-Shor algorithms applied to RSA-2048. If the theory is correct, this is a significant limitation. Many useful applications may still be possible, especially in quantum chemistry and materials science, but problem sizes will be much more constrained than we have been led to believe.

It’s just a theory

True, although the theory proposes intriguing resolutions to several of the open quantum questions I mentioned earlier. The paper suggests a practical test for the limit, which should be possible within the next 5-10 years. Simply run Shor’s algorithm, attempting to factor a large integer, on an N-qubit machine (logical qubits). If the theory holds, performance should saturate to classical performance beyond some threshold for N. The paper suggests saturation may start as low as 500 qubits. If quantum advantage disappears, or starts to disappear, around this point, we will have hit a fundamental barrier in quantum computing. If not, then the theorists must go back to the drawing board.
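For context on what such a test would exercise, here is the classical number-theoretic skeleton of Shor’s algorithm. The quantum computer speeds up only one step, finding the multiplicative order r, which is done here by brute force:

```python
import math

def factor_via_order(n, a):
    """Classical sketch of Shor's reduction: find the multiplicative order r
    of a mod n, then derive a factor of n from gcd(a^(r/2) - 1, n).
    A quantum computer replaces only the order-finding loop below."""
    g = math.gcd(a, n)
    if g != 1:
        return g                     # lucky guess: a already shares a factor
    r = 1
    while pow(a, r, n) != 1:         # brute-force order finding (the slow part)
        r += 1
    if r % 2 == 1:
        return None                  # odd order: retry with a different a
    x = pow(a, r // 2, n)
    if x == n - 1:
        return None                  # trivial square root: retry
    return math.gcd(x - 1, n)

# The order of 7 mod 15 is 4, so 7^2 - 1 = 48 shares factor 3 with 15.
print(factor_via_order(15, 7))  # 3
print(factor_via_order(21, 2))  # 7
```

The proposed experiment amounts to running the quantum order-finding step at growing N and watching whether its advantage over this classical loop persists.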

Incidentally, the author illustrates his reasoning using rational number discretization, though stresses that his primary conclusion should hold (if correct) independent of that choice.

I will be interested to hear what QM builders and theorists have to say about this.

Also Read:

Another Quantum Topic: Quantum Communication

PQShield on Preparing for Q-Day

Where is Quantum Error Correction Headed Next?


yieldWerx Delivers a Master Class in Co-Packaged Photonics Implementation

by Mike Gianfagna on 04-06-2026 at 10:00 am

yieldWerx Delivers a Master Class in Co Packaged Photonics Implementation

We all know the semiconductor industry is seeing a new era of data intensity. The industry’s response includes advanced semiconductor design strategies, the adoption of chiplets, and the integration of optical I/O and photonics to enable higher performance, faster AI computation, and increased modularity. Co-packaged photonics holds great promise, but implementation strategies are complex.

The graphic above illustrates the digital thread that must be followed for effective deployment. It turns out a lot of focus falls on items 2 and 3. While those are useful, the complete lifecycle is needed. yieldWerx is a company that manages this kind of broad data and the required analytics. The company will soon present a webinar that treats these topics and delivers many more critical insights needed to stay ahead of data-intensity demands. A link to register is coming, but first let’s look at some of the details of how yieldWerx delivers a master class in co-packaged photonics implementation.

The Webinar Presenter


The webinar is presented by Aftkhar Aslam, CEO, CTO and co-founder of yieldWerx. Aftkhar is a semiconductor industry veteran with more than 30 years of experience spanning manufacturing, test engineering, yield management, IP strategy, and enterprise digital transformation.

Prior to founding yieldWerx, Aftkhar held senior leadership roles at Texas Instruments, where he served as Worldwide Director of Test & Yield Management Solutions and Director of Digital Transformation for design and delivery systems and solutions across hardware and software.

He also served as a Director within Accenture’s Industry X (IX) practice, where he advised leading global technology organizations including Intel, GlobalFoundries, Qualcomm, Lam Research, Microsoft, STMicroelectronics, and Skyworks. His consulting work focused on bridging the design-to-manufacturing divide — architecting digital thread and digital twin strategies that connected product design, IP management, manufacturing execution, test, and enterprise systems into unified operational frameworks.

Aftkhar brings a broad view of the problem to this webinar. His insights are quite valuable.

Webinar Topics

The webinar begins with an overview of yieldWerx capabilities and how the company serves the industry. The company delivers a trusted data and yield analytics platform supporting semiconductor companies across fab, assembly, test, advanced packaging, photonics, and AI-driven device manufacturing. Aftkhar explains how the yieldWerx platform enables organizations to unify fragmented manufacturing data into scalable, actionable yield intelligence. He gets into some details about how the company works and what it delivers. You will find this overview quite valuable.

He then explores the key challenges faced in the photonics industry. The graphic at the top of this post serves as an initial overview of what’s involved. Aftkhar then explains the many CPO challenges in detail. About a dozen items are discussed, spanning electrical, optical, thermal, packaging, test, assembly, and reliability concerns, among others. Items such as optical data complexity, test flow discontinuity, and alignment data loss are also explored in detail.

Aftkhar then moves on to dynamic behavior vs. static test, the lack of cross-domain correlation, data volume explosion, the lack of a standard data model for CPO, and the hidden problem of good die vs. bad module. He then brings it all together with an insightful summary. You will learn a lot.

Aftkhar then focuses on how yieldWerx addresses the challenges. Items discussed here include a unified data model, the importance of full genealogy, cross-domain correlation and using AI on non-standard data. Examples of how to apply these concepts on real data are discussed.

Details about how yieldWerx can accelerate your photonics journey are then presented. How the company’s broad services are organized to achieve the required goals is explained. This is followed by specific examples of deployment in real design and manufacturing scenarios, covering quite a few examples across a broad range of activities.

After a short summary of the presentation there will be a Q&A session.

To Learn More

Aftkhar is extremely knowledgeable on the topics presented and he covers a great deal of very useful information clearly and concisely. If co-packaged optics is in your future, this webinar is must-see. The webinar will be on Thursday, April 16, 2026, at 10AM Pacific Time. You can register here. And that’s how yieldWerx delivers a master class in co-packaged photonics implementation.

Also Read:

WEBINAR: Outrunning the Data Wave – Why we need to keep pace with the coming 400% data surge 

CEO Interview with Aftkhar Aslam of yieldWerx


RISC-V Has Momentum. The Real Question Is Who Can Deliver

by Kalar Rajendiran on 04-06-2026 at 6:00 am

RVA23 Momentum (from Andrea Gallo keynote at the 2025 RISC-V Summit)

RISC-V has momentum. The industry knows it. The harder question is: who can actually deliver when and where it matters?

A Shift That Changes the Stakes

On March 24, 2026, Arm made something explicit: it is now a silicon company. After decades as a neutral IP provider, Arm is moving up the stack. It’s building chips and complete solutions, not just licensing architectures.

This is a fundamental shift. Arm’s value was rooted in being a trusted, independent processor and system IP provider enabling an ecosystem that allowed its customers to build differentiated SoCs. That dynamic is changing. Even if gradual, the implication is clear: Arm is moving closer to its customers’ end markets, and in some cases, competing with them. Moments like this trigger strategic reassessment.

RISC-V Moves to Center Stage

In that context, RISC-V is no longer peripheral to the conversation.

What was once seen as promising but fragmented has matured. With RVA23, the ecosystem now has a defined baseline for high-performance, general-purpose compute. Operating systems such as Linux can target a consistent profile, removing a major barrier to adoption.

RISC-V is no longer just an ISA. It is becoming a platform.

An Active but Uneven Market

Progress is real. Silicon exists. Teams are building. Some deployments are already in production.

But the landscape remains unsettled. Companies have exited, been acquired, or shifted to internal use. Many others remain focused on embedded or domain-specific markets.

The result is a fragmented picture: high activity, uneven maturity, and limited clarity on who is building for high-performance, general-purpose compute. Who is actually building deployable platforms for the open market?

The Gap Between Talking and Shipping

The constraint is no longer architecture but rather execution.

High-performance compute requires more than a core. It requires a complete system: out-of-order CPUs, coherent memory, system IP, high-speed IO, and a usable software stack. And most importantly, it requires real silicon on which real workloads can be run, measured and benchmarked.

This is where ambition meets reality.

From IP to Systems: Akeana’s Approach

A small number of companies are addressing this gap directly. Akeana is one of them.

Rather than focusing solely on CPU IP, Akeana is building a system-level platform. Its Alpine test chip, taped out in December 2025, is a 4nm RVA23-compatible SoC designed for software development and validation.

Akeana leaned on its strong server-class SoC pedigree, pulling together an SoC design and taping it out in a short amount of time, to showcase the maturity and quality of its processor and system IP portfolio.

Alpine integrates an eight-core cluster of 64-bit out-of-order (application) processors, additional control cores, a coherent mesh, full system IP, and high-performance IO including LPDDR5 and PCIe Gen5. Of note, the test chip also showcases a 4-way simultaneous multi-threaded, wide-vector (512b) in-order core of the type commonly used in AI (xPU) chips.

Alpine Test Chip

The system has also been validated with a full software stack, including Linux, which ran prior to tape-out via emulation.

A defining feature is configurability: pipelines, vector units, and other parameters can be tuned for specific workloads, particularly in cloud and AI environments. This shifts hardware closer to a programmable solution.

The focus is clear: the ability to deploy, not just the ability to design.

Performance Is Now Part of the Equation

Akeana’s 5100, 5200, and 5300 series target performance tiers aligned with modern high-performance CPUs, with benchmarking against publicly available silicon providing early validation.

Industry signals point in the same direction. Recent RISC-V summit keynotes highlight increasingly ambitious designs, including many-core systems targeting server-class workloads.

The intent is clear. Execution is the differentiator.

Why This Remains Difficult

High-performance silicon demands deep expertise, large-scale verification, advanced process nodes, tight hardware-software integration, and significant upfront investment. Even experienced teams take years to deliver production-ready systems.

The ecosystem of silicon, systems and software readiness for RISC-V appears large, but at the high-performance level, it is much smaller.

As a result, the number of credible players narrows quickly at the high end. This is not a limitation of RISC-V. It reflects the level of execution required.

Why Timing Matters

The urgency is increasing. AI agentic workloads are elevating the role of CPUs in orchestration and data movement. At the same time, companies are reassessing architectural dependencies, especially in light of shifts like Arm’s move into silicon.

With RVA23 in place, expectations for RISC-V have risen. The question is no longer whether it can work but rather whether it can deliver now.

Furthermore, the software lift often cited as a barrier to entry for high-performance RISC-V is smaller here. An orchestration CPU for agentic workloads does not bear the same burden as a general-purpose cloud/enterprise CPU in terms of the middleware and application software porting and support required.

From Momentum to Delivery

RISC-V has crossed an important threshold: momentum, relevance, and demand are all in place.

What matters now is delivery. Who can move from roadmap to silicon, from silicon to systems, and from systems to deployment?

Summary

RISC-V’s moment has arrived. The next phase will not be defined by participation but by execution: by those who can translate momentum into real, high-performance platforms. Because in the end, adoption comes down to one thing: what can be built, shipped, and trusted.

Also Read:

Akeana Partners with Axiomise for Formal Verification of Its Super-Scalar RISC-V Cores

Demand Meets Design: RISC-V and the Next Wave of AI Hardware

CEO Interview with Rabin Sugumar of Akeana


CEO Interview with Jussi-Pekka Penttinen of Vexlum

by Daniel Nenni on 04-05-2026 at 2:00 pm


Jussi-Pekka Penttinen is the chief executive officer, chief technical officer, and cofounder of Vexlum Ltd, an advanced laser technology company. With more than 15 years of experience, he is a leading researcher in the field of Vertical External-Cavity Surface-Emitting Lasers (VECSELs) and has successfully commercialized the technology. Vexlum has translated cutting-edge research into products as a fast-growing company, providing an enabling technology for the quantum industry and cutting-edge solutions in other markets.

Tell us about your company.

Vexlum is a manufacturer of advanced semiconductor lasers for high-impact applications, with deep roots in a unique academic collaboration that bridged continents and scientific disciplines. The company’s laser concept emerged from a crucial partnership between a quantum research group at NIST (National Institute of Standards and Technology) in Boulder, Colorado, and a semiconductor and optoelectronics team at Tampere University in Finland. This partnership eventually led to the development of Vexlum’s core technology. This history is directly connected to the foundational work of Nobel laureate David Wineland’s group, whose groundbreaking trapped-ion research required the kind of laser capabilities that Vexlum’s technology was designed to deliver.

Looking to the future, Vexlum’s success in the quantum computing industry has made it possible to diversify into high-growth markets like the semiconductor and medical industries. The extreme precision and stability required for quantum computing serve as a powerful validation of Vexlum’s technology, providing a strong reputation to leverage in other fields. Our lasers have potential applications in semiconductor manufacturing for precision lithography and inspection, as well as in medical treatments in dermatology and ophthalmology. By focusing on providing the most powerful engine for these diverse applications, Vexlum is already being recognized as an advanced laser company that empowers a wide array of human endeavors formerly thought to be impossible, from scientific discovery and space exploration to everyday health and technology.

What problems are you solving?

The size and cost of lasers available to meet the needs of quantum technology have long been recognized as a bottleneck in advancing quantum technologies such as trapped-ion or neutral-atom quantum computers. Additionally, the lack of a mature enabling-technology supply chain further slows the scaling of quantum computing.

Laser systems are often bulky and expensive to integrate, requiring significant space. More than 100 different laser wavelengths are needed across all quantum technology implementations, and different applications impose conflicting requirements on size, weight, and performance.

What application areas are your strongest in?

Vexlum’s lasers are an enabling technology for some of the most demanding applications in science and industry. While the company’s roots are in solving the hardest problems of quantum computing, this has also enabled our lasers to be used in the newest optical atomic clocks and in semiconductor manufacturing. We have been particularly strong in scientific applications. Vexlum has delivered hundreds of high-performance, compact, and cost-effective lasers that replace older, more complicated, and expensive technologies used for research and space exploration. This strategy of democratizing access to cutting-edge laser technology is allowing a broader range of institutions and companies to push the boundaries of research and development.

What keeps your customers up at night?

The cost and size of lasers that must operate at an exact wavelength are a big concern. When new ideas and breakthroughs happen in science and industry, the actual implementation is often blocked by a lack of funding or space.

For example, in the space industry, there are challenges in communicating with satellites and identifying the exact location of objects orbiting the Earth due to unpredictable weather and light. To fix this, lasers are used not only for the communication itself but also, via a special yellow laser, for precisely locating the objects being communicated with. Currently, the benefits of these adaptive optical correction systems, which use large, bulky, and expensive lasers, are limited to large telescopes with the space and budget to operate systems that overcome the imaging fuzziness created by atmospheric air currents. Vexlum’s technology addresses the key challenges of space-to-ground optical links, including turbulent air currents and the slower transfer speeds of radio waves, by eliminating the need for the massive, costly yellow lasers used in ELTs (Extremely Large Telescopes).

By making adaptive optics accessible to smaller telescopes, Vexlum’s approach opens the door to faster delivery of critical information, such as hyperspectral imaging for monitoring wildfires, floods, and ecosystems, as well as more precise tracking of satellites and space debris to enable trajectory corrections and collision avoidance.

What does the competitive landscape look like and how do you differentiate?

We are lucky to be located in Tampere, Finland, which is the emerging “Silicon Valley” of III-V semiconductor laser technology. Our patents, our unique technology based on the foundational work of Nobel laureate David Wineland’s group, a growing number of partnerships in cutting-edge science, and our location in the hidden hub of this specific type of semiconductor development seem to be keeping us one step ahead of our competitors. This opportunity has been a long time in the making, built on many decades of research and innovation. We consider ourselves fortunate to be at the right time and place to see and participate in this moment of so many amazing breakthroughs enabled by new photonics advancements.

What new features/technology are you working on?

Vexlum just released its new VXL laser, the next generation in its single-frequency Vertical-External-Cavity Surface-Emitting Laser (VECSEL) portfolio, combining high performance with a compact, robust design.

In addition to being available at any wavelength, this laser is 10 times smaller than many systems on the market with similar power qualities. The VXL platform delivers the same high output powers as Vexlum’s VALO platform in a dramatically smaller and more resilient package, bringing quantum-enabling technology within reach of more research and industry applications.

As a vertically integrated laser manufacturer, Vexlum is accelerating development of quantum technologies by providing single-frequency, high-power, low-noise lasers at an industry-leading selection of wavelengths. Along with being some of the most powerful and accurate lasers available for quantum computing applications, the company’s solutions are driving development in quantum sensing and lab-to-field deployment of quantum technologies.

A laser platform that had typically comprised rack-mounted components is now reduced to a compact, two-liter system, a more than 20-fold reduction in volume, while improving robustness and accessibility. In addition to removing bottlenecks in scaling quantum technologies, the VXL has dual-use applications in the semiconductor, medical, or defense markets. The VXL has already been deployed in early-access projects by research organizations and universities, focusing on quantum computing and quantum sensing technologies.

How do customers normally engage with your company?

Tell us your wavelength and we will custom-make a system for you.

When new discoveries require a specific laser wavelength, we work directly with the researchers or manufacturing team, often from the start of the project, on understanding and jointly developing the complex specification. Then we take those specs back to our factory to grow the custom semiconductor in our reactor and build a laser system designed for their exact application. Because this is science, there are often iterations of a chip or laser to get things exactly right for our customers’ design, but with close coordination, we are proud to say that Vexlum lasers have been part of some amazing advancements and industrial breakthroughs.

Bonus Question: Why is it such an important advance in laser technology to be able to make a laser that can be made in any wavelength?

In the semiconductor and quantum industries, we have historically been ‘wavelength-locked’ by the physical limitations of material systems like gallium arsenide or indium phosphide. Breaking this barrier with a wavelength-agnostic platform like the VXL is a fundamental shift from building experiments around available tools to building tools around the science.

By delivering high-power, single-frequency performance at any customized wavelength within a compact, two-liter footprint, we are effectively ‘de-risking’ the transition from laboratory proof-of-concept to industrial-scale manufacturing. For quantum computing, this means researchers no longer need room-sized racks of temperamental lasers to manipulate specific atomic transitions; they can now integrate these systems into rugged, field-deployable units. In semiconductor manufacturing, this flexibility allows for high-precision metrology and lithography applications that were previously cost-prohibitive or physically impossible due to space constraints. Ultimately, the impact is a democratization of precision photonics: when you remove the ‘science project’ complexity from the light source, you allow the industry to focus on scaling the solutions that will define the next decade of computing and sensing.

CONTACT VEXLUM

Also Read:

CEO Interview with Charlie Peppiatt of Gooch & Housego

CEO Interview with Jussi-Pekka Penttinen of Vexlum Ltd

CEO Interview with JP Pentinen of Vexlum


CEO Interview with Dr. Tony Atti of Phononic

by Daniel Nenni on 04-05-2026 at 12:00 pm



Tony Atti, Ph.D. is Phononic’s CEO. Tony is an experienced technology entrepreneur and executive who is passionate about disruptive technology solutions that change our lives. As CEO and co-founder of Phononic, Dr. Atti has led the company’s mission to sustainably transform global cooling and heating through semiconductor innovation. The company’s thermoelectric technology platform provides mission-critical cooling for chip-level hotspots across the data center, including optics (pluggable and co-packaged), GPU/HBM, switches, and server rack/infrastructure. The company’s enterprise cooling solutions provide end-users flexible access to hardware and software systems that drive AI computing, dramatically mitigate CO2 emissions, and lower data center energy consumption.

Tell us about your company?

Phononic is the preeminent leader in solid state cooling solutions for data centers, deployed across every major hyperscaler today with a suite of integrated thermoelectric solutions for networking, GPU HBMs, and AI rack/infrastructure. Phononic’s portfolio of cooling solutions delivers precision cooling when and where it is needed, optimizing AI infrastructure, minimizing costly overprovisioning, delivering cooling in millisecond timing, unlocking performance, and dramatically improving ROI, useful lifetime, and energy efficiency.

What problems are you solving?

AI is transforming the data center — and cooling is consuming almost as much power as computing with increasingly limited effectiveness. Compute density and thermal loads are surging, creating performance bottlenecks and wasting energy by overprovisioning. Conventional cooling approaches lack the precision and real-time responsiveness to manage rapid thermal fluctuations, resulting in throttled workloads and higher operational overhead.

By addressing the fundamental thermal management challenges of data centers, we’re addressing the performance-hindering bottlenecks currently impacting AI infrastructure. Deployed via our Thermal Kit, Phononic’s solution encompasses thermoelectric coolers, scalable cartridges, integrated software, thermal modeling & design support. Phononic’s solution delivers precise, predictive thermal control for compute-intensive AI workloads, anywhere in the data center. Our thermoelectric coolers (TECs) deliver rapid cooling and precise temperature control when and where it’s needed. This is critical for high-speed optical transceivers and co-packaged optics as data centers transition to 1.6T compute.

Further, we solve the problem of managing multiple systems within a data center, as our solutions can be seamlessly integrated into existing system-level hardware, computing nodes, chassis, and racks. Our approach offers operators dynamic flexibility to actively optimize PUE, compute, and lifespan variables as desired, in real time, across the entire data center.

The problems we solve are best seen in the results we create. For example, Phononic’s cooling solution for GPU HBMs delivers up to 0.15 PUE savings, 5X lifetime improvement, 40% greater compute performance, and a 3X ROI. It is designed to maintain tightly regulated operating conditions under highly dynamic heat loads. With a 75% increase in cooling capability, the system enables higher sustained GPU performance, reduces thermally-induced throttling, and improves overall cluster stability.

What application areas are your strongest?

We have an entire suite of integrated thermoelectric solutions for networking, including new solutions for 1.6T pluggables, CPOs and GPU HBMs. I would say it’s not a single application area that is our strongest, but our ability to create bespoke solutions for the varying needs of today’s data centers, all integrated through our Thermal Fabric for total data center and workload orchestration. Not all facilities have the same challenges, but we create each solution to specifically target the unique requirements of each of our customers.

What keeps your customers up at night?

From our conversations with customers there’s a tug of war between increasing compute performance, facilities management, and component lifetime — not to mention an insatiable demand for capacity. Our customers are looking for partners that can unleash their AI compute performance, but in a manner that is scalable for their long-term growth and ROI. With deployments across every major hyperscaler, the industry’s first qualified and deployed 1.6T HVM solution, and a scalable contract manufacturing network, our customers know Phononic is an organization they can depend on.

What does the competitive landscape look like and how do you differentiate?

The majority of cooling solutions currently in the market are over-provisioned, unintelligent, and imprecise, focusing on the broad heat management challenge instead of capitalizing on the exponential performance potential of node-level thermal management.

Phononic differentiates by taking a proactive, precise approach, attacking the hotspots themselves throughout the entire data center. A cooling process that usually takes minutes can be addressed in milliseconds thanks to our two-way spot-cooling approach. We also analyze signals upstream, allowing us to anticipate when a thermal load is coming so we can pre-cool and ensure the server and system remain stable, addressing the spot before it even has a chance to heat up. And our embedded SW/FW enables data center architects and operators to constantly learn, optimize, and refine the performance of their infrastructure, unlocking uptime, enabling strategic workload placement and orchestration, and protecting against productivity losses or unplanned downtimes.

What new features/technology are you working on?

We actually just announced an expanded optical networking portfolio, including 1.6T pluggable and CPO cooling solutions. In addition, we are excited to be sampling our thermal-kit enabled GPU HBM cooling solution for AI data centers.

Our expanded portfolio of cooling for optical transceivers, including 1.6T pluggables, supports a greater than 50% higher heat load while maintaining power consumption. Our CPO-ready Thermal Kit delivers advanced packaging and localized cooling for co-packaged optical engines in scale-out and scale-up applications, and is well suited for the broader market transition to co-packaged optics, unlocking additional performance headroom, reducing signal-path distance and the associated energy draw, and enabling the scale and bandwidth density required for next-generation fabrics. Finally, our next-generation HBM4-aligned GPU HBM cooling solution delivers up to 0.15 PUE savings, 5X lifetime improvement, 40% greater compute performance, and a 3X ROI for data centers.

How do customers normally engage with your company?

Our customers span the spectrum of data center operators, from the owner of the facility to specific component design suppliers looking for the right cooling solution mix. From the start of the engagement, whether it is a new build, a retrofit, or a specific component-level cooling solution, we work with the customer to design the specific platform for them to be successful. Every engagement and deployment is designed in partnership and collaboration with the customer.

Also Read:

CEO Interview with Charlie Peppiatt of Gooch & Housego

CEO Interview with Jussi-Pekka Penttinen of Vexlum Ltd


Podcast EP338: How Thermo Fisher Scientific Helps Advanced Semiconductor Development with Dr. Mohan Iyer
by Daniel Nenni on 04-03-2026 at 10:00 am

Daniel is joined by Dr. Mohan Iyer, who serves as the vice president and general manager of the Semiconductor Business Unit at Thermo Fisher Scientific, a global leader in providing reference metrology, defect characterization, and localization equipment. These advanced systems are essential for driving innovation, accelerating time to market, and optimizing manufacturing yields in the semiconductor industry. Mohan has over 27 years of experience in the semiconductor industry specializing in semiconductor equipment and process control.

Mohan provides some background on Thermo Fisher Scientific, the world leader serving science. He explains that the company’s mission is to help its customers make the world healthier, cleaner and safer. Markets served include life sciences, diagnostics, biopharma and semiconductor.

In the semiconductor area, Dan explores the impact Thermo Fisher’s advanced sensing and imaging technology is having on semiconductors, particularly 3D ICs. Mohan discusses the reasons for moving from planar to 3D ICs and the impact Thermo Fisher’s automated 3D analysis is having on process and device development. He describes the benefits of the early detection the technology delivers.

You can learn more about what Thermo Fisher Scientific does for the semiconductor market here.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Samtec’s Strong Presence at embedded world 2026

by Mike Gianfagna on 04-03-2026 at 6:00 am

The embedded world Exhibition & Conference recently concluded. The event is held annually in Nuremberg, Germany and has become one of the most influential gatherings for the global embedded systems community. Since its inception in 2003, the event has grown from a modest technical meeting into a large-scale international platform where industry professionals, engineers, researchers, and decision-makers meet to exchange knowledge, showcase innovations, and discuss future trends in embedded technologies. 

Samtec has a tradition of strong presence at major shows like this, and the recent embedded world event followed this winning formula. There were many presentations and joint demonstrations. I’ll also provide a bit more backstory thanks to my post-show interview with Matt Burns of Samtec. Here are the highlights that define Samtec’s strong presence at embedded world 2026.

Around the Show Floor

Samtec and TechWay showcased Samtec’s FireFly™ technology on a new platform running at 100 Gbps (4x25G) at the TechWay booth. Samtec and Dolphin Interconnect Solutions also showcased optical connectivity in embedded systems running at PCIe 5.0 data rates at the Samtec booth.

Matt Burns presented The SGET Open Harmonized FPGA Module™ (oHFM) is Here! – Now What?!?! on Tuesday, March 10 at the Conference. Matt also presented Implementing Optical PCIe® Technology in Embedded Computing Applications at the Lecture Exhibitor’s Forum on Thursday, March 12.

Overall, Samtec’s technical experts served as authors and presenters in conference sessions, exhibitor forums, and panel discussions throughout the event.

Comments from Matt Burns – Capabilities and Lessons Learned

Matt Burns

I had the opportunity to chat with Matt Burns, Global Director of Technical Marketing at Samtec. Matt likes the embedded world event. There were about 36,000 visitors attending this year from almost 90 countries. In Matt’s words, “it’s the biggest embedded show on the planet.” He felt all the major areas of the world were in attendance.

He mentioned that AI at the edge was a big topic at the event this year, with all the major suppliers weighing in. Samtec aims to provide strong support for edge AI System-on-Module (SoM) form factors, as these are a key enabler. Matt mentioned several other emerging standards in the edge AI area that Samtec is focused on. COM-HPC, the computer-on-module form factor standard, is one.

He also mentioned PCIe 6.0 and the Open Harmonized FPGA Module (oHFM), the first industry SoM standard targeted at FPGAs. The central theme of our discussion was the growing list of emerging standards enabling efforts like AI at the edge, and Samtec’s focus on building interconnect solutions that support those new standards. Simply put, smaller, faster, denser interconnect is what Samtec delivers.

In terms of Samtec’s readiness to meet these challenges, Matt pointed out that the cutting-edge demands come from AI data center applications. The demands of AI at the edge are not yet at that level, so Samtec is well positioned to meet them in a timely manner. The performance Samtec delivered to data centers a handful of years ago is the class of problem AI at the edge presents today. So the company is ready and quite capable of meeting these challenges.

This is the “lessons learned” part of the discussion. We also discussed product lifetimes. For embedded applications, the expectation is that a deployment will be in use for 10+ years. The quality and reliability Samtec is known for works well in this environment. Matt shared that Samtec is currently celebrating its 50th year.  That kind of staying power is what the embedded market really needs. Samtec appears to be in exactly the right place.

Matt also shared that he has been with Samtec for over 10 years. Stability seems to be prevalent at the company.

To Learn More

To summarize, Samtec is well-positioned to support the growing needs of AI at the edge for embedded applications. You can learn more about the work Samtec is doing to support key industry standards here. You can learn more about Samtec’s FireFly technology here. You can learn more about PCIe 5.0 data rates here. And you can learn more about oHFM support here.

And that’s the story behind Samtec’s strong presence at embedded world 2026.

Also Read:

Samtec Ushers in a New Era of High-Speed Connectivity at DesignCon 2026

2026 Outlook with Mathew Burns of Samtec

Samtec Practical Cable Management for High-Data-Rate Systems


Silicon Catalyst and Microelectronics US 2026

by Daniel Nenni on 04-02-2026 at 10:00 am

The designation of Silicon Catalyst as the exclusive strategic partner for Microelectronics US 2026 represents a significant alignment between a leading semiconductor startup ecosystem and a rapidly growing U.S. microelectronics industry event. This partnership reflects broader trends in semiconductor innovation, including the increasing importance of startup-driven technology development, cross-sector collaboration, and national supply-chain resilience.

Microelectronics US 2026 is scheduled for April 22–23, 2026, at the Palmer Events Center in Austin, Texas. The conference aims to convene senior engineers, technical architects, investors, manufacturing specialists, and policy stakeholders from across the U.S. microelectronics ecosystem. The event is designed to focus on semiconductor design, advanced manufacturing, embedded systems, AI hardware, and supply-chain innovation. By bringing together these stakeholders, the conference seeks to foster technical collaboration and accelerate commercialization pathways for emerging technologies.

The exclusive strategic partnership with Silicon Catalyst enhances this mission. Silicon Catalyst is widely recognized as an accelerator dedicated exclusively to semiconductor startups, providing incubation programs, technical resources, and access to investors and corporate partners. Through its ecosystem, startups gain access to industry advisors, design tools, manufacturing resources, and strategic guidance that can shorten development cycles and reduce the capital barriers typically associated with chip innovation.

Under the partnership, Silicon Catalyst collaborates with IQPC Exhibitions, the event organizer, to strengthen Microelectronics US 2026 as a key commercial and technical platform for the U.S. semiconductor industry. This strategic role positions Silicon Catalyst to influence program development, connect startups with industry stakeholders, and highlight emerging technologies from its accelerator portfolio.

The technical significance of this collaboration lies in the evolving nature of semiconductor innovation. Historically, semiconductor advances were dominated by large IDMs and fabless companies with substantial capital resources. However, the industry is increasingly driven by specialized startups developing domain-specific accelerators, photonics-based processors, chiplets, and heterogeneous integration technologies. These startups often rely on ecosystem partnerships to access Design IP, EDA tools, and manufacturing capacity. By integrating Silicon Catalyst into the conference structure, Microelectronics US 2026 aims to create a platform that supports this new model of distributed innovation.

Another key dimension is workforce and ecosystem development. Microelectronics US 2026 is expected to host more than 3,000 attendees and feature over 150 exhibitors covering chip design, AI, photonics, embedded systems, and power electronics. Such scale provides an opportunity to connect startup founders with suppliers, foundries, packaging companies, and system integrators. This interaction is critical for translating research concepts into manufacturable silicon solutions.

From a technical perspective, Silicon Catalyst’s involvement may emphasize emerging areas such as chiplet-based architectures, AI accelerators, MEMS sensors, and quantum-related semiconductor technologies. These domains require collaboration across multiple disciplines including device physics, packaging engineering, and system architecture. The accelerator’s network of advisors and in-kind partners can help bridge these disciplines, enabling startups to move from concept to tape-out more efficiently.

The partnership also aligns with broader national priorities. The U.S. semiconductor ecosystem has increasingly emphasized domestic innovation capacity, supply-chain resilience, and advanced packaging leadership. Conferences such as Microelectronics US serve as coordination points for academia, startups, and established companies. By leveraging Silicon Catalyst’s startup pipeline, the event can highlight early-stage technologies that may evolve into future production platforms.

In addition, the exclusive nature of the partnership suggests a deeper integration than typical sponsorship arrangements. Silicon Catalyst’s role may include curating startup showcases, facilitating investor meetings, and contributing to technical sessions. This could lead to more practical discussions focused on commercialization challenges, such as design-for-manufacturability, IP reuse, and advanced packaging integration.

REGISTER HERE

Bottom line: The designation of Silicon Catalyst as the exclusive strategic partner of Microelectronics US 2026 underscores the growing importance of startup ecosystems in semiconductor innovation. By combining a dedicated accelerator with a large-scale industry conference, the partnership creates a platform that connects early-stage innovation with manufacturing expertise, investment capital, and system-level integration. This collaborative model reflects the evolving structure of the semiconductor industry, where breakthroughs increasingly emerge from coordinated ecosystems rather than isolated organizations.

Also Read:

Post-Silicon Validating an MMU. Innovation in Verification

Revolutionizing AI Infrastructure: Alchip and Ayar Labs’ Co-Packaged Optics Breakthrough at TSMC OIP 2025

Alchip’s 3DIC Test Chip: A Leap Forward for AI and HPC Innovation


Webinar – How to Reclaim Margin in Advanced Nodes

by Mike Gianfagna on 04-02-2026 at 6:00 am

This informative webinar discusses a significant issue cropping up in sub-5nm designs. As the graphic above shows, modeling uncertainty at advanced nodes results in excessive guard banding. These guard bands reduce performance and profit. A loss of 25-35% in PPA is discussed, along with the lost profit from paying for advanced node performance and not being able to take advantage of it.

You will learn a lot about the dimensions of this problem and how to fix it, resulting in improved performance, competitiveness and profit. A replay link is coming but first let’s examine how to reclaim margin in advanced nodes.

The Presenter

Dave Johnson

Dave Johnson is the webinar presenter. Dave works in strategic sales at ClockEdge. Prior to his decades-long career in EDA, Dave was an ASIC engineer specializing in custom IC development. He has worked with many of the largest semiconductor companies in the world to optimize their design flows. He believes deeply that the choice of design methodology matters, significantly impacting a project’s success.

Dave is quite knowledgeable on the topic of design margins. He has an easy-to-follow presentation style. You will learn a lot during this short (22-minute) webinar.

The Webinar

Dave begins by describing the margin problem as a silent crisis in advanced node design. He discusses the widespread use of abstractions to drive design of ever-larger chips. He describes the “abstraction tax” that results from the difference between the estimates that drive design margins when compared with the actual performance needed. Dave gets into the details of what drives this “abstraction tax” and what penalties result. He then discusses a new and unique solution that enables design teams to reclaim the wasted margin at advanced nodes so the true value of advanced processes can be realized.

He describes the pessimism wall that exists below 5nm. He goes on to explain that at 3nm, foundries promise, and design teams expect, a 15-18% performance improvement at constant power, or a 30-34% power reduction at constant frequency.

The Pessimism Wall

He goes on to explain that these gains are vanishing due to the pessimism wall. The primary performance bottleneck is no longer silicon capability but an abstraction-based methodology. Margins are heavily inflated to compensate for methodology uncertainty. For example, clock sign-off guard bands routinely consume 25-35% of the available clock period, resulting in a clock network over-designed by 2.5X. The figure at the right summarizes these points.
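To make the arithmetic concrete, here is a back-of-the-envelope sketch of how a growing sign-off guard band can swallow the promised node gain. This is my own illustrative model, not ClockEdge's analysis, and the guard-band fractions are assumptions:

```python
# Back-of-the-envelope model: usable frequency is the raw silicon
# frequency times the fraction of the clock period NOT consumed by
# the sign-off guard band. Guard-band fractions are illustrative.

def net_gain(node_gain: float, gb_old: float, gb_new: float) -> float:
    """Net performance gain at the new node relative to the old one.

    node_gain: raw process speedup (e.g. 0.15 for 15%)
    gb_old, gb_new: fraction of the clock period lost to guard bands
                    at the old and new nodes respectively.
    """
    return (1.0 + node_gain) * (1.0 - gb_new) / (1.0 - gb_old) - 1.0

# A 15% raw process gain, with guard bands growing from 15% to 30% of
# the clock period, turns into a net regression:
print(f"{net_gain(0.15, 0.15, 0.30):+.1%}")  # -5.3%
```

Under these assumed numbers the new node is actually slower than the old one at sign-off, which is the essence of the "pessimism wall" argument.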

Dave then explores the details of the abstraction tax. He discusses the areas that contribute to the problem, including near-threshold voltage sensitivity, power supply-induced jitter, interconnect-dominated clock delay, aging, and local variability and Liberty Variation Format (LVF) residuals. You will learn a lot about the impact all these items have.

Dave then explores the ROI associated with recovering the lost margin due to these effects. Performance, clock tree area, dynamic power and binning yield are all discussed.

An effective solution to these problems offered by ClockEdge is then explored in some detail. Dave explains how the ClockEdge Veridian Engine can deliver full clock SPICE-level analysis overnight for over 100 million gate designs. He explains the significant impact a tool like this can have on advanced node design, allowing the abstraction tax to be removed. Design teams can now access the full capability offered by advanced nodes.

The webinar concludes with a very informative question and answer session.

To Learn More

If you struggle to get all the benefits offered by advanced process nodes due to excessive design margins, you need to watch this webinar. In a short 22 minutes, you will understand the problem much better and learn about a new and effective solution to unlock superior performance and increased profitability.

You can access the webinar replay here. And that’s how to reclaim margin in advanced nodes.

Also Read:

What is the 3nm Pessimism Wall and Why is it An Economic Crisis?

The Risk of Not Optimizing Clock Power

Taming Advanced Node Clock Network Challenges: Jitter


Alchip’s Leadership in ASIC Innovation: Advancing Toward 2nm Semiconductor Technology

by Daniel Nenni on 04-01-2026 at 10:00 am

Alchip Technologies has recently reported significant progress in the development of advanced 2nm ASICs, positioning itself as a leader in next-generation semiconductor design for AI and HPC. The announcement highlights Alchip’s efforts to commercialize cutting-edge chip technologies and deliver highly customized silicon solutions for data centers, hyperscalers, and AI infrastructure providers. These developments demonstrate how the company is preparing for the transition to one of the most advanced semiconductor process nodes in the industry.

A key milestone in Alchip’s 2nm strategy is the creation of a dedicated 2nm design platform, which enables customers to develop high-performance ASICs using the latest manufacturing technologies. This platform supports advanced packaging and chiplet integration methods such as 2.5D and 3D integrated circuit technologies, allowing designers to combine a 2nm compute die with input/output (I/O) chiplets produced on mature nodes such as 3nm or 5nm. This approach improves yield, reduces cost, and allows developers to integrate complex computing architectures more efficiently.
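The yield argument behind this split can be made concrete with the standard Poisson die-yield model, Y = exp(-A * D0): smaller dies on the leakiest node yield far better than one large monolithic die. The die areas and defect densities below are illustrative assumptions for the sketch, not Alchip or foundry figures:

```python
import math

def die_yield(area_cm2: float, d0_per_cm2: float) -> float:
    """Poisson die-yield model: Y = exp(-A * D0)."""
    return math.exp(-area_cm2 * d0_per_cm2)

# Monolithic: one 6 cm^2 die entirely on the advanced node
# (assumed defect density D0 = 0.2 defects/cm^2).
monolithic = die_yield(6.0, 0.2)

# Chiplet split: a 4 cm^2 compute die on the advanced node plus two
# 1 cm^2 I/O chiplets on a mature node (assumed D0 = 0.05/cm^2).
# Assuming known-good-die assembly, the compound yield is the product.
chiplet = die_yield(4.0, 0.2) * die_yield(1.0, 0.05) ** 2

print(f"monolithic: {monolithic:.2f}, chiplet: {chiplet:.2f}")
```

Even before counting the lower wafer cost of the mature-node I/O chiplets, the compound yield of the split design comes out well ahead of the monolithic die under these assumptions.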

The transition to 2nm technology represents a major shift in semiconductor architecture. Unlike earlier nodes that relied on FinFET transistor designs, 2nm processes introduce nanosheet gate-all-around (GAA) transistors, which provide better electrostatic control and enable higher transistor density. These improvements allow chips to achieve better performance and power efficiency while continuing the scaling trends predicted by Moore’s Law. For AI workloads and large-scale data centers, these advantages are particularly important because they support faster processing speeds and reduced energy consumption.

Alchip has also successfully completed a 2nm test chip tape-out, which is a crucial step in validating the design methodology and manufacturing process. The test chip includes high-speed SRAM blocks and silicon performance monitors that provide real-time insights into chip behavior. These features allow engineers to evaluate PPA characteristics of the new process technology and refine the design flow for future customer products.

Another notable aspect of the test chip is the integration of Alchip’s AP-Link-3D input/output interface, which is designed to support advanced chiplet-based architectures and 3D integration technologies. Chiplet designs divide a large system-on-chip into smaller functional blocks that can be manufactured separately and then connected through high-speed interconnects. This method improves flexibility and scalability, allowing designers to combine different process nodes and specialized components in a single package. The success of the 2nm test chip demonstrates that Alchip’s design tools and intellectual property are ready for these emerging packaging approaches.

Developing chips at the 2nm node also presents significant challenges. The smaller transistor dimensions increase power density and thermal management issues, requiring careful floorplanning, power distribution, and cooling strategies. Alchip’s design methodology addresses these challenges by incorporating thermal-aware design techniques and early optimization of placement and routing. By solving these problems earlier in the design flow, the company aims to reduce development time and improve the likelihood of first-pass silicon success.

The company’s 2nm advancements are closely tied to the broader growth of AI and high-performance computing markets. Many hyperscale data center operators and cloud providers are increasingly turning to custom ASICs rather than off-the-shelf graphics processing units (GPUs) to optimize workloads and reduce operational costs. Alchip specializes in providing these custom silicon solutions, enabling companies to design chips tailored specifically for AI training, inference, networking, and other data-intensive applications. As AI systems continue to grow in complexity, demand for specialized ASIC designs built on advanced nodes such as 2nm is expected to increase significantly.

In addition, Alchip’s work on 2nm technology positions the company for future semiconductor generations. The insights gained from its test chips and design platform will help support the transition toward even more advanced nodes, including potential 1.6nm processes and new transistor architectures. By investing early in design methodologies and packaging technologies, Alchip aims to maintain its leadership in high-performance ASIC development.

Bottom line: Alchip’s reported ASIC-leading 2nm developments highlight a major step forward in semiconductor innovation. Through its new design platform, successful test chip tape-out, and focus on advanced packaging and chiplet integration, the company is preparing customers for the next era of AI-driven computing. These efforts reinforce Alchip’s position as a key player in the global race to deliver faster, more efficient, and highly customized silicon solutions for future technology demands.

Alchip will be at the TSMC 2026 Technical Symposium, as will I. You can reach Alchip here. Check out their new website! And of course you can reach me via SemiWiki email if you are a member.

I hope to see you there!

Also Read:

2026 Outlook with Dave Hwang of Alchip

Revolutionizing AI Infrastructure: Alchip and Ayar Labs’ Co-Packaged Optics Breakthrough at TSMC OIP 2025

Alchip’s 3DIC Test Chip: A Leap Forward for AI and HPC Innovation