I have written earlier in this series that quantum error correction (QEC), a concept parallel to ECC in classical computing, is a gating factor for production quantum computing (QC). Errors in QC accumulate much faster than in classical systems, requiring QEC methods that can fix errors fast enough to permit production applications. I have read of leaders in the field using FPGAs or GPUs to support QEC, which sounded intriguing to me but also difficult to scale, for several reasons. Qubit states can't exist in the classical regime, which seemed to imply they must be collapsed (measured) before transfer, destroying carefully constructed superpositions and entanglement. Communication to an external chip (and back again after calculation) must travel through bulky, constrained channels with significant latencies, particularly in superconducting QC. And on the return trip into the QC, the corrected qubit would need to be reconstructed. Would all this added latency undermine performance expectations for a production algorithm?

What’s really happening in QEC today
It took quite a bit of digging, including an excursion into ChatGPT (surprisingly helpful in response to a very technical question), to figure out what is really happening. Two things emerged: a more refined partitioning than I had understood for the information communicated off-chip, and an acknowledgement that FPGA and GPU methods, while widely used, are temporary expedients. They allow QC builders to research and refine QEC techniques, but they are not expected to survive as part of production fault-tolerant systems.
First, the primary (data) qubits on chip aren't measured. The additional qubits used for QEC, the ancilla or syndrome qubits, can be measured, and that measurement data (the error syndrome) is communicated to an off-chip device. That device, the decoder, figures out what correction each qubit needs and communicates the result back to the QC, where quantum circuitry takes over again to apply those corrections through a mechanism that does not break coherence.
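To make the division of labor concrete, here is a minimal sketch of the classical decode step, using the textbook three-qubit bit-flip repetition code. This is an illustration of the concept only, not any vendor's implementation; production codes (surface codes, for example) are far larger, and the lookup table below is specific to this toy code.

```python
# Toy decode step for the 3-qubit bit-flip repetition code.
# The data qubits d0, d1, d2 are never measured. Two ancilla qubits
# are measured instead, yielding the parities s0 = d0 XOR d1 and
# s1 = d1 XOR d2 (the "syndrome"). A classical decoder maps the
# syndrome to a correction, which the quantum control hardware then
# applies as an X gate, without collapsing the encoded state.

# Syndrome -> index of the data qubit to flip (None = no correction).
SYNDROME_TABLE = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # only s0 fired: bit flip on d0
    (1, 1): 1,     # both fired: bit flip on the shared qubit d1
    (0, 1): 2,     # only s1 fired: bit flip on d2
}

def decode(syndrome):
    """Return the data qubit needing an X correction, or None."""
    return SYNDROME_TABLE[tuple(syndrome)]

# Ancilla measurements report (1, 1): the decoder tells the control
# system to apply X to data qubit 1 on the next cycle.
print(decode((1, 1)))  # -> 1
```

The decode itself is trivial at this scale; the hard part in a real system is doing it for thousands of syndrome bits, round-trip, every cycle, which is exactly the latency problem discussed next.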
Second, latencies in this path can be significant. High noise rates require frequent correction, but the correction rate is capped by those latencies. The same constraint limits the number of qubits that can be managed: each cycle, syndrome data for every managed qubit must channel out to the coprocessor and back again, so the traffic scales with qubit count while the time budget does not. This is why, useful as FPGAs and GPUs are as QEC coprocessors today, they are not seen as long-term solutions for production QEC algorithms.
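A back-of-the-envelope calculation shows why. The numbers below are illustrative assumptions of mine, not measured figures from any system, though they are in the right ballpark for superconducting qubits, where syndrome-extraction cycles run on the order of a microsecond.

```python
# Rough round-trip budget for off-chip decoding (all numbers are
# illustrative assumptions, not measurements of any real system).

qec_cycle_s   = 1e-6    # assumed syndrome-extraction cycle (~1 us,
                        # typical order for superconducting qubits)
cable_delay_s = 200e-9  # assumed one-way delay through cryostat wiring
decode_time_s = 400e-9  # assumed per-round decode time on an FPGA/GPU

round_trip_s = 2 * cable_delay_s + decode_time_s
print(f"round trip: {round_trip_s * 1e9:.0f} ns "
      f"of a {qec_cycle_s * 1e9:.0f} ns cycle")

# If the round trip exceeds the cycle, syndrome data backs up and
# corrections arrive stale. Adding qubits multiplies the data that
# must squeeze through the same constrained channels each cycle.
if round_trip_s > qec_cycle_s:
    print("decoder cannot keep pace: corrections lag the errors")
else:
    print(f"headroom per cycle: {(qec_cycle_s - round_trip_s) * 1e9:.0f} ns")
```

With most of each cycle eaten by travel and decode time, there is little headroom left, and every added qubit makes the squeeze worse.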
From prototypes to production QEC support
All prototyping systems eventually evolve to ASICs, unless prototype performance is adequate and volumes are not expected to be high. Since QC vendors aim for high performance and (eventually) high qubit counts, they too are planning ASICs for QEC. But these must sit very close to the QC core to minimize communication overhead, which puts them in or very near the deep cooling cavity for superconducting QCs. IBM plans a QEC core built with cryogenic CMOS, I'm guessing on the bottom layer of their 3D-stack architecture: qubits on the top layer, resonators on the middle layer, and QEC on the layer under that.
This is a very nice technology advance. I don't know where other QC vendors and technologies stand in this race, but I have to believe IBM, already a dominant player in the QC market, is aiming to further widen its lead.
The usual caveat applies: this is a fast-evolving market, and promises aren't yet proven deliverables. Still, keep a close eye on IBM!
Also Read:
2026 Outlook with Nilesh Kamdar of Keysight EDA
Verifying RISC-V Platforms for Space
2026 Outlook with Paul Neil of Mach42