Of course there is a minimum requirement for the computer: enough qubits, fault-tolerant computing, support for hundreds of millions or more operations before a reset, that sort of thing. We’re still on that journey, but even after we reach this goal it is important to have a sense of what delivers advantage, since the computer itself simply enables quantum calculations. My research suggests a foundational principle behind current algorithms, one which allows for a range of high-value operations, though it is unclear to me how these would generalize more broadly. This is my brief attempt to summarize how quantum computing benefits are determined by algorithms, and what these algorithms must look like, at least in the near future.

What do we mean by advantage?
In the geeky world of computational complexity, there is a very clear separation in performance between two main classes of algorithm: those with polynomial complexity, whose running times grow roughly as some fixed power of the problem size, and those with exponential complexity, whose running times grow as some number raised to the power of the problem size. Algorithms which scale polynomially are practical in principle on classical computers; those which scale exponentially are not. (I have over-simplified, but this is good enough for my argument.)
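To make the gap concrete, here is a toy illustration (my own made-up step counts, not real algorithms) of how quickly the two classes diverge:

```python
# Toy illustration (not a benchmark): compare how step counts grow for
# a polynomial-time algorithm (n^2 steps) versus an exponential one (2^n).
for n in [10, 20, 40, 80]:
    poly = n ** 2     # e.g. a quadratic-time algorithm
    expo = 2 ** n     # e.g. brute-force search over n bits
    print(f"n={n:3d}  polynomial: {poly:>8,d}  exponential: {expo:>28,d}")
```

By n=80 the exponential column is already a 25-digit number; no hardware roadmap closes that gap.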
If quantum algorithms can demonstrate a polynomial advantage over classical computers, that is of course good, though not necessarily unicorn-exciting for investors or customers. Grover’s algorithm, which searches for a specified element in an unsorted list, is of this type, showing a quadratic advantage over classical algorithms. Grover has the advantage that the limit on best performance for a classical algorithm has been formally proven, so the quantum method will always be superior. However, the best performance for almost all other classical algorithms is merely the best we have discovered so far, as has been embarrassingly demonstrated on several occasions when claims of quantum advantage were upstaged by announcements of new classical algorithms with comparable performance.
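For a concrete feel for the quadratic advantage, here is a small NumPy statevector sketch of Grover’s search; the list size and marked index are my own made-up example, not anything canonical:

```python
import numpy as np

# Minimal statevector sketch of Grover's search over N = 2^n items.
# The oracle and diffusion steps are the two reflections that amplify
# the marked item's amplitude in roughly (pi/4)*sqrt(N) iterations,
# versus ~N/2 lookups for a classical scan.
n, marked = 4, 11                        # 16-item list, item 11 marked
N = 2 ** n
state = np.full(N, 1 / np.sqrt(N))       # uniform superposition (Hadamards)

iterations = int(np.pi / 4 * np.sqrt(N)) # quadratically fewer steps
for _ in range(iterations):
    state[marked] *= -1                  # oracle: flip the marked phase
    state = 2 * state.mean() - state     # diffusion: reflect about the mean
print(f"P(measure {marked}) after {iterations} iterations:",
      round(float(state[marked] ** 2), 3))
```

Three iterations push the probability of measuring the marked item above 96%, where a classical scan of 16 items needs 8 lookups on average.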
(In fairness, a polynomial advantage could signal a unicorn if the polynomial difference from classical alternatives is big enough.)
I’m sure there are more Grover-like algorithms waiting to be discovered which can demonstrate true polynomial superiority. But given the difficulty of proving such limits on classical performance, and high expectations for quantum advantage, another way to go is to focus on problems where the best known classical algorithms have exponential complexity. There are already some examples in this class, including Shor’s algorithm, so it’s worth digging into what makes these algorithms different and whether we can discern characteristics that might be extended to other problems.
Shout-out to the video tutorial series from Artur Ekert (professor of quantum physics at Oxford), from which I learned much of what I share here.
Going a bit deeper into Shor’s algorithm
The target problem is to find the factors of a large integer. An efficient solution would open up attacks on secure communications: much encryption relies on the difficulty of factoring an integer N, the product of two very large (and unknown) primes, since the best known classical methods for this problem scale super-polynomially in the number of digits of N. Shor’s algorithm starts with a classical setup step to generate a seed value a, 1 < a < N, coprime with N. The quantum stage leverages modular arithmetic (clock arithmetic), computing successive powers a^x mod N. When a^R = 1 (mod N) for some power R, the calculation can stop, and this period R appears as the output value of the quantum stage. Classical compute takes over again to reconstruct a divisor of N from R. The complexity of this algorithm is of order (log N)^3, vastly faster than the best known classical methods.
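To make the classical bookends concrete, here is a Python sketch in which the quantum period-finding stage is replaced by a brute-force classical stand-in; the function names are mine, and the brute-force loop is precisely the part Shor’s quantum stage accelerates:

```python
from math import gcd

def find_period(a, N):
    """Stand-in for the quantum stage: smallest R with a^R = 1 (mod N).
    This loop is the exponential bottleneck the quantum stage removes."""
    x, R = a % N, 1
    while x != 1:
        x = (x * a) % N
        R += 1
    return R

def shor_outline(N, a):
    if gcd(a, N) != 1:
        return gcd(a, N)        # lucky draw: a already shares a factor
    R = find_period(a, N)       # quantum stage (simulated classically here)
    if R % 2 == 1:
        return None             # odd period: retry with a different a
    y = pow(a, R // 2, N)
    if y == N - 1:
        return None             # a^(R/2) = -1 (mod N): also retry
    return gcd(y - 1, N)        # a nontrivial divisor of N

print(shor_outline(15, 7))      # period of 7 mod 15 is 4 -> factor 3
```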
The core principle behind this method is what I now see as an elaborate quantum interference experiment. Start with a register of qubits initialized to zero, run these through a QFT (Quantum Fourier Transform) to set up for interference, then run through a phase gate which will modify phases between these qubits. Finally, run through an inverse QFT to complete the interference. A function to determine the phase gate operation sits outside this flow (though still within the quantum algorithm). This function computes successive values of a^x mod N, through which it informs, in a very quantum-like way, phase gating in the interference flow.
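One way to see why interference extracts the period: even classically, the discrete Fourier transform of the sequence a^x mod N has sharp peaks at multiples of M/R, where M is the number of samples and R the period. A small NumPy illustration (the sample count is my choice, picked for a clean spectrum):

```python
import numpy as np

# The DFT of f(x) = a^x mod N peaks at multiples of M/R, exposing the
# period R. The quantum circuit extracts the same frequency content,
# but from a superposition over all x at once.
a, N, M = 7, 15, 64
seq = np.array([pow(a, x, N) for x in range(M)], dtype=float)
spectrum = np.abs(np.fft.fft(seq - seq.mean()))
peaks = np.argsort(spectrum)[-3:]        # strongest frequency bins
print(sorted(int(k) for k in peaks))     # [16, 32, 48]: multiples of 64/4
```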
Simon’s algorithm, a precursor to Shor, follows a similar if simpler pattern.
Quantum application to linear equation analysis
Linear equation problems come up everywhere in science, engineering, even business applications. The HHL algorithm can provide expectation values of observables on the solution x of the matrix equation Ax=b (rather than the full solution vector itself), under certain constraints. Such equations are classically soluble in low polynomial time in the matrix size, even linear time in some cases, but the HHL method solves in time polynomial in the logarithm (polylog) of the matrix size, so it qualifies as an exponential speedup. It may also be significantly better than classical methods in space complexity.
The essential principle behind the method looks to me like Shor, except that interference starts with quantum phase estimation (QPE) rather than QFT and ends with inverse QPE. The phase gating in between these QPE stages encodes the eigenvalues of the matrix A as phase values and is of course unique to this algorithm.
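Here is the linear algebra underneath, worked classically in NumPy on a made-up 2x2 example: expand b in the eigenbasis of A (QPE’s role), divide each component by its eigenvalue (the rotation step), then read out an expectation value of the solution:

```python
import numpy as np

# The math HHL implements, done classically on a tiny made-up example:
# eigendecompose Hermitian A, invert the eigenvalues against b's
# components, and report an expectation value of x = A^-1 b.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])               # Hermitian, as HHL requires
b = np.array([1.0, 0.0])

evals, evecs = np.linalg.eigh(A)
coeffs = evecs.T @ b                     # b in the eigenbasis (QPE's role)
x = evecs @ (coeffs / evals)             # eigenvalue inversion step

M = np.diag([1.0, -1.0])                 # some observable of interest
print("x =", x, " <x|M|x> =", x @ M @ x) # HHL yields quantities like <x|M|x>
```

The point of the last line: HHL’s output is an expectation value such as &lt;x|M|x&gt;, not the full vector x, which is one of the constraints that determines where the algorithm is actually useful.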
Whether or not HHL will prove to have commercial value, I don’t know. Even Shor, while very well founded, has so far only been demonstrated on toy examples. Production value waits on sufficiently powerful, fault-tolerant quantum computers to test production readiness.
Takeaway
There may be other possible quantum algorithm architectures, but what is more quantum than interference? The basic algorithm structure prepares superpositions/entanglement of qubit states (through operators such as Hadamard, QFT, QPE, …), then applies phase gating which will control interference. It then realizes the interference through an inverse of the opening transform, delivering measured qubit output as an integer which may be the desired result or can be fed into further classical computation to generate the result.
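The smallest instance of this prepare/phase/unprepare pattern fits in a few lines of NumPy: a single qubit, with a relative phase I pick by hand to show how a hidden phase turns into measurable probability:

```python
import numpy as np

# Prepare/phase/unprepare on one qubit: H creates the superposition,
# a phase gate marks one branch, and the closing H interferes the
# branches so the phase shows up in measurement probabilities.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def run(phi):
    phase = np.diag([1, np.exp(1j * phi)])    # relative phase gate
    state = H @ phase @ H @ np.array([1, 0])  # |0> -> H -> phase -> H
    return np.abs(state) ** 2                 # measurement probabilities

for phi in (0, np.pi / 2, np.pi):
    print(f"phase {phi:4.2f}: P(0), P(1) =", np.round(run(phi), 3))
```

At phase 0 the branches recombine entirely into |0>; at phase pi, entirely into |1>.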
This interference is the heart of any exponential advantage that can be realized by a quantum algorithm. All qubits in this path are evaluated simultaneously, a feat that a classical computer cannot hope to match. Not even massively parallel classical computers can keep up, since strong coupling between qubits undermines any parallelism speedup.
Exploiting this advantage in other algorithms will require new kinds of ingenuity. It will be fascinating to see how these evolve!