
Computability 2.0?

by Bernard Murphy on 01-31-2017 at 7:00 am

There’s muttering among computing fundamentalists that perhaps we ought to revisit the definition of computability, given recent advances in methods of computing, especially machine learning and quantum computation.

Computability is about what can and cannot be computed, either by a human or a non-human computer. Most of us associate the topic with Alan Turing, the Turing machine and the Halting problem, but the correspondence with human calculators is better expressed in the (related) Church-Turing thesis. It’s a thesis (hypothesis) because we don’t have a formal definition of what a human can and cannot calculate, but it is widely believed to hold: that there is, in principle, an exact correspondence between what a human and a Turing machine can calculate (though typically at very different speeds).

Can quantum computing or deep learning somehow transcend these bounds? Some theorists have proposed that hyper-computation – computation able to solve problems unsolvable by a Turing machine – should be possible at least in principle.

One such case was made in 1995 for a hypothesized neural net. The author of that paper starts by restricting the weights on nodes to be integers, which is of course possible. This is then relaxed to allow rational numbers – also possible. The theoretical range of problems that could be solved by such machines is countably infinite, since the integers and the rationals are both countably infinite. This is still no more capable than a Turing machine, but then the author takes the next step, making the weights real numbers. As the limit of progressively more accurate approximations, this is still possible.
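To make the first two steps concrete: a net with rational weights, rational inputs and a piecewise-linear activation can only ever produce rational outputs, so it never leaves the countable world. A minimal sketch (the function names are mine, not from the paper, and the saturated-linear activation is just one activation of the piecewise-linear kind used in such analog-net models):

```python
from fractions import Fraction

def sigma(x):
    # Saturated-linear activation: clamps to [0, 1], exact on rationals.
    return min(Fraction(1), max(Fraction(0), x))

def neuron(weights, inputs):
    # Weighted sum in exact rational arithmetic, then the activation.
    return sigma(sum(w * x for w, x in zip(weights, inputs)))

out = neuron([Fraction(1, 2), Fraction(-1, 3)], [Fraction(3, 4), Fraction(1, 2)])
print(out)  # 5/24 -- still a rational number
```

However deep you stack such neurons, every intermediate value remains a rational, so the machine cannot escape the countable set of rationals on its own.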

An interesting thing about the reals is that there are vastly more of them than there are countable numbers such as the integers or rationals. Some are real representations of integers and rationals, some are irrational, like √2 or pi or e, and each of these is computable on a Turing machine to arbitrary accuracy using a finite algorithm. There is only a countable infinity of these computable numbers: finite algorithms can be listed in some order, which makes them countable, and even if each could generate a countably infinite set of numbers, the total set generated is the product of two countable infinities, which is still countably infinite.
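For example, a single finite program can emit as many digits of √2 as you ask for – one sketch of such an algorithm, using integer square roots so every step is exact:

```python
from math import isqrt

def sqrt2_digits(n):
    # First 1 + n digits of sqrt(2): floor(sqrt(2 * 10^(2n))) shifts the
    # decimal point n places right before taking the integer square root.
    return str(isqrt(2 * 10 ** (2 * n)))

print(sqrt2_digits(8))  # 141421356 -> read as 1.41421356
```

The program itself is a finite object, even though it can approximate √2 to any accuracy; it is finite programs like this one that can be enumerated, which is what makes the computable numbers countable.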

The total set of reals is uncountable, which means that almost all reals are uncomputable numbers for which no finite algorithm is possible (an infinite algorithm for a given real is always possible – simply list out the digits in whatever base you use to express the number). Now suppose you use uncomputable numbers as weights in the neural net. Then in principle the net could calculate a result that is uncomputable in Church-Turing terms – one that could not possibly be calculated by a Turing machine. Does this hold out the promise that you could, at least in theory, build a machine that could solve problems like the Halting problem?

Unfortunately no, because the argument is flawed. You can only compute an uncomputable result if you start with uncomputable weights. Either those must be computed on a Turing-like machine – which is not possible because Turing machines can only generate computable numbers – or you must assume the existence of a machine which can generate uncomputable numbers – which is circular reasoning. There is one other option – it is possible in principle to generate uncomputable random numbers (after all, you want random numbers to be uncomputable). The problem here is that while the weights and the result may be uncomputable, they will also be unusable – you have no control over what the net will calculate.

A different case has been made for quantum computing, but this is primarily a performance argument rather than a feasibility argument (which is the substance of Church-Turing). It is true that for a narrow range of problems, quantum computing can be as much as exponentially faster than traditional/Turing computing, but Church-Turing only cares whether a computation completes in finite time (using a finite algorithm on a finite machine), not how quickly it completes.

However, one theoretical suggestion uses an infinite superposition of quantum states, which could in principle transcend the countability limit. There are obvious physical problems with the proposal, but the idea is intellectually appealing in at least one sense. Combining a superposition of a countable infinity of quantum states with exponential performance allows access (in principle) to 2^(countable infinity) states, which is precisely the size of the set of reals. The machine could in theory access that complete set, though sadly you must still provide algorithms and inputs to do something useful, which would still limit calculations to a countable set.
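The cardinality bookkeeping behind that claim, in standard notation:

```latex
% A countable infinity of superposed states indexes the subsets of N,
% and Cantor's theorem puts that power set strictly above the countable:
\[
  \bigl|\,2^{\mathbb{N}}\,\bigr| \;=\; 2^{\aleph_0} \;=\; |\mathbb{R}|
  \;>\; \aleph_0 \;=\; |\mathbb{N}|
\]
```

So the state space really does match the reals in size – the catch, as above, is that the algorithms and inputs you can specify remain a countable set.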

Another physical assist requires that a (Turing) computer orbit a spinning black hole. It seems that relativistic time dilation around that massively warped space-time can be arranged so that a programmer can interact with the machine in finite time, while the computation in the machine’s frame evolves over infinite time – allowing an infinite computation to complete in finite time for the programmer. However, this is an approach unlikely to be tested any time soon.

One final apparent counter-argument: humans can reason about infinite objects in quite constructive ways. For example, the proof of Goodstein’s theorem depends on this kind of reasoning. So if humans can do this, can machines perform the same kind of reasoning? There’s no obvious reason why not. Does this break the Church-Turing thesis? No, because these cases use finite algorithms which complete in a finite number of steps while reasoning about abstract infinite objects. Solving an uncomputable problem, by contrast, requires an infinite number of steps. You can’t just reason about infinity – you must go there.
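Goodstein sequences illustrate the distinction nicely: each step is a perfectly finite computation (write the current value in hereditary base-k notation, bump the base by one, subtract one), yet the proof that every sequence eventually reaches zero requires reasoning about transfinite ordinals. A minimal sketch of the sequence itself (the helper names are mine):

```python
def bump_base(n, b1, b2):
    # Rewrite n in hereditary base-b1 notation, replacing every
    # occurrence of b1 (including in the exponents) with b2.
    result, exp = 0, 0
    while n > 0:
        digit = n % b1
        if digit:
            result += digit * b2 ** bump_base(exp, b1, b2)
        n //= b1
        exp += 1
    return result

def goodstein(m, max_steps=50):
    # Goodstein sequence starting at m; returns the terms computed.
    seq, base = [m], 2
    while seq[-1] > 0 and len(seq) < max_steps:
        seq.append(bump_base(seq[-1], base, base + 1) - 1)
        base += 1
    return seq

print(goodstein(3))  # [3, 3, 3, 2, 1, 0] -- reaches zero
```

Starting at 3 the sequence dies quickly, but starting at 4 it already runs for astronomically many steps – and no proof that it always terminates can be carried out by finitary reasoning within Peano arithmetic alone, which is exactly the kind of reasoning-about-infinity the paragraph above describes.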

None of this means that novel forms of computing aren’t exciting and (for some problems) very useful. They can certainly dramatically improve efficiency – that’s all part of advancing technology. But we should temper our expectations to what is asymptotically achievable; physical and mathematical limits don’t bend to enthusiasm. And there is a corollary – if humans can’t even conceivably compute a certain result (allowing for multiple people and a countable number of generations working on the problem) then it is quite likely that machines can’t conceivably do it either. And vice versa. Church-Turing remains the upper bound for computability, and at least in principle machines can never be fundamentally smarter than us (though they can certainly be much faster).

You can read more and peruse a challenging list of references HERE. I found a very nice critique of hyper-computation HERE (from which my neural net discussion is derived).

