A little self-indulgence for the season, to lighten the relentless diet of DAC updates. I found a recent Wired article based on a TED talk on consciousness. The speaker drew the conclusion that consciousness is not something that could ever be captured in a machine; it is a unique capability of living creatures (or at least of humans). After reading the article on the TED talk and watching a related talk, I'm not so sure, but I am fairly convinced that whatever we might build in this direction may be quite different from our consciousness, will probably take a long time and will be plagued with problems.
The TED event (speaker Anil Seth, Professor of neuroscience at the University of Sussex in the UK) is not posted yet, but there is a more detailed talk by the same speaker on the same topic, given recently at the Royal Institution, which I used as a reference.
First, my own observations (not drawn from the talk). AI today is task-based: each system is skilled at doing one thing. That thing might be impressive, like playing Go or Jeopardy, providing tax advice or detecting cancerous tissue in mammograms, but in each case what is offered is still skill in a single task. A car-assembly robot can't compose music and even Watson can't assemble a car. Then why not put together lots of AI modules (and machinery) to perform lots of tasks, or even meta-tasks? Wouldn't that eventually surpass human abilities?
I suspect that the whole of human ability might be greater than the sum of the parts. Most of us are probably familiar with task-based workers. If I am such a worker, you tell me what to do; I do it, then wait to be told what task I should do next, as long as it is a task I can already do. Some other workers provide an obvious contrast. They figure out on their own what the next task should be, they develop an understanding of higher-level goals and they look for ways to improve and optimize to further their careers. This requires more than an accumulation of task or even meta-task skills. It requires adaptation towards goals of reward, a sense of accomplishment or a desire for self-betterment, which I'd assert requires (at a minimum) consciousness.
Which brings me to Anil Seth's talk. He co-directs a center at the University of Sussex for the scientific study of consciousness; the Royal Institution talk discusses some of their findings. To focus the research, he bounds the scope of study to accounting for various measurable properties of consciousness, deliberately ducking the harder questions around the larger topic (what consciousness fundamentally is, and why we have it at all).
He narrows the scope further to what he thinks of as the first step in self-awareness, which he calls bodily consciousness: awareness of what we see, feel and so on. His research shows a Bayesian prediction/reasoning aspect to this. Think of our visual awareness. We get input from our eyes, the visual cortex processes it, and our brain then constructs a prediction of what we are seeing based on this and other input, and on past experiences (hence the Bayes component); the prediction is then compared against sensory inputs and adjusted. In his words, we create a fantasy which we adjust for the best match between what we sense and prior experience; this we call reality. He calls this a controlled hallucination (hence the Matrix image in this piece).
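To make that loop concrete, here is a minimal sketch of the predict-compare-adjust cycle. This is my own illustration, not Seth's actual model: a one-dimensional Gaussian belief (the prior, standing in for past experience) is repeatedly corrected against noisy sensory samples, and all the numbers are invented for the example.

```python
import numpy as np

def bayes_update(prior_mean, prior_var, observation, obs_var):
    """Combine a Gaussian prior belief with a noisy Gaussian observation.
    The gain k decides how far the prediction moves toward the evidence."""
    k = prior_var / (prior_var + obs_var)
    post_mean = prior_mean + k * (observation - prior_mean)  # correct the 'fantasy'
    post_var = (1.0 - k) * prior_var                         # belief sharpens
    return post_mean, post_var

# Hypothetical setup: a prior belief about some visual quantity
# (say, perceived brightness on a 0..1 scale) versus a noisy eye.
belief_mean, belief_var = 0.5, 1.0   # the prior 'hallucination' from experience
true_signal, sensor_var = 0.9, 0.04  # the world, and the sensor's noise

rng = np.random.default_rng(0)
for step in range(5):
    sense = true_signal + rng.normal(0.0, np.sqrt(sensor_var))  # sensory input
    belief_mean, belief_var = bayes_update(belief_mean, belief_var,
                                           sense, sensor_var)
    print(f"step {step}: belief = {belief_mean:.3f} (var {belief_var:.4f})")
```

After a few iterations the belief converges on the signal: the "hallucination" has been controlled by the evidence, which is the essence of the mechanism Seth describes.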
This reality is not only based on what we sense outside ourselves; it is also based on what we sense inside our bodies. I see a bear and I sense the effects of adrenaline on my system: my heart beats faster, my hair (such as it is) stands on end and I feel the urge to run (perhaps not wise). All of this goes into the Bayesian prediction, which we continue to refine through internal and external sensing. I should add, by the way, that this is not mere philosophizing; all of this is derived from detailed experiment-based studies in the U. Sussex consciousness group.
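Purely as a further illustration (the cue values and the threat scale below are invented, not drawn from the Sussex work), the same Gaussian machinery extends naturally to fusing external and internal evidence: each cue is weighted by its precision, so a sharp visual signal counts for more than a noisy interoceptive one.

```python
import numpy as np

def fuse(cues):
    """Precision-weighted fusion of independent Gaussian cues.
    Each cue is a (mean, variance) pair; lower variance means more weight."""
    means = np.array([m for m, _ in cues])
    precisions = np.array([1.0 / v for _, v in cues])
    post_var = 1.0 / precisions.sum()
    post_mean = post_var * (precisions * means).sum()
    return post_mean, post_var

# Hypothetical threat-level cues on a 0..1 scale (values made up for the bear
# example above): a prior, an external cue and an internal (bodily) cue.
prior         = (0.1, 0.50)  # the woods are usually safe
visual        = (0.8, 0.10)  # a clear view of a bear: strong, precise cue
interoceptive = (0.9, 0.30)  # racing heart: agrees, but a noisier signal

mean, var = fuse([prior, visual, interoceptive])
print(f"perceived threat: {mean:.2f} (var {var:.3f})")  # the refined prediction
```

The point of the sketch is only that interoceptive signals enter the same prediction machinery as external ones, each discounted by its own reliability.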
So just this basic level of consciousness, before we get to volition and a sense of identity through our experiences and social interaction, is a very complex construct. It depends on sensory input from external sources certainly, but it also depends on our biology, which has evolved for fight or flight, attraction and other factors. So one takeaway is that reconstructing the same kind of consciousness without the same underlying biology would be difficult.
Anil Seth asserts that it is therefore impossible to create consciousness without biology. That seems to me a bridge too far. What we are doing now in deep learning, in object recognition for example, already transcends traditional machine behavior in not being based on traditional algorithms. And if we can reduce aspects of consciousness to mechanized explanations like Bayesian prediction, there is no obvious reason why we should not be able to do the same in a machine. We would probably have the same challenges in explaining the behavior of the machine, but not in creating the machine. This would be a non-biological consciousness (the machine could however introspect on its own internals), but not necessarily a lesser consciousness.
There’s an important downside. Just as the brain can have pathological behaviors in this controlled hallucination, with serious consequences not just for the owner of the brain but also for others, the same would be true for machines of this type. But understanding and control are potentially more difficult in the machine case because the “reality” perceived by the machine may not align with our reality even in non-pathological behavior. We may struggle to find reference points for normal behavior, and struggle even more to understand and correct pathologies. Hence my view that trustable machine consciousness may take a while.
On that note, sleep well. What could possibly go wrong?