Can Artificial Intelligence Gain Natural ‘Consciousness’ by 2062? 

The Quint presents an excerpt from 2062: The World that AI Made by Toby Walsh, published by Speaking Tiger.

Toby Walsh
In this excerpt from 2062: The World that AI Made by Toby Walsh, the author explores the complex and fascinating question of whether machines can one day develop consciousness like human beings.
(Image: Aroop Mishra/ The Quint)


A very special part of us is our consciousness. From the moment we wake in the morning to the time we fall asleep at night, our consciousness is central to our experience of life. It goes to the heart of who we are. We are not just intelligent, we are aware that we are intelligent. We reflect on who we are. We worry. We remember the past. We plan the future.

At times, this constant awareness becomes so oppressive that we seek out experiences to distract our conscious mind. To live simply in the moment. We meditate. Play music. Run marathons. Drink alcohol. Base-jump off cliffs.

What is consciousness? How is it connected to intelligence? Will machines ever be conscious? By 2062, might AI even let us upload our conscious minds to the cloud? Will Homo digitalis have a consciousness that is part biological and part digital?

The Hard Problem

The Australian philosopher David Chalmers has called consciousness the ‘hard problem’. Some have even argued that the problem is too hard for our limited minds, or that it lies outside the domain of scientific inquiry altogether.

Chalmers does believe consciousness will eventually be understood scientifically, but in 1995 he argued that we are currently missing something important: ‘To account for conscious experience, we need an extra ingredient in the explanation. This makes for a challenge to those who are serious about the hard problem of consciousness: What is your extra ingredient, and why should that account for conscious experience?’

Some years later, he observed that:

Consciousness poses the most baffling problem in the science of the mind. There is nothing that we know more intimately than conscious experience, but there is nothing that is harder to explain. All sorts of mental phenomena have yielded to scientific investigation in recent years, but consciousness has stubbornly resisted. Many have tried to explain it, but the explanations always seem to fall short of the target.

We have no instruments for measuring consciousness. As far as we can tell, there is no single part of the brain responsible for consciousness.

Each of us is aware of being conscious – and, given our biological kinship, most of us are prepared to accept that other people are conscious too.

We even attribute limited levels of consciousness to certain animals. Dogs have a level of consciousness, as do cats. But not many of us think that ants have anything much in the way of consciousness. If you look at the machines we build today, then you can be pretty sure that they have absolutely no consciousness.

The AlphaGo Experience

AlphaGo won’t wake up one morning and think, ‘You know what? You humans are really no good at Go. I’m going to make myself some money playing online poker instead.’ In fact, AlphaGo doesn’t even know it is playing Go.

It will only ever do one thing: maximise its estimate of the probability that it will win the current game. And it certainly won’t wake up and think, ‘Actually, I’m tired of playing games. I’m going to take over the planet.’ AlphaGo has no goals other than maximising the probability of winning. It has no desires. It is and only ever will be a Go-playing program. It won’t be sad when it loses, or happy when it wins.
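To make the point concrete, here is a minimal sketch (in Python, with entirely hypothetical names; this is not DeepMind’s code or AlphaGo’s actual architecture) of the kind of single-objective agent being described: at every turn it picks the move whose estimated win probability is highest, and there is nothing else in the loop, no other goal it could switch to.

```python
# A toy single-objective game-playing agent, sketched to illustrate the
# point above. The names (legal_moves, estimate_win_probability) are
# hypothetical stand-ins, not AlphaGo's real interfaces.

def choose_move(position, legal_moves, estimate_win_probability):
    """Pick the move that maximises the estimated chance of winning.

    The agent has exactly one objective. It holds no model of itself,
    no awareness that it is 'playing Go', and no alternative desires
    it could act on instead.
    """
    return max(
        legal_moves,
        key=lambda move: estimate_win_probability(position, move),
    )
```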

But we can’t be sure that this situation won’t change. Perhaps in the future we will build machines that will be conscious. To behave ethically, it may be very important that computers can reflect on their decisions.

To act in a changing and uncertain world, it may be necessary to build machines with very open goals, and an ability to reflect on how to achieve and even adapt those goals. These may be steps on the road to a conscious machine.

Consciousness is such an important part of our existence that we must wonder if it has given us a strong evolutionary advantage. Our complex societies work in part because we are conscious of how others might be thinking. And if consciousness has been such a strong evolutionary advantage in humans, it might be useful to give machines that advantage.


Conscious Machines & Us

Consciousness in machines might come about in one of three ways: it might be programmed, it might emerge out of complexity, or it might be learned.

The first route seems difficult. How do we program something that we understand so poorly? It might be enough to wrap an executive layer on top of the machine that monitors its actions and its reasoning. Or we might have to wait until we understand consciousness better before we can begin to program it.
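One way to picture the ‘executive layer’ idea is as a wrapper that records an underlying agent’s decisions so they can later be inspected. The sketch below is purely illustrative and assumes a generic agent object with an act method; whether anything like this self-monitoring would amount to consciousness is, of course, the open question.

```python
# A purely illustrative 'executive layer': it monitors the actions of an
# underlying agent and keeps a trail the system can later inspect.
# The agent interface is an assumption made for this sketch.

class ExecutiveLayer:
    def __init__(self, agent):
        self.agent = agent
        self.history = []  # record of (observation, action) pairs

    def act(self, observation):
        # Delegate the decision, but log it for later reflection.
        action = self.agent.act(observation)
        self.history.append((observation, action))
        return action

    def reflect(self):
        """Return recent behaviour, the kind of self-monitoring
        the 'executive layer' proposal gestures at."""
        return self.history[-10:]
```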

Alternatively, consciousness might not need to be explicitly programmed. It might simply be an emergent phenomenon. We have plenty of examples of emergent phenomena in complex systems. Life, for example, emerged out of our complex universe. Consciousness might similarly emerge out of a sufficiently complex machine.

It is certainly the case in nature that consciousness is correlated with larger and more complex brains. We know so little about consciousness that we cannot discount the possibility that it might emerge out of a sufficiently complex AI system.

The third option – that machines learn to be conscious – is also not unreasonable. Much of our consciousness appears to be learned. When we are born, we have limited consciousness. We appear to take delight in discovering our toes. It takes a number of months, if not a year or more, before we realise that our image in a mirror is, in fact, us. If we learn our sense of self, could not a machine do the same?

Consciousness might, however, be something you cannot simulate in a machine. It might be a property of the right sort of matter. An analogy is weather. You can simulate a storm in a computer, but it will never be wet inside the computer. In a similar way, consciousness might only come about with the right arrangement of matter. And silicon might not be the right sort of matter to have consciousness.

Zombie Intelligence

When I talk to people about artificial intelligence, they often focus on the second word, ‘intelligence’. After all, intelligence is what makes us special, and intelligence is what AI is trying to build. But I also remind people to think about the word ‘artificial’. In 2062 we might end up having built a very different – a very artificial – kind of intelligence distinct from the natural intelligence that we have. It might not, for instance, be a conscious intelligence.

Flight is a good analogy. As humans, we have been very successful at building artificial flight. We have built aeroplanes that can fly faster than the speed of sound, cross oceans in hours and carry tonnes of cargo.

If we had tried to recreate natural flight, I imagine we’d still be at the end of the runway flapping our wings. We came at the problem of flight from a completely different angle to nature – with a fixed wing and a powerful engine.

Both natural and artificial flight depend on the same theory of aerodynamics, but they are different solutions to the challenge of flying. And nature does not necessarily find either the easiest or the best solution.

Similarly, compared to natural intelligence, artificial intelligence might be a very different solution to the problem. One way in which it might be different is that it might be an unconscious form of intelligence. David Chalmers has called this ‘zombie intelligence’. We might build AIs that are intelligent, even super-intelligent, but lacking in any form of consciousness. They would definitely appear to us as artificial intelligence.

Zombie intelligence would demonstrate that intelligence and consciousness were somewhat separate phenomena. We would have intelligence without consciousness.

But unless you could develop consciousness without intelligence, which seems even more unlikely, zombie intelligence would not show that the two phenomena were completely separable. Consciousness would still imply intelligence, just not the other way around. Zombie intelligence would be a lucky ethical break for humankind.

If AI were only a zombie intelligence, then we might not have to worry about how we treat machines. We could have machines do the most tedious and repetitive tasks. We could switch them off without worrying that they would suffer.

On the other hand, if AI is not a type of zombie intelligence, we may have to make some challenging ethical decisions. If machines become conscious, do they have rights? Campaigns are already underway to grant legal personhood to great apes, such as chimpanzees and gorillas. Would intelligent machines gain similar rights if they were conscious?

Should we still be able to switch them off?

(This excerpt has been taken from 2062: The World that AI Made by Toby Walsh. The book has been published by Speaking Tiger. The sub-headers have been added by The Quint.)


