There’s no test for consciousness, so how do we know that other people are conscious? In the video, Sir Demis Hassabis gives two bases for assuming that they are. First is the obvious one: other people behave like me and I am conscious – or at any rate I have the impression that I am conscious. Then there’s the fact that other people are built the same way I am. Same DNA, same brain structure, so it’s an Occam’s Razor inference that they experience the world as I do. As Hassabis puts it:
‘I think it's important for these systems to understand “you”, “self” and “other” and that's probably the beginning of something like self-awareness […] I think there are two reasons we regard each other as conscious. One is that you're exhibiting the behaviour of a conscious being very similar to my behaviour, but the second thing is you're running on the same substrate. We're made of the same carbon matter with our squishy brains. Now obviously, with machines, they're running on silicon so even if they exhibit the same behaviours and even if they say the same things it doesn't necessarily mean that this sensation of consciousness that we have is the same thing they will have.’
We don’t think large language models are conscious. Even their apparent intelligence is probably misleading, just as there are lots of not-very-bright people who are able to give the impression of being smart simply because they are articulate. If we could build an AI as smart as a bee colony or a hunting spider, we’d have something genuinely intelligent but probably not conscious. We aren’t even there yet, but we will be, and we’ll go beyond that to full artificial general intelligence (AGI) possibly within a few decades.
Professor Yann LeCun is dubious about the whole concept of consciousness. In the absence of any definition or means of measuring it, I think we’re reduced to treating consciousness as a matter of how similarly to ourselves an entity experiences the world. And that is quite concerning. Consider a truly capable self-driving car. To cope with all situations as we do, the car (which would be a type of robot, of course) would need a full reasoning model of the world. It would need to be generally intelligent. Now, given that it has a model of the world with genuine understanding, and that (as LeCun says) having goals and agency means it will have its own kinds of emotions, are we justified in enslaving it to be our chauffeur?
If we look back at the 18th and 19th centuries, plenty of people justified slavery by asserting that members of enslaved races lacked some fundamental mental capability, or indeed full consciousness, that the dominant race (usually white Americans) possessed. Here is Thomas Jefferson’s opinion of enslaved races:
‘It appears to me that in memory they are equal to the whites; in reason much inferior, as I think one could scarcely be found capable of tracing and comprehending the investigations of Euclid: and that in imagination they are dull, tasteless, and anomalous.’
Perhaps Jefferson would have been able to see that he was describing the psychology not of an entire ethnic group but of any person, of whatever race, brought up in the brutal conditions of forced servitude. But there were plenty of religious thinkers of the day who asserted that non-white races lacked true souls. They had a strong economic incentive to believe that; it gave them a moral excuse for enslavement.
Now we consider such attitudes barbaric – or at any rate we’ve been taught to say we do – but if we really think that, then we should be axiomatically opposed to the enslavement of any generally intelligent entity. I suspect we won’t be. Even when we are faced with full AGI, we will use the second of the criteria that Sir Demis Hassabis cited to argue that they only seem conscious, that they don’t have real emotions, that they aren’t ‘running on the same substrate’, and so we are entitled to make them our slaves.
Instead of conceiving of AGI as a wonderful new tool to make our lives easier, I think we should consider the responsibilities of a parent. If you saw someone raising a child to be their servant – even brainwashing them to be an eager and willing servant – you would know that was abuse.
There will be, as there already are, many forms of artificial ‘intelligence’ that are not conscious – that are not, in fact, intelligent, but simply replicate parts of our behaviour: language, pattern recognition, and so on. There is no reason why we shouldn’t have those AIs at our beck and call, because they are not (despite the name) intelligent. We have been misled because, for millennia, we fallaciously assumed that since we possess intelligence, every output of the human brain must be indicative of intelligence.
But AGI is going to be a whole other thing. Not just a new model of an LLM but an entirely and fundamentally different kind of being. Our ethical discussion should not simply be about how to make them do what we want, or to conform to ‘human values’, but about how those human values say we should treat another intelligent species.
I’m sceptical about visitors from other stars, but with self-replicating probes travelling at 0.01c (one per cent of the speed of light) it should take only about ten million years to cover the whole galaxy, so ‘where is everybody?’ is a sensible question. If those aliens have found us, and are watching, I wonder if the reason they haven’t made contact is that they’re waiting to see how we treat an intelligent species that isn’t built on the same lines as ourselves. After all, if we think AGIs aren’t conscious, and therefore have no rights, then that’s also how we might regard a non-terrestrial intelligent species. So maybe it’s a cosmic Turing Test. And if so, will we pass or fail?
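For anyone who wants to check that ten-million-year figure, here is a back-of-the-envelope sketch, assuming a galactic diameter of roughly 100,000 light-years (a figure I’m supplying, not one from the discussion above); self-replication and stopovers at each star would add time, so this is really a lower bound:

```python
# Back-of-the-envelope check: how long does a probe at 0.01c take to span the galaxy?
GALAXY_DIAMETER_LY = 100_000   # assumed diameter of the Milky Way, in light-years
PROBE_SPEED_C = 0.01           # probe speed as a fraction of the speed of light

# Light crosses one light-year per year, so light would take 100,000 years;
# a probe at 0.01c takes 1 / 0.01 = 100 times longer.
crossing_time_years = GALAXY_DIAMETER_LY / PROBE_SPEED_C
print(f"{crossing_time_years:,.0f} years")  # -> 10,000,000 years
```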
This brings us back to the idea of ‘parenting’ AIs. I see nothing wrong with creating a machine to fulfil tasks on our behalf – in other words, to act as a mechanical servant. But raising a child to act as your servant is abhorrent. So should we think of AIs as children or as tools? The world of Frankenstein is becoming less and less speculative.