I have three examples from the animal kingdom that illustrate my ideas about intelligence. They’ll explain why I don’t set much store by the Turing Test.
First, the hunting spider. If you only go by its ability to navigate in a complex 3D environment (sneaking up on prey by taking a roundabout route across the backs of leaves, for example), you’d have to rate it more intelligent than a rat. But the spider doesn’t have anything like the general intelligence of a rat. What it does have is a sort of hardware enhancement, like a 3D accelerator card, that can do complex route-finding in 3D. That’s a simple topological world model.
Next there’s the behaviour I observed once in a squirrel. Having a plastic mesh bag full of nuts that were past their sell-by date, I hung it on the washing line. (I may have had some idea that this was the way to ensure the birds would get them.) The squirrel saw the bag, climbed the pole holding up the line, worked its way along, climbed onto the bag, and started gnawing at the plastic mesh. After a few seconds, unable to hang on, it fell to the ground. Landing on its feet on concrete paving looked painful, but after a few seconds the squirrel went back and did the same thing again. And again. And again. Despite repeated spills, which must have been increasingly uncomfortable, the squirrel persisted for about ten minutes, at the end of which, as it fell, the bag split open and it watched the nuts shower down all around – and then spent the next half hour hiding them.
In this case the squirrel obviously had a world model – that is, a model of the world it could use to predict the change in the state of the world (the outcome) that an action would produce. But more than that, it needed a conceptual model, by which I mean the ability to map the world model’s objects and relationships into abstract form in order to solve a problem. It knew the outcome it wanted and was able to reason back from that to find an action that would bring the desired outcome about.
With only a world model (like the spider’s mental map) you can predict forward, but that on its own isn’t enough to work out how to achieve a given goal. For that you need a conceptual model* on top of the world model.
However, we can’t tell from this behaviour that the squirrel is conscious. It could still be what philosophers call a zombie, searching a set of proposed actions for the one that achieves an inbuilt goal of making the nuts accessible. I might personally speculate, having watched all this effort, that the squirrel felt a dopamine rush of triumph when it beheld the fruit (or nuts) of its arduous labour, but that’s just speculation. If you built an embodied AI that behaved exactly as the squirrel did you’d certainly deserve a Nobel Prize, and nobody could deny its intelligence, but plenty of people would dispute whether it was conscious.
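To make the distinction concrete, here’s a minimal toy sketch in Python – my own illustration of the zombie version, not a claim about how a squirrel’s brain actually does it. The world model is just a function that predicts forward from a state and an action, and the "conceptual" part is a blind search over candidate action sequences for one whose predicted outcome satisfies the inbuilt goal; all the names and numbers are invented for the example.

```python
# Toy sketch: forward-predicting world model + goal-directed search ("zombie" planning).
from itertools import product

def predict(state, action):
    """World model: predict the next state that an action would produce."""
    state = dict(state)
    if action == "gnaw" and state["bag_intact"]:
        state["bag_damage"] += 1
        if state["bag_damage"] >= 3:          # enough gnawing splits the bag
            state["bag_intact"] = False
            state["nuts_accessible"] = True
    elif action == "wait":
        pass                                   # waiting changes nothing
    return state

def goal(state):
    # The inbuilt objective: nuts end up accessible.
    return state["nuts_accessible"]

def plan(start, actions, max_depth=5):
    """Search action sequences for one whose *predicted* outcome meets the goal."""
    for depth in range(1, max_depth + 1):
        for seq in product(actions, repeat=depth):
            state = start
            for a in seq:
                state = predict(state, a)
            if goal(state):
                return list(seq)
    return None

start = {"bag_intact": True, "bag_damage": 0, "nuts_accessible": False}
print(plan(start, ["wait", "gnaw"]))   # finds ['gnaw', 'gnaw', 'gnaw']
```

Nothing in that loop needs to feel anything; it just predicts forward and keeps whichever plan reaches the goal.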
My third example. When I was a child, our cat had a blanket that it liked to sleep on in a basket on the patio at the back of the house. But in the afternoons it preferred to laze about under the trees at the front of the house, which faced west so it got the afternoon sun. The trouble was that the earth under the trees was hard-packed and stony, and not nearly as comfortable as a blanket. One day the cat got the blanket in its mouth, dragged it along the side of the house, with some difficulty got it over the wooden gate, and positioned it in the patch of sunlight under the trees.
Now, I would argue that this indicates a world model and a reasoning (ie problem-solving) model, but also that it’s a strong indicator of consciousness. The cat not only included itself as the agent in the world model (a philosophical zombie might still do that); crucially, it included its own future state as part of the goal. It didn’t just have to bring about a simple inbuilt objective (change a bunch of inaccessible nuts into accessible ones); it had to decide that it wanted to be both warm and comfortable at the same time, and then figure out a way to achieve that.
In saying I regard that as a demonstration of consciousness, I don’t of course mean that a cat’s consciousness is necessarily experienced like our own. Nagel’s point about bats applies also to felines. But it is as good a test of consciousness as I can imagine.
* It may seem idiosyncratic to separate the conceptual model from the world model, but I do so to differentiate a simple map of what is in the world from an abstraction of that map that can be manipulated in order to reason and make predictions. The squirrel had almost certainly never seen an orange plastic net enclosing anything, but it could abstract that as a container of some kind (the Platonic form, or root class if you prefer) and it could reason that damaging a container holding a bunch of objects together would weaken it, so that eventually the weight of the objects would burst it and they would spill out. It could then translate those symbols back into actual objects in its world map. The conceptual model would also include abstract information not directly observable in the surroundings; for example, in humans, knowing what day of the week it is, or that a certain river marks a national boundary, or that a given person is a friend or a foe.
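If it helps, the abstract-then-translate-back step can be sketched in a few lines of toy code (again, an illustration of the idea only – the class names and the single rule are made up for the example): a concrete, never-before-seen object is mapped to an abstract class, a rule stated over classes is applied, and the conclusion is translated back into the concrete objects in the world map.

```python
# Toy sketch of the conceptual model: abstract, reason over classes, translate back.
ABSTRACTIONS = {
    "orange plastic mesh bag": "Container",    # never seen before, but it encloses things
    "walnut": "HeavyObject",
}

def rule_container_bursts(container_cls, contents_cls):
    # A rule about abstract classes, not about any particular object.
    return container_cls == "Container" and contents_cls == "HeavyObject"

def reason(concrete_container, concrete_contents):
    container_cls = ABSTRACTIONS[concrete_container]
    contents_cls = ABSTRACTIONS[concrete_contents]
    if rule_container_bursts(container_cls, contents_cls):
        # Translate the abstract conclusion back into the concrete world map.
        return f"damaging the {concrete_container} will eventually spill the {concrete_contents}s"
    return "no prediction"

print(reason("orange plastic mesh bag", "walnut"))
```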
These are good case studies, but I think AI is a bit further along than you imply. There are already AIs that form models of physical spaces and the properties of the things in them (watch some of the Boston Dynamics Atlas videos, if you haven't already).
Similarly, I don't think a squirrel's possible "dopamine rush" is anything unknown to robots. It's just a reward feedback system, which from a certain point of view is fundamental to how neural nets work. Likewise, whilst I've yet to see an AI do anything at the level of the cat and the blanket, if a robot did that tomorrow I wouldn't be the slightest bit surprised. We're very close to that now without any need for consciousness.
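To be concrete about what I mean by a reward feedback system, here's a toy sketch (just an illustration of the general idea, not any specific robot's internals, and the numbers are arbitrary): a scalar reward nudges the agent's preference for whatever action it just took, so rewarded behaviour gets repeated – the robotic equivalent of that "dopamine rush".

```python
# Toy reward-feedback loop: a scalar reward shifts action preferences.
import random

preferences = {"gnaw": 0.0, "wait": 0.0}   # learned action values
LEARNING_RATE = 0.5

def choose_action():
    # Mostly pick the currently preferred action, occasionally explore.
    if random.random() < 0.1:
        return random.choice(list(preferences))
    return max(preferences, key=preferences.get)

def environment(action):
    # Gnawing is eventually rewarded (the nuts spill); waiting is not.
    return 1.0 if action == "gnaw" else 0.0

for _ in range(50):
    action = choose_action()
    reward = environment(action)
    # The feedback step: move the value estimate toward the reward received.
    preferences[action] += LEARNING_RATE * (reward - preferences[action])

print(preferences)   # "gnaw" ends up strongly preferred
```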