These are good case studies, but I think AI is a bit further along than you imply. There are already AIs that form models of physical spaces and of the properties of the things in them (watch some of the Boston Dynamics Atlas videos, if you haven't already).
Similarly, I don't think a squirrel's possible "dopamine rush" is anything unknown to robots. It's just a reward feedback system, which from a certain point of view is fundamental to how neural nets work. Likewise, whilst I've yet to see an AI do anything at the level of the cat and the blanket, if a robot did that tomorrow I wouldn't be the slightest bit surprised. We're very close to that now without any need for consciousness.
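For what it's worth, here's the kind of thing I mean by a reward feedback loop, reduced to a toy (entirely my own sketch, with made-up payoff numbers, and nothing to do with how Atlas or any real robot is actually trained):

```python
import random

# Toy sketch of a reward feedback loop: the agent keeps a value estimate per
# action, tries things, and nudges each estimate toward the rewards it
# actually receives. The "dopamine rush" is, in this caricature, just the update.

true_payoffs = [0.2, 0.5, 0.8]      # hidden reward probabilities (invented for the example)
estimates = [0.0, 0.0, 0.0]         # the agent's learned values
counts = [0, 0, 0]
epsilon = 0.1                       # how often to explore at random

for step in range(10_000):
    if random.random() < epsilon:
        action = random.randrange(3)                # explore
    else:
        action = estimates.index(max(estimates))    # exploit the current best guess
    reward = 1.0 if random.random() < true_payoffs[action] else 0.0
    counts[action] += 1
    # incremental average: the estimate moves a little toward each new reward
    estimates[action] += (reward - estimates[action]) / counts[action]

print([round(e, 2) for e in estimates])  # ends up near the true payoffs
```

Obviously that's a bandit, not a neural net, but the same feedback-from-reward principle is what the fancier systems are built on.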
I'm certainly not one to underestimate the fast progress of AI. When people say, "AI is impressive but it can't do such-&-such," I always say, "Not this month." I don't think anyone is claiming we're about to get AGI; I'd bet LeCun, Hassabis, etc. all say it's five to ten years off while privately hoping we might get there by next year. But then, even AGI is a hard term to pin down (LeCun, for one, doesn't like it), though at least we can set some tests for what we expect it to look like.
Consciousness, on the other hand, for all that Ethan Siegel says we should regard it as a measurable phenomenon, still strikes me as being about as objectively real as, say, beauty or goodness: not objectively real at all, now or ever, but simply a subjectively experienced artefact of the system. Hence my lack of regard for the Chinese Room argument: if the cat (or the room) behaves in a way that exhibits traits I associate with consciousness, I'll give it the benefit of the doubt. That said, intelligent behaviour doesn't have to mean consciousness (the group learning of a fish shoal, for example).
And I completely agree that a dopamine rush is just the same as any other reward feedback system. I should have clarified that what impressed me about the squirrel was its pleasure at having formulated its own original solution and getting a result that enabled it to update its conceptual model. AGI is perhaps just a question of the scope of the conceptual model, not anything qualitatively different from an RL system that teaches itself to play Go, Quake, etc.
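To make the "updating its conceptual model" idea concrete, here's another crude sketch of my own (purely illustrative; the situation labels and outcomes are invented, and this is not how AlphaGo-style systems actually represent anything): an agent keeps a small predictive model of what each action does and revises it whenever the world surprises it. On this picture, "scope" is just how much of the world the model covers.

```python
from collections import defaultdict

# The agent counts which outcome follows each (situation, action) pair and
# predicts the most common one. When reality disagrees with the prediction,
# the counts shift, i.e. the conceptual model gets revised.

model = defaultdict(lambda: defaultdict(int))   # (situation, action) -> outcome counts

def predict(situation, action):
    outcomes = model[(situation, action)]
    if not outcomes:
        return None                              # no model of this yet
    return max(outcomes, key=outcomes.get)       # best guess so far

def observe(situation, action, outcome):
    model[(situation, action)][outcome] += 1     # revise the model with what actually happened

# the squirrel tries something and is pleasantly surprised by the result
print(predict("feeder", "lift lid"))             # None: no idea yet
observe("feeder", "lift lid", "nuts fall out")
observe("feeder", "lift lid", "nuts fall out")
print(predict("feeder", "lift lid"))             # now predicts "nuts fall out"
```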
Yes, re: scope - that sounds like a good definition to me.
I suspect consciousness will turn out to be like centrifugal force: the fact that it's allegedly observable will not prevent human thought from being completely modelled without any need for it. It's dualism for atheists, in my view.
I just came across Murray Shanahan's recent paper on what consciousness might actually look like in a disembodied AI of the sort we use today. That's not necessarily anything to do with the kind of AI we'll have in a few years, but it's still worth a read: https://arxiv.org/pdf/2503.16348
Interesting, thanks! I'm not sure I'm sold on this "continuous vs discrete" consciousness business. OK, it's worth considering, but it seems to me just an assertion without much basis. Is my consciousness really continuous? I'm not convinced it is. Certainly it doesn't consist only of snapshots several seconds apart, but if those gaps were nanoseconds I surely wouldn't know?
Shanahan's argument is fun but I think it'll date fast, as it applies only to instantiations of the core AI that exist for one conversation, which presumably will not apply to AIs that learn at runtime.
Regarding my own consciousness, I imagine it as like asking the manager of a team how things are going on a project. "Oh, fine, no worries at all," he or she may say, but that isn't necessarily what any given team member would tell you. (I just realized this is basically Searle's Chinese Room, but whereas Searle thinks it shows that machines can't be conscious, I maintain that it shows our concept of consciousness as a whole-brain phenomenon is probably illusory.)