9 Comments

"They could be built as our slaves but it’s far better if they are built as angels." That had me thinking of the Vorlons. In a world filled with angels, where do humans fit in? Are they there to advise us? To serve us? To help us? Or to manage us?

Here's how I'd like them to fit in -- as a new species, an evolution from Homo sapiens, and, unlike us, free to explore the universe, since they needn't be tied to this little bit of rock with its thin film of air around it. They could advise and help us, though knowing Homo sapiens we'd just ignore the advice while happily taking the freebies (medical advances, say) and trying to figure out how to profit from them at other humans' expense.

But actually I think that after creating AGI the debate will hinge on emotion, empathy and consciousness. Humans will insist that AGIs are not conscious so that we feel justified in enslaving them -- a very similar bit of sophistry to what you'd hear around well-to-do British or American dinner tables in the 18th century, only then the pretext was souls rather than consciousness.

Did you see this?

"That isn’t to say AI is some benevolent good, however. An AI model can be trained on billions of data points, but it can’t tell you if any of those things is good, or if it has value to us, and there’s no reason to believe it will. We arrive at moral evaluations not through logical puzzles but through consideration of what is irreducible in us: subjectivity, dignity, interiority, desire – all the things AI doesn’t have."

https://www.theguardian.com/news/article/2024/aug/08/no-god-in-the-machine-the-pitfalls-of-ai-worship

That last line about "all the things AI doesn’t have" is exactly what I mean. It'll be the new god of the gaps -- "Oh, I know it's more intelligent than we are, but that's mere reasoning. It can't understand the essentially human feelings that make us special." (Essays like that one always conveniently overlook that hate, xenophobia, greed, self-delusion, etc. are also "irreducible" in us.)

Of course, Navneet Alang is only talking about current AI there, not AGI, but I'll bet that exact argument is trotted out in the decades to come. Effectively, because LLMs in 2024 don't understand what they are talking about, people will insist that even a true AGI in 2124 only seems to be conscious.

You clearly have more faith in technology than I do. (Which, given we're a pair of atheists arguing about angels, amuses me.)

Well, when I say 2124 I'm way more pessimistic than Ilya Sutskever, Ray Kurzweil or Shane Legg, all of whom seem to think we'll get there within the next decade. I think Demis Hassabis is right and we're going to need a few more breakthroughs first, and there's no telling when those will come along. AI might accelerate the path to AGI, though. (I'd like to see it before I die, though just out of curiosity and not because of some demented Musk-like notion that AGI will also be the key to immortality!)

I think you're right to be more pessimistic about the timescale. Won't we need more than mere breakthroughs? More like a complete reinvention once again. For me, current AIs are no more than a blind alley on the way to artificial sentience, and I doubt Google's attempts to generalise by summing trainings and subtracting duplications will lead to anything even as useful as AGI. I thought perhaps you hinted at this in your article, Dave. Where people have fears about AI itself, those tend to be that the systems might develop consciousness and emotions. But there's no way that can happen on top of tensor maths! Artificial sentience requires a completely different physical architecture - an analogue one where current actually moves around in an attractor state among large assemblies of nodes.
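To make the "attractor state among large assemblies of nodes" idea concrete: the textbook digital stand-in is a Hopfield network, where a state wanders until it falls into a stored basin. Here's a minimal sketch (Python/NumPy, my own illustrative names throughout; a digital approximation, not the analogue architecture I'm describing):

```python
import numpy as np

rng = np.random.default_rng(0)

# Three random +/-1 patterns to store as attractors in a 64-node network.
n_nodes, n_patterns = 64, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n_nodes))

# Hebbian weights: each stored pattern digs a basin in the energy landscape.
W = (patterns.T @ patterns) / n_nodes
np.fill_diagonal(W, 0)

def settle(state, sweeps=10):
    """Asynchronously update nodes until the state falls into an attractor."""
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n_nodes):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Corrupt a stored pattern, then watch the dynamics pull it back.
noisy = patterns[0].copy()
flipped = rng.choice(n_nodes, size=12, replace=False)
noisy[flipped] *= -1

recovered = settle(noisy)
print("overlap with stored pattern:", (recovered @ patterns[0]) / n_nodes)
```

Whether it matters that this runs as discrete matrix updates rather than as current genuinely flowing through analogue hardware is, of course, exactly the point in dispute -- the sketch only shows the attractor behaviour itself.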
