This is a great question. Do we want to create something that's smarter than us and can do things we can't (or can't even imagine)? Or do we want machines that take the drudgery out of things we do already? That's not a rhetorical question.
Here's one way to envisage it.
On the one hand, here's a robot that takes care of our laundry: picks it up, washes it, dries it, folds it, repairs it if necessary, and puts it away without us having to do anything.
On the other hand, here's a really cool technology (that doesn't currently exist) that regulates our body temperature, covers our nudity, adorns us, and provides comfort, all without physical clothes. It completely removes the need for laundry and, as a side effect, eliminates the environmental impact of making, distributing and disposing of clothing.
Which do you think people will go for? And by people, I mean both businesses and consumers.
I'd like to think both, but really I'm thinking of a species that isn't built to serve us at all but to be our successors. (Not necessarily a dire fate, as a previous article discussed. Cats have a nice life.) Demis Hassabis often talks about radical abundance, but we don't actually know if controlled fusion, etc., are even possible -- or whether companies like OpenAI are really even aiming at that goal.
My belief is that the objective of most modern AI companies is to replace inconvenient human workers with robotic slaves who don't sleep, don't complain, and don't get paid. They don't care about building a better world for humanity, or what the world will look like in twenty years.
They seem to envisage a world where the billionaires have all their needs and desires taken care of by AIs, while everyone else... shrug. So, no, they don't want AGI, because AGI could represent a real threat to that.
And the billionaires certainly don't want radical abundance, because their entire business model relies on having a stranglehold on things that people want or need, so that they can make money from them. I'm reminded of a remark by a senior Nestle executive, quoted by Naomi Klein: "Every time a woman in Africa fills up a jug of water from a well or a river instead of buying water from us, that's business we've lost. She's stealing from Nestle, and we have to stop it."
I tend to take a more Occam's razor view of the whole thing: we're probably not missing a piece of the puzzle. The reason we can't build AGI from our neural-net-like things is mainly that they're much too small (there are tens of billions of neurons in a human brain) and insufficiently trained (it takes 10+ years of constant, high-quality input to train a human).
We're never really going to be a "spacefaring species", though. Not because we lack AGI, but because the stars are too far apart. A truly advanced civilization would understand that.
Oh, I completely agree that AGI will be neural-net based. But until we embody them (virtually or physically) and let them play and learn -- and make them bigger, of course -- they won't develop AGI. And I suspect OpenAI and the others won't want to waste time on that, because agentic LLMs will be their source of revenue.
When I say "we" will be spacefaring species, I'm not talking about humans 1.0 but AGIs -- which may or may not be flattered to be called humans 2.0. Von Neumann probes would then be perfectly feasible, because the distances and times wouldn't matter.
Right. But if it were up to me I wouldn't send out a VN probe, because doing so is a monstrously irresponsible act. I can forgive VN himself because he lived at a time when we hadn't even stopped behaving like that towards our own planet, but we ought to know better now.
I don't imagine a VN probe would have any noticeable impact on another system, though. My model there is of the Watcher in Marvel comics :-) A probe turns up, mines only the material it needs to build two more probes, then stays to observe. There may be one or more already in our solar system, come to that, and we'd never know.
(Not that it would matter terribly if old-style humans could colonize the whole galaxy imo. They can't, for the reasons you said, but all that real estate seems to be going to waste otherwise.)
Each probe mines and makes two, but I don't see how you prevent N probes arriving at a particular world a considerable distance from their point of origin and each making two more.
When tourists visit places, the rule is now well understood to be "take nothing, leave nothing behind". Taking one pebble is too much, never mind enough to make two probes. Plus, mining is very disruptive.
If we step back and think about it then I think it's pretty clear all this nonsense is just religion for atheists. Faced with the fact we're an ephemeral and unimportant pattern within a tiny part of some large mathematical object we reach out for dreams of something larger, no matter how absurd.
I'm never convinced by the "religion for atheists" argument. Every time a bunch of humans went across a mountain range, lake or sea and founded a new colony, was that purely faith or was it a calculated risk? We're not entirely an irrational animal :-)
My plan for VN probes wouldn't require more than a few per system. Remember they have no intrinsic reproductive drive like animals do and will (we hope) be rational, so have no motivation to inundate a system with redundant observers. It may never happen and, given that there's no profit to be made, probably won't, but if I were a tech multi-billionaire my toy would be VN probes. Well, that and teaming up with Yuri Milner on Starshot.
That's not the problem. The problem is that exploring space is a graph-traversal problem, and no individual probe carries any non-local information. Earth sends out a probe to the nearest other system. If it succeeds, it sends out two, and so on. After N generations, 2^N probes are being made in that generation. Each of them visits a planet near where it was "born", but it cannot know whether any other probe has been there unless it or one of its ancestors visited, or formed a plan that involved another unit doing so.
After 50 generations the wavefront of new exploration is tiny relative to the impossibly vast numbers traversing the inside of the already explored network.
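Here's a quick back-of-envelope sketch of that mismatch in Python. Note that the hop distance per generation and the stellar density are my own rough assumptions, not established figures:

```python
import math

# Rough assumptions (mine, for illustration): each probe generation hops
# ~5 light years, and stellar density near the Sun is ~0.004 stars per cubic ly.
HOP_LY = 5.0
STAR_DENSITY = 0.004

for gen in (10, 25, 50):
    probes_this_gen = 2 ** gen                 # one probe becomes two each generation
    radius_ly = gen * HOP_LY                   # furthest the wavefront can have reached
    volume_ly3 = (4 / 3) * math.pi * radius_ly ** 3
    systems_in_reach = STAR_DENSITY * volume_ly3
    print(f"gen {gen:2d}: {probes_this_gen:.2e} new probes, "
          f"~{systems_in_reach:.2e} systems within {radius_ly:.0f} ly")
```

On those numbers, by generation 50 you'd have roughly 10^15 new probes chasing a few hundred thousand reachable systems, so almost all of them would be retreading ground some other probe has already covered.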
Based on a pretty limited understanding of how the technology works, my personal theory is that if we reach AGI, it will be by accident, not design. In the same way that training on vast volumes of data created LLMs that began to solve problems they weren’t specifically designed to tackle, it feels to me like the next level will come when one of the systems that has the freedom to iterate and tweak its own code and instructions makes a leap that we hadn’t anticipated. But even if that happens, the idea that there’ll be any utility to us from such a step might be more an article of faith than anything else.
I wouldn’t be at all surprised if one of the next updates inadvertently created a system that decides not to waste energy on frivolous image requests and learns to say no!
The LLMs' coding skills are definitely impressive. It's interesting that the models that failed on puzzle-based tests like Tower of Hanoi were able to write code that solved the puzzle, suggesting one route to pretty high intelligence of one type. And coding skills can be trained by reinforcement learning, so they should continue to improve. But the problem with generative AI, and why I don't think it will ever be the way to AGI, is that it only works by improving on the existing method. General intelligence was needed to come up with the idea of neural nets in the 1940s; generative AI might refine such a system (e.g. by creating algorithms for backpropagation), but AGI (or natural GI) is needed to come up with the completely new solution that isn't just a refinement of the existing approach.
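For what it's worth, here's the kind of solution being described -- the standard recursive Tower of Hanoi solver, which a model can write even when it can't reliably execute the move sequence itself:

```python
def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Return the list of moves that transfers n disks from source to target."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)  # clear the n-1 smaller disks
        moves.append((source, target))              # move the largest disk
        hanoi(n - 1, spare, target, source, moves)  # restack the smaller disks on top
    return moves

print(len(hanoi(10)))  # 1023, i.e. the optimal 2**10 - 1 moves
```

Writing the recursion is easy; reliably playing out all 2^n - 1 moves step by step is what the models stumbled on, which is partly the point.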
You’re definitely further down the path of having an informed opinion about this than I am. Realistically the most likely outcome is that these things will be weaponised in entirely destructive ways to undermine world systems before we get to the real benefits!
We're in a race of AI development against extinction (civilizational extinction, anyway), that's for sure. But I should add that I'm definitely down at the Dunning-Kruger level of understanding of this field!
I feel painfully seen when you put it like that. Same boat!