I wouldn't start from here
Are AI companies on the right track for AGI? And is it even their goal?
Sam Altman is still talking about AGI being imminent, even though AGI seems to have receded a bit into the background of OpenAI’s vision statement. Nowadays they just emphasize that AGI should benefit humanity, so building it is still in there, but only by implication.
I’d like to believe we’re on the cusp of AGI, because merely human-level thinking is not going to fix the problems facing us now. I don’t think that transformer-based generative AI is going to get us there. Learning patterns in order to fill in the next part of a sequence can give impressive results, especially when you’re using generative AI to write code; I’ve talked about what a useful tool I’ve found it to be. But I don’t see how we can have true AI without the system having a world model. The illusion of a world model is embedded in any large library of words or images, which is why LLMs seem grounded, but that isn’t the same thing as being able to manipulate concepts to come up with original solutions.
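To make “fill in the next part of a sequence” concrete, here’s a toy sketch of the idea, with a made-up corpus and nothing to do with any real production system: a bigram model that predicts the next word purely from counts of what followed what in its training text. It produces plausible continuations with no model of the world behind them. Transformers are incomparably more capable, but the training objective, predicting the next element of the sequence, is the same in spirit.

```python
# Toy illustration of "learn patterns, predict the next item in a sequence".
# A bigram model: for each word, count what tended to follow it, then sample
# continuations from those counts. There is no world model anywhere, just
# statistics over the training text. (The corpus is invented for illustration.)
import random
from collections import defaultdict, Counter

corpus = (
    "the squirrel buried the nut in the garden "
    "the squirrel dug up the nut the dog watched the squirrel"
).split()

# Count which word follows which.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start, length=8):
    """Extend a sequence by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choices(list(options), weights=list(options.values()))[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the squirrel buried the nut the dog watched the squirrel"
```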
Part of the problem may be AI researchers’ assumptions about the nature of general intelligence. Even Sir Demis Hassabis, one of the smartest people on the planet, says that the human mind is the only example we have of general intelligence. That smacks of the same kind of error inherent in the Turing Test: confusing intelligence with being human-like. Lots of animals have general intelligence; they just have smaller conceptual models. The squirrel in your back garden might be the Einstein of small nucivorous mammals, coming up with an entirely new solution to a problem that’s keeping it from food. In many cases, animals may see solutions that a human didn’t spot, precisely because they are manipulating concepts in a much more focused space. If I could build a machine that could genuinely reason like a squirrel or a dog, I’d count that as a major AI breakthrough.
Such a thing would not impress the board at Meta or OpenAI, though, because general intelligence is not their real goal. I suspect they continue to talk about it because it matters to the actual researchers: an offer of a hundred-million-dollar signing bonus won’t entice you away from somewhere that lets you tackle one of the most fundamental scientific problems. In the same way, Elon Musk talks about turning humanity into a spacefaring species because he knows that will appeal to the talented young engineers he wants to recruit. I doubt Musk himself believes we’ll ever have major colonies on Mars.
I’m sure some of the researchers in the field do care about AGI. Sir Demis, for one, who I know is a genuine idealist and visionary. Probably Dario Amodei too. And Yann LeCun, though he dislikes the term AGI, and in any case his emphasis on building a true reasoning model seems to have led to Mark Zuckerberg sidelining him in favour of the simpler money-making approach taken by OpenAI.
When we look back on this time, I think we’ll see it as a dead end. Or perhaps that’s too harsh a way of phrasing it; sildenafil, originally intended to treat hypertension, ended up having quite different benefits. LLMs are proving immensely useful. Deep learning systems like AlphaFold will turbo-charge innovation in medicine, materials science, weather forecasting, and so on. So the AI we’ve got is not a dead end, but it is on the wrong branch if we ever want to reach AGI.
And there’s the big question. Do we want AGI? Or, more to the point, do company shareholders want AGI? Compared to today’s LLMs, a general intelligence system would look like a step back. It wouldn’t be immediately monetizable. When it invented a rocket-jump to solve a maze, most people would shrug and say, “But ChatGPT can plan and book my vacation, write code for my website, and balance the household budget.” Maybe that’s the kind of AI it turns out people will settle for.
I hope AGI research won’t fizzle out, because I would like to see a spacefaring species, and it won’t be naked apes in tin cans doing that. AGI could penetrate mysteries that will be forever beyond the 1.3 kilos of meat in our skulls. We should spawn beings better than ourselves. But as long as profit is the bottom line, we probably won’t.
This is a great question. Do we want to create something that's smarter than us and can do things we can't (or can't even imagine)? Or do we want machines that take the drudgery out of things we do already? That's not a rhetorical question.
Here's one way to envisage it.
On the one hand, here's a robot that takes care of our laundry: picks it up, washes it, dries it, folds it, repairs it if necessary, and puts it away without us having to do anything.
On the other hand, here's a really cool technology (which doesn't currently exist) that regulates our body temperature, covers our nudity, adorns us, and keeps us comfortable, all without physical clothes. It completely removes the need for laundry and, as a side effect, eliminates the environmental impact of making, distributing, and disposing of clothing.
Which do you think people will go for? And by people, I mean both businesses and consumers.
I tend to take a more Occam's razor view of the whole thing: we're probably not missing a piece of the puzzle. The reason we can't build AGI from our neural-net-like things is mainly that they're much too small (a human brain has tens of billions of neurons) and insufficiently trained (it takes 10+ years of constant, high-quality input to train a human).
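To put rough numbers on "much too small", here's a back-of-envelope sketch. The brain figures are textbook order-of-magnitude estimates, and I'm picking Llama 3.1 405B only because it's a large model whose parameter count is public; treat all of it as ballpark.

```python
# Back-of-envelope comparison; every figure here is a rough, order-of-magnitude
# estimate, and the point is the size of the gap, not the exact numbers.
brain_neurons  = 86e9   # ~86 billion neurons in a human brain
brain_synapses = 1e14   # very roughly 100 trillion synaptic connections
model_params   = 405e9  # Llama 3.1 405B, a large open-weight model

print(f"synapses / parameters ≈ {brain_synapses / model_params:.0f}x")
# -> roughly 250x, before asking whether a synapse and a parameter are
#    even comparable units.

# The training side: a child gets years of continuous, embodied,
# multimodal input before we'd call them generally competent.
waking_hours_per_year = 16 * 365
print(f"~{10 * waking_hours_per_year:,} waking hours of grounded experience by age ten")
# -> ~58,400 hours, and none of it is text scraped from the web.
```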
We're never really going to be a "spacefaring species", though. Not because we lack AGI, but because the stars are too far apart. A real advanced civilization would understand that.