In a recent issue of Jacobin, Garrison Lovely tackles the question of whether humanity can survive AI.
It’s a question with many facets. Along the way, Lovely considers just about all of them. Do people overhype AI or not take it seriously enough? Are its harms primarily short- or long-term?
He looks at both human extinction (!) and much more immediate impacts like job loss, racist algorithmic decisions, and the continuing transformation of the workplace into a giant, soulless corporate warehouse that would terrify even Adam Smith.
The AI Trick: Artificial General Intelligence (AGI)
Let’s look at one slice of the broader problem.
In a post on Heideggerian AI, I pointed out something I’ll now call the ‘AI Trick.’ Historically, people working on AI in philosophy, psychology, and cognitive science wanted to develop an actual account of mind, meaning, and understanding.
AI would help them do it. It served, in effect, as an analogy. First through the mind as a symbolic system of representations (GOFAI), then the mind as a neural network (connectionism), and then the mind as a system that must be embedded in an environment (situated robotics), AI researchers attempted to provide this account by building artificial versions of it.
That was the whole point. We could learn about humans by building machine versions of us.
Of course, it was also cool to build intelligent robots. And everyone wanted to do that, too. But the best efforts of AI failed. They produced interesting enough results, but those results didn’t bring researchers much closer to the central goal. And by the early 1990s, AI researchers had basically given up the project.
Yes, we still have AI researchers. And, yes, they still do work. Much of the work in machine learning (e.g., ChatGPT) is even interesting. Even more so, it’s profitable. That work, however, has little to do with the original project of AI.
And that’s the AI trick. They gave up the project, but they never gave up the language. Now, AI company founders redefine words and pretend they’re doing the original cool stuff.
Not the Holy Grail
And that brings us to Artificial General Intelligence (AGI).
These days, AI researchers define AGI variously. But each definition amounts to the idea that AI can do most intellectual tasks better than people can. This contrasts with, e.g., ChatGPT, which is a narrower system. Or with AI that can play chess, which really just plays chess.
AGI stands out to AI researchers as a ‘holy grail.’ But it’s not. In its origins, AI research set out to create artificial forms of genuine human intelligence: something that would have a mind, mental states, and consciousness in the same sense we do. It wasn’t just about a robot that plays chess and chews gum at the same time.
Yes, AI researchers do interesting work. And they unearth issues of scientific note. But they’re not taking up the original project of AI. They left that behind long ago.