Near the end of his novel The End of Eternity, Isaac Asimov has a few things to tell us about the use of automation and robots that bear interestingly on our use of AI today. Asimov wrote the book in the 1950s, during an earlier wave of automation. He had no specific knowledge of LLMs, of course. But in reading it, I felt as though the Good Doctor had reached into our future.

Here’s a little speech a key character gives:

The greatest good? What is that? Your Machines tell you. Your Computaplexes. But who adjusts the machines and tells them what to weigh in the balance? The machines do not solve problems with greater insight than men do, only faster. Only faster! Then what is it the Eternals consider good? I’ll tell you. Safety and security. Moderation. Nothing in excess. No risks without overwhelming certainty of an adequate return.

The context here is that the character is condemning a time-traveling group that ‘corrects’ imbalances and bad social trends. While this has many positive effects – less murder, revolution, social discord, and so on – it also produces widespread social stagnation, and eventually the extinction of humanity.

Obviously this differs from how we use AI today. If anything, AI leads to more social discord, and it certainly leads to more unemployment and unrest.

But the broader point, I suspect, holds. Many people, especially younger people, use AI to do their ‘thinking’ for them. It’s a shortcut and a sign of laziness. And while AI may do things ‘faster,’ it doesn’t do them better – at the very least, not better than a real person thinking real thoughts.

As a society, we’re still some distance away from seeing how and why all this ‘AI thinking’ will go badly for us. But it seems inevitable to me that it will go badly.
