I’ve been interested in questions concerning artificial intelligence (AI) for a long time, going back to well before I left academia. In my undergrad days at Indiana University, I even began as a Cognitive Science major. But the more I thought about these issues, the more I realized I was really interested in philosophical questions about mind, meaning, and human understanding, and less so in the science of AI. Eventually this led me to the work of Hubert Dreyfus and to Heideggerian AI, Dreyfus’s application of the philosophy of Martin Heidegger to the field.

But I’m getting a bit ahead of myself.

Cognitive Science

Cognitive scientists study cognition rather than mind, meaning, or human understanding. Many cognitive scientists once assumed those were the same thing, but philosophers subjected that assumption to increasing scrutiny.

It’s a broad field, incorporating elements of philosophy and anthropology alongside its core focus on computer science, linguistics, neuroscience, and psychology. Despite that breadth, the core debates pitted defenders of computer science-based symbolic AI against defenders of neuroscience-based connectionist neural networks.

Too Narrow

I’ll set aside the jargon. The gist is that this was all too narrow for me. And much of the work was fatally flawed anyway, in the sense that none of it captured how human minds and human understanding ultimately work.

Symbolic AI has been dead for about 30 years. Why? Researchers based it on a tight analogy between the human mind and a digital computer. Certain elements ring true, but for the most part human minds aren’t like digital computers. The science failed, and it foundered largely on what Dreyfus called the ‘frame problem’: how humans apply and use rules in daily life, and why symbolic AI programs fail to capture that. Dreyfus’s classic work on the topic, What Computers Still Can’t Do, holds up reasonably well, even if most of it isn’t directly applicable to the direction AI research has taken since the late 1980s.
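
To make that brittleness concrete, here’s a toy sketch of my own; the rules and scenario are invented for illustration, not drawn from Dreyfus. A symbolic system acts only on the conditions its programmer anticipated, and it has no principled way to judge what’s relevant when the world steps outside that frame:

```python
# Toy illustration (my own invention, not Dreyfus's argument): a symbolic
# rule system only "understands" situations its programmer anticipated.
# Everything relevant must be spelled out in advance as explicit rules.

RULES = {
    # (situation, condition) -> action
    ("kitchen", "pot boiling over"): "turn down the heat",
    ("kitchen", "smoke alarm sounding"): "check the oven",
}

def act(situation: str, condition: str) -> str:
    """Look up an explicit rule; fail outside the anticipated frame."""
    try:
        return RULES[(situation, condition)]
    except KeyError:
        # The frame problem in miniature: the system cannot judge which
        # unlisted facts matter here, so it simply has no answer.
        return "no applicable rule"

print(act("kitchen", "pot boiling over"))        # turn down the heat
print(act("kitchen", "cat jumps on the stove"))  # no applicable rule
```

A human cook handles the cat without consulting rules at all; the symbolic system needs a new rule for every contingency, and more rules about when those rules apply, and so on.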

Neural networks fared better, and they formed the basis for ‘applied AI’ work in the tech industry. Connectionists ditched the idea that representing the world is the central feature of the human mind. But they didn’t evade frame problems; they just ran into them a step removed.

A Note on the Tech Industry and AI

Again, neural networks aren’t exactly a dead program. The basic concept – connected nodes, analogous to neurons, adjusting based on changing inputs from the world – drives much of what’s happening today in machine learning.
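
As a minimal sketch of that basic concept (my illustration, not any particular production system), here’s a single ‘neuron’ that learns the logical AND function by nudging its connection weights toward observed examples rather than by following hand-written rules:

```python
# Minimal sketch of the connectionist idea: a single "neuron" whose
# connection weights adjust in response to examples from the world.

def step(x: float) -> int:
    """Fire (1) if the weighted input crosses the threshold, else 0."""
    return 1 if x >= 0 else 0

# Learn the logical AND function from input/output examples.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, bias = 0.0, 0.0, 0.0
lr = 0.1  # learning rate

for _ in range(20):  # a few passes over the examples
    for (a, b), target in samples:
        out = step(w1 * a + w2 * b + bias)
        err = target - out
        # Nudge each weight toward the observed input/output pattern.
        w1 += lr * err * a
        w2 += lr * err * b
        bias += lr * err

for (a, b), target in samples:
    print(a, b, "->", step(w1 * a + w2 * b + bias), "expected", target)
```

Modern machine learning stacks enormous numbers of such units into deep networks, but the adjust-weights-from-examples principle is recognizably the same.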

Machine learning allows companies to do lots of interesting things. And its practitioners love calling it ‘AI.’ But it’s not AI in any sense meaningful to the once-ambitious goals of the field. Its practitioners long ago gave up the project of arriving at an account of mind, meaning, and understanding.

Heideggerian AI

All this brought me to Heideggerian AI. What if we had a positive alternative to symbolic AI and neural networks? One that actually did get at human understanding?

Here’s how Dreyfus put it. Tim van Gelder once pointed out that there’s a dynamic relationship between people and their environment: the brain doesn’t represent the world, but rather supports the relationship between a person and the world. While promising, Dreyfus argues, this version of Heideggerian AI still falls short. Why? It doesn’t account for the centrality of the body to human understanding. We use our bodies to interact with the world, and we interact with a world that’s already meaningful for bodies like ours.

Finally, this brings me to my question. Suppose a human body is essential to human understanding, and that AI is therefore likely inherently limited. Is it then possible to have a non-human intelligence that uses a non-human body? If so, what does it look like? In some sense, isn’t that the question AI should be trying to answer? One possibility for AI is to explore complex interactions between bodies of different types and worlds of different types, perhaps even in a way that gets at the broader conditions under which embodied intelligence works.
