The Cantankerous Coder's critique of an AI world. A dystopia of capitalistic greed and cultural collapse? CC's work uses the term "intelligence" in AI cautiously, and here I want to jump from that point. Not to oppose CC's poetic vision of the future, but rather to riff on how we readily assign the signifier "intelligence" to software; to "the machine". This is a problem of language, the use of a signifier we readily assume belongs to us "humans" alone, or at least allows us to enforce our will on nature, on the brute facts, due to some unique quality of the intellect. The modern subject/object binary of Descartes.
The jumping-off point is Martin Heidegger's Being and Time of 1927. This is a story of being, which has been well articulated by Dreyfus. Its relevance to the AI movement is documented in What Computers Can't Do (1972), revised as What Computers Still Can't Do (1992).
The heart of the problem lies in what Heidegger saw as the 2,500-year misunderstanding of being, articulated in modernity in Descartes's cogito ergo sum. Given this misunderstanding persists, there is a possibility that it has made its way into the model for machine-based intelligence: essentially the representation of objects in the mind. This is not to say that attempts such as the GPT-4 large language model align with this model-of-the-world approach to AI, what we might call Good Old-Fashioned AI. After all, GPT-4 is not intelligent in an AI sense; it's a pattern matcher. But the "intelligence" signifier remains.
According to Heidegger, the history of philosophy has been obsessed with a substance ontology: a meaning of being in which all things are substances. Descartes defined us as thinking things (substances, again); res cogitans. A mind which contains a representation of the world, along with our concepts and ideas (those last two have their own philosophical baggage). This infamous dualistic position disconnects us from "the world"; we are disembodied minds, a locus of knowledge about the world. It is this model that has been pervasive among AI researchers. Symbolic representations of the world and their relations, the facts about the world, codified so as to be intelligent. So as to be knowing.
It may seem outrageous that 400-year-old concepts persist in contemporary hi-tech computer science. But fear not. We remain contemporary Cartesians. Functionalist theories of the mind continue the tradition, for instance. The metaphor of the mind as a computer (consider Hobbes's idea of the mind as a "calculating machine") remains prevalent, and as we become increasingly technology-immersed these metaphors become very sticky. AI exists in this language game. The model of the mind as a functioning computer. What could be more modern than that? Perhaps the entire world functioning as a computer? It seems appropriate that our most dominant technological metaphors land well as a model of everything. Descartes sat in his warm cupboard (it probably wasn't a cupboard) contemplating existence through radical doubt, while we do the same in contrived virtual worlds mediated by computers.
The Heideggerian perspective is a radical critique which might also be brought to bear on AI. In Being and Time, an impossible text to summarise but beautifully taught by Dreyfus, Heidegger counters the tradition's substance ontology with being-in-the-world. There are substances or entities (the only type of being previously theorised), tools and equipment, and Dasein. Dasein's (let's call that us) meaning of being is existence. We don't exist outside the world, a disembodied observer of worldly substances. Rather, we exist in the world. We take up skills to deal with the world; we and the world exist as a totality.
How does this lead to a critique of AI? This is one of the weird things about Us. In Heidegger's view we don't create models of the world in our minds (a persistent philosophical view); we are always already involved in the world. Things (substances and tools) have significance and relevance based on our dealings in the world. Ask an artist how they paint, and they may describe a process using a substance-and-tool ontology. But in their skilled practice the canvas, the oil paints, the brush disappear into a sort of equipmental background, merging with their familiarity with, and embodiment in, the painting world. Any theory of this practice will necessarily be mute; only substances remain.
We don't have a model of the world; we have this totality. As embodied copers we somehow know how to be in it, while finding it very difficult to articulate. A rational (or let's say logical) model of the world, as might constitute AI, the Cartesian mind, can't describe all the facts associated with intelligence (if we define this intelligence as something which approximates what we Do), with being-in-the-world. Sure, it can have facts, and lots of them, as we see in the large language models, but it can't have existence. Whatever these AI models are doing, they are not intelligent. And they are not learning; they don't have know-how. Sure, if we redefine intelligence as the ability to store and retrieve huge quantities of data, to pattern match based on human-defined algorithms, to make inferences about the data based on defined relationships, then we are outside both the Cartesian and Heideggerian models of Us. Intelligence with a small "i", maybe? We are just playing with the language. Our anxiety comes from the use of language. This AI needs another space in the cultural signs.
It's not a dystopia which sees us replaced by a higher, non-corporeal intelligence. It might, however, be a dystopia in which these new tools create a new understanding of being for Dasein, for Us. A technological understanding of being. A stance in which the technology offers such untold efficiencies in our dealings with the world that our skilful coping atrophies. Dasein becomes levelled. We lose the local and significant. This is not some fear of a non-human intelligence, but rather a loss of skills which are part of our stance on being.
The Heidegger/Dreyfus critique does not appear dominant in the AI discourse. Maybe we should draw it into focus. It is not a dystopian vision of the rise of the machines. It is rather about how technology might affect our very stance on what it is to be us.
CC's prophetic vision might well (has?) come to pass. But not because the machines are (artificially) intelligent. Rather because life in the late capitalist, early postmodern era has the ability to alienate us from "the world". We become immersed in the fetish of abstract signs. We simulate the real until it becomes more real than real. AI won't kill anyone. Those that feed the machine, to use CC's metaphor, on the other hand, just might.
Thought-provoking. I don't fully agree with Heidegger. I think we do live in models in our "heads". But I think that we also have an experience of the world outside of those models (Taoism describes this well).
So now, inspired by your essay, I am thinking that the models in our heads may be more related to language and the desire to communicate with others. I presume that before language (if such a time existed), we could still think (some will disagree). But we didn't have a separate mental concept of the world.
I'm sure some smart folks have already discussed this to death. But whether we live in our models or not, we do have models, mostly acquired by osmosis from our parents, friends, teachers, peers, society, etc. And the fact that these models are unique creates significant difficulty: first, in communicating with others, and second, in understanding that the model is not the reality and that we can change the model.