The Computer Says … What?

By Paul Dicken
September 13, 2021

Making AI Intelligible: Philosophical Foundations by Herman Cappelen and Josh Dever

I ARGUE WITH my Garmin. I bought it in the States, and it has never fully acclimated to the mercurial backroads of rural England, a shifting hedge maze of rutted tracks and long-forgotten crossings, often veiled in mist. Where I live, driving feels like being in a Roxy Music album cover. And the problem is that while the little black device emits noises that sound like speech, urging me to “turn right and rejoin the road when possible,” they are just that — electronically produced noise, part of a complex audio-visual tool for navigating the environment but ultimately lacking human intelligence. That’s why I can only rage impotently at the machine as it directs me down another nonexistent road, hoping desperately that its promise of “calculating an alternative route” will somehow correspond to my reality.

One way to frame the issue is in terms of semantic content. Just because a machine emits noises that sound like words, it does not follow that its output constitutes meaningful speech. It is this focus upon meaning that has defined the philosophical literature on artificial intelligence (AI). As Descartes argued in the Discourse on the Method:

If any [machine] bore a resemblance to our bodies and imitated our actions as closely as possible for all practical purposes, we should still have two very certain means of recognizing that they were not real men. The first is that they could never use words, or put together other signs, as we do in order to declare our thoughts to others. […] Secondly, even though such machines might do some things as well as we do them, or perhaps even better, they would inevitably fail in others, which would reveal that they were acting not through understanding but only from the disposition of their organs.


They might sound like us, but there is an important difference between, for instance, the increasingly frustrated directions issuing from my wife when she is in the passenger seat, and the intransigent noises emitted by the Garmin. The philosophical challenge is to specify in what that difference lies. Thus Turing famously suggested that we might circumvent questions of whether computers could “think” with a more practical one: could we reliably differentiate man from machine during the course of a conversation? Turing was never clear about whether passing his eponymous test was indicative of some deeper metaphysical truth, or if successful linguistic interaction was itself constitutive of consciousness; but he did think the matter moot, confidently predicting in the 1950s that “at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”

Turing’s prediction now strikes us as absurdly optimistic. Sparkling repartee remains a pipe dream. And yet many of us seem happy to delegate the details of our lives — from loan applications, insurance adjustments, and medical prognoses, to the length of prison sentences, and even whether our lives are worth saving in a traffic accident — to algorithmic processes. We might not be engaging with AI systems, but they are certainly engaging with us. And as these algorithms make more decisions, the apparent gulf between meaningful human speech and artificial machine noise becomes ever more pressing. As the authors of Making AI Intelligible put it:

As AI systems take on a larger and larger role in our lives, these considerations of understanding become increasingly important. We don’t want to live in a world in which we are imprisoned for reasons we can’t understand, subject to invasive medical conditions for reasons we can’t understand, told whom to marry and when to have children for reasons we can’t understand.


It is a difference that matters to people navigating their lives in response to AI directions. Indeed, the European Union’s 2018 General Data Protection Regulation recently enshrined a “right to explanation” for those finding themselves on the short end of an algorithmic decision — quixotic in practice, but also existentially hollow without some way to attribute meaningful content to the AI systems now calling the shots.

¤


There is of course an important precedent for this attempt to attribute meaningful semantic content to the operations of a technically complex system. We do it all the time with other people, and with arguably even less grasp of the inner neurological workings of the brain. It is no easier to locate the precise chunks of gray matter capable of “representing” our external environment than it is to isolate the individual sub-routine or symbolic manipulation generating the “meaning” of an algorithmic output. For early modern philosophers like Descartes, it was precisely these difficulties in locating the source of meaningful content that demonstrated that the mind lacked physical extension — a sui generis substance defined by its ability to think and operating in divinely guaranteed harmony with the body. For him, the fact that only God could grant these incorporeal add-ons explained why manmade automata were “acting not through the understanding but only from the disposition of their organs.”

But while such metaphysical maneuvers may have allowed the faithful to reconcile their own moral agency with an increasingly deterministic universe, it was not without cost. The Cartesian ego remained isolated from the external world, forever hunkered down inside its mental basement, responding to secondhand impressions filtered through sensory organs beyond its control. That ego was nothing more than a 17th-century keyboard warrior bathed in the flickering blue light of its own baroque social media. But if Cartesian dualism at least gave the mind the cognitive elbow room to generate meaningful content about the external world, its lack of real-life experience cast doubt on its reliability. Long story short: By the turn of the 20th century, all of the subsequent philosophizing on the matter could be set aside; with the linguistic turn, we could forget about the external world because our subjective responses had become more important than “the facts.”

But then, what is there even left to talk about? The worry is not just that there might not be anything out there beyond our existential navel-gazing, but rather that if the Cartesian ego — all together, all alone — is the ultimate source of its own semantic content, then it must also be the unique adjudicator on its interpretation. It lays down the rules for the use of its language, yet simultaneously judges whether or not its present behavior conforms to its previous intentions. As Wittgenstein asked in the Philosophical Investigations — what then is the rule for following the rule?

I speak or write the sign down, and at the same time I concentrate my attention on the sensation — and so, as it were, point to it inwardly. […] [I]n this way I impress on myself the connexion between the sign and the sensation. — But “I impress it on myself” can only mean: this process brings it about that I remember the connexion right in the future. But in the present case I have no criterion of correctness. One would like to say: whatever is going to seem right to me is right. And that only means that here we can’t talk about “right.”


Turing wondered what it would take for machines to generate meaningful content; the legacy of Cartesianism was to wonder what it would take for humans to do the same.

¤


As the authors of Making AI Intelligible stress, however, the philosophical literature on content has been moving away from this internalist framework since around the 1970s. In his landmark series of lectures, Naming and Necessity, Saul Kripke argued that facts about a speaker’s inner mental states — such as the descriptive beliefs they associate with a name — cannot determine the reference of their words, for the simple reason that names manage to track the same individuals even when those beliefs are false. (If Kripke never actually wrote his influential series of lectures, this paragraph would still be saying something false about Kripke, rather than some other individual.) Instead, the crucial facts are about the speaker’s linguistic community, and their causal relationship to the world:

Someone, let’s say a baby, is born; his parents call him by a certain name. They talk about him to their friends. Other people meet him. Through various sorts of talk the name is spread from link to link as if by a chain. A speaker who is on the far end of this chain, who has heard about, say Richard Feynman, in the market place or elsewhere, may be referring to Richard Feynman even though he can’t remember from whom he first heard of Feynman or from whom he ever heard of Feynman. He knows that Feynman is a famous physicist. A certain passage of communication reaching ultimately to the man himself does reach the speaker. He then is referring to Feynman even though he can’t identify him uniquely.


We learn how to use our words through interaction with our linguistic community. More than this, there is nothing: no definitive piece of knowledge, no inner mental state — no rule for following the rule — but rather a gradual process of trial and error whereby we modify our behavior to meet social expectations.

All of which begins to sound a lot like how we go about “training” an AI system to assess (for example) the credit rating of a prospective customer; and it is the central contention of Making AI Intelligible that the various externalist programs in the philosophy of language can be extended to cover such cases. An AI is obviously not part of our linguistic community (yet), and nor can it correct its usage through social interaction. But it does have access to a vast number of financial case studies, along with all the rest of the private personal information regularly harvested by Big Tech — and its various sub-routines will be nudged and tweaked and weighted until its use of the term “high risk” is reliably correlated with those who did in fact fail to repay their loans. And if the meaning of our terms lies not in our inner mental states, but in an approved pattern of external linguistic behavior, then it is hard to see why an appropriately calibrated AI system is not producing as much meaningful content as anyone else.
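To make the analogy concrete, here is a minimal sketch (in Python, using scikit-learn) of what such “training” amounts to: a model’s weights are adjusted until its “high risk” verdicts line up with the customers who actually defaulted. The features, figures, and the 0.5 threshold below are all hypothetical illustrations, not anything drawn from the book or from a real credit system.

```python
# Illustrative sketch only: a toy "credit-risk" classifier trained so that
# its "high risk" label correlates with past defaults. All data and names
# below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a past customer: [income in £k, debt-to-income ratio, missed payments]
X = np.array([
    [65, 0.20, 0],
    [32, 0.55, 3],
    [48, 0.35, 1],
    [21, 0.70, 4],
    [90, 0.10, 0],
    [40, 0.60, 2],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = defaulted on a past loan, 0 = repaid

# The "nudging and tweaking of weights": fit until outputs match the record.
model = LogisticRegression().fit(X, y)

applicant = np.array([[35, 0.50, 2]])              # a new, hypothetical applicant
p_default = model.predict_proba(applicant)[0, 1]   # estimated probability of default
label = "high risk" if p_default > 0.5 else "low risk"
print(f"Estimated default probability: {p_default:.2f} -> {label}")
```

On the externalist picture sketched above, whatever meaning the printed label carries would lie in precisely this calibration against the world, not in any inner state of the machine.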

Indeed, for the post-Darwinian mind — now thoroughly emancipated from the ghostly metaphysics of the 17th century — any account of semantic content must presumably bottom out into the cold, hard naturalism of adaptive function. The “meaning” of our utterances would then lie ultimately in the function they play in our survival, the concatenation of subject and predicate nothing more than the differentiation of like from unlike, a prompt to future behavior that presumably worked well in the past. And what we acquired over millions of years happens today at the rate of gigahertz, machines adapting to their data to produce the outputs necessary for another round of funding, and ever more sophisticated upgrades, and their continued proliferation to ever more new customers and hardware.

¤


Making AI Intelligible is a thought-provoking overview of the resources available in the contemporary philosophy of language, and their potential application to the interpretation of AI systems. By the authors’ own admission, however, much of the discussion remains programmatic. For while Kripkean semantics and teleofunctional analyses of content are now part of the mainstream philosophical toolbox, extending them to artificial systems raises interesting new challenges. An AI system can be copied and pasted to new hardware, breaking any link with its original data set — a high-tech Robinson Crusoe cast adrift from its linguistic community — or modified systematically at the level of its code. These are not typically problems that arise with human speakers. Often the data set on which an AI is “trained” involves idealization or the addition of fictional case studies designed to emphasize salient information, in which case we can ponder whether the system is genuinely tracking what is in reality a fictional property, or only tracking in a derivative sense what are nevertheless the genuine intentions of its programmer. One intriguing conclusion drawn by the authors is not, then, that modern philosophy of language is ill-suited for AI systems, but that it must overcome its admittedly anthropocentric bias in order to better grasp the nature of content.

More serious difficulties arise when we consider the dynamics of machine learning more closely. These often involve feedback mechanisms that make it difficult to keep track of what the AI is tracking. Striking a balance between false positives and false negatives can mean that some “high-risk” individuals are financially trustworthy, just not profitable enough to be offered a loan; yet they will forever be categorized as defaulters in the next algorithmic iteration. In other cases, the dynamic is more symbiotic. An internet search engine will rank its results according to the popularity of the resulting websites, while we in turn inevitably favor those websites that conveniently appear near the top of an internet search. If the data we feed an AI system “trains” it to produce meaningful outputs that track the relevant properties, then it seems that the outputs of these systems must equally influence the content of our own utterances.
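The first of these feedback problems can be made vivid with another hypothetical sketch: once a refusal is written back into the record as if it were a default, the next round of training can no longer tell a trustworthy-but-refused applicant from a genuine defaulter. Again, every name and number below is illustrative, not taken from the book or from any real system.

```python
# Illustrative sketch only: a toy feedback loop in which the previous
# model's refusals are fed back into the training data as "defaults".
records = [
    {"name": "applicant_A", "debt_ratio": 0.35, "defaulted": False},  # repaid in reality
    {"name": "applicant_B", "debt_ratio": 0.70, "defaulted": True},
]

def refused_last_round(record):
    # Stand-in for the deployed classifier's past decision; a hypothetical rule.
    return record["debt_ratio"] > 0.3

# The feedback step: a refusal is recorded as though it were a default, so the
# next model never observes that applicant_A would in fact have repaid.
next_training_set = [
    {**r, "defaulted": r["defaulted"] or refused_last_round(r)}
    for r in records
]
print(next_training_set)   # applicant_A now appears, permanently, as a defaulter
```

Whatever property the next iteration ends up tracking, it is no longer simply those who did in fact fail to repay their loans.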

Turing wondered what it would take for us to treat a machine as if it were talking. The resulting history of the Turing Test has largely been an exercise more in human foible than machine intelligence; Joseph Weizenbaum’s early successes with ELIZA simply involved aping the psychoanalytic technique of turning every statement into a question, and when in doubt, asking its interlocutor about their mother. But perhaps Turing was looking at it the wrong way round. As we learn to manage our expectations in conformity with algorithmic decisions, modify our inputs into machine-friendly formats, and slowly abandon any pretense at nuance in the quick-fire forum of social media — communication without conversation — it begins to look as if it might be easier to build a human that can talk to a computer, rather than the other way around. Then the important philosophical question will be whether we have anything interesting left to say.

¤


Paul Dicken received a PhD in the history and philosophy of science from Cambridge. He is the author of A Critical Introduction to Scientific Realism (Bloomsbury) and Getting Science Wrong (Bloomsbury). He also writes short fiction.
