Dialogue
Meaning Making, Bodies, and AI
By David Lamberth
I want to start with a brief comment on a point we’ve no doubt all thought about, but on which conventions of usage have nonetheless taken over for us, and that is the meaning of the word “intelligence.” Artificial intelligence, artificial general intelligence, AI. What exactly do we mean when we are talking about this?
AI has quickly become a proper noun that denotes a class of functions, something like the way Coke became a generic word for soda of any type where I grew up, leaving people to ignore the actual specifics of it. ChatGPT is the new Coca-Cola, or at least OpenAI hopes it is, in that it is a stand-in for a host of things, even though it is a specific product, not definitive of all of what we actually might mean by AI.
I want to focus here on something lost behind all this branding and naming, and that is the question of what we mean by each of the terms we so casually use when we say “artificial intelligence.” Artificial could imply a number of things; colloquially, it actually reads as “fake,” as in artificial maple flavoring or artificial color. Yet I think it’s clear that when John McCarthy and others began to speak about “artificial intelligence” at Dartmouth in 1956, they had in mind the contrast between humans and computers, humans and some kind of automata, that is, something built by artifice.
I don’t want to dwell long on this, but only to notice the suggestion of design, and to contrast this with neural network models that are more self-designing, or emergent. Artificial intelligence was once based on modeling exactly what humans do through strict coding. Think computer chess, IBM’s Deep Blue. Now, in addition to that, there is a great deal of focus on AI adapting based on parameters that are more open.
The word intelligence in English is something of an ambiguous term. We talk about people who are highly intelligent, for example, as if intelligence is something that submits easily to measurement. Indeed, the development of German psychometrics in the second half of the nineteenth century, along with the work of French psychologist Alfred Binet, among others, yielded quantitative measures of so-called “intelligence,” such as the Binet-Simon (eventually Stanford-Binet) and, later, the Wechsler “intelligence quotient” (IQ) tests. Intelligence, on this positivistic, quantitative model, is something we can measure and rank people in relation to. Because of its formalizable and quantitative aspect, this approach lends itself well to the idea of machine, or artificial, intelligence. It has also, by the way, sat quite comfortably with eugenics and other racialized and cultural forms of hierarchical categorization.1
That late-nineteenth- and twentieth-century version of intelligence still has a great deal of cultural currency. But the concept and the word “intelligence” are far older, deeper, and more complex than this recent, domineering trajectory. Intelligence in English comes from the Latin verb intelligere (to understand), which, when contrasted with similar Latin terms such as cognoscere (to know, often through experience) and sentire (to sense), clearly highlights having something like a conceptual, or maybe higher-order, kind of grasp on things, not just being able to respond to a particular question with some accurate facts or data.
Literally, the Latin suggests “between the words,” and thus intelligere comes to be what, for example, Augustine, or Anselm—in his famous argument for the existence of God—are seeking as they both, sure in their faith, nonetheless seek something more, something different, perhaps higher, maybe even deeper: fides quaerens intellectum. This is usually translated in English as “faith seeking understanding,” but you can see from the Latin we might also say “faith seeking intellection.”
Understanding, the English gloss for intelligere, comes from its German antecedent, unterstehen, which was later replaced (and perhaps clarified) by verstehen. Both of these emphasize “standing,” with verstehen (whose antecedent was firstan) emphasizing “standing for.” So understanding indicates a kind of standing for something, or perhaps better, a judgment of what something is standing for.
In the late nineteenth century, German philosopher Wilhelm Dilthey took the notion of verstehen, understanding, as central to his distinctive approach to what the human sciences—the humanities and social sciences—were doing, or trying to do. Dilthey took people like the positivistic IQ measurers I mentioned earlier to be typical of a reductive, natural-scientific orientation, one focused on quantifications and explanations, and building from those various models of the world taken as a closed causal system. Seeing something deeply limited and limiting in this approach relative to human lived experience, Dilthey instead sought to develop the role of understanding, verstehen, in conjunction with the imagination, in order to show both how lived human experience is constituted and how we should approach its complexities.
Taking the psychic whole as the object of interpretive study, Dilthey insisted that there is purposiveness in our lived experience, a set of drives, interests, and orientations that can only be approached interpretively, through application of a method seeking not to explain but rather to understand. So here, again, we have verstehen, intelligere, but this time overtly attentive to the complexity of the nexus of our lives, both inner and social. This kind of use of intelligence might suggest something that could be modeled and perhaps imitated, but Dilthey’s key idea is that there is something about the meaning of lived experience that is antithetical to that explanatory approach of the hard sciences.
Jumping forward a bit, I want to turn to the early period of the development of computers and computational systems. In 1967, Hilary Putnam (soon to be the noted Harvard philosopher and later my teacher) gave a paper initially titled “Turing Machines,” in which he proposed a view that came to be known as functionalism, or “Turing machine functionalism.”2 The idea was that the brain is essentially a piece of hardware that is running a set of functions that you might think of as software, which could, in principle, be replicated and run on some other hardware. Though the hardware might be distinctive, it is the functions of the software that are valuable in terms of knowledge production, and thus the mind as software is the takeaway. This is, I should note, not what Dilthey was thinking about.
Putnam’s functionalism caught on quickly with cognitive scientists, as well as some philosophers, and it inspired work in computer science, as well as early developments in artificial intelligence at MIT and other places. The idea of functionalism is that what goes on in our brains, and hence in our consciousness, our lived experience, and really our lives such as we have them mentally—complexity issues aside—is something that could be replicated either at the individual level or at scale.
So, per functionalism, if you built a machine that gave responses indistinguishable from a human’s in all cases, thus passing the Turing test, then there would be no reason to think that it wasn’t just doing the same thing as human beings, mentally speaking, and therefore it might rightly be called intelligent. Such a machine would then be potentially interchangeable with human beings. Cue the robots coming for your jobs, AI replacing all sorts of human functions, even the AI/ChatGPT intimate partner.3
Philosophers criticized Putnam’s functionalism, and Putnam himself came to renounce it in the 1980s based on his realization that the conditions for making meaning and reference (usually, in his cases, of sentences) can’t simply be reduced to something akin to software, whether in our heads or in other hardware media. John Searle’s famous Chinese room example, which was in part a rejoinder to Putnam’s functionalism, also showed that being able to produce the right output to a question didn’t actually imply that the one producing it had any understanding of its meaning. While Putnam didn’t fully agree with Searle, they both agreed functionalism should be abandoned.
But the idea that what it is to mean and understand can in fact be fully rendered through software has proved quite sticky, and we default to it often in thinking about AI. Technologist and innovator Ray Kurzweil’s vision of downloading his consciousness and upgrading his body before he dies, so that he might be immortal, rests at its core on a version of the functionalist thesis. Kurzweil has refined this view in hybrid ways over time, but the key commitment to functionalism, to the idea that software can produce meaning, and more importantly the same meaning as humans do, is still at the core of this technofuturism.
I’m inclined, when I reflect on these directions in AI and futuristic prediction, to think that, as Dilthey wanted to point out, we are often quick to oversimplify not only the human mind and our consciousness, but also specifically the depth of the importance of embodied, social realities that make us who we are. It is convenient to think that the telos of all that we are just is intelligence in the positivistic sense, and that if we can simulate that, through whatever media, we’ll have captured what’s distinctive about us. Putnam came around to seeing that things were more complicated than that, typified in his famous anti-functionalist and anti-reductionist rejoinder that “meanings just ain’t in the head.” That comment related to particular views of language, and its details are a bit removed from my argument here. But the idea that, in reproducing, say, our ability to crunch data, or to use language in convincing ways, we’re making the kind of human intelligence we most value—that idea is the one I want to question.
Let me turn for a moment back to the ChatGPTs of the world, admittedly an arbitrary, nonrepresentative sample of all we’ve heard at this symposium, to see if I can illustrate what I mean. When you engage one of these, you’re working with something that has vast access to information, far more information, quantity-wise, than any human could have. But the caveat with a chatbot is that you, the interrogator, have to judge the quality of the output it is giving you. This is so for several reasons, among them whether you or someone else has sufficiently defined, even in brief terms, the point of view, the interests, and maybe even the values that the program should represent. But it’s also because the chatbot is fundamentally and structurally unable to judge the quality of its own data, unable to judge whether its data actually has the meaning that it seems to have to us. It says, with all matter-of-factness, that I, David Lamberth, am a specialist in American pragmatism, and that I am the author of this or that paper, but then it adds two or three papers that I didn’t write. That is obviously a function, you say, of it being insufficiently constrained to a reliable dataset. True enough. But what constitutes knowing what a reliable dataset is, and how constrained to it one should be?
Perhaps these are simply issues of complexity, and iterating in the software will eventually take care of them. Indeed, with ChatGPT, good prompt design and iteration can make much of this seem to recede. But I suspect there is something else at work in the disjunction here that derives from features of human consciousness, cognition, and experience, and that has to do with the biological embodiment of our minds, experiences, meanings, and intelligence into individuated selves experiencing in individuated bodies.
Neurologist and cognitive scientist Antonio Damasio has written a number of books over the last 25 years that draw attention to the role that feeling, or affect, plays not only in our consciousness and mental lives, but in the very making of the brain over time. In one of his recent books, The Strange Order of Things, Damasio offers a brief argument, as an aside, about why he doesn’t think something like the Kurzweilian singularity, or the downloading or offloading of our minds to silicon or other artificial systems, will work.4 Damasio’s point is that all of our consciousness, our unconscious brain and bodily activities, and indeed all of our experience, is modulated, among other things, by feeling, and that the medium of that feeling—what both produces it and mediates it—requires our actual physical bodies. Affect, on Damasio’s read, indicates biological valence, and this valence itself is tied in with our attempts not only to regulate life (homeostasis), but also to orient toward increasing the positive value for our own individual biological organisms, and sometimes for our groups.
Damasio’s whole view is complex, but the takeaways for this discussion are fairly easy to convey. First, our minds and bodies have an idiosyncratic point of view, one that is determined by the outer limits of our bodies and is modulated through our body’s complex neurological and chemical systems, not just our brains. Second, the materiality and sociality of that body, with its brain, produces not only thoughts and actions, but also value, in the form of valence, which is related to each body’s discrete, distinctive reality. We’re all humans, but we’re all also individual selves, and thus our valences vary, our points of view differ, and our judgments and actions follow suit. The condition for all of this meaning making is necessarily mediated through the kind of biologically complex organism that we have evolved to become. As a result, Damasio thinks, if you wanted to simulate our consciousness, you’d really need to reconstruct, down to the cellular level, an actual biological body.
The point here isn’t that such simulations couldn’t be done, or that AI systems can’t reproduce and exceed us in many things that we do. Obviously they can, and to the extent that we think well about what their strengths and limits are, we can adapt and utilize them to do amazing and potentially valuable things. But there is something about many of these systems as we’ve built and adapted them so far that is just slightly off, whether it’s the mildly disturbing superficiality of a chatbot friend or intimate partner, the heartless quality of writing that you have a vague sense of when you read an LLM-generated paper, or the inability to know what is meaningful to focus on beyond the statistical probabilities that are in the wheelhouse of our current quantitative systems.
We’re still at the very beginning with this whole set of innovations we’re calling AI. Even at this early stage, it’s clear how remarkable, how potentially valuable, how world-changing, they might be. We should keep in view, however, that the judgments here are all synthetic, and crucially, they are relative to us, relative to our points of view, relative to the meaning that we individually and collectively make and feel in relation to them. They are, also, only available by virtue of the peculiar mix, in my own case, of my own experiences, what I’ve read and what I’ve thought about, who has said what to me and how I took that, feeling wise, and how the “pulse” of my own selfhood, as Damasio or William James might put it, shapes the valence and the judgment thereof. “Immediate luminousness, in short, philosophical reasonableness, and moral helpfulness”—those are the only criteria of judgment we have when we’re judging value, spiritual value—as James puts it in The Varieties of Religious Experience.5
For now, at least, it is important that the AI optimists look past the novelty and possibility of it all toward some additional, different human questions, and that the pessimists look beyond its failures and inadequacies to the reality of what changes AI is already making and will make. As human beings with valenced points of view, and individual and collective values that are deeply formed products of our own lives, we all should think seriously about understanding, really understanding, in the classical sense of intelligence. It is crucial that we think together about what AI means for advancing our human values and meaning, for advancing our humanity.
Notes:
1. For a history of the connection to eugenics, see, e.g., C. Chitty and D. Scott, “IQ, Racism and the Eugenics Movement,” in Theories of Learning (SAGE Publications Ltd, 2013). For IQ and racial and class bias, see, e.g., R. E. Nisbett, J. Aronson, C. Blair, W. Dickens, J. Flynn, D. F. Halpern, and E. Turkheimer, “Intelligence: New Findings and Theoretical Developments,” American Psychologist 67, no. 2 (2012): 130–159, and Joseph F. Fagan and Cynthia R. Holland, “Racial Equality in Intelligence: Predictions from a Theory of Intelligence as Processing,” Intelligence 35, no. 4 (2007): 319–334.
2. Published as “Psychological Predicates,” in Art, Mind, and Religion, ed. W. Capitan and D. Merrill (University of Pittsburgh Press, 1967), and reprinted later as “The Nature of Mental States,” in Hilary Putnam, Mind, Language and Reality: Philosophical Papers, vol. 2 (Cambridge University Press, 1975), 429–440. The term “Turing machine” refers to the British mathematician and cryptanalyst Alan Turing.
3. Hilary Putnam, Representation and Reality (MIT Press, 1988).
4. Antonio Damasio, The Strange Order of Things: Life, Feeling, and the Making of Cultures (Pantheon Books, 2018).
5. William James, The Varieties of Religious Experience: A Study in Human Nature (Longmans, Green, and Co., 1902), 18.
David Lamberth is Professor of Philosophy and Theology at Harvard Divinity School and is currently the faculty chair of the Master of Theological Studies program. He is the author of William James and the Metaphysics of Experience (Cambridge University Press, 1999). This is an edited transcript of a lecture he delivered at HDS as part of the “Humanity Meets AI Symposium” held February 27–28, 2025.
