Dialogue
The Fog of AI
By Swayam Bagaria
In a metafictional short story by Jorge Luis Borges entitled “Tlön, Uqbar, Orbis Tertius,” the two primary characters—eponymously named after the author and his fellow writer Adolfo Bioy Casares—discover, in an obscure encyclopedia, a puzzling entry on an otherwise unknown country called Uqbar.1 Captivated by an accompanying aphorism attributed to a heresiarch who lived in Uqbar—“Mirrors and copulation are abominable, for they multiply the number of men”—both characters decide to find out more about this place. During their search, however, they find minimal information about Uqbar but elaborate anthropological descriptions of another world called Tlön, where Uqbar’s legends are supposed to have taken place. They are taken in by Tlön’s extreme version of philosophical idealism, allegedly modeled after the thought of George Berkeley—there are no concurrent objects but only interdependent happenings, languages whose adjectives are monosyllabic and stochastically compounded (“round airy light on dark” as a descriptor of the moon), and objects that disappear or decompose once their categorical boundaries become fuzzy.
The two primary characters additionally discover historical information about the existence of a benevolent secret society in seventeenth-century London, whose members want to undertake a multigenerational project to build a new country called Uqbar. After a series of historical contingencies that involve persecution in London and eventual reemergence in America, the secret society gets a new lease on life when an eccentric millionaire decides to fund the construction of an entire world based on the information about Tlön and to compile everything about its worldview in a series of encyclopedias.
Fast-forward to the present: the two characters start noticing the slow appearance of Tlönian references and objects in their surroundings, which Borges suggests are products perhaps of a surreptitious technology rather than forgeries. Soon after, the full forty volumes of the actual Tlönian encyclopedia are published, causing an intellectual upheaval in the world. By the time the short story ends, the world of the two characters has already become palpably Tlönian. Tlön, which was the fictional world of Uqbar, and which in turn was potentially a cerebral effect of the overactive imagination of the two protagonists, has now crept into the naturalistic world of the author to the point of erasing its previous reality.
There are many reasons why this is an appealing way to begin a conversation on AI and human flourishing, but the most intriguing one for me is that Borges’s ability to show the unbridled effects of our imagination on our sense of reality has a lesson for how we think about our own discursive preoccupation with AI. Borges’s story is an ingenious narrative of the magical effect that our fascination and immersion in an imagined world (that of Tlön) has on our own surrounding world, to the point that the latter might start appearing to us as a holographic rendition of the former.
I think our fascination with AI has a similar dynamic to it. As with Tlön, we are captivated by the vast landscape of this set of technologies that we generically call AI. But as with Tlön, our imaginative investment in AI also has the inadvertent effect of redefining the reality of our world in its own image. Our discourse about AI filters and refracts the very presentation of those ideas about our world that we think will only be somewhat transformed by it. This is what I consider to be the fantastical element of the conversation around AI that often and unavoidably surfaces in our visions of human flourishing.
I am not saying that AI is a self-fulfilling prophecy, as that would assume its claims about ability are built on false priors. I am also not saying that conversations about AI and human flourishing always take the form of the fantastic. There is an extant scholarly and intellectual conversation around AI and human flourishing that is more sober and that often involves speaking about human flourishing as a proxy for positive psychology rather than the fantastical discourse I outlined above. But this measured assessment often does not allow us to gauge the other, more cerebral side of the conversations around AI: the side that considers it a technology that will cause a fundamental philosophical rupture, and that treats the near-term goals of using AI to increase enterprise value or accelerate service tasks as a prelude to the long-term goals of changing the means and ends of human flourishing. Given Harvard Divinity School’s penchant for the speculative and the unmoored, I want to focus on the latter.
My assigned job as part of the opening keynote at the Humanity Meets AI Symposium was to lay out the issues that might fall under the broad tent of AI and human flourishing. To do this, I spoke briefly about three ideas related to human flourishing that will be remolded by AI: the nature of work, labor, and leisure; ideas of selfhood; and language and communication. Our imagination about AI encompasses all three concepts but it also represents them in a form that already makes them seem more amenable to the transformative impact of AI. This is what I think of as the fog of AI. Somewhat like its correlate, the fog of war, the fog of AI captures both the uncertain informational provenance around these three concepts of human flourishing and the confounding effects that a transformative technology like AI has on these very same concepts.
In this atmosphere of blurry boundaries and false reveals, discerning the difference between the concepts themselves and the versions of them that get crystallized as part of the discourse of AI is a lot harder than one might think. Thinking of the former as more real than the latter might satiate our defensive proclivities but is intellectually dishonest. I want to opt for an honest reckoning, but before I begin, I want to sound a few precautionary notes about the scope of this talk.
First, we must be cautious about classifying highly disparate technologies under the umbrella of AI and assuming that they are all the same. This is what the computer scientists Arvind Narayanan and Sayash Kapoor call the “AI hype vortex,” which can involve passing off old-style unsupervised machine learning as AI.2 For this talk, I use AI to mean specifically the foundational LLMs.
Second, when we talk about the impact of AI on human flourishing, we often conflate different types of goals that we might have. To give just one example, there are specific applications of AI, such as accelerating drug discovery, that will undoubtedly advance human flourishing. These might require thinking of narrow AI agents, a term coined by Ethan Mollick, as opposed to AI broadly defined.3 One can speak about AI purely in terms of such specific applications, as in Mollick’s “narrow agents,” without speaking about any of the broad uses of AI that people resort to in their daily lives. Being aware of this distinction is important to understanding what this talk is not about.
Third, there is a temporal unevenness to the effects of AI that we often collapse. For example, when we talk about how AI will result in untold abundance, we are not talking about the short term. Yet we often use it in the conversation as if it were an anchor that would allow us to think about our immediate expectations of AI. In some way, this temporal collapse is part of the political economy of the AI enterprise, given that AI is considered not just another general-purpose technology but a special general-purpose technology that will radically affect every single aspect of our lives. It behooves us to at least be aware of this recurrent feature of the discourse around AI.
With those clarifications about the more meat-and-potatoes aspects of AI, I want to move to the main part of my talk, which lays out the conceptual landscape for the three ideas I referenced above.
1. The nature of work, labor, and leisure. I think we all know that AI will fundamentally change the relationship between labor and work, but the possible scenarios range from the current reality, in which entry-level jobs are most at risk, to a fully automated world in which the requirement of work itself might become redundant. Let’s begin with a brief discussion of scenarios more akin to the former case and then move to the latter.
Economists who study the impact of AI on the labor market have a particular way of modeling this possible set of scenarios by not only looking at the types of jobs that disappear but also the types of jobs that might be created. They might argue that the labor market is not finite but keeps expanding, and new types of jobs that we can’t anticipate yet will be created.
A more organizational approach to this problem of work and labor might look at the way in which organizations absorb AI. For example, prior incipient organizational change might be needed before an organization is prepared to absorb a new technology. An example is how gig work would not have been possible without an antecedent history of organizations allowing for more flexible employment contracts. The point is that organizations change first, and the absorption of technology comes later, which makes any impact of AI on work and labor constrained by the organizational receptivity to it. In the realm of AI, this is called the diffusion problem, in which the main bottlenecks for AI are human beings who might prevent a new technology from being integrated right away, because diffusion happens at a much slower pace than technological innovation.
This is the more pragmatic way of thinking about the idea of work and employment that focuses on the nature of organizations. If discussions around AI and work were confined to this realm, they would not be as interesting. But, as mentioned above, AI is thought of as a special general-purpose technology that does not merely aim to be another technological innovation that augments our capacities to do things but one that fundamentally changes our relationship to our own work and labor by potentially automating a lot of the latter. It is here that the more historical and philosophical ruminations about the concept of labor and the value of work become central, and it is here that you will find an added dimension to the otherwise staid conversation about employment.
This is not a new set of conversations. Let me provide a random assortment of past versions of it. One can trace it from Marx’s 1844 manuscripts, where he presents a more idealized image of humans as homo faber who realize their own value through their relationship with their own labor (a value from which we become alienated under conditions of factory production), to his shift in the 1857–58 manuscript, entitled the Grundrisse, where he started considering the labor theory of value as a framework that was endogenous to the mechanics of capitalism but that did not make any transhistorical sense.4 One can hear an anthropological resonance of this argument in Marshall Sahlins’s famous, albeit not uncontested, hypothesis about the original affluent societies of the hunter-gatherers, for whom leisure rather than resource abundance was a yardstick of affluence.5 However, what this leisure entailed and in what way it becomes the metric for realizing self-worth is not clear.
This can also be said for John Maynard Keynes’s famous prediction in his essay “Economic Possibilities for Our Grandchildren,” where he anticipated a 15-hour work week by the year 2030.6 This, of course, seems unlikely to occur, but what is more interesting is that Keynes hardly provides a picture of what we will do otherwise when we are not working. Will we be painting, cultivating hobbies, fine-tuning the art of conversation? Alternatively, is the sculptor obsessed with getting the right cut of the marble not working? Is he not ambitious? A more recent example of this dilemma about the value of work and leisure can be found in Nick Bostrom’s recent book Deep Utopia, where he recounts character portraits from literature such as the highly canonized “superfluous man.” In the writings of Turgenev and Pushkin, this is a personality who often descends into ennui and cynicism when left to their own devices without the everyday demands of a working life.
What the discourse around AI provides is not so much an original angle into these conversations as an activation of a whole host of evergreen issues about the value of work and labor for self-realization and actualization. One can notice this in discussions around the returning value of jobs based on human connection, or the arrival of the so-called “passion economy.” Suddenly, the whole history of work and labor seems up for grabs and becomes fodder for rethinking the connection between humans and their work, albeit under the shadow of how AI is reconfiguring not only the types of work that are desirable but also the value, monetary and otherwise, that we attribute to work and labor more generally.
2. Ideas of selfhood. The second idea I want to explore related to human flourishing, not completely disconnected from the first, is the impact of AI on our ideas of selfhood. Again, there is a hybrid concept of personhood, built out of insights from our evolutionary history and our evolving theories of mind, that is assembled in a piecemeal way to give a particular flavor to the idea of personhood that is amenable to a seamless integration with AI.
One piece of this puzzle of personhood comes from the cognitive scientist Andy Clark, who in a well-known essay with David Chalmers argued for the “extended mind” hypothesis.7 Their basic claim is that our mind is not our brain; rather, our mind is the set of all the tools, supports, and techniques that are involved in the processes of our cognition. For example, when we use a calculator, or when we use paper and pencil to do a math sum, they ask, is this paper and pencil something that I am using merely to support my cognition, or is it a part of my cognition? They argue that it is the latter. For them, mind is something that by definition is extensible, and our history is a history of our minds recurrently externalizing their functions. This involves externalizing functions of the mind as well as forgetting the prior way in which that same cognitive function was performed.
In his book on transhumanism, Natural-Born Cyborgs, Clark calls himself, and by implication all of us, electronic virgins.8 For Clark, we have not yet figured out what we actually are, or what capabilities we will realize in the future. This is the transhumanist element in his philosophy of mind that looks to a set of nonspecific capabilities in the future to provide an orientation to the limitations of the present. But one does not have to resort to speculation to see the ordinary ways in which this integration between our mind and AI might be happening.
The Artificiality Institute published a recent report in which they notice regular occurrences of what they call “identity coupling,” or the feedback between the users’ self-identity and their interaction with AI, and “cognitive permeability,” or the frequent outsourcing of even the most mundane instances of thinking that we used to do by ourselves.9 In Clark’s terms, this is our mind extending itself, and AI is, perhaps, a more radical opportunity to rewire our minds than was afforded by the previous set of technologies. Leaning into this long history of ourselves as cognitive outsourcers might be the key to human flourishing.
What Clark—and he is just one amongst others—aims for here is a partial naturalization of the impact of AI on our sense of selfhood by isolating our mind, specifically our cognition, as the primary anchor of our self-identity. But is this necessarily the case? The computer scientist Hans Moravec articulated what is now called Moravec’s paradox: though we can write code that beats people at chess (considered to be one of the most cognitively involved activities), we cannot write code to simulate the sensorimotor movements of a one-year-old baby. The sensorimotor mediation of intelligence is as important to us human beings as the linguistic one. Even within the field of frontier AI research, there are people like Yann LeCun, the chief AI scientist at Meta, who consider the path of using LLMs to achieve what they call “general intelligence” an intellectual dead end. What makes us adaptive to our environments might not coincide with what makes us augment our cognition.
If a person is not their mind, what are they? There are innumerable arguments provided by theologians, philosophers, anthropologists, and cognitive scientists about how our ideas of personhood cannot be restricted to our capacity for thought, but I still think that AI puts its finger on a palpable contemporary anxiety at a moment when such conflation seems pervasive. One can almost say that AI is placing us in the gap between ideas of selfhood that think of us as “electronic virgins” and those that think of us as being susceptible to exhibiting a highly exaggerated tendency for technological dandyism. It might be prudent to mind this gap.
3. Language and communication. In December 2024, MIT Technology Review published an article in which they profiled eight people having a personal conversation with a chatbot.10 The people often used these conversations to get answers to all kinds of questions that they might have: What is my kink? How do I parent a child? What should I do with my life? How do we understand this existentially exploratory and non-task-oriented use of AI?
The phrase “stochastic parrots” was coined by the linguist Emily M. Bender and her coauthors in 2021 to describe what LLMs do—they generate texts by mimicking existing corpora without any meaningful understanding of them.11 While technically right about what LLMs do, I think this falls short of capturing the full scope of the communication that users are undertaking with LLMs, for two reasons.
First, our deployment of language is filled with redundancies. We are not communicating like Shakespeare in our daily lives. We repeat the same kinds of sentences, the same kinds of phrases, the same kinds of references on a daily and weekly basis. Aren’t LLMs just mimicking our own tendency toward redundancy and self-patterning? Sixty years ago, the Russian linguist Roman Jakobson argued that a good volume of our conversation is not referential; it is what he called phatic. Phatic conversation does not necessarily communicate any information; rather, it maintains social connections. For Jakobson, phatic communication does not have semantic functions.
Think back even to the first chatbot, ELIZA, developed by Joseph Weizenbaum between 1964 and 1967. ELIZA was designed as a therapist chatbot, but all ELIZA did was take your question and repeat its terms back to you as part of a further question. That’s it. The technical term for this is lexical entrainment. And guess what happened when he actually released this chatbot to his students? A lot of them felt heard! All of this is to say that a lack of semantic density might not necessarily be a deficiency when it comes to the socio-pragmatic functions of our daily communication.
Second, does our reception of LLMs necessarily involve an attribution of semantic deficiency? In a recent article, Murray Shanahan, a former engineer at Google DeepMind, independent researcher Tara Das, and Robert Thurman, one of the foremost scholars of Tibetan Buddhism, write about the XenoSutra, a Buddhist sutra generated by an LLM. They find that it possesses the same level of dense allusion and layered symbolism about central ideas in Buddhism—such as emptiness or dependent origination—that can be found in the original scriptural corpus.12 They also show that the XenoSutra uses its own internal structure and form to assist readers in realizing the central ideas in a stepwise fashion, something that was central to the pedagogical technique of the Buddhist canon. While they do not go so far as to assert that LLMs generate writing that possesses semantic meaning and value, they do assert that meaning and value can be discerned in the output, regardless of whether it is the user providing that ascription or the LLMs generating that sense by themselves.
How will AI change our own relationship to language, communicative or otherwise? Again, I think it is hard to give specific answers, thanks to the fog of AI. But thinking of LLMs as stochastic parrots might be beside the point.
Whether we like it or not, I think that the fog of AI will change our concepts of human flourishing. We can either actively engage with them as they change or choose to be like the eponymous character in Borges’s short story who, even after realizing how thoroughly Tlön was redefining the reality of the world, goes back to working on “an uncertain Quevedian translation of Browne’s Urn Burial.”
Notes:
1. Jorge Luis Borges, “Tlön, Uqbar, Orbis Tertius,” first published in the Argentine journal Sur in May 1940. The first English-language translation of the story, by James E. Irby, was published in New World Writing 18 (1961) and was included in the short story collection Labyrinths, ed. Donald A. Yates and James E. Irby (New Directions, 1962), 20-33.
2. Arvind Narayanan and Sayash Kapoor, AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference (Princeton University Press, 2024).
3. Ethan Mollick, “The End of Search, The Beginning of Research,” One Useful Thing, February 3, 2025.
4. Moishe Postone, Time, Labor, and Social Domination: A Reinterpretation of Marx’s Critical Theory (Cambridge University Press, 1993).
5. Marshall Sahlins, “The Original Affluent Society,” in Stone Age Economics (Aldine de Gruyter, 1972), 5-41. The theory was first introduced in a paper Sahlins delivered at “Man the Hunter,” a famous symposium held in 1966 at the University of Chicago’s Center for Continuing Education.
6. John Maynard Keynes, “Economic Possibilities for Our Grandchildren,” in Essays in Persuasion (Palgrave Macmillan, 2010), 321-32.
7. Andy Clark and David Chalmers, “The Extended Mind,” Analysis 58, no. 1 (1998): 7-19.
8. Andy Clark, Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence (Oxford University Press, 2004).
9. Helen Edwards and Dave Edwards, “How We Think and Live with AI: Early Patterns of Human Adaptation,” artificiality, June 28, 2025.
10. Rhiannon Williams, “The AI Relationship Revolution Is Already Here,” MIT Technology Review, February 13, 2025.
11. Emily M. Bender et al., “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?,” in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (Association for Computing Machinery, 2021), 610-23.
12. Murray Shanahan, Tara Das, and Robert Thurman, “The Xeno Sutra: Can Meaning and Value Be Ascribed to an AI-Generated ‘Sacred’ Text?,” preprint (2025).
Swayam Bagaria is Assistant Professor of Hindu Studies at Harvard Divinity School. This is an edited and revised version of a keynote address he delivered at HDS as part of the “Humanity Meets AI Symposium” held February 27-28, 2025.
