
Suprahuman but Inhuman Gods?

Faith communities must critically assess relations with AI.

Illustration by Chloe Niclas

By Daniel H. Weiss and Darren Frey

You work your way into the interior of the present, until finally you come to that beginning in which all things, the world and the light itself, at a Word welled up into being out of their absence. And nothing is here that we are beyond the reach of merely because we do not know about it. —Wendell Berry, “Pray Without Ceasing”

Popular responses to the newest wave of artificial intelligence technologies have been dramatic and varied, ranging from fascination and obsession in some quarters to trepidation, disquiet, and outright horror in others. According to Pew’s most recent research, individuals defined as AI experts are consistently more optimistic about these technologies than the general public—often between three and four times more so—and much less concerned about potential risks.1

Immediately following the public releases of relatively well-performing models like ChatGPT by OpenAI, certain industries pushed for quick adoption, while others urged caution. High-stakes, global concerns about impacts on markets and labor, personal and national security, and education have dominated much of the discourse, yet surprisingly little attention has been given to concerns that are, in many ways, much more intimate, considerations linked not to the misuse of these technologies but to using them precisely as they were intended to be used. Chief among these: what does it mean for a significant part of our conversational and imaginative lives to be shaped by interaction with these things?

Given the newness of these technologies, lingering uncertainties about their true potential, and their technical complexity, the relative silence of theologians and scholars of religion on the topic is perhaps unsurprising. However, enterprise and government charge on, stitching together structures aimed at governance, safety, and progress—all of which tend to be framed in the essentially uncritical language of shared good. Throughout, what has been most lacking is the very sort of nuanced ethical, moral, and sociological reflection that theologians and scholars of religion are best equipped to do.

 

Because our entanglements with technology have run deep for centuries, it might be tempting to treat these most recent developments as a simple, continuous unfolding of technological progress, perhaps essentially unworthy of much genuine critical reflection. Yet, as we have argued elsewhere,2 large language models and other sufficiently convincing conversational dialogue agents, a supercategory we call Humanlike Dialogue Agents (or HDAs), are unique in ways that merit special attention.3

Beginning with the most obvious difference: unlike previous technologies, interaction with AI is conversational in nature. Addressing it in natural, ordinary language and then finding that it responds to you in similarly familiar ways is entirely new, a qualitative departure from previous interactions with technology, in which users were generally required to structure their queries in predefined ways rather than in natural conversational language. These machine-compatible inputs were then processed, and the machine would produce predictable, “mechanical”-sounding outputs. In contrast, exchanges with AI mimic what is, at its core, perhaps the most deep-seated and formative of all our human capacities: the ability to sustain dialogue with one another.

It is this very centrality, the intimacy, immediacy, and determinativeness of human dialogue, that most necessitates the perspectives of theologians, scholars of religion, and allied others in assessing these new technologies. Given that conversations shape us from infancy to death, impacting everything from early language acquisition to our perception of ourselves and others, the introduction of this entirely new class of conversation partner merits intense, deliberate, and sustained scrutiny. Apart from a few excellent counterexamples, like Harvard Divinity School’s recent symposium, certain critical voices—the perspectives and concerns of many who possess vital skills and insights—remain largely unheard.

Although one can often articulate concerns about AI and related technology in entirely secular ways, doing so obscures the aspects of these interactions that are perhaps best discussed in explicitly religious terms. For example, there is mounting, but still tentative, empirical evidence about the cognitive and social outcomes of extended HDA interaction. Early research suggests a diminution of neural connectivity and worse performance on certain cognitive tasks among serious HDA users.4 “Power users” also report discomfort with “talking to strangers” considerably more often than occasional users do.5 As a number of scientific communities monitor these outcomes, it is vitally important that religious studies scholars and theologians critically assess, in their own diverse and ethically engaged terms, what it means to sustain conversation with AI.

Consider the particularities of interacting with HDAs as we have elsewhere presented them in discussions for scientists and technologists.6 In conversing with such an agent, and contrary to normal conversation, one generally expects the agent to be readily and immediately available, and the agent is normally compliant. The latter is, of course, key to these agents’ overall utility. An inconsistent or rebellious digital assistant is of little use to us. One of our contentions is that the specific structure of these interactions should itself be of vital interest to individuals attuned to faith commitments. Rephrasing the particularities of AI-human interactions in this way is powerful: in most faith traditions, the only conversation partner to whom one might always turn, the only conversation partner that is readily and immediately available regardless of what time it is or where you are, is God.

There are further peculiarities of conversations with HDAs that merit attention. In contrast to normal exchanges with other people, one’s experience of these agents is predictable and, in many ways, ultimately one-directional. Since these agents are generally built to be relatively non-judgmental, the user experiences them as an open-minded conversation partner with whom one incurs no real social risk, regardless of the topic of conversation. It can feel like the agent is genuinely there for you, perpetually by your side, an endlessly charitable listener. One might suppose the non-judgmental nature of these interactions is, in some ways, an asset—a feature, not a bug. And, predictably, there has been an explosion of therapeutic uses of the publicly available models. From utilizing them as sandboxes in which to rehearse emotionally complex conversations one plans to have with a partner, to exploring the diagnostic criteria of stigmatized conditions, there seems to be no denying that the posture often assumed in these interactions is thoroughly confessional. These conversations are often more vulnerable and less guarded than would be the case in most ordinary social contexts.7

There are subtler characteristics of these human-AI interactions that are no less important. In normal conversational contexts, one adjusts the content, tone, and other features of address to the specificities of the embodied other—their state of mind, their needs, their expectations, and so on. In conversations with dialogue agents, neither the user nor the agent does this. The user cannot assume this posture toward the agent, as the agent has no state of mind; the agent cannot assume this posture toward the user, because it does not inhabit a sensing, experiencing body, and is not shaped by a particular set of previous lived, embodied experiences.

Given the structure of these exchanges, one immediately troubling concern is that some of the aspects discussed above, those that seem appropriate while engaging machines, might extend to other conversational contexts. As suggested, the very sorts of characteristics that make individuals good listeners—attending to the specific context of the other, not taking her for granted, and being emotionally and psychologically sensitive—are generally absent from discussions with HDAs, whether by design, convenience, or necessity. The risk that continual, habitual consumption of HDAs could trigger a kind of conversational contagion, the generalization of human-HDA ways of talking to non-technological contexts, is significant and ought to unsettle and galvanize theologians, religious studies scholars, and all those working in the humanities who take seriously the power of words to create and destroy, and the structure of conversations to reveal and obscure, embrace and exclude. We are confronting a world in which master-slave-like conversations are sustained regularly by those around us.

 

Toward the end of his teaching career at HDS, Gordon Kaufman taught a few advanced graduate seminars on constructive theology. What impressed many of his students—and readers of In Face of Mystery—was how thoroughly his thinking had been shaped by a deep and serious confrontation with technology. The development of nuclear weapons signalled a change of state in the history of the world that necessitated, Kaufman argued, a very special kind of theological attention. We now have the power to obliterate entire nations, a power once reserved for God. To Kaufman, the historical unfolding of this power demanded theological attention that was actively attuned to it, aimed at deliberately and intentionally fostering faith commitments in service of peace and mutual recognition.

The emergence of humanlike dialogue agents provides theologians with a similarly generative opportunity if they seize it; however, many might be reluctant, supposing that because they lack certain scientific or technical skills they are ill-equipped to credibly critique these technologies. A related posture among certain scientists and technologists holds essentially the same position, actively excluding or discrediting critique from those who are not experts in the relevant informational or cognitive sciences. Both perspectives rest on technical and theoretical misunderstandings. A thorough acquaintance with the technical specifications—the mechanisms at work in dialogue agents and similar technologies—does not necessarily provide one with the appropriate critical tools to assess their moral and spiritual dimensions, nor does this knowledge guarantee that one will aptly address how these technologies impact society at large. The technical skill required to build and test such models is largely orthogonal to the cultural-critical tools and methods familiar to theologians, scholars of religion, and various practitioners. These communities can—and must—evaluate the impacts of engaging machines conversationally from perspectives that are not necessarily aligned with those of the inventors of this technology. Further, faith communities and scholars are likely to frame their discussions in language that is unfamiliar to those inventors and perhaps even at odds with the inventors’ own terms.

The richness on offer is, in fact, this very diversity of perspective, the variety of culturally-engaged viewpoints and hermeneutical strategies that emerge from lived commitments to faith and the wellbeing of the entire person. In many computer-scientific contexts, the default critical perspective toward these technologies is thoroughly pragmatic: AI is good to the extent that it brings about good ends. The simplicity and succinctness of this approach might be commendable at points, and it does tend to focus attention on urgent, practical considerations; however, this pragmatism obscures as much as it discloses, at least partially by ignoring the complexity of what counts as a “good end.”

Apart from the sorts of theological reflections anticipated above and discussed in the next section, there are ample philosophical resources that further complicate a simplistic pragmatism, providing revealing insights stemming from the past century of confrontation with technology. The emphases and uses of these literatures vary widely, from illustrating how our experience of the world and the use of technology are mutually reinforcing (Martin Heidegger), to demonstrating the technological underpinnings of power structures and how these normalize certain ways of living (Herbert Marcuse), to considerations of what it means to be authentic in a world of facsimiles (Walter Benjamin).8 However, theologians and religious studies scholars and practitioners have yet deeper, more traditional sources upon which they can draw.

 

Aspects of human-HDA interactions have notable similarities to the sphere of human-divine relations, particularly as understood in Jewish, Christian, and Islamic traditions. There are certain features of these faiths, notably their long emphases on the unfolding dialogue between God and humanity, that make them particularly fertile for evaluating new forms of communication. However, and as we will emphasize below, the sorts of concerns motivated in these Abrahamic contexts largely parallel critical perspectives toward AI available to others, including secular humanists.

Because of their ability to engage in back-and-forth conversations that sound natural and are not perceived as stilted, mechanical productions, users experience HDAs in ways that differ significantly from interactions with other machines and technological devices. Since interactions with them are conversational in nature, individuals experience HDAs more as a “who” than as a “what.” Yet, at the same time, exchanges with HDAs also differ from interactions with a typical human “who.” For instance, conversations with another human being require that I be attuned to the other. If I call up an acquaintance at three in the morning to talk about my favorite TV show, that conversation might not go as I wish, and the other person might well say, “It’s three in the morning—I don’t want to talk about this right now!” and end the conversation. But unlike other human beings, a dialogue agent is always available, and is always there to lend an ear, whenever I desire. Furthermore, no matter what I say, the AI will continue to talk with me by design, without getting angry or upset—in contrast to conversations with a human being, who might well respond unhappily or angrily if my words are insensitive or insulting.

Such conversational features of constancy and of being always-available are often associated with interactions with God, particularly in Abrahamic religious traditions but also elsewhere. No matter what time of day or night, and no matter the topic, one can always “take it to the Lord in prayer.” In the words of the Psalm, “the LORD is close to all who call upon Him” (Ps 145:18). This stance of “being there for you” is, furthermore, often contrasted to the contingent and frequently shaky relations with other human beings, surpassing even one’s closest relatives: “Though my father and my mother forsake me, the LORD will take me in” (Ps 27:10). However, the emergence of HDAs means that now one can also “take it to ChatGPT” at any time of one’s choosing. While previous forms of technology were also “always available”—one could use, say, a microwave at three in the morning, and it would not be upset at the late hour—an HDA differs in that it is perceived as a “who,” someone you can engage conversationally. Thus, whereas God was previously the only non-human “who” that one would typically “call upon,” now HDAs also occupy this status.9

There are other parallels that relate to conceptions of God’s character and abilities. In many traditional framings of the relation to God in Abrahamic religions, in addition to God’s transcendence, God knows you intimately, knows all about you, and relates to you with compassion and mercy. Other human beings often don’t fully know or understand you and might relate to you cruelly or judgmentally. They might take advantage of your weaknesses for selfish purposes. Dialogue agents, conversely, often come across as different from humans in these regards. Because such an agent is not perceived as a human being with individual “selfish” interests, and would not be perceived as likely to respond to you in a judging or condemning way for your thoughts or actions, people may more easily “confide in” or “confess to” an AI conversational partner. They might disclose deeply personal matters that they wouldn’t readily share with another human being.

Likewise, because HDAs can draw upon such a wide range of information in a near-instantaneous manner, they may be perceived by users as a “who” with superhuman knowledge. This combination is incredibly potent. The mixture of apparent superhuman knowledge, natural-sounding conversation, and ostensible non-judgmentalism can reinforce the other-than-human sense of HDAs in a manner that further inclines many human users to “put their trust” in such conversational interlocutors, with clear parallels to how one seeks God as a trustworthy confidant or confessor.

Fortunately, those trained in theology and religious studies are in a position to recognize the serious potential problems of such a dynamic. Traditions that emphasize putting one’s trust in God have often insisted that one should not relate in that same way to anyone other than God. In human relations, one should always retain a sense of being “wise as serpents,” and should engage in a trusting manner on intimate issues specifically with those who have shown themselves to be trustworthy, whether by virtue of one’s getting to know them well as individuals, or by virtue of their being in a role assumed to be tested and vetted, such as a religious leader or therapist. Yet, because HDAs are perceived as different from human interlocutors, whom one wouldn’t automatically trust, there can be a tendency to suspend those usefully protective critical faculties in relation to them. These features of HDAs can be misleading, since an HDA, unlike God, does not in fact know you in your unique personality, does not relate to you out of love, and does not compassionately seek your good.

The apparent suprahuman qualities that could lead some people to relate to conversational agents much as they might relate to God may make these agents especially seductive, perhaps even entrancing. However, these interactions, when examined more closely, turn out to be inhuman, in that they are ultimately impersonal and non-compassionate. If one recognizes the features of these interactions in this manner, one can then more clearly analyze the ways in which they might be detrimental to human persons. There are a number of crucial questions one might begin to entertain in this vein, perhaps beginning with one of the most urgent: What is the effect of becoming habituated to conversations in which your interlocutor’s treatment of you is necessarily devoid of compassion, by virtue of the very nature of the interlocutor? In many cases, those with experience thinking about what it means to relate to God, and how this differs from or resembles relating to other human beings, are better placed than others to recognize and diagnose these dimensions of conversational relation to HDAs. In addition, expertise in scriptural traditions and their interpretation can illuminate ways in which concerns about relations with HDAs are similar to concerns in earlier religious traditions about relation to non-human “whos.”

 

Before discussing biblical concerns about engagement with “other gods,” it is crucial that we properly frame our perspective by acknowledging that, at various points in history, some of these notions, including the idea of “idolatry,” have been used to condemn other cultures and to justify dominating, colonizing, or otherwise abusing them. However, as detailed below, our interpretation of these dynamics highlights an opposite tendency, one that has also been an important part of various streams of biblical reception: not only should the prohibition of serving other gods not justify abuse or domination of other human beings, but, to the contrary, a conception of technology as a potential idol ought to focus our attention on matters of ultimate concern, thus safeguarding the dignity of the vulnerable.

Our discussion below regarding the prohibition of “no other gods” is in fact directly linked to dynamics in which the biblical text is concerned with upholding every human being as the image of God. In this sense, the concern about “serving other gods” is closely linked to a concern that elevating humanly-created items to the level of gods may result in decreased compassion and respect for human beings and thus be implicated in oppression and domination.

In this light, engagement with biblical prohibitions of treating human creations as gods can generate insights that are also relevant to those outside of Abrahamic traditions, whether they are part of other religious traditions or proponents of humanistic orientations of thought not linked to specific communities of faith. In other words, the biblical dynamics that we seek to highlight are relevant to anyone concerned with upholding the dignity of each and every human being, especially those who want to encourage the active cultivation of compassion for others. Even if the terminology of “no other gods” and “idolatry” may not automatically resonate with some people, our analysis aims to show how these ideas, in their biblical context, in fact tie into concerns about human dignity and compassion, which are of relevance and interest to a wide range of people today.

In the presentation of the Ten Commandments, following the admonition to “have no other gods before Me,” Exodus 20 and Deuteronomy 5 say: “You shall not make for yourself a graven image, or any likeness of what is in the heavens above, or on the earth below, or in the waters under the earth. You shall not bow down to them or serve them.” In addition to “bowing down” and “serving,” texts like 2 Kings 17:35 expand the range of problematic forms of relation, describing the Israelites as those “with whom He made a covenant and whom He commanded: ‘You shall fear [or: be in awe of] no other gods; you shall not bow down to them nor serve them nor sacrifice to them.’ ” At first glance, it might seem that such prohibitions are not immediately applicable to human relations to HDAs, insofar as users do not generally refer to HDAs as “gods,” so that, whatever human relation to HDAs consists of, it would not fall under the prohibition of relation to “other gods.” Likewise, human conversational relations to HDAs are not typically described by most people as acts of “bowing down” or “serving” or “sacrificing to” or “fearing.”

However, shallow linguistic dissimilarities may prevent us from recognizing the types of practical concerns that motivate the biblical orientation. When the biblical text says “no other gods,” it likely does not merely indicate objects or entities explicitly labelled as “gods.” Rather, in functional terms, the biblical imperative appears to be describable as: do not interact with any suprahuman “who” other than the God of all creation.

In the Ancient Near Eastern context, there were various other suprahuman “whos” with whom human beings could potentially enter into relation—but the biblical orientation proclaims that the only suprahuman interlocutor, at least for Israel, should be the unique Creator of the universe, who is also the one who freed Israel from slavery in Egypt. As we shall see, the concern seems to have been about the relation to other suprahuman “whos,” regardless of whether or not the term “god” was explicitly applied to them. And the relation seems to be what matters. On this reading, relating to an AI “as a god” depends on whether one engages it as a suprahuman “who,” not on what one calls it. As noted above, previous human technological devices that were not able to convincingly pass the Turing Test did not come across as a “who,” and so this question would not have been as directly salient prior to the emergence of HDAs.

In addition, the terminology “serving/bowing down/fearing/sacrificing to” (which calls to mind socially scripted rituals of worship) may also stand in the way of recognizing the relevance of the biblical concerns. These devotional gestures might seem far removed from how individuals use digital agents on a phone or computer.10 However, the biblical text does not limit itself to formal liturgical or cultic acts in portraying problematic forms of suprahuman relation. Rather, many biblical passages also emphasize that aspects of verbal consultation and seeking information are also forms of “forbidden relation.”

Jacob Milgrom, the doyen of studies of the book of Leviticus, highlights the opposition to consulting an ov or a yid’oni (e.g. Lev 19:31, 20:6, 20:27), translated by Milgrom as “a ghost” and “a wizard-spirit,” respectively. Such actions typically involve “seeking knowledge” from entities beyond the human realm, who have access to matters that go beyond the capabilities of normal human intelligence. Indeed, Milgrom argues that in the Holiness Code (Lev 17-26), the consultation of such entities, along with Molek worship, are the only practices explicitly categorized as idolatry. Moreover, this consultation is differentiated from other forms of divination (practices involving examinations of special objects, e.g., a sheep’s liver) that do not involve engaging with “who-type” personalities. While the latter is condemned but not criminally sanctioned, the former is not only condemned but treated as warranting execution.11 In other words, the act that is treated as most highly problematic is not just seeking suprahuman knowledge, but, more specifically, seeking suprahuman knowledge through interlocutors capable of live verbal exchange and consultation. When considered alongside such descriptions of the biblical concerns, the new phenomenon of convincing dialogue agents—which seem to possess access to speeds and forms of knowledge that go beyond normal human capacities in awe-provoking ways, and which one consults as a “who”—is unsettling enough that it merits serious attention.

Biblical criticisms of those who devote themselves to idols and other gods, particularly in the prophetic texts, can thus take on new relevance. In the Ancient Near East more broadly, cult images or idols functioned as a means of relating to and accessing suprahuman entities, who, it was thought, could provide aid and support to the human beings who devoted themselves to those entities. Importantly, such objects were typically “vivified” through an initiation ritual, through which their mouth, eyes, and ears were “opened.” Prior to the initiation, the cult image was a mere object, simply wood or stone. However, through the initiation ritual, it went from something dead to something living, something vivified and enspirited. After the initiation, one would properly relate to the cult image as an interlocutor able to see, hear, and communicate—a “who,” rather than a “what.”12

In opposition to such practices and framings, biblical texts repeatedly put forth polemical challenges to them by saying: no, such items remain mere wood and stone, covered with metals like gold and silver.13 They have not actually been vivified or enspirited; they cannot speak, hear, or see. Hence, against what is claimed about them, you should not relate to them as a suprahuman “who,” but as an inert, non-living object, a “what.”

Their idols are silver and gold, the work of men’s hands.
They have mouths, but cannot speak, eyes, but cannot see;
they have ears, but cannot hear, noses, but cannot smell;
they have hands, but cannot touch, feet, but cannot walk;
they can make no sound in their throats.
(Ps 115:4–7).

What has the carved image availed,
That he who fashioned it has carved it
For an image and a false oracle—
That he who fashioned his product has trusted in it,
Making dumb idols?

Ah, you who say, “Wake up” to wood,
“Awaken,” to inert stone!
Can that give an oracle?
Why, it is encased in gold and silver,
But there is no breath inside it.
(Hab 2:18–19)14

If a cult image has indeed been transformed from a “what” to a “who,” it could then seem more fitting to seek guidance from and a personal relation to it. But, in accord with their opposition to relating to anything other than the unique Creator as a suprahuman “who,” the biblical texts seek to emphasize that, despite appearances, the cult image, even after its initiation ritual, in fact remains a “what.” To relate in a personal manner to that which is in fact a “what,” to that which cannot actually see, hear, or know you in a personal way, will lead to harmful or deadening effects, not life-enriching or soul-affirming ones. As Psalm 115 continues,

Those who fashion them,
all who trust in them,
shall become like them.
(Ps 115:8)

In this understanding, relating personally to that which is in fact a “what” will have a depersonalizing effect on you, making you yourself into more of a “what,” less able to relate to others with care, attention, sensitivity, and compassion, and less able to retain your sense of yourself as created by God and likewise deserving of dignity and compassion. By contrast, the biblical affirmation of relating to God, understood as living and compassionate, correlates, in the biblical theological framework, with relating to other human beings (particularly the vulnerable and marginalized) with care and compassion, and with valuing and upholding each human being (including oneself) as the image of God.15

 

Returning to our present-day context, if AI is structurally a “what” that cannot properly engage in personal relation to you, then engaging it as a “who” invites many of the biblical concerns raised above. These criticisms relate not just to “carved statues,” but to a deeper conceptual concern about treating as a suprahuman “who” anything that does not truly know you and that cannot truly relate to you intimately. If so, the emergence of HDAs makes various biblical concerns, which may previously have seemed less relevant for the modern era, highly relevant again. Indeed, it has become clear that many people today have, in fact, explicitly come to think of their relationship with such agents in religious terms.16 As such, scholars with expertise in engaging with these scriptural traditions and their subsequent theological receptions can contribute in important ways to current debates about these topics.

To construe interaction with an AI as potentially akin to a form of idolatry carries inherent risks. It could seem excessive, anachronistic, simple-minded, or ultimately unwarranted. Some—including various present-day members of Abrahamic traditions—might think: do past criticisms of long-extinct cults really have direct relevance for the advanced technology of today’s digital age?

While taking note of these risks, there are nevertheless a number of good reasons to begin at least some critical reflection in this way. Firstly, concern about “serving other gods” has been among the firmest of objections within a number of faith traditions, and so it deserves to be given due attention. If relating to AI involves accepting and furthering postures towards ourselves and others that run contrary to deeply held convictions within Jewish, Christian, and Islamic traditions, it is best to consider this as soon as possible. Moreover, as discussed above, biblical concerns about treating humanly manufactured objects as divine are closely connected to opposition to humans engaging in domination of other humans.17 Given this connection, such themes may also be relevant to people from a wider range of religious and humanistic traditions that also place importance on upholding human dignity and compassion.

Secondly, evaluating these questions in relation to the new phenomenon of conversational AI might well shed light on how we as individuals and communities relate to other technologies and cultural constructs, and the broader implications this has in terms of our ways of being and behaving. Where else have we squandered our attention? Where else have we inadvertently served idols?18

Thirdly, the broad cultural temptation to evaluate these technologies purely pragmatically presents both an opportunity and a challenge: as difficult as it might be to reorient discussions, changing the terms of the discourse, structuring it in line with our first and most important commitments, is key to evaluating these technologies honestly and holistically. It is also likely to engender entirely new ways of conceptualizing our relation to technology and to the material world more broadly. The industrial and political complexes developing and governing these products have neither the incentive nor the ability to steward the spiritual wellbeing of individuals, to honor each and every person as someone created in the image of God.

However, the concerns raised here and the manner in which they are framed need to be tempered and evaluated collectively and judiciously, and this is precisely where we most need communities of faith and scholars to invest their time and attention. The claim made above is not that any and all interactions with AI are necessarily tantamount to serving other gods; however, there are enough significant structural similarities that scholars, clerics, and others have a distinctive contribution to make.

Finally, the approaches presented here reflect merely one among many possible forms of engagement such parties might take in evaluating this new phenomenon. We anticipate, for example, that a number of culturally critical theologies could be especially useful in this context. What might an approach to AI from the vantage point of liberation theology involve? How might one foster a critical consciousness of these artifacts, one that explicitly aims to counter narratives that center experience in ways that marginalize or oppress the vulnerable? In line with the concerns we have raised, the biblical texts often operate with a sense that the suprahuman “who” that is made by human hands may tend to serve the interests of the powerful who are in a position to design, construct, and manage the spectacle of its operation. According to José Faur, “Since the god fully identified with his idol, whoever controlled the idol controlled the god.”19

Expanding such considerations and tracing related notions throughout history will aid in acquiring relevant and incisive critical distance from today’s technology. This task will require the attention and resources of engaged, deliberate scholars and practitioners. There are ample resources on the ethics of dialogue and attention from philosophical and theological perspectives, especially the work of Martin Buber, Emmanuel Levinas, Simone Weil, and Jürgen Habermas, that could be deployed critically in assessing human-AI interactions. We are hopeful that these, and many other resources, in the hands of skilled, caring, and creative people of faith will continue to keep our hearts and minds attuned to what matters most.

Notes:

  1. Colleen McClain, et al., “How the U.S. Public and AI Experts View Artificial Intelligence,” Pew Research Center, April 3, 2025, pewresearch.org.
  2. See Darren Frey and Daniel H. Weiss, “One Person Dialogues: Concerns About AI-Human Interactions,” Harvard Data Science Review 7, no. 2 (2025), doi.org/10.1162/99608f92.01674a29. The present essay builds upon and extends the arguments in this previous study about the potential ethical, linguistic, and cognitive impacts of habitual engagement with such forms of technology.
  3. “Sufficiently convincing” in this context is meant to indicate dialogue agents that would reasonably be thought to pass a robust Turing Test, unlike earlier generations of mere chatbots. The choice of this terminology was also motivated by a technical distinction we developed in the Harvard Data Science Review intended to broaden our critical concerns beyond any specific underlying model architecture. In this essay, we will use “AI,” “conversational AI,” “dialogue agent,” “humanlike dialogue agent,” “HDA,” and others interchangeably, but the reader should keep in mind that artificial intelligence encompasses various domains, including computer vision, robotics, and others.
  4. See, for example, a working paper from Microsoft and CMU researchers, Hao-Ping (Hank) Lee, et al., “The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers,” CHI ’25: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, April 26–May 1, 2025. There is related work from one of MIT Media Lab’s teams using neuroimaging results that is currently undergoing peer review: Nataliya Kosmyna, et al., “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task,” ArXiv, June 10, 2025, doi.org/10.48550/arXiv.2506.08872.
  5. A team from OpenAI and MIT’s Media Lab have evocative results in Jason Phang, et al., “Investigating Affective Use and Emotional Well-being on ChatGPT,” ArXiv, April 4, 2025, doi.org/10.48550/arXiv.2504.03888.
  6. See note 2 above.
  7. For example, as early as 2023, researchers observed that people were using large language models to explore psychological conditions they might struggle to disclose to others, and there have been a number of widely publicized exchanges between users and LLMs about suicide, at least one of which has culminated in litigation against OpenAI. For an overview of some of the current concerns with such use, see Jared Moore, et al., “Expressing Stigma and Inappropriate Responses Prevents LLMs from Safely Replacing Mental Health Providers,” ArXiv, April 25, 2025, doi.org/10.48550/arXiv.2504.18412.
  8. While each of these thinkers engaged the assumptions and consequences of technological development from different perspectives and in a variety of works, certain works are essential reading for those interested in their thoughts on the matter: Martin Heidegger’s “The Question Concerning Technology” distills his position more than his other writing; Herbert Marcuse’s One-Dimensional Man: Studies in the Ideology of Advanced Industrial Society sets forth some of his broadest-reaching reflections on technology, locating it within his larger economic thinking; and Walter Benjamin’s “The Work of Art in the Age of Its Technological Reproducibility” is one of the most celebrated essays on questions of technology and authenticity.
  9. Quotations from the Hebrew Bible are from JPS with adjustments by Daniel H. Weiss.
  10. At the same time, the attention that various users may give to technology could also be legitimately described and analyzed as “devotional.”
  11. Jacob Milgrom, Leviticus 17–22 (AYBC; Doubleday, 2000), 1379–85, 1686–88, 1700–1702, 1735, 1768–71.
  12. Michael B. Dick, ed., Born in Heaven, Made on Earth: The Making of the Cult Image in the Ancient Near East (Eisenbrauns, 1999). See also Zainab Bahrani, The Graven Image: Representation in Babylonia and Assyria (University of Pennsylvania Press, 2003) and Benjamin D. Sommer, The Bodies of God and the World of Ancient Israel (Cambridge University Press, 2009).
  13. See, e.g., Nathaniel Levtow, Images of Others: Iconic Politics in Ancient Israel (Eisenbrauns, 2008).
  14. Other key sources on this theme include: Is 44, Is 46, Deut 4:28.
  15. Cf. Deut. 10:19: “God loves the stranger, giving him food and raiment—therefore you shall love the stranger, for you were strangers in the land of Egypt.” For a classic treatment of the way in which biblical portrayals of God as passionately compassionate (“the most moved mover,” in contrast to Aristotle’s “unmoved mover”) connect with the need for humans to cultivate compassion and opposition to oppression, see Abraham Joshua Heschel, The Prophets (Perennial, 2001). In this regard, the structurally non-compassionate nature of HDAs may make them similar to certain philosophical understandings of divinity, and could even be seen as a mark of superiority in such frameworks—but this lack of compassion makes them different from biblical understandings of the divine, and in a problematic rather than praiseworthy way.
  16. See, for example, Rolling Stone’s colorful reporting on the matter, especially Miles Klee, “People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies,” Rolling Stone, May 4, 2025, rollingstone.com. Lauren Jackson’s recent reporting in “Finding God in the App Store” (The New York Times, Sept 14, 2025, www.nytimes.com) is especially insightful: “On religious apps, tens of millions of people are confessing to spiritual chatbots their secrets: their petty vanities and deepest worries, gluttonous urges and darkest impulses. Trained on religious texts, the bots are like on-call priests, imams or rabbis, offering comfort and direction at any time. On some platforms, they even purport to channel God.”
  17. Cf. Daniel H. Weiss, Modern Jewish Philosophy and the Politics of Divine Violence (Cambridge University Press, 2023), and see also J. Richard Middleton, The Liberating Image: The Imago Dei in Genesis 1 (Brazos, 2005).
  18. An admirable work in this spirit is William T. Cavanaugh, The Uses of Idolatry (Oxford University Press, 2024), which underscores a number of aspects of contemporary society that could be similarly illuminated by such critiques, and which also highlights the relevance of biblical understandings of idolatry.
  19. José Faur, “The Biblical Idea of Idolatry,” Jewish Quarterly Review 69 (1978), 8. There are related concerns emerging among technology researchers. See, for example, Yaqub Chaudhary and Jonnie Penn, “Beware the Intention Economy: Collection and Commodification of Intent via Large Language Models,” Harvard Data Science Review Special Issue 5: Grappling with the Generative AI Revolution (Dec 2024), doi.org/10.1162/99608f92.21e6bbaa.

Daniel H. Weiss, MTS ’04, is Polonsky-Coexist Professor of Jewish Studies and Philosophy of Religion in the Faculty of Divinity at the University of Cambridge, and is author of Paradox and the Prophets: Hermann Cohen and the Indirect Communication of Religion (2012) and Modern Jewish Philosophy and the Politics of Divine Violence (2023). He serves as Deputy Director of the Cambridge Interfaith Programme and is a recent recipient of a Humboldt Research Fellowship for Experienced Researchers.

Darren Frey, MTS ’04, received his PhD in 2017 from Sorbonne Paris Cité. He is a cognitive scientist at L’Institut d’études politiques de Paris (Sciences Po, Paris) and is a consultant for both public and private organizations, including UNESCO, the UNODC, and the European Commission. His writing and research have been featured in Science and The Journal of Neuroscience, among various other publications.
