[Human-drawn illustration: a cityscape with high-tech buildings on a hill, glowing lines looming over the lower, poorer buildings below.]


AI Is Not Truly Innovative

Tech design and profiling reinforce power imbalances.

Illustration by Chloe Niclas

By Jenn Louie

This summer, in Addis Ababa, Ethiopia, I was delivering workshops on artificial intelligence (AI) to civil servants from the federal ministries. One of the women in the course asked about reporting feedback to AI companies. If and when she discovers something harmful or wrong in an AI system that affects her and her community, and she reports it, will they fix it? Will they listen? How does one make a strong enough case to get those in control of the AI to listen, or guide them toward a good solution? Another participant, a man, asked several pointed questions during the course about data control and ownership. How does one know if one’s data is protected? How does one manage data protection to ensure data sovereignty? What controls do he and his countrymen have over AI to ensure the safety and security of their citizens, and are those controls the same ones that people in the US have? Are these security methods enough to protect them?

These might seem like technical or AI management questions that I could have answered with a technical or business-oriented response. However, my own learnings from Harvard Divinity School, chaplaincy, and religious literacy have taught me to listen from a fundamentally different place. I would be doing everyone a disservice if I only answered the technical aspects of their questions without holding space to listen for the histories and relationships motivating the questions that were being asked.

What I uncovered when I listened in this way revealed the underlying hope, precarity, fear, and distrust that technologies built outside of their communities introduce into what people have long held sacred—their relationships and sense of self-preservation. Deep down, what I heard in each question was something that ached within me too—who, in the designing, building, and refining of AI, will take into consideration the nuances needed to best care for them, their culture, and their people? Who will care for their personhood and all they hold sacred with the level of consideration and dignity they hope for and deserve? Will their needs and interests be prioritized, respected, and tended to? How do they, as potential adopters of this technology, remain relevant and not become an afterthought? How do they ensure that their communities and cultures are not rendered collateral damage in the race towards AI advancement for competitive advantage?

Most questions I have encountered professionally about AI over the last year are fundamentally human concerns couched within technical or governance questions. Usually this is because, in the design of these systems, appealing solely to one another’s humanity has not made such concerns legible enough to get specific communities’ needs prioritized by those who control the development of technologies, so people from the global south reframe their concerns in terms they think technologists from the global north will understand. There is a history of cultural violence reflected in the dynamics of trying to make yourself legible to those who have power and privilege when you are the one victimized and aggrieved. Quite often, their questions express the painful scars of having historically been treated inhumanely or grossly overlooked, often by systemic inequities perpetuated with no clear resolution offered. I share this to illustrate that AI ethical frameworks cannot be presumed to be inclusive or equitable. As a society, we are being asked to rapidly adopt technologies with little regard for whether there is a company or user commitment to human care, or for whether these technologies will contribute to human conflict, given the biases and power imbalances already in place in global culture.

AI is commonly depicted as a savior of our future and positioned as an antidote to what ails us as a society. It is often promised to be liberatory and democratizing because it transcends the borders and authority of the nation-states that have previously limited our capacities. Yet for all its salvific potential, how is it that digital technologies like social media and AI are simultaneously implicated in the greatest social ills and moral conflicts of our time? Are the ethical logics we have traditionally applied to conflict (not always with success) enough to safeguard us from the violences of the AI revolution? And what wisdom does religious literacy impart in our approach to the emerging conflicts created or amplified by AI?

The populations I bear witness to through my work who feel the most significant anticipatory grief about AI are those furthest from privilege. The very tenor of their questions about AI’s design reveals what they love and what wounds are being activated by the world’s rapid consumption of these technologies. Our proximity to the benefits of AI, versus our relationships to those who disproportionately bear the cost of societal “advancement” through AI, is reflected in the relative indifference we may carry toward AI’s risks and harms. In truth, most of the world does not have the privilege to casually ignore the grave environmental impact of AI1 or its role in war and conflict.2 For the many communities living in legacies of inequality, it is easy to recognize the moral conflicts of the past and present being woven into AI, and how those conflicts are being translated into the ethical positions that now inform AI’s governance. I assert that AI, or any technology, is not truly innovative if it merely reinscribes, deepens, and perpetuates the moral conflicts and inequalities of our past into our present and futures.


Polarizing Postures Towards Ethical Solutionism

In tech, we are taught to start by defining the problem. Religious literacy has taught me to interrogate the ontologies and epistemologies that inform how we qualify what a problem is to begin with. I refer to this as our moral situatedness—the stories, meaning-making, and contexts that orient our moral sensibilities and shape how we perceive a problem (what we see as ethically wrong) and determine the solutions (what we see as ethically right) appropriate for rectifying it. Within the desire to protect, serve, and save people, the moralizing systems and pedagogies that orient us towards recognizing problems and building solutions often lean towards privileging some people over others and carry, for anyone in a position of privilege (especially those in the global north), the risk of embodying a posture of saviorism. This posture towards problem-solving in my line of work, AI Trust & Safety, which manages how to govern and mitigate AI’s problems, mostly from afar, increasingly perpetuates conflict by re-creating hierarchies and difference rather than centering human equity.3

Interrogating underlying morals is a means of revealing the ways technologies can become, and have become, carriers and incubators of our social ills, even the ones they often claim to be solving. John Paul Lederach, Professor Emeritus of International Peacebuilding at the University of Notre Dame, advises:

First, we must understand and feel the landscape of protracted violence and why it poses such deep-rooted challenges to constructive change. In other words, we must set our feet deeply into the geographies and realities of what destructive relationships produce, what legacies they leave, and what breaking their violent patterns will require. Second, we must explore the creative process itself, not as a tangential inquiry, but as the wellspring that feeds the building of peace.4

Compassionate interrogation offers an intervention to interrupt the unintended creation of the often invisible systems of inequity and violence so that we may innovate with greater consciousness.

Scholar versus practitioner is perhaps a familiar dynamic in religiously aware circles, with each posturing for authority and perhaps supremacy, but it is too often exclusionary and reductive. From either camp, arguments that lack compassion and that diminish or undermine the position of the other have become both worrisome and wearying. I have heard tech practitioners dismiss scholars for being too out of touch to be practical. I have heard scholars reductively refer to technologists as being entirely driven by greed and without integrity. In my experience working with people implementing AI governance across global majority countries, this ethical debate is fundamentally polarizing and rarely inclusive enough. These postures towards tech ethics yield diminishing returns and do not expand the field of what is ethically relevant and possible. The moral positions taken often lack the tenderness, depth, and curiosity needed to widen our ethical awareness.

When scholars make statements calling on tech practitioners to fix biases, as they did during the symposium on AI ethics at HDS, do they question their own authority on these matters, or challenge the consensus that appeals should be made to the same bodies of power that produced the biased inputs and outputs in the first place? When listening to such statements, I have felt a grief and doubt akin to what resonated in the questions I received in Ethiopia. Would these scholars advocate for the needs of people like me who have been victims of intersectional bias? Would they tend to the needs of my BIPOC non-binary friends and family, whom I love and hold sacred? As a practitioner in AI alignment and risk mitigation, and the only woman and person of color on the “Building Ethical AI” panel, I carry a range of intersectional biases that inform my moral situatedness. It is clear to me that all of our purported ethics carry biases, including my own, and that our respective moral situatedness and proximity to marginalized peoples inform whether we possess heightened levels of advocacy and deepened ethical commitments to serving those victimized by AI bias. In extremely privileged spaces, ethical positions on bias often feel performative to me, for they lack commitment to the one thing that matters with respect to addressing bias: the relinquishment of privilege and power.

Many of us in the global north may be aware of the same concerns, but we feel a different level of urgency than those civil servants in Ethiopia who are asking how to make themselves and their rights legible to those who create and control the technologies they are encouraged to become dependent on. Reporting biases, or other ways in which AI contributes to harm, through techniques like red-teaming or the reporting button designed into an AI interface, may feel like screaming into the wind for all of us. As Audre Lorde famously put it, “the master’s tools will never dismantle the master’s house.”5 Appealing to power will never truly root out bias or the social ills reflected in AI because it only reinforces the existing power structures that led to the bias to begin with. A posture of virtuousness is dangerous if we lack fundamental awareness of how we relate to existing systems of power. The only way to shift the balance is to confront and liberate ourselves from ethical logics that operate through control built upon power accumulation.


Legacies that Inform the AI Ideal

When I speak to people from places living under the weight of colonial legacies—those whose personhood and cultures are not centered in how AI is being crafted and conditioned—there is an undeniable consensus and familiar recognition that AI is positioned to be the greatest colonizing force of our time. The current governance and management of AI’s risks and harms are undemocratic and extractive, and they contribute to the accumulation of power and wealth by a very few. The appeals of the marginalized to rectify biases feel futile in a system weighted towards carrying the power imbalances and systems of caste of the past and present into the future.

Ruha Benjamin, transdisciplinary scholar and Princeton University professor, posits that “[tech] design is a colonising project” because it reflects a way of structuring hierarchy in the world. Design thinking “could build a foundation for solidarity. . . but it could also sanitise and make palatable deep-seated injustices, contained within the innovating practices of design.”6 Thus, AI easily risks replicating colonial ideologies of “civilizing,” “developing,” and “advancing” other cultures and societies. These ideologies of modernity and progress also frame the world in binary classifications: advanced/inferior, industrious/lazy, modern/primitive.

AI operates under the legacies of how the ideal rational, modern human is envisioned. For science and tech, this ideal is often traced to Kant. According to philosopher Iris Murdoch:

We are still living in the age of the Kantian man. . . . He is the offspring of the age of science, confidently rational and yet increasingly aware of his alienation from the material universe which his discoveries reveal. . . . He is the ideal citizen of the liberal state, a warning held up to tyrants.

The centre of this type of post-Kantian moral philosophy is the notion of the will as the creator of value. . . . The sovereign moral concept is freedom, or possibly courage in a sense which identifies it with freedom, will, power.7

These ideals have guided what we deem rational for centuries, and they can be found in current approaches that center a rational set of human interests in AI development but do not specify precisely whose human experience is privileged as a result. Who is marginalized because their ideological or religious differences fall outside the parameters of hegemonic ideals? This conception of the “ideal man”—cultured, modern, trustworthy, and desired—has led to identifying matching ideal traits used to group people by their proximity to it.

AI systems and their governance can reify logics of grouping people and classifying differences, such as gender, race, and faith, in historically stratified ways because AI is about parsing, organizing, and deriving insights from data steeped for centuries in alignment with these ideals. According to Theodore Vial, scholar of modern Western religious thought:

Our modern social imaginaries, our modern conceptual architecture, continue to rely on teleological principles. We are led, despite our best efforts. . . to theorize difference by comparing groups based on their proximity to a historical telos. When we rank parts of the world by how developed or progressive or modern they are, by how compatible their religions are with democracy, and when we notice what color the people are who live there, we find that our categories are not so different from Kant’s. . . . Our options here are to stop comparing. . . or to compare in full awareness of the structure of the concepts we use to compare.8

These types of moral inheritance orient tech towards unwittingly reinforcing the social inequities that lead to conflict and violence, even while positioning it as a benevolent solution. On a theoretical level, this is because tech operates under long-standing teleological frameworks of modernity, predicated on moral assumptions that have historically organized the world into hierarchies that engender prejudice and discrimination. Understanding the frameworks that guide tech’s systemic moralizing power can help reveal how AI is positioned to replicate the systems of social inequity and divisiveness that fuel human conflict and war.


The AI-enabled Inquisition Era

I am writing from a conflict zone. I have anxiously rewritten this piece and in some cases self-censored it, knowing that my writing will go into my online record and be added to my algorithmically compiled risk score, which, depending on how AI surveillance systems are weighted and coded, alongside evolving geopolitical positions, could judge me—and my loved ones by association—as friend or foe. Such judgments are based upon reductive identity markers—my race, ethnicity, national identity, schooling, and presumed politics and loyalties inferred from where I have travelled.

As is likely true for everyone who travels these days, I have passed through multiple AI-enabled checkpoints, passed countless AI-powered surveillance cameras, been submitted for AI-driven background, social graph, and social media checks, and had my biometrics logged into different national and private systems—all in the name of safety and security. I have experienced tech-enabled systems of interrogation through the categorizing of humans based on risk-scoring practices. Through the visa applications I submitted online, I have consented to AI-powered background checks against my social media profiles. For how long, I am not sure. In perpetuity? I was asked to submit information about my family. Did I also consent to my parents being surveilled and risk-scored in this process? Whatever classifications my profile reflects are approximations meant to qualify my trustworthiness via my lineage and presumed loyalties and affiliations. I am being confronted by systems I am all too familiar with—systems I was educated in, trained others on, and contributed to developing when I worked on managing risk and escalations at companies like Facebook and Google.

Is this our AI Inquisition era? These practices are now AI-enabled and less visible, but they are historically rooted in the same type of ethical logic that undergirded the Inquisition, colonization, internment, immigration exclusion acts, and enslavement. Within the ethical frameworks of risk, safety, and security, some groups of people are deemed deserving of certain protections and privileges (like digital economic inclusion, online virality, or freedom of speech), while others are classified as potential threats and undeserving of such access. In this modern era, people may be fenced in digitally rather than physically, but the logics and impact of oppression are similarly disabling. Risk classifications algorithmically mask biased judgments. These practices are not regulated to align with justice, and yet they are globally imposed, borderless practices of governance that algorithmically apply a very narrow set of ethical positions on safety and security while inadvertently perpetuating systemic inequality. It is useful to ask who is centrally protected in these ethical frameworks. Safety for whom, and from whom?

I am more attuned than most to banal systems decisions, such as setting transaction fees or adjusting algorithmic account restrictions based on risk. Higher fees and limitations are algorithmically imposed upon certain geographies and people in delineated markets, sectors, and demographics. In the areas where we most desire safety and security, including banking, borders, and healthcare, we are often asked to give our consent to systems of broad surveillance, or to agentic systems that will act on our behalf, without full transparency or the ability to choose whose ethics and logics inform those automated choices in our digital lives. In a state of conflict or war, these higher-risk markets and sectors lend themselves to stripping autonomy and removing choices, as well as to collecting more data and actively monitoring users by broadening the definition and scope of who qualifies as “high risk.”

I have witnessed financial institutions being raided by armed military forces and millions being confiscated and withheld from civilians who bank with or send money through these institutions, contributing to wide-scale economic instability. In these situations, the monies are rarely recovered. The explanation for this type of action is typically that risk is being mitigated, warranting the forceful removal of, or restricted access to, all civilian monies. I have received testimonies from people who were profiled and interrogated at gunpoint based on their risk profile. These acts are experienced as lawful criminality. Is this really for civilian protection? Or are such actions equivalent to an armed bank heist?

I have experienced my own online banking accounts (PayPal, Zelle) and credit cards being restricted and blocked due to risk profiling. Because I travelled and sent money within a “high-risk” market, and because the identities (ethnicities, nationalities, banking institutions, social graphs) of the people I attempted to pay were classified as carrying higher potential risk, my accounts were subject to algorithmically automated restrictions, according to the scoring constructed by the companies whose services I use. This is more subtle than a military raid, but it is algorithmically justified using the same logic. Any one of us, or all of us, can be cut off from participating in the digital financial and information economies as a result of risk profiling. For many in the global south, this is experienced as private-sector sanctioning based on what my industry has coined “market risk.”

Legacy ethical frameworks currently woven into AI-enabled safety and security systems reflect the same ideals and teleological frameworks referenced by Theodore Vial. We theorize, compare, and rank each person’s compatibility against very biased and narrow Western views of modern civility, reinforcing hegemonic class divisions that have perhaps not greatly evolved since the Inquisition era. These rankings may be represented in terms of star ratings, likes, or a risk score, but all are systems biased towards the sensibilities of those in power, codified into online policies, algorithms, and terms of service that orient the world into divisive scoring classifications.

Risk and AI are formulations that draw from the structuring of data, and they assign humans to classifications of speculative risk and desirability. This is not a new or technically driven problem; it is a familiar posture towards virtue and power that reflects the legacies of human witch hunts. These systems and logics reinforce adversarial ways of seeing people that organize us into hierarchical social castes. The current ethics of safety and security applied to AI narrows the field of vision by which we can recognize each other’s humanity and may prove to be one of the greatest impediments to peacebuilding.

A just peace may only be possible if we liberate ourselves from the belief that peace and security are achieved by creating coercively oppressive environments that subjugate and restrict people based on biased definitions of trust coded into algorithmic controls. To be truly innovative in the age of AI requires a posture of collective liberation. We must start by simply listening with the care and human tenderness that our world has always demanded of us.

Notes:

  1. James O’Donnell and Casey Crownhart, “We did the math on AI’s energy footprint. Here’s the story you haven’t heard,” MIT Technology Review, May 20, 2025.
  2. Kristian Humble, “War, Artificial Intelligence, and the Future of Conflict,” Georgetown Journal of International Affairs, July 12, 2024.
  3. When considering tech design, I have witnessed several solutions-orientations from tech practitioners and scholars that perpetuate inequity in tech ethics by asserting positions that inadvertently create new hierarchies and polarizing viewpoints, usually by diminishing the logics of the other.
  4. John Paul Lederach, The Moral Imagination: The Art and Soul of Building Peace (Oxford University Press, 2005), 5.
  5. Audre Lorde, Sister Outsider: Essays and Speeches (Crossing Press, 2007), 112. She says this about relying on the master’s tools: “They may allow us temporarily to beat him at his own game, but they will never enable us to bring about genuine change. And this fact is only threatening to those [women] who still define the master’s house as their only source of support.”
  6. Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code (Polity, 2019), 176.
  7. Iris Murdoch, The Sovereignty of Good (Routledge, 1971), 77–79.
  8. Theodore Vial, Modern Religion, Modern Race (Oxford University Press, 2016), 19.

Jenn Louie, MRPL ’23, is the founder of the Moral Innovation Lab and leads AI Trust and Safety at the United Nations Development Programme. Previously, she served as the Head of Platform Integrity at Facebook, Head of Trust & Safety at Meetup.com, Senior Program Manager at Google, resident fellow at the Integrity Institute, and affiliate at the Berkman Klein Center at Harvard University. This article is partly based on comments she delivered on the “Building Ethical AI” and “AI and Religion” panels at the “Humanity Meets AI Symposium” held February 27-28, 2025.
