Human-drawn illustration of four people attempting to hold guardrails up around a chaotic void.

Dialogue

AI Harms Are Not Ethically Inevitable

Illustrations by Chloe Niclas

By Richard J. Geruson

I was an early adopter of dictation applications, which sparked my enduring interest in AI speech technology. As CEO and co-founder of Voice Signal Technologies, I helped build the first commercially successful company to bring voice recognition to mobile phones. The breakthrough came from hiring MIT physicists whose string theory research required mastery of hidden Markov models—the same statistical framework at the heart of AI voice recognition. Our innovations led to Siri, Alexa, and other widely adopted voice applications, ultimately resulting in VoiceSignal’s acquisition following competitive bids from Google, Microsoft, and others.

Following VoiceSignal, I continued my involvement in AI, serving as CEO of several technology firms and holding governance roles on 32 boards and 45 advisory boards across three continents. Currently, I chair the boards of both a public AI company and a private AI robotics firm—the latter developing expressive robotic heads that have enabled autistic children to talk and communicate where traditional human interventions have failed.

Yet even as I helped advance AI technologies, I sensed something critical was being overlooked. A provocative New York Times headline proclaimed that gravity did not exist—an attention-grabbing summary of physicist Erik Verlinde’s proposal that gravity is not a fundamental force, but an emergent phenomenon. This idea resonated deeply with me, aligning with my earlier Oxford studies of Henry Mintzberg’s insights into business strategy emergence, and my longstanding interest in the self-organizing dynamics of economic markets and biological evolution. The explanatory power of emergence drew me to complexity science, information theory, and thermodynamics, inspiring me to write a second book on emergence.

Emergence, it turns out, applies across a broad range of seemingly unrelated phenomena—from the physical universe to biological life, consciousness, human intelligence, and economic markets. These connections led me to see its relevance for understanding the opaque operations and unpredictable outcomes of artificial intelligence. I came to realize that AI was not merely another entry in the long history of human innovation but a radical departure from all previous technologies. Unlike conventional tools, AI is emergent. Its outcomes—whether profoundly beneficial or deeply harmful—cannot be reduced to, fully explained by, or reliably predicted from its constituent components. Artificial intelligence, to paraphrase Aristotle, represents a whole greater than the sum of its parts.

From this vantage point, I observed a critical gap: prevailing ethical, policy, and governance frameworks inadequately address AI’s emergent, opaque, and rapidly evolving nature. Developers, users, and corporate leaders frequently treat AI systems as neutral and objective, neglecting unexpected behavior, hidden biases, and harmful results that diverge from intended goals. Witnessing firsthand how unchecked AI amplifies injustice and outpaces regulatory responses compelled me to seek new intellectual foundations capable of confronting these ethical challenges effectively.

Harvard Divinity School offered precisely the multidisciplinary depth I sought, allowing me to integrate professional insights with rigorous inquiry, distilling perspectives from theology, philosophy, sociology, psychology, economics, and religious thought to fully engage with AI’s moral complexities. Here, I propose a framework for diagnosing and mitigating AI harms that is designed to serve product developers, corporate leaders, policymakers, scholars, and interested users of AI. My aspiration is to offer tools for ethical engagement—to contribute to the creation of safe, responsible technologies grounded in human dignity and shared responsibility, in a world increasingly shaped by the intelligence of machines.

The Structure of AI Harms

The history of civilization is punctuated by disruptive technologies instigating social upheaval. The impact of artificial intelligence will be chronicled on the same scale used for the printing press, the Industrial Revolution, and the Information Age. With its unparalleled transformative capacity to enhance productivity, augment decision-making, and catalyze innovation, AI has rapidly permeated virtually every facet of contemporary life, from critical infrastructure to online behavior. The Age of AI signals a paradigmatic shift—not only in how we work and interact, but in the very architecture of social systems and distribution of power. Yet its relentless trajectory has far outpaced our collective ability to comprehend, much less govern, its expansive and often opaque consequences.


Compounding this challenge, AI often operates through subtle, concealed mechanisms that intensify existing injustices. It not only reinforces these harms but amplifies them through self-reinforcing feedback loops and obscures them behind claims of neutrality and utilitarian efficiency. While AI holds enormous promise for advancing human progress, it also has the potential to reshape society in ways that deepen structural inequities and psychosocial harms. Humanity now stands at a critical juncture: either we assert meaningful control over AI—ensuring it reflects shared human values and safeguards dignity across all communities—or we risk engineering a future in which algorithmic systems calcify inequity and amplify injustice.

Although issues such as discriminatory policing, restricted healthcare access, and psychological manipulation are widely studied, AI’s expanding role as a contributing factor to these injustices remains poorly understood. This research aims first and foremost to raise awareness—uncovering and explicitly recognizing AI’s active role in exacerbating social ills. However, awareness alone is insufficient. We need to understand the structural processes behind these harms and the subtle operational mechanisms that intensify them. Without a coherent account of these underlying dynamics, effective interventions remain elusive, and ethical strategies will struggle to confront these injustices at their root. With such understanding, society can construct technical mitigations and ethical guidelines integrated into corporate governance and public policy.

The challenge lies in untangling the complex, often obscure, and seemingly unrelated ways AI causes harm. Toward this end, I have engaged in an intellectual excavation that reveals the anatomy of AI harms—a two-dimensional foundational structure for developing a unified framework. The result is both a diagnostic tool that illuminates AI dangers and a guide for crafting ethical guardrails before harms become irreversibly entrenched.

Unlike other approaches, the framework presented here is uniquely rooted in religious studies and constructed using transdisciplinary scaffolding; it draws on scholarship from theology, philosophy, sociology, psychology, and economics. This study employs a mixed methodology, synthesizing academic research tempered by practical experience to develop a principled yet pragmatic framework. Understanding AI’s harms and implications is especially important to religious studies, which has long been concerned with the moral imperative of social change.1 We cannot guard against harmful AI mechanisms and the injustices they perpetuate unless we equip ourselves with the knowledge necessary to dismantle them.

At the core of this framework is a novel typology that distinguishes AI harms along two primary dimensions: somatic versus ideational, and overt versus covert. Somatic harms threaten physical safety or perpetuate structural inequities; ideational harms distort psychological identity, knowledge systems, and cultural norms through manipulation and the dissemination of harmful ideologies. Overt harms are visible and explicit; covert harms remain hidden within social infrastructure systems and normalized cultural beliefs.

As a diagnostic tool, this framework identifies the root causes, operational mechanisms, and mitigation strategies for AI harms. More specifically, it reveals the underlying processes by which infrastructure biases and behavioral manipulations generate outcomes that are recursively reimported as new data, amplifying injustice. It also exposes cultural narratives that mask and legitimize these harms. By distinguishing somatic from ideational harms, the framework encompasses a full range of harms and responses—physical, structural, psychological, and cultural—within a cohesive structure. This, in turn, enables fully integrated mitigation strategies that combine technical solutions with ideological reforms. Four specific categories of AI harms are illustrated in the accompanying diagram and defined below.

  • Somatic overt harms are “direct harms”—AI behaviors that pose tangible threats to physical safety or human survival. These result from technical failures, unintended emergent behaviors, misalignment with human goals, and potential loss of human control due to escalating AI capabilities.
  • Somatic covert harms are biases embedded in societal infrastructure such as justice systems, healthcare, and economic institutions. These harms are caused by unrepresentative training data and narrowly constructed algorithms that reinforce existing injustices—patterns that then become new data looped back into the AI, thereby amplifying the original harm (as represented by the left-side curved arrows).
  • Ideational overt harms include psychosocial manipulations that exploit cognitive vulnerabilities. Manipulation is driven by profitability goals or political agendas that shape behavior and produce feedback loops in which harmful outputs become training inputs, compounding the injustice (as represented by the right-side curved arrows). These harms further entail dispossession that undermines individual and collective identity, violates privacy, appropriates intellectual property, and compromises cultural sovereignty.
  • Ideational covert harms arise from cultural beliefs that tacitly reinforce infrastructure bias and psychosocial manipulation (as represented by the wide arrows). These beliefs include the persistent myth of AI objectivity and the exclusive reliance on utilitarian ethics that undervalue the dignity of individuals and marginalized communities. These harms mask other forms of harm by embedding them within dominant knowledge frameworks—widely accepted ideas, symbols, and narratives—that hide, normalize, and potentially reproduce them from generation to generation without scrutiny.

To effectively address these challenges, I propose a set of foundational ethical principles and concrete interventions grounded in theory, empirical research, and lived experience. As I’ve noted, we must move from diagnosis to action by offering a structured set of mitigation strategies. These recommendations are intended to serve as a practical guide for navigating the ethical complexities of AI development and deployment, with relevance for corporate governance, managerial decision-making, and public policy.

AI developers, technology professionals, executives, and board members can use this framework to guide ethical innovation and embed AI ethics into corporate governance. Policymakers can draw on it to craft effective and equitable AI regulations. Professionals across healthcare, criminal justice, and social welfare will find tools for addressing AI’s ethical implications in their fields. Nonprofits and advocacy organizations can apply the framework to advance digital equity, while scholars and students gain a robust understanding of AI’s societal impact. Finally, it aims to make AI ethics more accessible to the public, encouraging informed, inclusive dialogue about the responsible use of artificial intelligence.

Possible Mitigations

The ethical challenges posed by AI require not only identification and diagnosis but also interventions that translate ethical insight into concrete action. Here, I shift from critique to implementation, presenting a set of mitigations that function both as technical remedies and as embodiments of a “reimagined” ethical foundation.2 These strategies form a practical guide for AI development, deployment, and oversight, with recommendations directed toward managerial decision-making, public policy, and corporate governance.

The conceptual framework I have developed differentiates between somatic and ideational harms to provide a comprehensive and systematic method for understanding how different types of harm emerge—and for linking each to tailored mitigation strategies. While legal penalties remain important for deterring malicious use, I focus here on proactive interventions that can be integrated into every stage of AI’s lifecycle, from design and development to deployment and management.

To guide these interventions, the framework identifies core ethical principles for AI governance—safety, fairness, transparency, autonomy, privacy, awareness, and accountability—which serve as foundational commitments. These principles translate into actionable responses aimed at recognizing and reducing harm while advancing a more just and responsible AI ethics. I summarize and diagram the principles and corresponding mitigations this way:

  • Addressing the principle of Safety entails incorporating testing, security, and control measures into AI design to help mitigate unsafe AI behavior and existential threats.
  • Fairness and Transparency can be improved with participatory and inclusive design, data and development diversity, fairness optimization, and explanation engines that help mitigate systemic biases and infrastructural injustices.
  • Autonomy and Privacy can be protected through education, watermarking, encryption, and the establishment of data sovereignty, measures that mitigate vulnerability to manipulation and protect intellectual and cultural rights.
  • Awareness and Accountability are promoted by auditing, public policy, and corporate governance, which embed ethical considerations into AI while simultaneously addressing all the principles and reducing all forms of harm (as represented by the arrows); a minimal sketch of one such audit follows this list.
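To make the auditing mitigation concrete, the sketch below shows one minimal, hypothetical form an outcome audit might take: computing selection rates by group and flagging disparities that fall below the widely cited four-fifths (0.8) benchmark. The function names, sample data, and threshold are illustrative assumptions, not prescriptions drawn from this framework.

```python
# Minimal sketch of an outcome audit: compute selection rates by group and
# flag disparate impact. Names, data, and the 0.8 threshold (the widely cited
# "four-fifths rule") are illustrative assumptions, not part of the article.
from collections import defaultdict

def disparate_impact_audit(decisions, threshold=0.8):
    """decisions: list of (group_label, approved: bool) pairs."""
    counts = defaultdict(lambda: [0, 0])          # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    rates = {g: a / t for g, (a, t) in counts.items()}
    reference = max(rates.values())               # best-treated group's rate
    flags = {g: r / reference for g, r in rates.items() if r / reference < threshold}
    return rates, flags                           # flags: groups below the ratio

# Example: a hypothetical lending model's decisions, grouped by a protected attribute.
sample = [("A", True)] * 60 + [("A", False)] * 40 + [("B", True)] * 35 + [("B", False)] * 65
rates, flags = disparate_impact_audit(sample)
print(rates)   # {'A': 0.6, 'B': 0.35}
print(flags)   # {'B': 0.58...} -> flagged for review before deployment or retraining
```

In practice, such an audit would span additional metrics (error-rate parity, calibration) and feed its findings into the corporate governance and public policy processes described above.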

Reimagining AI

While AI holds immense promise for human progress, it also has a demonstrated potential to reshape society in ways that reinforce, amplify, normalize, and conceal exclusion through infrastructure bias, psychosocial manipulation, dispossession, and normalizing cultural narratives. Humanity now faces a critical choice: align AI with robust and inclusive human values or risk creating a less just and equitable world driven by machine intelligence. My proposed framework helps stakeholders identify harms, recognize their sources, and develop interventions that encompass both technical design and ethical discernment.

Somatic harms demonstrate the risks posed by autonomous escalation, biased decision-making, and embedded infrastructural discrimination. Ideational harms expose the exploitative dynamics of algorithmic manipulation and the ideological function of AI systems that present themselves as objective and scientific. This framework thus connects seemingly disparate concerns—from autonomous weapons and child welfare to online social media interactions—equipping policymakers, product developers, and corporate leaders with a structured tool for both analysis and intervention.

The typology of harms developed here provides the scaffolding for targeted mitigations grounded in just foundational ethical principles—safety, fairness, transparency, autonomy, privacy, awareness, and accountability—the normative spine of a responsible AI regime. Yet the work of AI ethics does not end with the formulation of principles. It requires structures of accountability, implementation, and oversight—particularly as AI systems become more autonomous, opaque, and embedded within the fundamental operations of public and private life. The challenge, then, is not merely to recognize harm but to incorporate ethical commitments into the design, deployment, and governance of AI technologies at every level of influence.

By mapping harms to principles and principles to interventions, this framework serves not only as a mirror for critique but as a roadmap for reform. The identified mitigations comprehensively address ethical principles through technical, institutional, and cultural reforms. Technical interventions encompass strategies such as red teaming to enhance safety, explanation engines to promote transparency, encryption methods to safeguard privacy, and algorithmic assessments to ensure accountability. Institutional reforms include participatory design to ensure fairness, data sovereignty to preserve autonomy, and auditing processes to maintain accountability. Broad cultural transformations are fostered through educational initiatives that heighten awareness and governance structures—both corporate and public—that uphold the full spectrum of ethical principles.

Addressing all four harm types ensures a thorough and integrated approach to potential mitigations. For example, somatic overt harms are addressed through human oversight, fail-safes, and bounded optimization to prevent escalation and enhance operational reliability. Somatic covert harms necessitate structural changes achieved through initiatives such as development team diversification, fairness-aware optimization, and dataset augmentation. Ideational overt harms can be countered with AI literacy that combats manipulation, while ideational covert harms can be mitigated by embedding robust values within systems to address normalized cultural biases.

Organizations such as the Markkula Center for Applied Ethics, Data for Black Lives, and the Partnership on AI exemplify transformative approaches by actively promoting ethical AI development through various initiatives. Collectively, these interventions signify a shift from reactive regulation to proactive governance, positioning ethics not only as a risk-management strategy but as the foundation of just technological development.

Recursive, self-reinforcing feedback loops emerge when algorithmic systems generate biased outputs that are later recycled as “ground-truth” training data, causing each successive model iteration to hard-code, intensify, and expand the very harms it helped instantiate. This pattern recurs across a range of critical domains, including recidivism scoring, predictive policing, facial recognition, child-welfare risk analytics, housing allocation, lending and hiring recommendation engines, healthcare triage, content-based stereotyping that limits job mobility, and ad targeting skewed by gender and ethnoracial identity. In each case, historical power imbalances shape the input data; the model encodes these biases as predictions or classifications; those outputs trigger real-world decisions that further skew the social terrain; and the newly altered reality is then recaptured as updated training data.


The result is a feedback loop that is not only self-reinforcing but self-amplifying, compounding disparities with each retraining cycle. Because the system draws its statistical signal from the distortions it helps to produce, the loop functions less as a mirror and more as an engine—ratcheting inequality upward over time. Breaking this chain requires intervention at every stage: curating lineage-aware datasets, injecting counterfactual or synthetic corrections, imposing outcome audits, and reengineering feedback mechanisms so that model outputs do not re-enter training pipelines without rigorous debiasing. Unless we disrupt this reflexive re-ingestion, machine learning risks becoming a perpetual motion device of injustice.
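To illustrate this re-ingestion dynamic, the toy simulation below (a hypothetical sketch, not an implementation of any system named in this article) treats a “model” as nothing more than an estimated rate per group and retrains it on its own skewed decisions each cycle. Left unchecked, the gap between groups widens with every iteration; a simple debiasing step applied before outputs re-enter the training pipeline arrests the drift.

```python
# Toy simulation of the re-ingestion loop described above. All quantities are
# hypothetical: a "model" is just an estimated rate per group, and each cycle
# retrains on decisions the previous model produced. Not drawn from the article.

def run_cycles(base_rates, n_cycles, debias=False):
    """base_rates: true qualification rate per group; the model starts from
    slightly skewed historical data and then learns from its own decisions."""
    # Historical data understates group "B" (an assumed initial bias).
    est = {"A": base_rates["A"], "B": base_rates["B"] * 0.8}
    history = [dict(est)]
    for _ in range(n_cycles):
        # Decisions: groups with lower estimated rates are selected less often,
        # so fewer positive outcomes from them are observed in the next dataset.
        observed = {g: est[g] * (est[g] / max(est.values())) for g in est}
        if debias:
            # Intervention: reweight observations back toward the true rates
            # before they re-enter the training pipeline.
            observed = {g: (observed[g] + base_rates[g]) / 2 for g in observed}
        est = observed
        history.append(dict(est))
    return history

unchecked = run_cycles({"A": 0.6, "B": 0.6}, n_cycles=5)
debiased  = run_cycles({"A": 0.6, "B": 0.6}, n_cycles=5, debias=True)
print([round(h["B"] / h["A"], 2) for h in unchecked])  # ratio shrinks every cycle (0.8 -> ~0.0)
print([round(h["B"] / h["A"], 2) for h in debiased])   # ratio climbs back toward parity (0.8 -> ~0.87)
```

The point is not the particular correction used here but the structural one: the loop amplifies only so long as its outputs are recycled into training without scrutiny.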

Crucially, the challenges AI presents are not solely technical; they are ethical, epistemological, and political. The deployment of AI systems has often been driven by a worldview that prioritizes optimization, marginalizing considerations of justice, care, and relational accountability. These harms are not incidental side effects; they arise from biased infrastructures, compromised data pipelines, manipulative design, and algorithmic feedback loops that recursively re-ingest their own outputs. What appear to be technical failures are deeply rooted in cultural assumptions, market logics, and secular ideologies that masquerade as neutrality, obscuring and normalizing harm under the guise of rationality.

Ethical action requires more than compliance. Meaningful mitigation requires both new tools and new values. It requires “the moral imagination” to reenvision AI as ethically grounded systems built to honor human dignity, equity, and mutual accountability. This means rejecting manipulative architectures, embedded biases, and the epistemic arrogance of AI objectivity that permeates current AI system infrastructures. Above all, it means recognizing that AI harms are not ethically inevitable. The tools for discernment exist; the frameworks for action are emerging. The task ahead is moral as much as it is technical: ensuring that our most powerful technologies reflect not the hierarchies of the past but the inclusive values of a just and humane future.

What I am offering here is meant to be a beginning: a comprehensive framework grounded in a nuanced typology of harms, enriched by transdisciplinary insight, and animated by an ethical vision that insists AI must serve—not subjugate—human dignity. The way forward is not merely to regulate the tools we have built thus far, but to reimagine the systems we design to enable a just and equitable humanity in this new era of intelligent machines.

Notes:

  1. Examples abound of justice movements grounded in religious commitments, including the nonviolent resistance of Mahatma Gandhi, the activism of Dorothy Day, the Civil Rights Movement, liberation theology, and the insights of Black, feminist, and womanist theologians. Religious leaders and theologians today are needed to engage with this emerging and profound source of inequity.
  2. See John Paul Lederach, The Moral Imagination: The Art and Soul of Building Peace (Oxford University Press, 2010).

Richard J. Geruson, MRPL ’25, is chairman and CEO of Global Board Services & Investments (GBSI), which provides services and investments to boards across three continents with specialized expertise in board governance for safe and ethical AI. He holds three graduate degrees from Oxford University and is the author of A Theory of Market Strategy (Oxford University Press, 1992). He is currently writing a second book on “Emergence.” This article is based on his MRPL project and the lecture he delivered at HDS as part of the “Humanity Meets AI Symposium” held February 27-28, 2025.
