
Reclaiming Virtue Ethics as a Framework for Artificial Moral Agents:
An Anti-Colonial Perspective
Written by Priscila Chaves
January 27, 2025
-
This document provides a summary of key ideas and insights from my research. It is not a comprehensive representation of the full study and may omit detailed analyses, methodologies, and supporting evidence for the sake of brevity. The findings and interpretations presented here have not undergone formal peer review and should be considered preliminary.
Readers are encouraged to approach the content with this context in mind and to seek further clarification or discussion if needed. For inquiries about the complete research or for collaboration opportunities, reach out to me directly.
Abstract
This brief article presents the core ideas of how the development of Artificial Moral Agents (AMAs)—autonomous systems capable of making moral decisions—raises profound ethical questions. The original essay on which this article is based argues for reclaiming virtue ethics as the guiding framework for their design and development, emphasising inclusivity and global moral dialogue. Utilitarianism, often treated as the default, is critiqued for its colonial legacy, its reductionism, and its inability to address diverse needs. Virtue ethics, by contrast, offers a flexible, community-centred approach to cultivating moral habits and addressing the asymmetries in power and resources that shape AI. By rooting AMA development in care, accountability, and respect for cultural diversity—particularly by engaging the Global South—AMAs have the potential to transform human-machine relations and foster collective flourishing.
This article summarises research conducted between January and March 2023. The research employed an extensive literature review, included in the final section of this article, which offers a thorough examination of prior studies and theoretical frameworks and positions the work within the broader academic discourse. It serves as a valuable resource for scholars and practitioners alike, facilitating a deeper understanding of ethical AI design.
Introduction to the research
In my latest research, I delve into a pressing question: what kind of Artificial Moral Agents (AMAs) should humanity strive to build? My essay explores the limitations of utilitarian ethics in shaping these technologies and argues for reclaiming virtue ethics as a more inclusive and globally conscious framework. By rooting AMA development in care, accountability, and respect for diverse cultural traditions—particularly those of the Global South—I outline how AMAs could transform human-machine relationships and foster a world where both technology and morality flourish together.
Reflecting on Artificial Moral Agents: A Virtue Ethics Approach
In my recent essay, I explored a fascinating yet deeply challenging question: what kind of moral agents should we be building, if any? Artificial Moral Agents (AMAs)—autonomous systems designed to make moral decisions—are no longer a speculative concept. From self-driving cars to care robots, these systems are already shaping our world. But what kind of ethical foundation should guide their development? My research suggests that we need to look beyond the well-trodden path of utilitarianism and embrace a virtue ethics framework, one rooted in inclusivity, care, and mutual respect. Let me walk you through the key arguments and ideas from my essay.
The Ethical Debate: Is Building AMAs Possible, and Should We?
I started by addressing two foundational questions: is it even possible to build AMAs, and if so, should we? On the first point, AMAs are already making moral decisions—albeit limited ones. The bigger challenge lies in the aspirations we hold for them. Can these systems ever become “full ethical agents,” capable of understanding and embodying human values? I argue that even if AMAs never achieve this level of sophistication, their development forces us to confront our own moral assumptions and biases.
On the question of whether we should build AMAs, I argue that the answer is tied to our ethical intentions. The global AI market is growing at an astonishing pace, and ignoring the implications for the majority world—the Global South—would be reckless. Historically marginalized communities are often excluded from conversations about technology, despite being disproportionately affected by its consequences. For this reason, the South must have a central role in shaping AMAs, ensuring these systems do not perpetuate inequality or data colonialism.
Why Utilitarianism Falls Short
Utilitarianism often feels like the default framework for AI ethics, with its clear focus on maximizing happiness and minimizing harm. However, I argue that it is not the right approach for AMAs, and I present five key reasons:
Colonial Legacy: Utilitarianism’s history is steeped in imperialist logic, which has justified harmful interventions in the name of “progress.”
Data Colonialism: Similar patterns of exploitation are visible in AI today, where data from the Global South is extracted without accountability.
Bias and Oversimplification: Utilitarian calculations often exclude marginalized communities, failing to account for diverse lived experiences.
Short-term Thinking: Its focus on immediate gains undermines long-term sustainability, a critical issue in our era of planetary crisis.
Insufficient Reform: Efforts to decolonize utilitarianism have not addressed its structural flaws, leaving us in need of a more transformative approach.
These limitations are not merely philosophical—they have real consequences for how we design systems that shape our lives.
Embracing Virtue Ethics
So, what’s the alternative? I argue for reclaiming virtue ethics as the foundation for AMAs. Unlike utilitarianism, virtue ethics focuses on cultivating moral habits and character traits—like empathy, care, and justice—that can adapt to complex, uncertain situations. Here’s why I believe this framework is a better fit for our technological future:
Urgency: Virtue ethics helps us address the ethical challenges of emerging technologies, from social networks to AI, in ways traditional frameworks cannot.
Relational Thinking: It recognizes that morality is not just about isolated decisions but about relationships, responsibilities, and context.
Managing Uncertainty: In a world where technological consequences are often unforeseeable, virtue ethics provides flexibility and resilience.
Human Flourishing: It harmonizes with diverse moral traditions, fostering global well-being rather than imposing a single standard.
Care Ethics: By emphasizing care and interdependence, virtue ethics offers a roadmap for engaging with the Global South and addressing global inequities.
Addressing the Challenges of Virtue Ethics
Of course, virtue ethics isn’t without its challenges. One major concern is how to reconcile competing cultural values without defaulting to Western norms. To address this, I propose two strategies:
Moral Circles: Inspired by Peter Singer’s concept, these circles expand moral concern to include diverse communities, non-human beings, and even artificial agents. “People’s Councils” could serve as forums for participatory decision-making, fostering care and solidarity across cultures.
Ecosystems of Accountability: I draw inspiration from grassroots movements in Latin America, like the Khipu Declaration, which emphasize cultural sovereignty and resistance to homogenization. These models show how communities can reclaim agency in shaping the ethical principles of AI.
The Bigger Picture: AMAs as Catalysts for Human Flourishing
Ultimately, my essay argues that AMA development is as much about cultivating virtue in humans as it is about programming morality into machines. If we can design systems that embody care, empathy, and accountability, we might not only improve human-machine relationships but also challenge ourselves to live more virtuously.
The hope for “human flourishing,” as envisioned by virtue ethics, is not just an abstract ideal. It’s a call to action—a reminder that technology must serve humanity, not the other way around. By centering the voices of the Global South and embracing community-driven approaches, we can build a future where AMAs are not tools of oppression but instruments of collective well-being.
In this journey, the ultimate question isn’t just what kind of AMAs we want to create, but what kind of world we want to live in—and whether we have the courage to design both with care.
-
Detailed citations from the original essay are included to emphasise the academic grounding of my arguments and call for further exploration of ethical AI design.
Allen, C., Smit, I., & Wallach, W. (2005). Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches. Ethics and Information Technology, 7(3), 149–155. https://doi.org/10.1007/s10676-006-0004-4
Arun, C. (2020). AI and the Global South: Designing for Other Worlds. In The Oxford Handbook of Ethics of AI. Oxford University Press.
Campbell, C. G. (2010). Mill’s Liberal Project and Defence of Colonialism from a Post-Colonial Perspective. South African Journal of Philosophy, 29(2), 63–73. https://doi.org/10.4314/sajpem.v29i2.57049
Couldry, N., & Mejias, U. A. (2019). The Costs of Connection: How Data is Colonizing Human Life and Appropriating it for Capitalism. Stanford University Press.
Grand View Research. (2018). Artificial Intelligence Market Size, Share | AI Industry Report, 2025. https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-market
Khipu: Encuentro Latinoamericano de Inteligencia Artificial. (2023). Declaración de Montevideo sobre Inteligencia Artificial y su impacto en América Latina. https://docs.google.com/document/d/1maoIc9BKnJbM_iv1QXvbU0DofgmmOQne3qjmQb0rFHM/edit
Lewis, J. E. (2020). Indigenous Protocol and Artificial Intelligence Position Paper. The Initiative for Indigenous Futures and the Canadian Institute for Advanced Research (CIFAR). https://spectrum.library.concordia.ca/986506
MacKinnon, B. (2012). Ethics: theory and contemporary issues (7th ed.). Wadsworth/Cengage Learning.
DeLanda, M. (2016). Assemblage Theory. Edinburgh University Press.
Mazower, M. (2013). No Enchanted Palace: The End of Empire and the Ideological Origins of the United Nations. Princeton University Press.
McQuillan, D. (2022). Resisting AI: An Anti-fascist Approach to Artificial Intelligence. Bristol University Press.
Misselhorn, C. (2022). Artificial Moral Agents: Conceptual Issues and Ethical Controversy. In The Cambridge Handbook of Responsible Artificial Intelligence (pp. 31–49). Cambridge University Press.
Moor, J. H. (2006). The Nature, Importance, and Difficulty of Machine Ethics. IEEE Intelligent Systems, 21(4), 18–21. https://doi.org/10.1109/mis.2006.80
Mordor Intelligence. (2022). AI Governance Market Analysis - Industry Report - Trends, Size & Share. https://www.mordorintelligence.com/industry-reports/ai-governance-market
Persson, I., & Savulescu, J. (2012). Unfit for the Future: The Need for Moral Enhancement. Oxford University Press.
Pettersen, T. (2011). The Ethics of Care: Normative Structures and Empirical Implications. Health Care Analysis, 19(1), 51–64. https://doi.org/10.1007/s10728-010-0163-7
Ricaurte, P. (2022). Ethics for the Majority of the World: AI and the Question of Violence at Scale. Media, Culture & Society, 44(4), 726–745. https://doi.org/10.1177/01634437221099612
Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–424. Cambridge University Press.
Singer, P. (2002). One World: The Ethics of Globalization. Yale University Press.
Singer, P. (2011). The Expanding Circle: Ethics, Evolution, and Moral Progress. Princeton University Press.
United Nations. (2022). World Population Prospects. United Nations Population Division. https://population.un.org/wpp/
Vallor, S. (2016). Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford University Press.
Wallach, W., & Allen, C. (2009). Moral Machines: Teaching Robots Right From Wrong. Oxford University Press.
Wallach, W., & Vallor, S. (2020). Moral Machines: From Value Alignment to Embodied Virtue. In Ethics of Artificial Intelligence (pp. 383–412). Oxford University Press.