Show simple item record

dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Al, Pepijn
dc.contributor.author: Visser, Lara
dc.date.accessioned: 2025-11-02T00:02:06Z
dc.date.available: 2025-11-02T00:02:06Z
dc.date.issued: 2025
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/50647
dc.description.abstract: This thesis examines whether artificial systems can genuinely possess moral agency and what this possibility means for human exceptionalism. It asks: Can moral agency remain a foundation for human exceptionalism in light of advancing artificial intelligence? The thesis argues that moral agency is not an exclusively human trait but an algorithmic process that artificial systems could, in principle, instantiate. After establishing a working definition of moral agency as the ability to recognize and understand moral reasons, deliberate upon them, and act accordingly with sufficient autonomy to be held responsible for the outcome, it reconstructs key objections to machine moral agency. Drawing on William Hasselberger's Ethics Beyond Computation (2019) and Robert Sparrow's Why Machines Cannot Be Moral (2021), the thesis frames these objections as the input problem (whether machines could perceive and interpret morally salient features of a situation) and the output problem (whether their actions could bear genuine moral significance). In response, the thesis develops the Computational Identity Theory of Moral Agency (CITMA), which integrates mind–brain identity theory with the computational theory of mind. CITMA holds that moral reasoning and decision-making are algorithmic in nature; therefore, if such processes can be instantiated by artificial systems, moral agency cannot be restricted to humans and other biological entities. The final chapter demonstrates that this conclusion exposes the historical and ethical fragility of human exceptionalism. Across history, boundaries drawn to mark human uniqueness have proven porous and unstable. Traits once taken as uniquely human (rationality, language, tool use, creativity, and moral capacity) have repeatedly been reassigned, eroded, or shown to exist in other beings to varying degrees.
The challenge of artificial moral agents continues this pattern, compelling us to reconsider how responsibility, accountability, and moral status are distributed. Human exceptionalism has always been fragile. Artificial moral agents force us to confront that fragility once more. If so much of our ethical self-understanding rests on the belief that humans are special, then the challenge posed by AI is not only about machines—it is about us. Who are we, if not special?
dc.description.sponsorship: Utrecht University
dc.language.iso: EN
dc.subject: This thesis examines whether artificial systems can genuinely possess moral agency and what this means for human exceptionalism. It introduces the Computational Identity Theory of Moral Agency (CITMA), which combines mind–brain identity theory with the computational theory of mind to argue that moral cognition is algorithmic. If so, artificial agents could instantiate genuine moral capacities, challenging the long-standing belief that moral agency is uniquely human.
dc.title: You are not Special - The Challenge of Artificial Moral Agents to Human Exceptionalism
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.courseuu: Applied Ethics
dc.thesis.id: 55089

