Show simple item record

dc.rights.license	CC-BY-NC-ND
dc.contributor.advisor	Miltenburg, Niels van
dc.contributor.author	Bouwman, Bart
dc.date.accessioned	2022-12-06T01:00:55Z
dc.date.available	2022-12-06T01:00:55Z
dc.date.issued	2022
dc.identifier.uri	https://studenttheses.uu.nl/handle/20.500.12932/43279
dc.description.abstract	The problem of the responsibility gap, as described by Andreas Matthias in 2004, shows that our legal and moral responsibility practices are challenged in the context of Artificial Intelligence. In a nutshell, the responsibility gap concerns the following question: when an AI causes harm, who is responsible? The programmer of an AI is not in direct control and is often unable to predict the AI's behavior exactly, as a fundamental aspect of AI is its ability to learn and act autonomously. This means that, whenever an AI causes harm, we are left without a clear target or agent to whom moral responsibility can be ascribed. Furthermore, the people responsible for creating the AI may claim ignorance by saying 'We did not know'. This is called the 'Epistemological Excuse', and it is a contributing factor in how the responsibility gap is created. AI creates a fundamental opacity that greatly increases the difficulty of locating moral responsibility. In an essay titled 'Moral Responsibility in the Age of Bureaucracy', David Luban, Alan Strudler, and David Wasserman discuss how opacity is also inherent in large bureaucratic organizations. The many departments and layers of an organization allow the people working in it to use the epistemological excuse. Due to the opacity that this complex structure brings, it is hard to point definitively to who was responsible for any harm the organization causes. This issue displays parallels with AI as well: AI is also fundamentally opaque, which is precisely what prevents us from locating moral responsibility. Luban et al. propose to solve this issue by introducing 'preemptive duties'. What this means, in a nutshell, is that people in an organization are expected to carry out certain obligations and tasks to minimize the opacity present, thus reducing the possibility of people using the epistemological excuse. They are morally obligated to minimize the moral opacity present because they know that they are part of a fundamentally opaque enterprise. In this way they can still be ascribed moral responsibility if it can be shown that they failed to fulfill their preemptive duties. Remaining ignorant when one knows these preemptive duties exist then becomes culpable ignorance, which allows us to morally blame people for failing to fulfill those duties. This expansion of culpable ignorance is not enough in the context of AI, however. AI cannot take responsibility itself, as it is an abnormal moral agent, yet it can cause harm. Its programmers may have fulfilled all of their preemptive duties and would thus, ordinarily, be rightfully excused from moral responsibility, yet it is precisely this that creates the responsibility gap: the opacity in AI remains, because it is inherent in AI. In order to save moral intelligibility, we should thus want those responsible for creating and manufacturing AI to take responsibility themselves, even though they were not directly responsible for the harm the AI caused. This may take the form of a responsive duty as a moral obligation, yet whether the people responsible for creating the AI take responsibility remains a question of their virtuousness. This is because forsaking the moral obligation to take responsibility does not, in turn, allow us to locate blame for the harm caused by an AI; it only allows us to blame the programmer for their apparent lack of moral character. This thesis has shown that responsibility-as-blameworthiness falls short in the context of AI, and we must thus turn to responsibility-as-virtue.
dc.description.sponsorship	Utrecht University
dc.language.iso	EN
dc.subject	In an essay from 2004, Andreas Matthias described the problem of the responsibility gap in Artificial Intelligence: because AI learns and acts autonomously, who is at fault when the AI causes harm? The responsibility gap in AI challenges our current moral responsibility practices. This thesis uses an idea by Milan Kundera to argue that the people responsible for building AI should feel the obligation or need to take moral responsibility themselves.
dc.title	Closing the responsibility gap in AI: Can Kundera’s idea of taking responsibility address the responsibility gap?
dc.type.content	Master Thesis
dc.rights.accessrights	Open Access
dc.subject.keywords	AI;Artificial Intelligence;Liability;Responsibility Gap;Moral responsibility;Ethics;Kundera;Epistemological Excuse;Opacity;Ignorance
dc.subject.courseuu	History and Philosophy of Science
dc.thesis.id	12426

