Show simple item record

dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Klein, Dominik
dc.contributor.author: Biekart, Marijn
dc.date.accessioned: 2023-07-25T00:02:26Z
dc.date.available: 2023-07-25T00:02:26Z
dc.date.issued: 2023
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/44315
dc.description.abstract: Algorithmic decision making (ADM) is used to assist decisions that have far-reaching consequences for individuals and society as a whole, for example in hiring and criminal law. As such, it is important that ADM is fair. It is commonly believed that ADM is objective and neutral, and that this supports its fairness. The first aim of this thesis is to show that ADM is not, and cannot be, objective or neutral. Instead, ADM necessarily contains value judgments. In order to prove the existence of values in ADM, this thesis uses arguments from the philosophy of science that show that science is not value-free. The parallels and differences between science and ADM indicate that values play an even bigger role in ADM than in science. The second aim of this thesis is to propose a taxonomy of values in ADM, which indicates where values play a role, which values play a role, and how they play a role. The taxonomy has two main purposes: (1) it can be used by developers and regulators to recognize the values that play a role in ADM systems, ideally resulting in fewer unintended outcomes; (2) it can be used to regulate ADM by informing public sector policies and laws. The practical use of the taxonomy is demonstrated by a case study of Rotterdam’s welfare fraud detection system, which uses risk profiles to indicate which recipients have a higher risk of committing fraud. This thesis provides a deeper understanding of the relation between values, bias, and unfairness in ADM. By acknowledging that ADM cannot be value-neutral, this thesis shifts the focus from omitting bias to managing bias, in an effort to make ADM fairer for everyone.
dc.description.sponsorship: Utrecht University
dc.language.iso: EN
dc.subject: Algorithmic decision making (ADM) is used to assist decisions that have far-reaching individual or societal implications. Because of its data-driven approach, it is often believed that ADM, like science, is objective or value-neutral. The purpose of this thesis is twofold. First, using arguments from the philosophy of science, I show that ADM, like science, is not and cannot be value-neutral. Second, I design a taxonomy of values in ADM and demonstrate its use with a case study.
dc.title: Why Algorithmic Decision Making Is Not Value-Neutral: Proposing a taxonomy of values in algorithmic decision making
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.keywords: Algorithmic decision making; value judgments; fair AI; philosophy of AI; ethics; risk profiling
dc.subject.courseuu: Artificial Intelligence
dc.thesis.id: 20055

