Why Algorithmic Decision Making Is Not Value-Neutral: Proposing a Taxonomy of Values in Algorithmic Decision Making
Summary
Algorithmic decision making (ADM) is used to assist decisions that have far-reaching consequences for individuals and society as a whole, for example in hiring and criminal law. As such, it is important that ADM is fair. It is commonly believed that ADM is objective and neutral, and that this supports its fairness. The first aim of this thesis is to show that ADM is not, and cannot be, objective or neutral. Instead, ADM necessarily contains value judgments. To prove the existence of values in ADM, this thesis draws on arguments from the philosophy of science showing that science is not value-free. The parallels and differences between science and ADM indicate that values play an even bigger role in ADM than in science. The second aim of this thesis is to propose a taxonomy of values in ADM, which indicates where values play a role, which values play a role, and how they play a role. The taxonomy serves two main purposes: (1) it can be used by developers and regulators to recognize the values that play a role in ADM systems, ideally resulting in fewer unintended outcomes; (2) it can be used to regulate ADM by informing public sector policies and laws. The practical use of the taxonomy is demonstrated by a case study of Rotterdam’s welfare fraud detection system, which uses risk profiles to indicate which welfare recipients have a higher risk of committing fraud. This thesis provides a deeper understanding of the relation between values, bias, and unfairness in ADM. By acknowledging that ADM cannot be value-neutral, this thesis shifts the focus from eliminating bias to managing bias, in an effort to make ADM fairer for everyone.