Bias in, bias out: A Study on Social Bias in Automated Decision-Making Algorithms
Summary
Algorithms now play a large part in decision-making procedures, but they
affect marginalized groups negatively when their decisions are driven by
algorithmic social bias. An important way to approach this problem is to
investigate what notion of fairness marginalized groups need in order to be
treated justly, and how to use this notion to find proper mitigation
measures. This
thesis aims to determine how bias in automated decision-making
algorithms can be mitigated to prevent discriminatory decisions. In this
context, algorithmic bias roughly refers to the concern that an algorithm is
not merely a neutral transformer of data or extractor of information. There
are many sources of algorithmic bias, and they emerge at different stages
of the machine-learning pipeline.
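As a hedged illustration of what a formal "notion of fairness" can look like in practice (not a method from this thesis): one widely used criterion is demographic parity, which asks that the rate of positive decisions be similar across groups. The sketch below, with hypothetical data and function names, audits a set of binary decisions for a parity gap.

```python
def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between two groups.

    decisions: parallel list of 0/1 outcomes.
    groups: parallel list of group labels (exactly two distinct labels).
    """
    rates = {}
    for g in set(groups):
        members = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    rate_a, rate_b = rates.values()
    return abs(rate_a - rate_b)

# Hypothetical example: group "A" receives a positive decision 3 times
# out of 4, group "B" only 1 time out of 4.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # -> 0.5
```

A gap of 0.5 here signals a large disparity; which threshold counts as unjust, and whether demographic parity is even the right criterion for a given marginalized group, is exactly the kind of question the thesis investigates.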