Show simple item record

dc.rights.license           CC-BY-NC-ND
dc.contributor.advisor      De, M.
dc.contributor.advisor      Rin, B. G.
dc.contributor.author       Mohammad, S.H.A.
dc.date.accessioned         2021-09-02T18:00:30Z
dc.date.available           2021-09-02T18:00:30Z
dc.date.issued              2021
dc.identifier.uri           https://studenttheses.uu.nl/handle/20.500.12932/1283
dc.description.abstract     Nowadays, algorithms play a large part in decision-making procedures, but they affect marginalized groups negatively when their decisions are driven by algorithmic social bias. An important way to approach this problem is to investigate which notion of fairness marginalized groups need in order to be treated justly, and how that notion can be used to find suitable mitigation measures. This thesis investigates how algorithmic bias in automated decision-making algorithms can be mitigated to prevent discriminatory decisions. In this context, algorithmic bias roughly refers to the concern that an algorithm is not merely a neutral transformer of data or extractor of information. Algorithmic bias has many sources, and they emerge at different stages of machine learning.
dc.description.sponsorship  Utrecht University
dc.format.extent            446811
dc.format.mimetype          application/pdf
dc.language.iso             en
dc.title                    Bias in, bias out: A Study on Social Bias in Automated Decision-Making Algorithms
dc.type.content             Bachelor Thesis
dc.rights.accessrights      Open Access
dc.subject.keywords         bias, discrimination, social, fairness, ethics, artificial intelligence, automated decision-making
dc.subject.courseuu         Kunstmatige Intelligentie

