Show simple item record

dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Qahtan, Hakim
dc.contributor.author: Woudstra, Fenna
dc.date.accessioned: 2022-09-17T00:00:42Z
dc.date.available: 2022-09-17T00:00:42Z
dc.date.issued: 2022
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/42820
dc.description.abstract: Machine learning (ML) algorithms are widely used in decision-making tasks. These decisions can have a significant impact on people's lives. It is therefore important that the outcomes of ML models are fair and do not lead to discrimination. Unfair outcomes can result from societal biases reflected in the assigned class labels, biases that arise during data collection and processing, or the design choices made within an algorithm. Over the last decade, fairness in machine learning has become an important area of research and has produced many bias mitigation algorithms. These algorithms have been shown to perform differently on different datasets; data profiling can therefore give a better understanding of the effectiveness of various bias mitigation algorithms. In this thesis, we analyzed sixteen bias mitigation algorithms and identified several characteristics of the data that help decide which algorithm should be used for a given dataset to improve fairness. Based on that, we developed a Fair Algorithm Selection Tool (FairAST), which inspects the data and recommends the optimal algorithm to improve a given fairness measure. The experimental evaluation shows that these recommendations are, to a great extent, in line with the best performing algorithms found through exhaustive search.
dc.description.sponsorship: Utrecht University
dc.language.iso: EN
dc.subject: The topic of fairness in machine learning has become an important area of research that has led to many bias mitigation algorithms. We analyzed sixteen bias mitigation algorithms and identified several characteristics of the data that help decide which algorithm should be used for a given dataset to improve fairness. Based on that, we developed a Fair Algorithm Selection Tool (FairAST), which inspects the data and recommends the optimal algorithm to improve fairness.
dc.title: Algorithmic Fairness: which algorithm suits my purpose?
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.keywords: algorithmic fairness; machine learning; bias; bias mitigation
dc.subject.courseuu: Artificial Intelligence
dc.thesis.id: 10746

