dc.rights.license | CC-BY-NC-ND | |
dc.contributor.advisor | Qahtan, Hakim | |
dc.contributor.author | Woudstra, Fenna | |
dc.date.accessioned | 2022-09-17T00:00:42Z | |
dc.date.available | 2022-09-17T00:00:42Z | |
dc.date.issued | 2022 | |
dc.identifier.uri | https://studenttheses.uu.nl/handle/20.500.12932/42820 | |
dc.description.abstract | Machine learning (ML) algorithms are widely used in decision-making tasks. These decisions can have a big impact on people's lives. Therefore, it is important that the outcomes of ML models are fair and do not lead to discrimination. Unfair outcomes can result from societal biases reflected in the assigned class labels, biases that arise during data collection and processing, or the design choices made within an algorithm. Over the last decade, the topic of fairness in machine learning has become an important area of research that has led to many bias mitigation algorithms. These algorithms have been shown to perform differently on different datasets. For this reason, data profiling can give a better understanding of the effectiveness of various bias mitigation algorithms. In this thesis, we analyzed sixteen bias mitigation algorithms and identified several characteristics of the data that help to decide which algorithm should be used for a given dataset to improve fairness. Based on that, we developed a Fair Algorithm Selection Tool (FairAST) that inspects the data and recommends the optimal algorithm to improve a given fairness measure. The experimental evaluation shows that, to a great extent, these recommendations are in line with the best-performing algorithms found through exhaustive search. | |
dc.description.sponsorship | Utrecht University | |
dc.language.iso | EN | |
dc.subject | The topic of fairness in machine learning has become an important area of research that has led to many bias mitigation algorithms. We analyzed sixteen bias mitigation algorithms and identified several characteristics of the data that help to decide which algorithm should be used for a given dataset to improve fairness. Based on that, we developed a Fair Algorithm Selection Tool (FairAST) that inspects the data and recommends the optimal algorithm to improve fairness. | |
dc.title | Algorithmic Fairness: which algorithm suits my purpose? | |
dc.type.content | Master Thesis | |
dc.rights.accessrights | Open Access | |
dc.subject.keywords | algorithmic fairness; machine learning; bias; bias mitigation | |
dc.subject.courseuu | Artificial Intelligence | |
dc.thesis.id | 10746 | |