Show simple item record

dc.rights.license	CC-BY-NC-ND
dc.contributor.advisor	Kaya, Heysem
dc.contributor.author	Büyük, Selim
dc.date.accessioned	2023-07-20T00:02:17Z
dc.date.available	2023-07-20T00:02:17Z
dc.date.issued	2023
dc.identifier.uri	https://studenttheses.uu.nl/handle/20.500.12932/44225
dc.description.abstract	In recent years, Artificial Intelligence (AI) has seen rapid development and can now aid, and sometimes completely replace, humans in decision making due to its superior computation and information processing skills. However, using AI in decision-making tasks has not been without flaws, as researchers warn that blindly trusting AI could have major adverse effects for humans. This is due to bias in AI, described as prejudice or favoritism toward certain subjects, even when rationally unjustified. Examples range from university rankings to recidivism tests, where using AI resulted in damaging effects and the perpetuation of bias. As a consequence, legislators around the world have introduced new acts to ensure transparency, explainability, and fairness in AI, such as the recent EU AI Act and the GDPR. To support this, the field of Responsible AI has set out to make AI more transparent and fair. However, we saw a gap in the current state of the art in fairness assessment toolkits: researchers have urgently called for a methodology that assists users with fairness assessment, given its complexity and context dependency. That is why we created this toolkit, in which users are automatically guided by interactive questions toward the most suitable model and fairness criteria for a given task, all openly and freely available in JASP. This methodology was created by identifying characteristics of fairness measures, creating a decision tree whose internal nodes consist of interactive questions, mutating this tree to generate candidate trees, and subsequently evaluating the trees to select the best one. With this toolkit we hope to help the effort of making AI more transparent and fair.
dc.description.sponsorship	Utrecht University
dc.language.iso	EN
dc.subject	A methodology for automating fairness criteria and model selection via an interactive interface in an open source environment called JASP.
dc.title	Automatic Fairness Criteria and Fair Model Selection for Critical ML Tasks
dc.type.content	Master Thesis
dc.rights.accessrights	Open Access
dc.subject.keywords	Fairness Criteria; Model Selection; Responsible AI; JASP; Toolkit
dc.subject.courseuu	Artificial Intelligence
dc.thesis.id	19499
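The abstract describes a decision tree whose internal nodes are interactive questions and whose leaves recommend a fairness criterion. The following is a minimal illustrative sketch of that idea, not the thesis implementation: the question wording, the specific criteria, and the tree shape are all hypothetical placeholders.

```python
# Illustrative sketch of an interactive fairness-criterion decision tree.
# The questions and the criterion mapping below are hypothetical; the actual
# toolkit derives its questions from identified characteristics of fairness
# measures and evaluates mutated candidate trees to pick the best one.
from dataclasses import dataclass
from typing import Union

@dataclass
class Leaf:
    criterion: str  # recommended fairness criterion

@dataclass
class Node:
    question: str              # yes/no question shown to the user
    yes: Union["Node", Leaf]   # subtree followed on a "yes" answer
    no: Union["Node", Leaf]    # subtree followed on a "no" answer

# Hypothetical example tree.
TREE = Node(
    question="Are the observed labels trustworthy (low historical bias)?",
    yes=Node(
        question="Do both error types (FP and FN) matter across groups?",
        yes=Leaf("Equalized odds"),
        no=Leaf("Equal opportunity"),
    ),
    no=Leaf("Demographic parity"),
)

def select_criterion(node, answers):
    """Walk the tree using a sequence of boolean answers; return the leaf."""
    for ans in answers:
        if isinstance(node, Leaf):
            break
        node = node.yes if ans else node.no
    assert isinstance(node, Leaf), "not enough answers to reach a leaf"
    return node.criterion
```

In an interactive setting such as the JASP interface, each `question` would be posed to the user in turn instead of being supplied up front as a list of answers.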

