
        Automatic Fairness Criteria and Fair Model Selection for Critical ML Tasks

        Master_Thesis_Selim_Buyuk.pdf (1.294Mb)
        Publication date
        2023
        Author
        Büyük, Selim
        Summary
        In recent years, Artificial Intelligence (AI) has developed rapidly to the point where it can aid, and sometimes completely replace, humans in decision making, thanks to its superior computation and information processing capabilities. However, using AI for decision-making tasks has not been without flaws: researchers warn that blindly trusting AI could have major adverse effects for humans. This is due to bias in AI, described as a prejudice or favoritism toward certain subjects, even when rationally unjustified. Examples range from university rankings to recidivism tests, where the use of AI resulted in damaging effects and the perpetuation of bias. As a consequence, legislators around the world have introduced new acts to ensure transparency, explainability, and fairness in AI, such as the recent EU AI Act and the GDPR. To support this, the field of Responsible AI has set out to make AI more transparent and fair. However, we identified a gap in the current state of the art in fairness assessment toolkits: researchers have urgently called for a methodology that assists users with fairness assessment, given its complexity and context dependency. That is why we created this toolkit, in which users are automatically guided by interactive questions toward the most suitable model and fairness criteria for a given task, all openly and freely available in JASP. The methodology was created by identifying characteristics of fairness measures, building a decision tree whose internal nodes consist of interactive questions, mutating this tree to generate candidate trees, and subsequently evaluating the candidates to select the best one. With this toolkit we hope to help the effort of making AI more transparent and fair.
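        The search procedure sketched in the summary — seed a decision tree of interactive questions, mutate it to generate candidates, score each candidate, keep the best — can be illustrated as follows. This is a minimal hypothetical sketch, not the thesis implementation: the question texts, criteria list, mutation rates, and scoring against expert-labelled cases are all illustrative assumptions.

        ```python
        import random

        # Illustrative placeholders, not the thesis's actual questions or criteria.
        QUESTIONS = [
            "Is the task punitive rather than assistive?",
            "Are base rates equal across groups?",
            "Is the ground-truth label reliable?",
        ]
        CRITERIA = ["demographic parity", "equalized odds", "predictive parity"]


        class Node:
            """Internal node: a yes/no question; leaf: a recommended criterion."""
            def __init__(self, question=None, yes=None, no=None, criterion=None):
                self.question = question
                self.yes, self.no = yes, no
                self.criterion = criterion

            def is_leaf(self):
                return self.criterion is not None


        def mutate(node, rng):
            """Return a copy of the tree with one randomly perturbed node."""
            if node.is_leaf():
                return Node(criterion=rng.choice(CRITERIA))
            if rng.random() < 0.3:  # occasionally swap the question asked here
                return Node(rng.choice(QUESTIONS), node.yes, node.no)
            if rng.random() < 0.5:
                return Node(node.question, mutate(node.yes, rng), node.no)
            return Node(node.question, node.yes, mutate(node.no, rng))


        def score(tree, cases):
            """Fraction of labelled cases the tree routes to the expected criterion."""
            hits = 0
            for answers, expected in cases:
                node = tree
                while not node.is_leaf():
                    node = node.yes if answers.get(node.question, False) else node.no
                hits += node.criterion == expected
            return hits / len(cases)


        def search(seed_tree, cases, generations=50, rng=None):
            """Hill-climbing over mutated candidate trees; keep the best scorer."""
            rng = rng or random.Random(0)
            best, best_score = seed_tree, score(seed_tree, cases)
            for _ in range(generations):
                candidate = mutate(best, rng)
                s = score(candidate, cases)
                if s >= best_score:
                    best, best_score = candidate, s
            return best, best_score
        ```

        The sketch uses simple hill climbing; an evaluation against expert-labelled cases like `score` above stands in for whatever evaluation procedure the thesis actually uses to compare candidate trees.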
        URI
        https://studenttheses.uu.nl/handle/20.500.12932/44225
        Collections
        • Theses