Automatic Fairness Criteria and Fair Model Selection for Critical ML Tasks
In recent years, Artificial Intelligence (AI) has developed rapidly and can now aid, and sometimes completely replace, humans in decision making, owing to its superior computation and information-processing capabilities. However, using AI for decision-making tasks has not been without flaws: researchers warn that blindly trusting AI can have major adverse effects on humans. This is due to bias in AI, described as prejudice or favoritism toward certain subjects, even when rationally unjustified. Examples range from university rankings to recidivism tests, where using AI resulted in damaging effects and the perpetuation of bias. As a consequence, legislators around the world have introduced new acts to ensure transparency, explainability, and fairness in AI, such as the recent EU AI Act and the GDPR. To support this, the field of Responsible AI has set out to make AI more transparent and fair. However, we identified a gap in the current state of the art in fairness-assessment toolkits: researchers have urgently called for a methodology that assists users with fairness assessment, given its complexity and context dependency. We therefore created such a toolkit, in which interactive questions automatically guide users toward the most suitable model and fairness criteria for a given task, openly and freely available in JASP. The methodology was created by identifying characteristics of fairness measures, creating a decision tree whose internal nodes consist of interactive questions, mutating this tree to generate candidate trees, and subsequently evaluating the candidates to select the best one. With this toolkit we hope to help the effort of making AI more transparent and fair.
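The mutate-evaluate-select loop at the heart of this methodology can be illustrated with a minimal sketch. This is not the toolkit's actual implementation: for simplicity it uses a chain of questions rather than a full tree, and the question texts, resolution probabilities, and scoring rule (expected number of questions a user must answer) are all illustrative assumptions.

```python
import random

random.seed(0)

# Hypothetical interactive questions, each with an assumed probability
# that answering it immediately determines a fairness criterion
# (i.e., ends the interaction). These values are illustrative only.
QUESTIONS = {
    "Are punitive decisions involved?": 0.6,
    "Do the groups have equal base rates?": 0.3,
    "Are false positives costlier than false negatives?": 0.5,
}

def expected_questions(order):
    """Score a candidate: expected number of questions asked along the
    chain, where each question resolves the session with its probability."""
    expected, reach_prob = 0.0, 1.0
    for q in order:
        expected += reach_prob           # this question gets asked
        reach_prob *= 1 - QUESTIONS[q]   # chance we continue to the next
    return expected

def mutate(order):
    """Generate a candidate ordering by swapping two random positions."""
    i, j = random.sample(range(len(order)), 2)
    cand = list(order)
    cand[i], cand[j] = cand[j], cand[i]
    return cand

# Mutate, evaluate, and keep the candidate with the lowest expected cost.
best = list(QUESTIONS)
for _ in range(20):
    cand = mutate(best)
    if expected_questions(cand) < expected_questions(best):
        best = cand

print(best)  # best question ordering found by the search
```

Under this scoring rule, the search tends to move the most "decisive" questions (those most likely to resolve the choice of criterion) toward the front, mirroring how candidate trees are generated by mutation and the best-scoring one is selected.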