Everyday Argumentative Explanations for AI
Summary
The research field of explainable artificial intelligence (XAI) has seen an upswing of methods aimed at explaining opaque artificial intelligence (AI) systems and their decisions. A recent, promising approach involves the use of formal argumentation to explain machine learning (ML) applications. In this thesis we investigate this approach, aiming to understand the value of argumentation for XAI. In particular, we explore how well argumentation can produce everyday explanations. Everyday explanations describe how humans explain in day-to-day life and are claimed to be important for explaining the decisions of AI systems to end-users. First, we show conceptually how argumentative explanations can be posed as everyday explanations. We then demonstrate that current argumentative explanation methods compute explanations that already exhibit some, but not all, properties of everyday explanations. Finally, we present everyday argumentative explanations (EVAX), a model-agnostic method that computes local explanations for ML models. These explanations are adjustable in size and retain high fidelity scores (an average of 0.95) across four different datasets and four different ML models. In addition, the explanations incorporate the main characteristics of everyday explanations and help achieve the objectives of XAI.