Relevant Explanations in Formal Argumentation, an Empirical Study
Summary
The use of automated decision-making is becoming increasingly prevalent. Users of systems that make these decisions must be able to assess a system’s biases and place trust in it. Providing explanations for a system’s decisions is one way to achieve this, and doing so is the focus of the field of Explainable Artificial Intelligence (XAI). One technique used within XAI is formal argumentation: the reasoning by which an algorithm arrives at a specific decision can be represented as a formal argumentation structure. However, how such a structure can be translated into human-friendly explanations remains an open question. One concept formalized for explanations in argumentation that takes properties of human explanations into account is ‘relevance’. Informally, an argument is relevant to another argument if there is a relation between the two, for example, by attacking or defending it.
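To make the informal definition of relevance concrete, the following sketch models a toy Dung-style argumentation framework as a set of attack pairs and computes the direct attackers and direct defenders of an argument (a defender attacks an attacker). The argument names and the framework itself are hypothetical illustrations, not examples from the thesis.

```python
# Toy Dung-style argumentation framework: a set of (attacker, target)
# pairs. The arguments "a".."d" are hypothetical, for illustration only.
attacks = {("b", "a"), ("c", "b"), ("d", "c")}

def attackers(arg):
    """Arguments that directly attack `arg`."""
    return {x for (x, y) in attacks if y == arg}

def direct_defenders(arg):
    """Arguments that attack an attacker of `arg`,
    i.e. arguments that directly defend `arg`."""
    return {x for att in attackers(arg) for x in attackers(att)}

# "c" attacks "b", and "b" attacks "a", so "c" is a direct defender
# of "a" and is (informally) relevant to "a".
print(attackers("a"))          # {'b'}
print(direct_defenders("a"))   # {'c'}
```

In this reading, both attackers and defenders stand in a relation to the argument in question and are therefore relevant to it; a relevance-based explanation for accepting "a" could cite its direct defender "c".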
In this thesis, the concept of relevance was tested empirically by comparing explanations in formal argumentation based on relevance with explanations provided by participants. One hundred and twenty-seven participants provided explanations for scenarios based on two different types of relevance. The results suggest that relevance in argumentation aligns with the explanations participants selected. Participants preferred small explanations consisting of direct defenders: arguments that attack the attacker of an argument. However, further investigation is needed to determine whether the difficulty of the task affected these results. Future work could build on the current work by extending to non-acceptance and non-extension-based explanations and by investigating differences in explanation behaviour based on participants’ prior knowledge and the goals of explanation.