Unboxing the Black Box Using Case-Based Argumentation
Summary
In search of the most accurate and stable predictors, machine-learning algorithms have been introduced that are so difficult to interpret that they are metaphorically called ‘black boxes’. This lack of interpretability hinders their adoption in domains where it is often desired, or even required, to explain decisions. Recent work proposes case-based argumentation as a tool for justifying the predictions of black-box models. Case-based argumentation is a form of reasoning that draws analogies between new and previous cases. It fits naturally with machine learning, as input data can be used directly as cases. In this study, we put the proposed justification system into practice. Based on our evaluation, we propose a new argumentation framework. Beyond justification, we also examine whether case-based argumentation systems can replace or monitor black-box prediction models. The results of a user experiment hint at the suitability of the monitoring approach.
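To make the core idea concrete, below is a minimal Python sketch of case-based justification under simple assumptions: cases are past labelled examples, and a new input is justified by analogy with its most similar precedent. The `Case`, `similarity`, and `justify` names and the agreement-based similarity measure are illustrative only, not the system evaluated in this study.

```python
from dataclasses import dataclass

@dataclass
class Case:
    features: dict[str, float]  # observed attributes of the case
    outcome: str                # known label of the past case

def similarity(a: dict[str, float], b: dict[str, float]) -> float:
    """Toy similarity: count the attributes on which two cases agree."""
    return sum(1 for k in a if k in b and a[k] == b[k])

def justify(new_features: dict[str, float], case_base: list[Case]) -> tuple[str, Case]:
    """Predict by analogy: return the outcome of the most similar past case,
    together with that case as the justifying precedent."""
    precedent = max(case_base, key=lambda c: similarity(new_features, c.features))
    return precedent.outcome, precedent

# The case base is simply existing labelled data, used directly as cases.
case_base = [
    Case({"income": 1.0, "debt": 0.0}, "approve"),
    Case({"income": 0.0, "debt": 1.0}, "reject"),
]
outcome, precedent = justify({"income": 1.0, "debt": 0.0}, case_base)
print(f"predicted {outcome!r} by analogy with precedent {precedent.features}")
```

In a monitoring setup of the kind examined here, one could, for instance, flag a black-box prediction for review whenever it disagrees with the outcome of the retrieved precedent.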