
dc.rights.license    CC-BY-NC-ND
dc.contributor.advisor    Brinkhuis, M.J.S.
dc.contributor.advisor    Werf, J.M.E.M. van der
dc.contributor.author    Robeer, M.J.
dc.date.accessioned    2018-08-27T17:01:39Z
dc.date.available    2018-08-27T17:01:39Z
dc.date.issued    2018
dc.identifier.uri    https://studenttheses.uu.nl/handle/20.500.12932/30669
dc.description.abstract    Introduction. Recent advances in Interpretable Machine Learning (iML) and Explainable Artificial Intelligence (XAI) have shown promising approaches that are able to provide human-understandable explanations. However, these approaches have also been criticized for disregarding human behavior in explanation. When humans ask for an explanation, they generally contrast the given output against an output of interest. We propose to use this human tendency to ask questions like 'Why this output (the fact) instead of the other (the foil)?' as a natural way of limiting an explanation to its key causes. Method. In this study we present an end-to-end approach for extracting contrastive explanations for machine learning (ML). First, we define how to apply contrastive explanation to ML. Next, in a systematic literature review we study 84 iML methods to provide an overview of approaches for enhancing interpretability in machine learning and to identify the method parts most suitable for contrastive explanations. We develop Foil Trees: a model-agnostic approach to extracting explanations that identify the set of rules that caused the actual outcome (fact) to be predicted instead of the other (foil). Results. Quantitative validation showed that Foil Trees accurately mimic the decision boundaries of the models they aim to explain (94% fidelity), generalize well to unseen data (88% accuracy), provide 78% shorter explanations than their non-contrastive counterparts (mean length of 1.19 versus 5.37), and do all of this in real time (60 ms on average per explanation). Moreover, we conducted a user experiment with 121 participants to establish how contrastive and non-contrastive explanations are perceived in terms of general preference, transparency and trust. We found that contrastive explanations are preferred over non-contrastive explanations in terms of understandability, informativeness of contents and alignment with the participants' own decision-making. This preference led to an increased general preference and willingness to act upon the decision. Discussion. These results suggest that it is feasible to extract generalizable, objectively transparent contrastive explanations for ML, and that contrastive explanations provide an intuitive means of creating informative, minimal-length, human-understandable explanations that are preferred and more persuasive.
dc.description.sponsorship    Utrecht University
dc.format.extent    5652475
dc.format.mimetype    application/pdf
dc.language.iso    en
dc.title    Contrastive Explanation for Machine Learning
dc.type.content    Master Thesis
dc.rights.accessrights    Open Access
dc.subject.keywords    Interpretable Machine Learning (iML); Explainable Artificial Intelligence (XAI); Contrastive Explanation; Decision Trees; Model-Agnostic; Human Interpretable Machine Learning; Foil
dc.subject.courseuu    Business Informatics
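
The abstract above describes Foil Trees only at a high level: a local surrogate decision tree is used to contrast the rules that lead to the predicted outcome (fact) with the rules that would lead to the outcome of interest (foil). The sketch below illustrates that general idea with scikit-learn; it is a minimal, assumption-laden illustration rather than the thesis implementation, and the function name foil_tree_explanation, the Gaussian sampling scheme and all parameter defaults are invented for this example.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def foil_tree_explanation(predict, x, foil_label, n_samples=2000, scale=1.0, seed=0):
    """Illustrative sketch: rules that would have to hold for a local surrogate
    tree to predict the foil instead of the fact for instance x (1-D array)."""
    rng = np.random.default_rng(seed)
    # 1. Sample a local neighbourhood around x and label it with the black-box model.
    X = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    y = (np.asarray(predict(X)) == foil_label).astype(int)   # foil vs. rest
    # 2. Fit a shallow surrogate decision tree on the labelled neighbourhood.
    surrogate = DecisionTreeClassifier(max_depth=4, random_state=seed).fit(X, y)
    t = surrogate.tree_
    # 3. Enumerate every root-to-leaf path together with the rule taken at each split.
    paths = []
    def walk(node, path):
        if t.children_left[node] == -1:                       # reached a leaf
            paths.append((node, path))
            return
        f, thr = t.feature[node], t.threshold[node]
        walk(t.children_left[node], path + [(node, f, thr, "<=")])
        walk(t.children_right[node], path + [(node, f, thr, ">")])
    walk(0, [])
    # 4. Locate the "fact" leaf (the leaf that contains x) and its path.
    fact_leaf = surrogate.apply(x.reshape(1, -1))[0]
    fact_path = next(p for leaf, p in paths if leaf == fact_leaf)
    # 5. Among leaves predicting the foil, pick the one sharing the longest
    #    common path prefix with the fact leaf (a simple notion of "closest").
    def shared(p, q):
        n = 0
        for a, b in zip(p, q):
            if a != b:
                break
            n += 1
        return n
    foil_leaves = [(leaf, p) for leaf, p in paths
                   if leaf != fact_leaf and np.argmax(t.value[leaf]) == 1]
    if not foil_leaves:
        return []                                             # surrogate never predicts the foil
    _, foil_path = max(foil_leaves, key=lambda lp: shared(fact_path, lp[1]))
    # 6. The contrastive explanation: the rules on the foil path not shared with the fact path.
    k = shared(fact_path, foil_path)
    return [f"feature[{f}] {op} {thr:.3f}" for _, f, thr, op in foil_path[k:]]

To try it, one would pass the black-box model's prediction function, a single instance x as a 1-D NumPy array, and the foil class label; the returned rules read as the conditions that would additionally have to hold for the local surrogate to predict the foil instead of the fact. The choice of neighbourhood sampling, tree depth and foil-leaf selection here are deliberate simplifications made for readability.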

