
dc.rights.license           CC-BY-NC-ND
dc.contributor.advisor      Prakken, H.
dc.contributor.author       Ratsma, R.J.
dc.date.accessioned         2020-09-21T18:00:15Z
dc.date.available           2020-09-21T18:00:15Z
dc.date.issued              2020
dc.identifier.uri           https://studenttheses.uu.nl/handle/20.500.12932/37695
dc.description.abstract     In search of the most accurate and stable predictors, machine-learning algorithms have been introduced that are so difficult to interpret that we metaphorically call them ‘black boxes’. Their lack of interpretability hinders their applicability in relevant domains, where it is often desired or even required to explain decisions. Recent work proposes case-based argumentation as a tool for justifying the predictions of black-box models. Case-based argumentation is a form of reasoning that draws analogies between new and previous cases. It fits naturally with machine learning, as input data can directly be used as cases. In this study, we bring the proposed justification system into practice. Based on the evaluation, we suggest a new argumentation framework. Besides justification, we examine the possibilities for replacing or monitoring black-box prediction models using case-based argumentation systems. The results of a user experiment hint at the suitability of a monitor approach.
dc.description.sponsorship  Utrecht University
dc.format.extent            1569920
dc.format.mimetype          application/pdf
dc.language.iso             en
dc.title                    Unboxing the Black Box using Case-Based Argumentation
dc.type.content             Master Thesis
dc.rights.accessrights      Open Access
dc.subject.keywords         Black box; Interpretability; Explainable AI; Case-Based Argumentation
dc.subject.courseuu         Artificial Intelligence

