Show simple item record

dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Dirksen, Dr. S
dc.contributor.author: Dool, W.V.S.O. van den
dc.date.accessioned: 2020-02-20T19:04:16Z
dc.date.available: 2020-02-20T19:04:16Z
dc.date.issued: 2020
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/34937
dc.description.abstract: Although successful in terms of prediction accuracy, artificial neural networks have a notable drawback: the lack of explainability of their outcomes. We propose a mathematical definition for the concept of an explanation in the context of understanding deep learning decisions. We put forward the Explanatory Vector Decomposition (EVD) method for computing such explanations, based on optimizing explanation strength. This is defined as the difference in model output probability caused by a movement in input space, divided by the vector length of this movement. We also propose a technique for quantitatively comparing existing explainability methods that compute feature importance, using this new definition of explanation strength. Applying this technique to LIME and RDE points to a higher average explanation strength achieved by the latter method, while the EVD method outperforms both according to this measure.
dc.description.sponsorship: Utrecht University
dc.language.iso: en
dc.title: Understanding Deep Learning Decisions: the Explanatory Vector Decomposition (EVD) method
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.keywords: Explainable AI, Deep Learning, Neural Networks, Bayesian Neural Networks
dc.subject.courseuu: Mathematical Sciences
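The abstract defines explanation strength as the change in model output probability caused by a movement in input space, divided by the length of that movement vector. A minimal sketch of that quantity (the function names, the toy logistic model, and the example inputs are all illustrative assumptions, not the thesis's implementation):

```python
import numpy as np

def explanation_strength(model_prob, x, v):
    # Explanation strength per the abstract's definition: the difference in
    # model output probability caused by moving from x to x + v, divided by
    # the length of the movement vector v. `model_prob` is a hypothetical
    # callable returning a scalar class probability for an input.
    delta = model_prob(x + v) - model_prob(x)
    return delta / np.linalg.norm(v)

# Toy stand-in model: a logistic probability over a 2-D input.
def toy_prob(x):
    return 1.0 / (1.0 + np.exp(-(x[0] + 2.0 * x[1])))

x = np.array([0.0, 0.0])
v = np.array([0.0, 0.5])
s = explanation_strength(toy_prob, x, v)  # strength of the movement v at x
```

Under this definition, short movements that change the output probability a lot score highly, which is what the EVD method optimizes for.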

