dc.rights.license | CC-BY-NC-ND | |
dc.contributor.advisor | Dirksen, Dr. S | |
dc.contributor.author | Dool, W.V.S.O. van den | |
dc.date.accessioned | 2020-02-20T19:04:16Z | |
dc.date.available | 2020-02-20T19:04:16Z | |
dc.date.issued | 2020 | |
dc.identifier.uri | https://studenttheses.uu.nl/handle/20.500.12932/34937 | |
dc.description.abstract | Although successful in terms of prediction accuracy, artificial neural networks have a notable drawback: the lack of explainability of their outcomes.
We propose a mathematical definition for the concept of an explanation in the context of understanding deep learning decisions. We put forward the Explanatory Vector Decomposition (EVD) method for computing such explanations, based on optimizing explanation strength, defined as the difference in model output probability caused by a movement in input space, divided by the length of the movement vector.
We also propose a technique for quantitatively comparing existing explainability methods that compute feature importance, using this new definition of explanation strength.
Applying this technique to LIME and RDE indicates that the latter achieves a higher average explanation strength, while the EVD method outperforms both by this measure. | |
dc.description.sponsorship | Utrecht University | |
dc.language.iso | en | |
dc.title | Understanding Deep Learning Decisions: the Explanatory Vector Decomposition (EVD) method | |
dc.type.content | Master Thesis | |
dc.rights.accessrights | Open Access | |
dc.subject.keywords | Explainable AI, Deep Learning, Neural Networks, Bayesian Neural Networks | |
dc.subject.courseuu | Mathematical Sciences | |