dc.rights.license | CC-BY-NC-ND | |
dc.contributor.advisor | Prakken, Henry | |
dc.contributor.author | Hoek, Saar | |
dc.date.accessioned | 2023-01-01T02:01:51Z | |
dc.date.available | 2023-01-01T02:01:51Z | |
dc.date.issued | 2023 | |
dc.identifier.uri | https://studenttheses.uu.nl/handle/20.500.12932/43398 | |
dc.description.abstract | Over the last few years, there has been increasing interest in the transparency of computational models, in particular systems referred to as ‘black-box models’. These models, usually built with machine learning methodologies such as deep neural networks, represent a scientific breakthrough, able to perform a myriad of tasks previously thought to be the prerogative of human capability. Such advancements have already been implemented in nearly every facet of life, ranging from small-scale personalised song recommendations to international crime identification efforts.
Although artificial intelligence has arguably brought considerable improvements, its employment has not been without risk or consequence. Fuelled perhaps by reports of scandals resulting from embedded bias and an increasing need for privacy, worries about the feasibility and reliability of black-box methods have grown, particularly regarding the opacity of these models. In addition to the societal need for transparency and checks and balances, the introduction of the General Data Protection Regulation as well as the impending AI Act has created a legal need to clarify the transparency requirements for artificial intelligence. An important aspect of this transparency is a model’s explainability: it should be clear whether a model is explainable, whether an explanation is required from the model, and what such an explanation may look like. Though there is an expanding body of academic work on explainability for artificial intelligence, little has been written about how an explanation can be defined in this context with respect to both legal and technological requirements, and how well current explainability methods adhere to such a definition.
This thesis sets out to create a novel, layered definition of explanations with respect to artificial intelligence and the General Data Protection Regulation, and to set this definition against current popular post-hoc explainability methods such as LIME and SHAP. This analysis is used to identify current gaps in both the formulation of the law and the delivery of explanations by these methods. | |
dc.description.sponsorship | Utrecht University | |
dc.language.iso | EN | |
dc.subject | In light of the issue of the feasibility of opaque models, this thesis sets out to create a novel, layered definition of explanations with respect to artificial intelligence and the General Data Protection Regulation, and to set this definition against current popular post-hoc explainability methods such as LIME and SHAP. This analysis is used to identify current gaps in both the formulation of the law and the delivery of explanations by these methods. | |
dc.title | The Anatomy of Explanations for Artificial Intelligence: How Explanations and Explainability Can Be Defined in the Context of Black-Box Algorithms and the GDPR | |
dc.type.content | Master Thesis | |
dc.rights.accessrights | Open Access | |
dc.subject.keywords | AI; XAI; Artificial Intelligence; Machine Learning; explainability; transparency; GDPR; General Data Protection Regulation; explanations; responsible AI; black-box; law; legal aspects; post-hoc methods; interpretability | |
dc.subject.courseuu | Artificial Intelligence | |
dc.thesis.id | 11526 | |