
dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Prakken, Henry
dc.contributor.author: Hoek, Saar
dc.date.accessioned: 2023-01-01T02:01:51Z
dc.date.available: 2023-01-01T02:01:51Z
dc.date.issued: 2023
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/43398
dc.description.abstract: Over the last few years, there has been increasing interest in the transparency of computational models, in particular systems referred to as ‘black-box models’. These models, usually built with machine learning methodologies such as deep neural networks, have been a scientific breakthrough, able to perform myriad tasks previously thought to be the prerogative of human capability. Such advances have already been deployed in nearly every facet of life, ranging from small-scale personalised song recommendations to international crime-identification efforts. Although artificial intelligence has arguably brought great improvements, its employment has not been without risk or consequence. Fuelled in part by reports of scandals resulting from embedded bias and a growing need for privacy, worries about the feasibility and reliability of black-box methods have grown, particularly in the context of the opacity of these models. In addition to the societal need for transparency and checks and balances, the introduction of the General Data Protection Regulation as well as the impending AI Act have created a legal need for clarity on the transparency requirements for artificial intelligence. An important aspect of this transparency is a model’s explainability: it should be clear whether a model is explainable, whether an explanation is required from the model, and what such an explanation may look like. Although there is an expanding body of academic work on explainability for artificial intelligence, little has been written about how an explanation can be defined in this context with respect to both legal and technological requirements, or about how well current explainability methods adhere to such a definition. This thesis sets out to create a novel, layered definition of explanations with respect to artificial intelligence and the General Data Protection Regulation, and to evaluate current popular post-hoc explainability methods such as LIME and SHAP against this definition. This analysis will be used to identify current gaps in both the formulation of the law and the delivery of explanations by these methods.
dc.description.sponsorship: Utrecht University
dc.language.iso: EN
dc.subject: In light of the issue of the feasibility of opaque models, this thesis sets out to create a novel, layered definition of explanations with respect to artificial intelligence and the General Data Protection Regulation, and to evaluate current popular post-hoc explainability methods such as LIME and SHAP against this definition. This analysis will be used to identify current gaps in both the formulation of the law and the delivery of explanations by these methods.
dc.title: The Anatomy of Explanations for Artificial Intelligence: How Explanations and Explainability Can Be Defined in the Context of Black-Box Algorithms and the GDPR
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.keywords: AI; XAI; Artificial Intelligence; Machine Learning; explainability; transparency; GDPR; General Data Protection Regulation; explanations; responsible AI; black-box; law; legal aspects; post-hoc methods; interpretability
dc.subject.courseuu: Artificial Intelligence
dc.thesis.id: 11526
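
As a point of reference for the post-hoc explainability methods named in the abstract, the following is a minimal sketch of how one of them, LIME, is typically invoked on a tabular classifier. The package (lime), model, and toy dataset are illustrative assumptions, not material from the thesis record itself.

    # Hypothetical, minimal LIME usage sketch; the model and dataset are
    # placeholder assumptions chosen only to make the example self-contained.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    data = load_breast_cancer()
    model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    # LIME explains one prediction at a time: it perturbs the instance,
    # fits a simple local surrogate model, and reports per-feature weights.
    explainer = LimeTabularExplainer(
        data.data,
        feature_names=list(data.feature_names),
        class_names=list(data.target_names),
        mode="classification",
    )
    explanation = explainer.explain_instance(
        data.data[0], model.predict_proba, num_features=5
    )
    print(explanation.as_list())  # [(feature condition, weight), ...]

A per-feature weight list of this kind is the sort of "delivered explanation" that the thesis sets against its layered, GDPR-oriented definition of an explanation.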

