

        The Anatomy of Explanations for Artificial Intelligence: How Explanations and Explainability Can Be Defined in the Context of Black-Box Algorithms and the GDPR

        View/Open
The Anatomy of Explanations for AI, title page version.pdf (13.31 MB)
        Publication date
        2023
        Author
        Hoek, Saar
        Summary
Over the last few years, there has been increasing interest in the transparency of computational models, in particular systems referred to as ‘black-box models’. These models, usually built with machine learning methods such as deep neural networks, represent a scientific breakthrough, able to perform myriad tasks previously thought to be the prerogative of human capability. Such advancements have already been deployed in nearly every facet of life, from small-scale personalised song recommendations to international crime identification efforts. Although artificial intelligence has arguably brought great improvements, its employment has not been without risk or consequence. Fuelled in part by reports of scandals resulting from embedded bias and by a growing need for privacy, worries about the feasibility and reliability of black-box methods have grown, particularly with respect to model opacity. In addition to the societal need for transparency and checks and balances, the introduction of the General Data Protection Regulation and the impending AI Act has created a legal need for clarity on the transparency requirements for artificial intelligence. An important aspect of this transparency is a model’s explainability: it should be clear whether a model is explainable, whether an explanation is required from the model, and what such an explanation may look like. Although there is an expanding body of academic work on explainability for artificial intelligence, little has been written about how an explanation can be defined in this context with respect to both legal and technological requirements, or about how well current explainability methods adhere to such a definition. This thesis sets out to create a novel, layered definition of explanations with respect to artificial intelligence and the General Data Protection Regulation, and to test this definition against popular post-hoc explainability methods such as LIME and SHAP. The analysis is used to identify current gaps in both the formulation of the law and the delivery of explanations by these methods.
        URI
        https://studenttheses.uu.nl/handle/20.500.12932/43398
        Collections
        • Theses