
        Natural Language Explanations

        View/Open
        6688721_David_Paul_Niland_Thesis.pdf (2.404Mb)
        Publication date
        2022
        Author
        Niland, David-Paul
        Summary
        This thesis focuses on generating natural language explanations for automated machine learning (AutoML). Research into natural language explanations is timely, given both the popularity of explainability techniques and the continued advances in AutoML. We believe that standard explainability techniques are not explicit enough in conveying information to stakeholders: users might prefer one mode of information over another, or feel more confident with visual information, and in other domains people understand information better when it is presented in natural language. We have therefore proposed, developed and tested language generation modules that build explanations for machine learning models and that can be applied to AutoML systems. This research provides a bedrock for future work on generating natural language explanations. We developed three language generation modules, covering permutation feature importance, partial dependence and accumulated local effects. During the development of the language generation modules, we conducted a preliminary pilot study to evaluate the systems; this study informed the development process and deepened our understanding of the language required to explain the graphical information. To test whether natural language explanations can offer more utility than visual explanations, we conducted a more extensive evaluation study comparing which mode of explanation was most helpful: visual, textual or multimodal. A "good" explanation, in this sense, is one that helps users understand the underlying information being conveyed. In this thesis, study participants found multimodal explanations to be the most useful of the three modes in increasing their understanding of the underlying processes.
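
        To make the kind of module the summary describes more concrete, the sketch below (an illustration only, not the thesis's own code) verbalizes permutation feature importance scores computed with scikit-learn's permutation_importance. The dataset, model choice and sentence template are assumptions made purely to keep the example self-contained.

        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.inspection import permutation_importance
        from sklearn.model_selection import train_test_split

        # Fit any model; the breast-cancer dataset and random forest are
        # placeholders chosen only to make the example runnable end to end.
        X, y = load_breast_cancer(return_X_y=True, as_frame=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

        # Compute permutation feature importance on held-out data.
        result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                        random_state=0)

        # Rank features by mean importance and render the top one as a
        # templated natural-language sentence (the template is hypothetical).
        ranked = sorted(zip(X.columns, result.importances_mean),
                        key=lambda pair: pair[1], reverse=True)
        name, score = ranked[0]
        print(f"The model relies most on '{name}': randomly shuffling it "
              f"reduces accuracy by about {score:.3f}, so predictions depend "
              f"strongly on this feature.")

        In the evaluation the summary reports, templated text of this kind was most useful when paired with the corresponding plot, i.e. as a multimodal explanation.
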
        URI
        https://studenttheses.uu.nl/handle/20.500.12932/42441
        Collections
        • Theses

        Related items

        Showing items related by title, author, creator and subject.

        • Constructing an Explanation Ontology for the Communication and Combination of Partial Explanations in a Federated Knowledge Environment 

          Bouter, C.A. (2019)
          Various machine learning explanation algorithms have already been developed to interpret a prediction in a sensitive domain like release on parole or mortgage approval. These algorithms assume that the prediction is ...
        • The Anatomy of Explanations for Artificial Intelligence: How Explanations and Explainability Can Be Defined in the Context of Black-Box Algorithms and the GDPR 

          Hoek, Saar (2023)
          Over the last few years, there has been an increasing interest in the transparency of computational models, in particular systems that are referred to as ‘black-box models’. These types of models, usually conceived through ...
        • Condense and Efficient Explanations in ASPIC+: An ASP approach to element explanations. 

          Diehl, Daniël (2025)
          Explainable Artificial Intelligence (XAI) has emerged as a critical aspect of AI systems, addressing the pressing need to enhance user understanding across various applications, thereby fostering trust and responsible use. ...