Natural Language Explanations
Summary
This thesis focuses on generating natural language explanations for automated machine learning (AutoML). Research on natural language explanations is timely, given both the popularity of explainability techniques and the continued advances in AutoML. We believe that standard explainability techniques are not explicit enough in conveying information to stakeholders. Users may prefer one mode of information over another, or feel more confident with visual information; research in other domains suggests that people understand information better when it is presented in natural language. We have therefore proposed, developed and tested language generation modules that build explanations for machine learning models and that can be applied to AutoML systems. This research provides a bedrock for future work on generating natural language explanations.
We have developed three language generation modules, covering permutation feature importance, partial dependence and accumulated local effects. During the development of these modules, we conducted a preliminary pilot study to evaluate the systems. This study informed the development process and deepened our understanding of the language required to explain the graphical information. To test whether natural language explanations can offer more utility than visual explanations, we conducted a more extensive evaluation study comparing which mode of explanation was most helpful: visual, textual or multimodal. A “good” explanation, in this context, is one that helps users understand the underlying information being conveyed. In this thesis, study participants found multimodal explanations to be the most useful of the three modes in increasing their understanding of the underlying processes.
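As a rough illustration of what such a module does, the sketch below pairs scikit-learn's permutation_importance with a simple sentence template to verbalise the most important features. It is a minimal, hypothetical example assuming a scikit-learn workflow and a template-based realiser; it is not the thesis implementation, whose templates and AutoML integration are more elaborate.

```python
# A minimal, hypothetical sketch of a template-based language generation
# module for permutation feature importance, in the spirit of the modules
# described above. It is NOT the thesis implementation; the model, dataset
# and sentence template are illustrative assumptions.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Fit any model; the explanation module only needs a fitted estimator.
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Permutation feature importance: how much does shuffling each feature
# degrade held-out performance?
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)

def verbalise(ranked_features, top_k=3):
    """Render the ranked importances as a short natural language explanation."""
    names = [name for name, _ in ranked_features[:top_k]]
    return (f"The model relies most on {', '.join(names[:-1])} and "
            f"{names[-1]}: permuting any of these features noticeably "
            "reduces predictive performance.")

print(verbalise(ranked))
```

A full module would of course vary its wording and content with the shape of the importance distribution; this sketch only shows the overall score-to-text pattern that such a generator follows.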
Related items
Showing items related by title, author, creator and subject.
Constructing an Explanation Ontology for the Communication and Combination of Partial Explanations in a Federated Knowledge Environment
Bouter, C.A. (2019) Various machine learning explanation algorithms have already been developed to interpret a prediction in a sensitive domain like release on parole or mortgage approval. These algorithms assume that the prediction is ...
The Anatomy of Explanations for Artificial Intelligence: How Explanations and Explainability Can Be Defined in the Context of Black-Box Algorithms and the GDPR
Hoek, Saar (2023) Over the last few years, there has been an increasing interest in the transparency of computational models, in particular systems that are referred to as ‘black-box models’. These types of models, usually conceived through ...
Exploring Contrastive Explanations in Formal Argumentation
Glade, Sophie (2023) With the growing usage of artificial intelligence (AI) in daily life, explainable systems become more important. Explainable AI (XAI), which is a set of tools and frameworks to help you understand and interpret predictions ...