
dc.rights.license: CC-BY-NC-ND
dc.contributor: Albert Gatt
dc.contributor.advisor: Gatt, A.
dc.contributor.author: Niland, David-Paul
dc.date.accessioned: 2022-09-09T01:01:22Z
dc.date.available: 2022-09-09T01:01:22Z
dc.date.issued: 2022
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/42441
dc.description.abstract: This thesis focuses on generating natural language explanations for automated machine learning (AutoML). Research into natural language explanations is timely, given both the popularity of explainability techniques and the continued advances in AutoML. We believe that standard explainability techniques are not explicit enough in conveying information to stakeholders: users may prefer one mode of information over another, or feel more confident with visual information, and in other domains people understand information better when it is presented in natural language. We have therefore proposed, developed and tested language generation modules that build explanations for machine learning models and can be applied to AutoML systems. This research provides a bedrock for future work on generating natural language explanations. We developed three language generation modules, covering permutation feature importance, partial dependence and accumulated local effects. During development, we conducted a preliminary pilot study to evaluate the systems; this study informed the development process and deepened our understanding of the language required to explain the graphical information. To test whether natural language explanations offer more utility than visual explanations, we then conducted a more extensive evaluation study comparing three modes of explanation: visual, textual and multimodal. A "good" explanation is one that helps users understand the underlying information being conveyed. In this study, participants found multimodal explanations to be the most useful of the three modes in increasing their understanding of the underlying processes.
dc.description.sponsorship: Utrecht University
dc.language.iso: EN
dc.subject: This thesis is focused on generating natural language explanations for automated machine learning (AutoML). Research in natural language explanations is timely, given both the popularity of explainability techniques and the continued advances in AutoML. We have shown multimodal explanations to be more beneficial than singular modes of explanation.
dc.title: Natural Language Explanations
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.keywords: Explainability, Explainable AI, XAI, Natural language explanations, natural language generation, Partial dependence, accumulated local effects, AutoML
dc.subject.courseuu: Artificial Intelligence
dc.thesis.id: 8787
