dc.rights.license | CC-BY-NC-ND | |
dc.contributor | Albert Gatt | |
dc.contributor.advisor | Gatt, A. | |
dc.contributor.author | Niland, David-Paul | |
dc.date.accessioned | 2022-09-09T01:01:22Z | |
dc.date.available | 2022-09-09T01:01:22Z | |
dc.date.issued | 2022 | |
dc.identifier.uri | https://studenttheses.uu.nl/handle/20.500.12932/42441 | |
dc.description.abstract | This thesis focuses on generating natural language explanations for automated machine learning (AutoML). Research in natural language explanations is timely, given both the popularity of explainability techniques and the continued advances in AutoML. We believe that standard explainability techniques are not explicit enough in conveying information to stakeholders. Users may prefer one mode of information over another, or feel more confident with visual information; in other domains, people understand information better when it is presented in natural language. We have therefore proposed, developed and tested language generation modules that build explanations for machine learning models and can be applied to AutoML systems. This research provides a bedrock for future work on generating natural language explanations.
We developed three language generation modules, covering permutation feature importance, partial dependence and accumulated local effects. During development, we conducted a preliminary pilot study to evaluate the systems; this study guided the development process and deepened our understanding of the language required to explain the graphical information. To test whether natural language explanations can offer more utility than visual explanations, we then conducted a more extensive evaluation study comparing three modes of explanation: visual, textual and multimodal. A "good" explanation is one that helps users understand the underlying information being conveyed. In this thesis, study participants found multimodal explanations to be the most useful of the three modes in increasing their understanding of the underlying processes. | |
dc.description.sponsorship | Utrecht University | |
dc.language.iso | EN | |
dc.subject | This thesis focuses on generating natural language explanations for automated machine learning (AutoML). Research in natural language explanations is timely, given both the popularity of explainability techniques and the continued advances in AutoML. We have shown multimodal explanations to be more beneficial than singular modes of explanation. | |
dc.title | Natural Language Explanations | |
dc.type.content | Master Thesis | |
dc.rights.accessrights | Open Access | |
dc.subject.keywords | Explainability, Explainable AI, XAI, Natural language explanations, Natural language generation, Permutation feature importance, Partial dependence, Accumulated local effects, AutoML | |
dc.subject.courseuu | Artificial Intelligence | |
dc.thesis.id | 8787 | |