Show simple item record

dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Hauptmann, Hanna
dc.contributor.author: Treur, Sander
dc.date.accessioned: 2022-07-23T00:02:41Z
dc.date.available: 2022-07-23T00:02:41Z
dc.date.issued: 2022
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/41898
dc.description.abstract: In the battle against 'beaching', the illegal dismantling of ocean-going vessels, the IDLab has developed machine learning models that provide predictive risk assessments of currently active ships. As the predictions produced by these models will support the decision-making of ship inspectors at the Dutch Human Environment and Transport Inspectorate, explainable artificial intelligence (XAI) techniques were used to generate explanations for the predictions. Because these inspectors are experts in their domain but novices in (X)AI and data science, challenges arise in making the model results accessible to them. This exemplifies a larger question of how humans interact with (X)AI, concerning aspects such as visualisation and interaction, with the aim of making predictive machine learning models accessible, understandable and trustworthy for decision-making end-users. As existing XAI visualisation studies mainly target data scientists, the current research contributes to a better understanding of how to effectively design XAI visualisations for end-users. In this research, a dashboard interface design is proposed, created following a systematic top-down approach: a literature review, a requirements analysis with stakeholders, a brainstorming and sketching session, low-fidelity prototypes, focus group sessions, the implementation of a high-fidelity prototype, and a final experiment with the target users. The resulting prototype was evaluated in terms of understandability, usability and reliance, and showed promising results. The interface was received positively by the inspectors, and findings from the evaluation give no reason to assume major flaws in the design. Furthermore, the proposed design for the model explanations was found to be visually understandable, while also opening the door to new challenges regarding trust in XAI models and the interpretability of their explanations.
dc.description.sponsorship: Utrecht University
dc.language.iso: EN
dc.subject: In this study, an interface was designed for visualising explainable predictive risk models to novice users. This was done in a use case for the Inspectie Leefomgeving en Transport (ILT). The models used were developed by the IDLab (the data department of the ILT) and aimed to predict illegal shipbreaking of ocean-going vessels. The interface, in the form of a dashboard, was targeted at the shipbreaking inspectors of the ILT.
dc.title: Designing an Interface for an Explainable Machine Learning Risk Model for Predicting Illegal Shipbreaking
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.keywords: HCI;Human Computer Interaction;XAI;Explainable Artificial Intelligence;Human-Centered Design;Visual Analytics;Information Visualization;Usability;Understandability;Trust;Reliance
dc.subject.courseuu: Human-Computer Interaction
dc.thesis.id: 6258

