Designing an Interface for an Explainable Machine Learning Risk Model for Predicting Illegal Shipbreaking
Summary
In the battle against 'beaching', the illegal dismantling of ocean-going vessels on beaches, the IDLab has developed machine learning models that provide predictive risk assessments of currently active ships. As the models' predictions will support the decision-making of ship inspectors at the Dutch Human Environment and Transport Inspectorate, explainable artificial intelligence (XAI) techniques were used to generate explanations for the predictions. These inspectors are experts in their domain but novices in the fields of (X)AI and data science, which raises challenges in making the model results accessible to them. This exemplifies a larger question of how humans interact with (X)AI, concerning aspects such as visualisation and interaction, with the aim of making predictive machine learning models accessible, understandable and trustworthy for decision-making end-users. As existing XAI visualisation studies mainly target data scientists, the current research contributes to a better understanding of how to effectively design XAI visualisations for end-users. In this research, a dashboard interface design is proposed, created through a systematic top-down approach comprising a literature review, a requirements analysis with stakeholders, brainstorming and sketching sessions, low-fidelity prototypes, focus group sessions, the implementation of a high-fidelity prototype and a final experiment with the target users. The resulting prototype was evaluated in terms of understandability, usability and reliance, with promising results. The interface was received positively by the inspectors, and the evaluation gave no reason to assume major flaws in the design. Furthermore, the proposed design for the model explanations was found to be understandable in terms of visuals, while also opening the door to new challenges regarding trust in XAI models and the interpretability of their explanations.