Human-centred explanation of rule-based decision-making systems in the legal domain
Summary
This thesis develops a human-centred explanation method for rule-based automated decision-making
systems in the legal domain. The research consists of theoretical exploration and practical implementation.
The theoretical research establishes a framework for developing explanation methods, identifying their key
internal components (content, communication, and adaptation) and external factors (the decision-making
system, the human recipient, and the domain). A further review of human-centred research highlights the
importance of considering both the recipient’s knowledge and goals. We found that one way to
accomplish this is to create a question-driven explanation method and to visualise the decision-making
process to aid understanding. Accordingly, the proposed explanation method represents the decision
model in a graph database so that it can be both queried and visualised. The proposed method is
implemented for a real-world scenario, generating tailored explanations for different target groups. The
evaluation highlights the method’s ability to answer specific questions but identifies limitations in handling
logical checks and hypothetical scenarios. Future research could focus on improving these aspects and on
exploring additional reasoning properties and customisable interfaces to adapt the method to recipients’
evolving needs.