Explaining machine learning outputs to humans: a case-based reasoning approach
Summary
The problem of interpretability, that is, the problem of explaining machine learning outputs in terms understandable to a human, has become a widely debated topic in the field of AI. This work is concerned in particular with explanations of machine learning outputs in the legal domain. HYPO was chosen as the blueprint for the explanation model presented here, which builds on HYPO while also attempting to improve upon it. The resulting model was tested on two case studies, and its outputs were compared against the ML outputs.
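As a rough illustration of the HYPO-style, factor-based case comparison that such a model builds on, the sketch below retrieves the most-on-point precedents for a new fact situation and cites those agreeing with an ML prediction. It is a minimal sketch only: the Case class, the factor names, the toy case base, and the helper functions are hypothetical illustrations, not the model described in this work, and HYPO's fuller machinery (dimensions, counterexamples, hypotheticals) is omitted.

```python
# Minimal sketch of HYPO-style, factor-based case citation.
# All names here are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Case:
    name: str
    factors: frozenset  # factors present in the precedent
    outcome: str        # e.g. "plaintiff" or "defendant"

def most_on_point(current_factors, case_base):
    """Return precedents whose set of factors shared with the current
    situation is maximal, i.e. not a proper subset of another
    precedent's shared-factor set."""
    shared = {c: c.factors & current_factors for c in case_base}
    return [
        c for c, s in shared.items()
        if s and not any(s < t for d, t in shared.items() if d is not c)
    ]

def explain(current_factors, case_base, prediction):
    """Cite most-on-point precedents that agree with the ML prediction."""
    citations = [c for c in most_on_point(current_factors, case_base)
                 if c.outcome == prediction]
    lines = [f"Predicted outcome: {prediction}"]
    for c in citations:
        lines.append(
            f"  Analogous to {c.name} (shared factors: "
            f"{sorted(c.factors & current_factors)}), decided for {c.outcome}."
        )
    return "\n".join(lines)

# Hypothetical toy case base for a trade-secret dispute
base = [
    Case("CaseA", frozenset({"security-measures", "info-disclosed"}), "defendant"),
    Case("CaseB", frozenset({"security-measures", "bribed-employee"}), "plaintiff"),
]
print(explain(frozenset({"security-measures", "bribed-employee"}), base, "plaintiff"))
```

Run on the toy data, the sketch cites CaseB (which shares both factors with the current situation) in support of the "plaintiff" prediction, while CaseA is excluded because its shared-factor set is strictly contained in CaseB's.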