Constructing an Explanation Ontology for the Communication and Combination of Partial Explanations in a Federated Knowledge Environment
Summary
Various machine learning explanation algorithms have already been developed to interpret predictions in sensitive domains such as parole release or mortgage approval. These algorithms assume that the prediction is produced by a single machine learning model. However, a federated knowledge environment may consist of multiple machine learning models as well as other types of knowledge bases, so existing algorithms are insufficient. In this thesis we categorise the field of Explainable AI to produce an ontology (i.e., a formal conceptualisation) that can serve as a definition for the communication of partial explanations in such a knowledge environment. The ontology is implemented in OWL. We verify the ontology against a set of competency questions that extract the contents and structure of an explanation, and we validate it by constructing a proof of concept in the mortgage approval domain that uses the ontology to communicate and combine partial explanations.