
dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Prakken, H.
dc.contributor.advisor: Nouwt, B.
dc.contributor.author: Bouter, C.A.
dc.date.accessioned: 2019-06-19T17:00:52Z
dc.date.available: 2019-06-19T17:00:52Z
dc.date.issued: 2019
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/32695
dc.description.abstract: Various machine learning explanation algorithms have already been developed to interpret a prediction in a sensitive domain such as release on parole or mortgage approval. These algorithms assume that the prediction is produced by a single machine learning model. However, a knowledge environment may consist of multiple machine learning models as well as other types of knowledge bases, so existing algorithms are insufficient. In this thesis we categorise the field of Explainable AI to produce an ontology (i.e., a formal conceptualisation) that can function as a definition for the communication of partial explanations in a knowledge environment. The ontology is implemented in OWL. We verify the ontology by giving a set of competency questions that extract the contents and structure of an explanation. We validate the ontology by constructing a proof of concept in the mortgage approval domain that uses the ontology to communicate and combine partial explanations.
dc.description.sponsorship: Utrecht University
dc.format.extent: 2612516
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.title: Constructing an Explanation Ontology for the Communication and Combination of Partial Explanations in a Federated Knowledge Environment
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.keywords: explainable AI, ontology, interoperability, knowledge base, knowledge environment, explanation, interpretation, artificial intelligence, AI, semantic web, machine learning, data science, Bayesian network, expert system
dc.subject.courseuu: Artificial Intelligence
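The abstract describes an OWL ontology used to communicate and combine partial explanations between knowledge bases. As a rough illustration of what such a vocabulary could look like, the sketch below uses Python's rdflib to declare a class of explanations with a partial-explanation subclass, assert one partial explanation from a mortgage-approval component, and serialise the result as Turtle. The namespace, class, property, and instance names here are invented for illustration and are not taken from the thesis itself.

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import OWL, RDF, RDFS

# Hypothetical ontology namespace; the thesis's actual IRI is not given here.
EO = Namespace("http://example.org/explanation#")

g = Graph()
g.bind("eo", EO)
g.bind("owl", OWL)

# Minimal class hierarchy: a partial explanation is a kind of explanation.
g.add((EO.Explanation, RDF.type, OWL.Class))
g.add((EO.PartialExplanation, RDF.type, OWL.Class))
g.add((EO.PartialExplanation, RDFS.subClassOf, EO.Explanation))

# An object property linking a combined explanation to its parts.
g.add((EO.hasPart, RDF.type, OWL.ObjectProperty))
g.add((EO.hasPart, RDFS.domain, EO.Explanation))
g.add((EO.hasPart, RDFS.range, EO.PartialExplanation))

# One partial explanation, as a mortgage-approval component might emit it.
expl = URIRef("http://example.org/run/42#explanation")
part = URIRef("http://example.org/run/42#income-check")
g.add((expl, RDF.type, EO.Explanation))
g.add((part, RDF.type, EO.PartialExplanation))
g.add((expl, EO.hasPart, part))
g.add((part, RDFS.comment, Literal("Income is below the approval threshold.")))

print(g.serialize(format="turtle"))

Any component in a federated environment that shares such a vocabulary could parse the emitted Turtle and merge several partial explanations into a single graph, which is the kind of interoperability the abstract is after.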

