
dc.rights.license	CC-BY-NC-ND
dc.contributor.advisor	Prakken, H.
dc.contributor.author	Friscione, E.
dc.date.accessioned	2019-06-19T17:00:54Z
dc.date.available	2019-06-19T17:00:54Z
dc.date.issued	2019
dc.identifier.uri	https://studenttheses.uu.nl/handle/20.500.12932/32701
dc.description.abstract	The problem of interpretability, that is, the problem of explaining machine learning outputs in terms understandable to a human, has become a widely debated topic in the field of AI. This work is concerned in particular with explanations of machine learning outputs in the legal domain. HYPO was chosen as the blueprint for the current model of explanation, which builds on HYPO while also attempting to improve on it. The resulting model was tested on two case studies, and the outputs it yielded were compared against the machine learning outputs.
dc.description.sponsorship	Utrecht University
dc.format.extent	511372
dc.format.mimetype	application/pdf
dc.language.iso	en
dc.title	Explaining machine learning outputs to humans: a case-based reasoning approach
dc.type.content	Master Thesis
dc.rights.accessrights	Open Access
dc.subject.keywords	explainable AI, XAI, case-based reasoning, argumentation
dc.subject.courseuu	Artificial Intelligence

