Show simple item record

dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Gaast, B.H. van der
dc.contributor.advisor: Janssen, C.P.
dc.contributor.author: Lam, N.T.
dc.date.accessioned: 2021-02-01T19:00:30Z
dc.date.available: 2021-02-01T19:00:30Z
dc.date.issued: 2020
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/38711
dc.description.abstract: As AI systems become further involved in society, the field of explainable AI is becoming increasingly relevant. Explainable AI tackles the problem of explaining the decisions made or outputs produced by the difficult-to-understand black-box systems that are frequently applied in the context of machine learning. In my thesis, I examine the explainability of black-box systems, taking a more holistic perspective than is common in explainable AI. Drawing on ideas about explanation from philosophy and psychology, I argue that to fully understand these kinds of systems, we need to be able to integrate multiple different kinds of partial explanations. Furthermore, I believe that cognitive psychology offers inspiration on how to tackle this integration, specifically in the research on cognitive architectures.
dc.description.sponsorship: Utrecht University
dc.format.extent: 710509
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.title: Black-box systems, multi-level explanation, and cognitive architectures.
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.keywords: explainable ai, xai, explanation, understanding, human-centered, interpretation, visualization, machine learning, features, connectionism, symbolic, subsymbolic, cognitive architectures, cognition, cognitive science, marr, levels, chomsky, language, competence, performance, philosophy, epistemology
dc.subject.courseuu: Artificial Intelligence
