
dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Iemhoff, Rosalie
dc.contributor.author: Pesman, Tara
dc.date.accessioned: 2022-02-03T00:00:28Z
dc.date.available: 2022-02-03T00:00:28Z
dc.date.issued: 2022
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/461
dc.description.abstract: Recently, Evans et al. published^ the Apperception system: a formalization (the Apperception model) and accompanying implementation (the Apperception engine) of the intuitive notion of ‘making sense’, which involves the construction of a symbolic causal theory that explains the input sensory sequence and satisfies a set of unity conditions inspired by Kant. This work is particularly interesting when placed in the context of explainable AI. In this thesis, I discuss the history, characteristics, and (dis)advantages of classical AI approaches and modern machine learning approaches. The latter have the disadvantage of being a black box, which prevents them from guaranteeing desirable ethical properties. This problem motivates the research field of explainable AI: the effort of designing AI systems that can compete with modern machine learning approaches in performance but are nonetheless explainable (a white box: understandable by humans). I discuss the field in depth, elaborating on which ethical properties are desirable and which general methods are employed. Then, I give a synopsis of the Apperception system and assess whether it reaches its objective of formalizing the intuitive notion of ‘making sense’, after which I place the system in the context of explainable AI and discuss whether it reaches the goal of combining high performance and explainability. Additionally, I explore potential extensions to the language of the model (specifically to the rules it learns) with the intention of increasing the quality of the explanations produced by the system. I conclude that, given the pioneering nature of their work, Evans et al. come close to the goal of creating a high-performing, explainable system that formalizes ‘making sense’, though many improvements remain to be made. It is thus a good early attempt at building explainable AI, and may serve as a foundation for future research in this field. ^ Richard Evans et al. “Making sense of sensory input”. In: Artificial Intelligence 293 (2021), p. 103438.
dc.description.sponsorship: Utrecht University
dc.language.iso: EN
dc.subject: I give a synopsis and evaluation of the Apperception system^ with respect to the authors' goal (formalizing the intuitive notion of ‘making sense’) and in the context of the goals of explainable AI (producing a high-performing, explainable AI system). ^ Richard Evans et al. “Making sense of sensory input”. In: Artificial Intelligence 293 (2021), p. 103438.
dc.title: Evaluating the Apperception system in the context of explainable AI
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.keywords: explainable AI; explainability; black box; machine learning; subsymbolic AI; white box; classical AI; symbolic AI; GOFAI; transparent by design; ethics; ethical AI; unbiasedness; privacy; transparency; history of AI
dc.subject.courseuu: Artificial Intelligence
dc.thesis.id: 2083

