
        Evaluating the Apperception system in the context of explainable AI

        View/Open
        Tara_Pesman_thesis(final).pdf (1.098Mb)
        Publication date
        2022
        Author
        Pesman, Tara
        Summary
        Recently, Evans et al.^ published the Apperception system: a formalization (the Apperception model) and accompanying implementation (the Apperception engine) of the intuitive notion of ‘making sense’, which involves the construction of a symbolic causal theory that explains the given sensory sequence and satisfies a set of unity conditions inspired by Kant. This work is particularly interesting when placed in the context of explainable AI. In this thesis, I discuss the history, characteristics, and (dis)advantages of classical AI approaches and modern machine learning approaches. The latter have the disadvantage of being black boxes, which prevents them from guaranteeing desirable ethical properties. This problem motivates the research field of explainable AI: the effort to design AI systems that compete with modern machine learning approaches in performance but are nonetheless explainable (white boxes: understandable by humans). I discuss the field in depth, elaborating on which ethical properties are desirable and which general methods are employed. Then, I give a synopsis of the Apperception system and assess whether it reaches its objective of formalizing the intuitive notion of ‘making sense’, after which I place the system in the context of explainable AI and discuss whether it achieves the goal of combining high performance and explainability. Additionally, I explore potential extensions to the language of the model (specifically to the rules it learns) with the aim of improving the quality of the explanations the system produces. I conclude that, given the pioneering nature of their work, Evans et al. come close to the goal of creating a high-performing, explainable system that formalizes ‘making sense’, though many improvements remain possible. It is thus a good early attempt at building explainable AI and may serve as a foundation for future research in this field.

        ^ Richard Evans et al. “Making sense of sensory input”. In: Artificial Intelligence 293 (2021), p. 103438.
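        The summary's core criterion (a theory ‘makes sense’ of a sensory sequence if it explains every observation and satisfies unity conditions) can be illustrated with a toy sketch. Everything below is hypothetical and heavily simplified: the binary state space, the candidate rules, and the unified() stand-in for Kant's unity conditions are illustrative inventions, not the actual Apperception engine of Evans et al., which synthesizes causal theories in a far richer symbolic language.

from itertools import product

# Toy sensory sequence: a light observed alternating off/on over time.
SEQUENCE = [0, 1, 0, 1, 0, 1]

# Candidate causal rules, each mapping the current state to the next one.
RULES = {
    "stay":   lambda s: s,
    "toggle": lambda s: 1 - s,
    "on":     lambda s: 1,
    "off":    lambda s: 0,
}

def explains(initial, rule, sequence):
    """A theory explains the sequence if its trajectory matches every observation."""
    state = initial
    for observed in sequence:
        if state != observed:
            return False
        state = rule(state)  # apply the causal rule to advance one time step
    return True

def unified(rule, states=(0, 1)):
    """Toy stand-in for a unity condition: the rule must give every state a
    well-defined successor inside the state space (no dangling states)."""
    return all(rule(s) in states for s in states)

def make_sense(sequence):
    """Return the first theory (initial state, rule name) that both explains
    the sensory sequence and satisfies the unity condition."""
    for initial, (name, rule) in product((0, 1), RULES.items()):
        if unified(rule) and explains(initial, rule, sequence):
            return initial, name
    return None

if __name__ == "__main__":
    print(make_sense(SEQUENCE))  # -> (0, 'toggle'): light starts off, toggles each step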
        URI
        https://studenttheses.uu.nl/handle/20.500.12932/461
        Collections
        • Theses