Utrecht University Student Theses Repository
        Condense and Efficient Explanations in ASPIC+: An ASP approach to element explanations.

        Thesis_DanielDiehl_28-11-24.pdf (1.242Mb)
        Publication date
        2025
        Author
        Diehl, Daniël
        Summary
        Explainable Artificial Intelligence (XAI) has emerged as a critical aspect of AI systems, addressing the pressing need to enhance user understanding across various applications, thereby fostering trust and responsible use. Formal argumentation offers a promising approach to conceptualizing explanations, with a variety of explanation semantics defined to extract relevant arguments that justify the (non-)acceptance of conclusions. This thesis focuses on explanation semantics for structured argumentation, which is particularly suited for modelling real-world applications. Currently, the only explanation semantics defined for structured argumentation is that of Borg and Bex [2024]. While their approach is flexible and adaptable to the user’s needs, a key drawback is the high computational complexity of extension-based semantics in ASPIC+, which limits its scalability. As argumentation frameworks grow in complexity and size, it is essential to keep explanations understandable and efficient. To address this challenge, we propose a novel approach to explanations in ASPIC+, drawing inspiration from the results of Lehtonen, Wallner et al. [2020] for efficient reasoning with an ASP-based method. By leveraging their approach, we bypass the complex task of constructing the argumentation framework and directly determine the acceptance of premises, rules and conclusions. This allows us to define explanation semantics for ASPIC+ at the level of these components. A key result of this method is the condensation of explanations, grouping arguments with the same top rule. Additionally, we exclude irrelevant elements from the explanation, introducing new notions of attack and defence to further condense explanations. Our approach makes explanations shorter and more concise, offering minimal sets of elements that explain the (non-)acceptance of conclusions. 
This simplification is especially valuable for large and complex frameworks, where existing explanations are often too time-consuming and intricate. Our method provides a foundational step toward more computationally efficient explanations.
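The summary references the ASP-based reasoning method of Lehtonen, Wallner et al. [2020], which determines acceptance directly without constructing the full argumentation framework. As a rough illustration of the flavour of such encodings (not taken from the thesis), the following is the standard answer-set-programming encoding of stable semantics over a toy abstract framework; the arguments a, b, c and attacks are invented for this example, and the thesis's approach instead encodes acceptance at the level of ASPIC+ premises and rules:

```
% Hypothetical toy framework (not from the thesis):
% arguments a, b, c with a attacking b and b attacking c.
arg(a). arg(b). arg(c).
att(a,b). att(b,c).

% Guess a set of accepted ("in") arguments.
in(X) :- arg(X), not out(X).
out(X) :- arg(X), not in(X).

% The set must be conflict-free ...
:- in(X), in(Y), att(X,Y).

% ... and must attack every excluded argument (stable semantics).
defeated(X) :- in(Y), att(Y,X).
:- out(X), not defeated(X).
```

For this toy framework the unique answer set accepts {a, c}: a defeats b, so b is out, and c is reinstated. Lehtonen et al.'s contribution, as the summary notes, is to obtain such acceptance verdicts for ASPIC+ elements directly, bypassing the expensive construction of all arguments.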
        URI
        https://studenttheses.uu.nl/handle/20.500.12932/48995
        Collections
        • Theses

        Related items

        Showing items related by title, author, creator and subject.

        • Constructing an Explanation Ontology for the Communication and Combination of Partial Explanations in a Federated Knowledge Environment 

          Bouter, C.A. (2019)
Various machine learning explanation algorithms have already been developed to interpret a prediction in a sensitive domain like release on parole or mortgage approval. These algorithms assume that the prediction is ...
        • The Anatomy of Explanations for Artificial Intelligence: How Explanations and Explainability Can Be Defined in the Context of Black-Box Algorithms and the GDPR 

          Hoek, Saar (2023)
          Over the last few years, there has been an increasing interest in the transparency of computational models, in particular systems that are referred to as ‘black-box models’. These types of models, usually conceived through ...
        • Exploring Contrastive Explanations in Formal Argumentation 

          Glade, Sophie (2023)
          With the growing usage of artificial intelligence (AI) in daily life, explainable systems become more important. Explainable AI (XAI), which is a set of tools and frameworks to help you understand and interpret predictions ...