dc.description.abstract | Explainable Artificial Intelligence (XAI) has emerged as a critical aspect of AI systems,
addressing the pressing need to enhance user understanding across various applications, thereby fostering trust and responsible use. Formal argumentation offers a promising approach to conceptualizing explanations, with a variety of explanation semantics defined to extract relevant arguments that justify the (non-)acceptance of conclusions.
This thesis focuses on explanation semantics for structured argumentation, which is particularly suited for modelling real-world applications. Currently, the only explanation semantics defined for structured argumentation is that of Borg and Bex [2024]. While their approach is flexible and adaptable to the user’s needs, a key drawback is the high computational complexity of extension-based semantics in ASPIC+, which limits its scalability. As argumentation frameworks grow in complexity and size, it is essential to keep explanations understandable and efficient.
To address this challenge, we propose a novel approach to explanations in ASPIC+, drawing inspiration from the results of Lehtonen, Wallner et al. [2020] on efficient ASP-based reasoning. By leveraging their approach, we bypass the complex task of constructing the argumentation framework and instead determine the acceptance of premises, rules and conclusions directly. This allows us to define explanation semantics for ASPIC+ at the level of these components.
A key result of this method is the condensation of explanations, grouping arguments that share the same top rule. Additionally, we exclude irrelevant elements from the explanation, introducing new notions of attack and defence to further condense explanations. Our approach makes explanations shorter and more concise, offering minimal sets of elements that explain the (non-)acceptance of conclusions. This simplification is especially valuable for large and complex frameworks, where existing explanations are often too time-consuming and intricate. Our method provides a foundational step toward more computationally efficient explanations.