Black-box systems, multi-level explanation, and cognitive architectures.
Summary
As AI systems become more deeply embedded in society, the field of explainable AI is becoming increasingly relevant. Explainable AI tackles the problem of explaining the decisions or outputs of the difficult-to-understand black-box systems frequently used in machine learning. In my thesis, I examine the explainability of black-box systems from a more holistic perspective than is common in explainable AI. Drawing on ideas about explanation from philosophy and psychology, I argue that to fully understand these kinds of systems, we need to be able to integrate multiple different kinds of partial explanations. Furthermore, I argue that cognitive psychology, specifically the research on cognitive architectures, offers inspiration for how to tackle this integration.