dc.description.abstract | Classifying facial expressions is a vital part of developing systems capable of aptly interacting with users. In this field, the use of deep-learning models has become the standard. However, the inner workings of these models are unintelligible, which is an important issue when deploying them in high-stakes environments. Recent efforts to generate explanations for emotion classification systems have focused on such models. In this study, an alternative way of explaining the decisions of a more conventional model based on geometric features is presented. I develop a geometric-features-based deep neural network (DNN) and a convolutional neural network (CNN). After calculating the fidelity and accuracy scores of the explanations, I find that they approximate the DNN well. The user study shows that the explanations increase understanding of the DNN and that they are preferred over the more commonly used explanations for the CNN. I argue that the use of conventional models is better suited to high-stakes decisions than black-box models, as demonstrated by the new explanation method. All scripts are available at: https://github.com/kayatb/GeomExp. | |