Explainable AI methods in clinical practice to obtain satisfactory performance and doctors’ confidence
Artificial Intelligence (AI) techniques can contribute greatly to many fields of medicine by providing cutting-edge, efficient, and effective methods to treat and monitor patients and to analyze patient records. Unfortunately, the lack of transparency of AI models limits their adoption in patient treatment. To enable the use of AI systems in medical practice, models that interpret the decisions made by these systems are applied; such models are called Explainable AI (XAI). This thesis presents a comprehensive analysis of the application of XAI models in the medical field through a literature review and experimental work. I carry out two clinical classification tasks to gain a better understanding of which explainability methods and NLP models should be used in different clinical classification tasks to obtain satisfactory results and doctors’ confidence. Throughout the experiments I follow a framework proposed by Markus et al. that gives recommendations for choosing between different explainable AI methods. The results show that the framework’s recommendations help with decisions during study execution, but its graph contains some ambiguities. This thesis also points out the problem of explanations that are not human-interpretable. In addition, considerations and improvements to the previously proposed framework are presented.