Utrecht University Student Theses Repository

        Explainable AI methods in clinical practice to obtain satisfactory performance and doctors’ confidence

        View/Open
        Master_s_Thesis.pdf (2.512 MB)
        Publication date
        2022
        Author
        Sobiczewska, Julita
        Summary
        Artificial Intelligence (AI) techniques can contribute greatly to many fields of medicine by providing cutting-edge, efficient, and effective methods to treat and monitor patients and to analyze patient records. Unfortunately, the lack of transparency of these models limits the adoption of AI techniques in patient treatment. In order to use AI systems in medical practice, models that interpret the decisions made by these systems are applied; such models are called Explainable AI (XAI). This thesis presents a comprehensive analysis of the application of XAI models in the medical field through a literature review and experimental work. I carry out two clinical classification tasks to gain a better understanding of which explainability methods and NLP models should be used for different classification tasks in clinical practice to obtain satisfactory results and doctors' confidence. Throughout the experiments I follow a framework proposed by Markus et al. with recommendations for choosing between different explainable AI methods. The obtained results show that the framework's recommendations help with decisions during study execution, but there are some ambiguities in the graph. This thesis points out the problem of explanations that are not human-interpretable. In addition, considerations and improvements to the previously proposed framework are presented.
        URI
        https://studenttheses.uu.nl/handle/20.500.12932/41932
        Collections
        • Theses