
dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Giachanou, Anastasia
dc.contributor.author: Bel Bordes, Gemma
dc.date.accessioned: 2023-11-05T00:00:46Z
dc.date.available: 2023-11-05T00:00:46Z
dc.date.issued: 2023
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/45494
dc.description.abstract: Artificial intelligence (AI) is increasingly used in healthcare, particularly for interpreting medical images, but there are growing concerns about biases in these models, which raise important fairness questions. This study investigates biases in AI models for chest X-ray diagnosis and explores the role of Explainable AI (XAI) in understanding model decisions. Biases were observed in model performance across different patient groups and diseases. Several XAI techniques were applied to generate explanations for model decisions, and these were compared with explanations provided by doctors. We identified an optimized version of occlusion (sketched below the record) as the most accurate XAI technique in this setting; it also produced explanations of consistent accuracy across all patient groups. The explanations remained equally accurate regardless of variations in model performance between subgroups, suggesting that model biases are not amplified in the explanations. Evaluating the correctness of XAI explanations was challenging because ground-truth annotations were scarce. To strengthen our analysis, we explored alternative evaluation methods, such as deletion and insertion curves, but found them unsuitable for chest X-ray images. We therefore formulate recommendations for using XAI on chest X-ray images. Given the observed absence of biases in the explanations, we also aim to build confidence in XAI techniques among clinical stakeholders.
dc.description.sponsorship: Utrecht University
dc.language.iso: EN
dc.subject: Artificial intelligence (AI) is promising for interpreting chest X-ray images. However, these models are often regarded as black boxes, since it is hard to know the reason behind a model's output. We confirm that such models are typically biased, performing better for certain patient groups. We use Explainable AI (XAI) to understand the model output, benchmarking several XAI techniques, and we discuss how to evaluate the explanations they produce (a deletion-curve sketch follows below).
dc.title: Fairness and Explainability in Chest X-ray Image Classifiers
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.courseuu: Bioinformatics and Biocomplexity
dc.thesis.id: 22166
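
As a reference for the abstract above, here is a minimal sketch of occlusion-based attribution, the family of techniques the thesis found most accurate. It is an illustrative baseline, not the thesis's optimized variant: `model_fn` stands in for any classifier mapping a 2-D grayscale X-ray to class probabilities, and the patch size, stride, and fill value are assumed defaults.

```python
import numpy as np

def occlusion_map(image, model_fn, target_class, patch=16, stride=8, fill=0.0):
    """Slide a masking patch over the image and record how much the model's
    confidence in `target_class` drops; large drops mark regions the model
    relies on for its prediction."""
    h, w = image.shape[:2]
    base = model_fn(image)[target_class]          # unoccluded confidence
    heat = np.zeros((h, w))
    hits = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            masked = image.copy()
            masked[y:y + patch, x:x + patch] = fill
            drop = base - model_fn(masked)[target_class]
            heat[y:y + patch, x:x + patch] += drop
            hits[y:y + patch, x:x + patch] += 1
    return heat / np.maximum(hits, 1)             # average overlapping drops

# Toy usage with a dummy "model" whose class-0 score is the mean of a
# fixed region; the resulting map highlights that region.
rng = np.random.default_rng(0)
xray = rng.random((64, 64))
dummy = lambda img: np.array([img[20:40, 20:40].mean(), 0.0])
attr = occlusion_map(xray, dummy, target_class=0)
```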
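
Likewise, a minimal sketch of the deletion-curve evaluation mentioned in the abstract (and reported there as unsuitable for chest X-rays). Again `model_fn`, the step count, and the fill value are illustrative assumptions; `attribution` is any saliency map with the image's spatial shape, e.g. the output of `occlusion_map` above.

```python
import numpy as np

def deletion_curve(image, model_fn, target_class, attribution, steps=50, fill=0.0):
    """Remove pixels in order of decreasing attribution and track the model's
    confidence; a faithful explanation makes confidence fall quickly, i.e.
    yields a small area under this curve."""
    order = np.argsort(attribution.ravel())[::-1]   # most important first
    work = image.ravel().copy()
    scores = [model_fn(work.reshape(image.shape))[target_class]]
    chunk = max(1, order.size // steps)
    for i in range(0, order.size, chunk):
        work[order[i:i + chunk]] = fill             # delete the next chunk
        scores.append(model_fn(work.reshape(image.shape))[target_class])
    return np.array(scores)                         # summarize via its AUC

# Continuing the toy example above:
# curve = deletion_curve(xray, dummy, target_class=0, attribution=attr)
```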

