Show simple item record

dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Kaya, Heysem
dc.contributor.author: Brink, Thomas van den
dc.date.accessioned: 2025-08-21T00:06:40Z
dc.date.available: 2025-08-21T00:06:40Z
dc.date.issued: 2025
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/49903
dc.description.abstract: There is an increasing demand for medical care in the Netherlands, and overcrowding at the GP is a serious issue. In an attempt to relieve some of the pressure on GPs, an automatic triage tool was developed to gauge how urgently a patient needs to see a doctor. This triage tool is prone to making mistakes; therefore, to improve it, it would be useful to identify the causal factors behind those mistakes. This thesis uses interpretable machine learning (ML) techniques to determine which factors indicate that the automatic triage system has made a mistake. The factors behind incorrect triage are identified using four inherently interpretable ML models: logistic regression (logreg), support vector machines (SVMs), decision trees (DTs), and explainable boosting machines (EBMs). These models were compared on prediction quality, computational efficiency (training and test time), and interpretability; the latter was measured both through a proxy (the number of parameters in the model) and through an experiment in which participants judged the explanations using the System Causability Scale (SCS) questionnaire. Each model was also simplified and compared to its simplified version. In predictive performance, logreg, SVMs, and EBMs all performed significantly better than DTs; the former three did not differ significantly from one another, although EBMs performed best. The simplified models showed no significant decrease in performance compared to their non-simplified counterparts. On interpretability, using the number of parameters as a proxy measure, all simplified models were significantly more interpretable than their non-simplified counterparts. The SCS results indicate that decision trees are the most interpretable model, closely followed by EBMs. It is concluded that EBMs were the best models for the task, since they combined good predictive performance with high interpretability. Simplifying the models proved a worthwhile endeavour, yielding a significant increase in interpretability at only a non-significant cost in predictive performance. Overall, this thesis shows that interpretable ML methods can successfully be used to identify why an automatic triage system makes wrong predictions; future work could adopt EBMs as the model type. (A minimal, illustrative sketch of the model comparison follows the record below.)
dc.description.sponsorship: Utrecht University
dc.language.iso: EN
dc.subject: Automatically Analysing the Wrong Triage Decisions of 'Moet ik naar de dokter' recommender system
dc.title: Automatically Analysing the Wrong Triage Decisions of 'Moet ik naar de dokter' recommender system
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.keywords: AI, machine learning, gradient boosting, healthcare, triage, automatic triage, logistic regression, SVM, decision trees, EBM
dc.subject.courseuu: Artificial Intelligence
dc.thesis.id: 51999


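The abstract compares four inherently interpretable models on prediction quality, training/test time, and a parameter-count proxy for interpretability. Below is a minimal, hypothetical sketch of such a comparison, not the thesis code: it assumes scikit-learn plus the InterpretML `interpret` package for the EBM, and uses random placeholder data in place of the actual 'Moet ik naar de dokter' triage features and error labels.

# Illustrative sketch only (assumptions: scikit-learn, numpy, and the
# `interpret` package are installed; X and y are placeholders standing in
# for triage-case features and "the triage decision was wrong" labels).
import time
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))       # placeholder feature matrix
y = rng.integers(0, 2, size=500)     # placeholder binary error labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The four inherently interpretable model families named in the abstract.
models = {
    "logreg": LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="linear"),
    "DT": DecisionTreeClassifier(max_depth=4),
    "EBM": ExplainableBoostingClassifier(),
}

for name, model in models.items():
    t0 = time.perf_counter()
    model.fit(X_tr, y_tr)                        # training time
    train_s = time.perf_counter() - t0
    t0 = time.perf_counter()
    score = f1_score(y_te, model.predict(X_te))  # prediction quality
    test_s = time.perf_counter() - t0
    print(f"{name}: F1={score:.2f} train={train_s:.2f}s test={test_s:.2f}s")

# Parameter counts, mirroring the proxy interpretability measure described
# in the abstract (fewer parameters ~ more interpretable).
print("logreg params:", models["logreg"].coef_.size + models["logreg"].intercept_.size)
print("DT nodes:", models["DT"].tree_.node_count)

The real study would plug in the triage dataset, apply the model-simplification step the abstract describes, and test differences for statistical significance; those details are not reconstructed here.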