Show simple item record

dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Mosteiro Romero, Pablo
dc.contributor.author: Kooistra, Joppe
dc.date.accessioned: 2024-03-31T00:02:12Z
dc.date.available: 2024-03-31T00:02:12Z
dc.date.issued: 2024
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/46223
dc.description.abstract: This research investigates gender bias and its mitigation in NLP models for violence risk assessment within the psychiatric care domain. It compares transformer-based monolingual and multilingual models, as well as transformer-based models against a classical machine learning algorithm. First, the dataset is analyzed to gain insight into class balance and gender distribution. Then, the NLP models are trained on the dataset and their performance and fairness metrics are evaluated. Next, a data augmentation technique is applied to the data before a second training round. Finally, the Reject Option Classification method is applied in post-processing to optimize performance and fairness. The most important findings are that the monolingual models outperform the multilingual ones, but that there is little difference between domain-specific and general models. Consistent with previous work on this topic, the classical machine learning model (SVM) outperforms the transformer-based models. Furthermore, bias mitigation methods should be chosen carefully, based on the metrics one wishes to improve, since they often come with trade-offs. Data augmentation increased counterfactual fairness for most models, but not for all; for some models this came at the cost of predictive parity, whereas for others predictive parity improved. Reject Option Classification likewise showed mixed results, improving counterfactual fairness or predictive parity for some models but decreasing them for others. Understanding these trade-offs is key to successful bias mitigation. This research contributes valuable insights into gender bias mitigation within psychiatric care and offers a thoughtful consideration of the trade-offs involved in adopting bias mitigation strategies. The findings offer a perspective on the dynamics of transformer-based models versus classical machine learning algorithms, contributing to the ongoing discourse on responsible and effective AI deployment in mental healthcare contexts.
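The Reject Option Classification step mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: the function name, the critical-band width `theta`, and the convention that label 1 is the favourable outcome for the unprivileged group are all assumptions made for the example.

```python
import numpy as np

def reject_option_classification(scores, group, theta=0.1, unprivileged=1):
    """Post-process predicted probabilities for fairness (hypothetical sketch).

    Within the critical band |p - 0.5| < theta, where the classifier is least
    certain, instances from the unprivileged group receive the favourable
    label (1) and instances from the privileged group the unfavourable
    label (0). Outside the band, predictions are left unchanged.
    """
    preds = (scores >= 0.5).astype(int)          # default threshold at 0.5
    critical = np.abs(scores - 0.5) < theta      # low-confidence region
    preds[critical & (group == unprivileged)] = 1
    preds[critical & (group != unprivileged)] = 0
    return preds

# Example: only the two borderline scores (0.45 and 0.55) are flipped.
scores = np.array([0.45, 0.55, 0.90, 0.10])
group = np.array([1, 0, 0, 1])                   # 1 = unprivileged group
print(reject_option_classification(scores, group))   # → [1 0 1 0]
```

Because the method only touches low-confidence predictions, widening `theta` trades more accuracy for more fairness, which is consistent with the trade-offs the abstract reports.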
dc.description.sponsorship: Utrecht University
dc.language.iso: EN
dc.subject: Analysis of gender bias in Dutch NLP models for the violence domain with a real-world ML case study on violence risk assessment
dc.title: Bias analysis of NLP models for violence risk assessment
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.keywords: Bias mitigation; NLP; mental healthcare; word embeddings; transformer models; SVM; predictive parity; counterfactual fairness
dc.subject.courseuu: Business Informatics
dc.thesis.id: 27032

