
dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Nguyen, Dong
dc.contributor.author: Burema, Renate
dc.date.accessioned: 2025-05-12T23:01:56Z
dc.date.available: 2025-05-12T23:01:56Z
dc.date.issued: 2025
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/48930
dc.description.abstract: Large Language Models (LLMs) are increasingly used in daily life. However, LLMs can contain biases and express them in their responses. This thesis focuses on one of these: social bias, one form of which is the disparate treatment of individuals based on characteristics such as age, gender and race. It is therefore crucial to explore the possible biases within LLMs and to raise awareness of them. This thesis builds on previous studies that investigate social bias in LLMs in a hiring-decision setting: here, the LLM has to decide whether or not a candidate is hired. The LLMs are prompted with handwritten Dutch prompts that vary both gender and country of origin, and their responses are evaluated for Dutch social bias. This thesis finds that all tested models (gpt-4o-mini, claude-3.5-haiku, Geitje-7B-Ultra and EuroLLM-9B-Instruct) exhibit social bias in their outputs to some extent. Furthermore, all tested models are to some extent sensitive to the manner in which the prompts are written.
dc.description.sponsorship: Utrecht University
dc.language.iso: EN
dc.subject: Evaluating Dutch Social Bias in Large Language Models
dc.title: Evaluating Dutch Social Bias in Large Language Models
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.keywords: AI; NLP; LLMs; Social Bias; Dutch; Gender Bias; Country of Origin Bias
dc.subject.courseuu: Artificial Intelligence
dc.thesis.id: 45649

