
dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Deoskar, Tejaswini
dc.contributor.author: Zgreabăn, Mădălina
dc.date.accessioned: 2024-09-26T23:02:13Z
dc.date.available: 2024-09-26T23:02:13Z
dc.date.issued: 2024
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/47851
dc.description.sponsorship: Utrecht University
dc.language.iso: EN
dc.subject: This thesis concerns the possible alignment of LLMs to human values, as well as priming humans to perform better on Theory of Mind (ToM) tasks.
dc.title: The Alignment Formula: Large Language Models and Humans' Decisions in a False-Belief Task
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.courseuu: Linguistics
dc.thesis.id: 39759

