The Alignment Formula: Large Language Models and Humans' Decisions in a False-Belief Task
Field | Value
--- | ---
dc.rights.license | CC-BY-NC-ND
dc.contributor.advisor | Deoskar, Tejaswini
dc.contributor.author | Zgreabăn, Mădălina
dc.date.accessioned | 2024-09-26T23:02:13Z
dc.date.available | 2024-09-26T23:02:13Z
dc.date.issued | 2024
dc.identifier.uri | https://studenttheses.uu.nl/handle/20.500.12932/47851
dc.description.sponsorship | Utrecht University
dc.language.iso | EN
dc.subject | This thesis concerns the possible alignment of LLMs with human values, as well as priming humans to perform better on Theory-of-Mind (ToM) tasks.
dc.title | The Alignment Formula: Large Language Models and Humans' Decisions in a False-Belief Task
dc.type.content | Master Thesis
dc.rights.accessrights | Open Access
dc.subject.courseuu | Linguistics
dc.thesis.id | 39759