dc.rights.license | CC-BY-NC-ND | |
dc.contributor.advisor | Kalis, A. | |
dc.contributor.author | Strien, Sterre van | |
dc.date.accessioned | 2025-02-07T00:01:19Z | |
dc.date.available | 2025-02-07T00:01:19Z | |
dc.date.issued | 2025 | |
dc.identifier.uri | https://studenttheses.uu.nl/handle/20.500.12932/48478 | |
dc.description.abstract | As Large Language Models (LLMs) grow in use and popularity, so does the need to understand them. One frequently raised question is whether LLMs can be ascribed mental states, a question philosophers have long tried to answer for Artificial Intelligence in general. One philosopher who has written about mental states and the possibility of attributing them to Artificial Intelligence is Daniel Dennett.
In this thesis, I explore Daniel Dennett’s theory of the intentional stance and the conditions for attributing mental states that can be derived from it. I propose a set of conditions and discuss whether current Large Language Models meet them. I also discuss whether conditions that current LLMs do not meet might be met in the future.
I provide a short introduction to LLMs, as well as a brief history of the debate on attributing mental states to LLMs and other forms of Artificial Intelligence. I also present an overview of the philosophical questions and debates concerning mental state attribution, along with the main positions in these debates. Subsequently, I explain Daniel Dennett’s theory of the intentional stance, as well as some notable criticism it has received, before proposing three conditions for attributing mental states to an entity, derived from the Intentional Stance Theory.
I conclude that current LLMs meet one of the three conditions but fail to meet the other two, and thus that current LLMs cannot be attributed mental states. Future LLMs, however, might continue to meet the first condition while also coming to meet the remaining two, in which case they could be attributed mental states. Finally, I discuss future developments of LLMs as well as the implications of attributing mental states to them. | |
dc.description.sponsorship | Utrecht University | |
dc.language.iso | EN | |
dc.subject | This thesis explores the question of whether Large Language Models (LLMs) can be attributed mental states, considered through Daniel Dennett's Intentional Stance Theory. A philosophical background is given, the Intentional Stance Theory is explained, and Large Language Models are defined. Subsequently, the possibility of attributing mental states to currently existing as well as future LLMs is discussed. | |
dc.title | Do Large Language Models have mental states? An exploration of Daniel Dennett’s Intentional Stance Theory | |
dc.type.content | Master Thesis | |
dc.rights.accessrights | Open Access | |
dc.subject.keywords | AI, Artificial Intelligence, Daniel Dennett, Intentional Stance Theory, Large Language Models, LLMs, Mental States | |
dc.subject.courseuu | Artificial Intelligence | |
dc.thesis.id | 42814 | |