
dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Nouwen, Rick
dc.contributor.author: Giulimondi, Alessia
dc.date.accessioned: 2024-10-31T01:01:45Z
dc.date.available: 2024-10-31T01:01:45Z
dc.date.issued: 2024
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/48038
dc.description.abstract: Increasingly, linguistic studies are employing LLMs to explain the underlying mechanisms of human linguistic cognition by applying experimental methods to LLMs that were previously used to test human participants (Huebner et al., 2021; Beguš et al., 2023; Piantadosi, 2023; Goldstein et al., 2020). Computational cognitive scientists have argued that the assumptions underlying these research choices are incorrect (Guest and Martin, 2023; van Rooij et al., 2023), and a growing body of linguists is taking a critical stance towards LLMs (Martínez et al., 2023; Kodner et al., 2023; Katzir, 2023; Bender and Koller, 2020; Bender et al., 2021). However, meta-theoretical linguistic research is still scarce, and, so far, no systematic analysis of language studies using LLMs as experimental tools has been conducted. This thesis aims to understand how the use of LLMs in research is affecting theory building in linguistics. More specifically, the analysis focuses on two research questions: 1) What is the theoretical relation of LLMs to human cognition when they are used for linguistic research? 2) How valid is the use of LLMs in linguistic theory? The thesis reviews ten linguistic articles and argues that they share the assumption that LLMs represent an artificial replication of human linguistic cognition. Moreover, drawing on the theoretical frameworks of Guest and Martin (2023), Guest (2024), and Sullivan (2022), it discusses how using LLMs to generate human-like linguistic behavior constitutes a theoretical misuse. It is shown that this misuse of LLMs is motivated by an industry-driven research mindset (Ahmed et al., 2023), which may be at the root of the theoretical misconceptions hypothesized in the first inquiry of this study. This analysis is relevant for language professionals' understanding of language technology and of the possible systemic misinterpretations at play in research in a human-machine era.
dc.description.sponsorship: Utrecht University
dc.language.iso: EN
dc.subject: The thesis is a meta-analysis of 10 research articles from different fields of linguistics; it shows the ambiguities and conceptual mistakes in a part of the linguistic literature that uses Large Language Models (LLMs) for experimental research (e.g. language acquisition research addressing the question of language innatism). The research shows that some linguists consider LLMs replications of human linguistic cognition, disregarding the limits posed by the principle of multiple realizability.
dc.title: Giulimondi, 7165838. Talking machines and linguistic cognition: a critical review of the use of large language models in linguistic theorizing
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.keywords: Large Language Models; Meta-theory; principle of multiple realizability; innatism; machine learning; platform society
dc.subject.courseuu: Linguistics
dc.thesis.id: 38960

