Capacity modeling in animal AGL studies
Summary
Artificial grammar learning (AGL) is an experimental paradigm used to investigate the processes that underlie language learning. Participants are trained on a set of strings generated from a grammar before being tested on their ability to generalize their learning to novel strings generated from the same grammar. From early AGL experiments in humans it was concluded that participants learn to represent and encode the grammar in the form of rules. This claim has since been challenged by a number of alternative learning strategies and mechanisms that can account for what is learned and how it is learned. Work on humans has so far not yielded a consensus on the mechanisms driving AGL, but it has yielded a number of computational models able to account for the performance reported in human studies. More recently, the AGL paradigm has also been used to investigate potential grammatical competence in non-human animals. So far, however, animal AGL studies have not embraced the possibility of mechanisms other than rule-learning, leaving the computational models that account for the phenomena in humans unexplored. In this thesis, I first review the animal AGL literature and find that the tacit assumption in many studies is that rule-learning is the only possible mechanism of AGL, leading to experimental designs that do not control for other, potentially simpler, explanations. To challenge this bias, I implement PARSER, a computational model of word segmentation in human infants, and apply it to experimental data from animal AGL experiments. In silico replications of animal AGL experiments show that PARSER is indeed capable of accounting for the AGL performance reported in animal studies. From these results, I conclude that mechanisms other than rule-learning are equally likely, or more likely, to explain the performance of animals in artificial grammar learning tasks.
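For the interested reader, the following minimal sketch illustrates the kind of chunking mechanism PARSER implements: percepts of one to three units are stored as weighted chunks, all chunks decay through forgetting and interference, and chunks that grow strong enough begin to shape subsequent perception. The parameter values and the simplified interference rule below are illustrative assumptions, not the settings used in the thesis.

```python
import random

# Illustrative parameter values (assumptions for this sketch, not the
# settings used in the thesis).
THRESHOLD = 1.0       # minimum weight for a chunk to guide perception
INITIAL_WEIGHT = 1.0  # weight of a newly created chunk
REINFORCEMENT = 0.5   # increment when an existing chunk is perceived again
FORGETTING = 0.05     # decay applied to every chunk at each step
INTERFERENCE = 0.005  # extra decrement for the units making up the percept


def parser(syllables, seed=0):
    """One PARSER-style pass over a list of syllables; returns the lexicon."""
    rng = random.Random(seed)
    lexicon = {}  # chunk (tuple of syllables) -> weight
    pos = 0
    while pos < len(syllables):
        # Perceive 1-3 units; each unit is the longest known chunk above
        # threshold that matches the input, otherwise a single syllable.
        units = []
        for _ in range(rng.randint(1, 3)):
            if pos >= len(syllables):
                break
            unit = (syllables[pos],)
            for chunk, weight in lexicon.items():
                if (weight >= THRESHOLD and len(chunk) > len(unit)
                        and tuple(syllables[pos:pos + len(chunk)]) == chunk):
                    unit = chunk
            units.append(unit)
            pos += len(unit)
        percept = tuple(s for u in units for s in u)
        # Forgetting: every stored chunk decays a little at each step.
        for chunk in lexicon:
            lexicon[chunk] -= FORGETTING
        # Interference (simplified here): constituent units lose a bit more.
        for unit in units:
            if unit in lexicon:
                lexicon[unit] -= INTERFERENCE
        # Reinforce the percept if it is already known, otherwise store it.
        if percept in lexicon:
            lexicon[percept] += REINFORCEMENT
        else:
            lexicon[percept] = INITIAL_WEIGHT
        # Chunks whose weight has dropped to zero or below are forgotten.
        lexicon = {c: w for c, w in lexicon.items() if w > 0}
    return lexicon


# Example: a continuous stream built from three made-up "words".
stream_rng = random.Random(42)
words = [("tu", "pi", "ro"), ("go", "la", "bu"), ("da", "ko")]
stream = [syl for _ in range(400) for syl in stream_rng.choice(words)]
for chunk, weight in sorted(parser(stream).items(), key=lambda kv: -kv[1])[:5]:
    print(chunk, round(weight, 2))
```

Run on a syllable stream built from a small artificial vocabulary, the highest-weighted chunks tend to correspond to the embedded words, which is what makes this class of model a candidate explanation for AGL performance without appeal to rule-learning.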