The Role of Positional Cues in Non-Adjacent Dependency-Learning: An Artificial Grammar Study
Previous research has shown that both adults and infants can detect and implicitly learn dependencies between non-adjacent elements in streams of spoken input. The learning mechanism underlying this ability, arguably a form of statistical learning, may be essential to infants’ acquisition of morpho-syntactic dependencies in their native language. In this study we tested the limits of this learning mechanism, using an artificial grammar that replicates some of the properties of morpho-syntactic dependencies in natural languages. Endress & Mehler (2009) proposed that non-adjacent dependencies are tracked just as successfully when they are instantiated at string edges as when they occur string-internally (i.e., the dependency between a and b was shown to be learned just as easily in strings of the type XaYbZ as in strings of the type aXYZb). We predicted a replication of their results with our own materials, designed to emulate dependencies in natural languages. Subjects exposed to an artificial grammar with the structure YaXb (where a and b are monosyllabic dependent elements, and Y and X are bisyllabic ‘fillers’) remained sensitive to the dependency even though the left edge of the dependency did not coincide with the left edge of the string, confirming our expectations. By contrast, subjects exposed to a minimally different grammar, with the structure aYXb, were no longer able to track the dependencies, suggesting that the length and nature of the intervening material play an important part in the ability to keep track of non-adjacent dependencies.