Newborns are able to learn complex sound sequences that follow rules similar to those of language, according to a team of researchers, including psycholinguist Jutta Mueller of the University of Vienna.
The study provides long-sought evidence that the ability to perceive dependencies between non-adjacent acoustic signals is innate.
It is well known that babies can learn sequences of syllables or sounds that directly follow one another. However, many rules in human language connect elements that are not adjacent. For example, in the sentence 'The tall woman who is hiding behind the tree calls herself Catwoman,' the 'tall woman' is the subject to which the third-person singular verb ending '-s' in 'calls' belongs.
Language development research shows that children have begun to learn such rules in their native language by the age of two. Yet learning experiments have shown that infants as young as five months can already detect rules between non-adjacent sounds, even outside of language, for example in tone sequences.
Co-author Simon Townsend from the University of Zurich points out that even our close relatives, chimpanzees, can detect complex acoustic patterns when they are embedded in tone sequences; the ability to track such patterns is therefore not tied to speech sounds alone.
Although many previous studies have suggested that the ability to detect patterns between non-adjacent sounds is innate, clear evidence had been lacking until now. By recording brain activity in newborns and six-month-old infants while they listened to complex sound sequences, the international research team has now provided that evidence.
In their experiment, newborns just a few days old heard sequences in which the first tone was linked to a non-adjacent third tone. After the babies had listened to two different types of such sequences for only six minutes, they were played new sequences at a different pitch that either followed the learned pattern or violated it. "It appears that, during the learning process in newborns, specific connections between brain regions are established that are important for the formation of the networks needed to process more complex acoustic patterns later on."
The researchers also note that their findings underscore that non-linguistic acoustic signals, such as the tone sequences used in the study, can activate language-relevant brain networks. This opens up possibilities for early intervention programs that use musical stimulation to support language development, for example.
Reference: Cai L, Arimitsu T, Shinohara N, Takahashi T, Hakuno Y, Hata M, et al. Functional reorganisation of brain regions supporting artificial grammar learning across the first half year of life.


