University of Pennsylvania

10/07/2024 | Press release

Studying how infants learn language

If a scholar wanted to study how adults learn a new language after moving to a new country, they would need to know what kind of language input those adults are getting. For example, are they taking language classes, or do they only hear the language in the public sphere? Do they work in an environment where they hear people speaking it?

The same is true for children. To understand how infants start learning a language, a researcher must know something about what the infants are hearing. Are they hearing a lot of words? Are the words they hear presented in a way that they are likely to understand?

In their summer experience at the Infant Language Center, second-year Ziana Sundrani and third-year Taiwo Adeaga investigated how infants figure out which things are words by examining one-hour recording sessions of parents talking to their children. Sundrani, a cognitive science and computer science major in the School of Arts & Sciences (SAS), and Adeaga, a cognitive science major in SAS, applied for and received the research opportunity through the Penn Undergraduate Research Mentoring Program, offered through the Center for Undergraduate Research and Fellowships.

Sundrani and Adeaga worked on the project with Daniel Swingley, director of the Infant Language Center and a professor in the Department of Psychology in SAS. Swingley says the first step in the project was to collect the recordings: researchers from the Infant Language Center recorded parents from the Philadelphia area talking to their children for one hour when the children were 6, 10, and 14 months old.

The second step, which Adeaga and Sundrani worked on over the summer, was to convert those recordings into a mathematical representation that lets researchers compare chunks of speech. Sundrani and Adeaga built a computer model that listens for stretches of speech that sound similar across adjacent sentences.

"We've been working on some code to compare each of the sounds that a mom says," Sundrani says. "If there are repeated words, we would hope that a lot of the sounds would be similar, and that would show in the computer graph analysis. Using those sounds that they picked out, that's kind of like our starting hypothesis of maybe that's what babies pick out as words, because it's patterns of the same sounds."

Swingley says their intuition is that when babies first come into the world, they hear a lot of people talking and, at first, none of it makes any sense, but over time some portions of speech start sounding familiar. Their first steps in figuring out the language could be detecting that some stretches of speech happen repeatedly across sentences.

Sundrani says the most interesting aspects of the internship were the methodology of the experiments and the researchers' ability to analyze what babies know even though the babies can't yet speak.

"Babies can't tell us anything, and yet we're doing all of these experiments about what we think they know," she says.

Adeaga says the most interesting part has been working in the different realms of linguistics.

"I was also drawn to the communication science aspect of it, so I think it was just a good fit for me," she says.

Swingley says Sundrani and Adeaga did fantastic work and were highly motivated.

"They were knowledgeable. The work they did was consistently effective. They had a great problem-solving approach," he says.