By Deborah Halber

As a recording of Alice in Wonderland relays Alice’s encounters with the Cheshire Cat and the Mad Hatter, a sophisticated imaging scanner tracks spikes of activity in the listener’s brain.

Experiments like this help Ev Fedorenko PhD ’07, investigator at the McGovern Institute for Brain Research and associate professor in the Department of Brain and Cognitive Sciences, explore a nagging puzzle: how do our brains perform the infinitely complex tasks of interpreting and generating language?

“Understanding how human minds perform high-level cognitive tasks is one of the greatest quests of all time,” she says. “Not only do we have incredibly sophisticated, abstract thoughts about how the world works, we can communicate those thoughts to one another. I find this feat incredible and want to understand the representations and computations that enable this.”

The work could also lead to improved diagnosis and treatment of language disorders, machines capable of understanding and generating language, and better educational programs for children with developmental language impairment or adults learning a second language.

Processing language

Researchers agree that language is processed primarily in the frontal and temporal lobes of the brain’s left hemisphere. But the nature of the cognitive and neural mechanisms at work, and the potential contribution of other brain areas, are hotly debated.

While some argue that one focal area processes syntax, Fedorenko says there are empirical and computational reasons to reject that view. In 2016, Fedorenko’s lab reported that syntax processing is distributed across the “entire ensemble” of brain regions that respond to high-level linguistic tasks.

Fedorenko uses techniques such as functional magnetic resonance imaging (fMRI), behavioral experiments, intracranial recordings, computational modeling, genotyping, and information gleaned from neurodevelopmental disorders to delve deeply into the interface between thoughts and utterances.

Amazingly, our brains seem to instinctively recognize certain sounds as human communication. “There’s a part of our auditory cortex that responds to speech sounds and no other sounds at all,” Fedorenko says. “Not to music, not to animal sounds, not to construction noises, just speech,” whether the words are English, Tagalog, or Hindi. These regions engage even if the listener doesn’t speak or understand the language. But the regions that receive input from these speech-responsive areas, the ones Fedorenko focuses on, care deeply about whether the speech is meaningful, showing little or no response to unfamiliar languages.

In fact, constructing complex meanings seems to drive these regions. Fedorenko has found that if you take a sentence and swap around the words so that it no longer reads like a logical sentence, the brain responds just as strongly as it does to well-formed sentences, a finding so surprising she was initially convinced it must be a mistake. It seems that as long as you can build meaning from nearby words, even if they don’t occur in a grammatical pattern (“ate he apple an”), your language system works at full power. This makes sense, she says, given that we often get linguistic input that contains errors—when we talk to children, for instance, or non-native speakers.

No one, she believes, has done what she has set out to do: systematically investigate differences in the neural architecture of speakers of a broad sampling of the world’s 6,000-plus languages. The Alice in Wonderland project involves speakers of more than 40 languages (and counting), including Farsi, Serbo-Croatian, and Basque, listening to excerpts from the book translated into their native languages while the fMRI scanner records their brain activity. Fedorenko wants to know whether languages that use a strict word order, such as English or German, are processed differently from languages that have highly flexible word orders, such as Finnish or Russian.

So far, Fedorenko’s team is finding broad similarities across diverse languages. “If we can generalize the basic properties of the language architecture to, say, 50 languages that we’ve sampled across different language families, that gives credence to the idea that these properties are universal and determined by the general features of human language rather than idiosyncratic properties of a particular language,” she says.

Investigating multilingual speakers

Fedorenko is especially intrigued by polyglots. She’s found that speaking multiple languages engages the same set of core centers within the brain that help you speak and understand your native language. Polyglots’ language processing also seems to involve less, rather than more, blood flow, compared to monolingual individuals. It’s as though polyglots have flexed the brain’s language muscle so many times, they can use it more efficiently.

Multilingual herself, Fedorenko grew up in Volgograd, formerly Stalingrad, a city of more than a million that she recalls as mostly war memorials and drab apartment buildings on the bank of the beautiful Volga River.

With the dissolution of the Soviet Union in 1991, “things just fell apart,” she says. Her mother, a mechanical engineer with a law degree, and her father, a construction worker who performed multiple jobs for little or no pay, struggled to make ends meet. When her mother was ready to return to work after the birth of Fedorenko’s sister, there was no company to go back to.

Yet, starting at age seven, Fedorenko learned languages: English in school, plus French, German, Polish, and Spanish from locals that her mom enlisted to tutor her. “We were spending money on those kinds of things, even at times when we were borderline starving. My mom just saw that as a more important investment in my future”—a ticket out of Russia.

Inspired by Chomsky

It worked. Fedorenko spent her sophomore year of high school with a host family in Alabama and then enrolled at Harvard University in 1998 on a full scholarship. She studied psychology and linguistics. When she went on to pursue graduate studies in cognitive science and neuroscience at MIT, she came across a 2002 paper by MIT Institute Professor and emeritus professor of linguistics Noam Chomsky proposing that the defining feature of human language is structure-building, which may be shared with brain domains that process math and music.

“That seemed like a really cool idea,” she says. “It just turned out to be wrong.” Fedorenko wanted to evaluate Chomsky’s hypothesis using the best scientific tools available, so she went on to develop a new fMRI approach to identify language-responsive areas in individual brains and then asked whether those areas also respond when we process structure in math and music. The answer was a resounding no.

With this arsenal of tools, Fedorenko can now analyze the neural mechanisms of language processing across dozens of experimental manipulations and relate neural data to state-of-the-art computational models of language processing, such as those used by Google Translate. This allows her “to tackle important questions that have not yet been answered,” she says. “Nobody has asked these kinds of questions, because people haven’t had the tools to do it.”