What Can AI Teach Us About the Human Brain?

Researchers at the Massachusetts Institute of Technology (MIT) and the University of California, Los Angeles (UCLA) believe that artificial intelligence (AI)-powered language models may be able to teach us something about the way the human brain processes and understands language.

These models have grown increasingly sophisticated in recent years, with models like GPT-3 generating a significant amount of buzz in the media. Although the models weren't necessarily built to mimic the exact mechanisms by which the brain processes and understands language, the team of researchers noticed that many of the tasks these models can now perform, from summarizing documents to answering questions, seem to require some level of understanding of the text.

So the researchers set out to see how the AI models stack up against the human brain, comparing the models' underlying algorithms with functional magnetic resonance imaging (fMRI) scans of the human brain performing similar tasks. In their study (which has not yet been peer-reviewed), “The neural architecture of language: Integrative modeling converges on predictive processing,” they found that the mechanisms appear to be quite similar, with the high-performing models closely resembling the activity of the human brain.

“The better the model is at predicting the next word, the more closely it fits the human brain,” said Nancy Kanwisher, a professor of cognitive science at MIT who worked on the research project. “It’s amazing that the models fit so well, and it very indirectly suggests that maybe what the human language system is doing is predicting what’s going to happen next.”

The researchers had the AI models predict strings of text while measuring the activity of each node involved in the process; similarly, they conducted fMRI scans on human subjects performing a series of reading and listening tasks. In analyzing the data collected, they found that the activity of the most successful AI models was similar to that of the human brain.
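To give a rough sense of what "similar activity" means in this kind of comparison, the sketch below correlates a hypothetical pattern of model-unit activity with a hypothetical pattern of brain responses to the same stimuli. This is only a minimal illustration of the general idea of correlating two activity patterns; the study's actual analysis pipeline is far more involved, and all of the numbers here are invented for demonstration.

```python
# Illustrative sketch only (not the study's actual method): measuring how
# closely a model's activity pattern tracks a brain-response pattern using
# the Pearson correlation coefficient. All values below are made up.
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences of numbers."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical activity of model units and fMRI responses to the same stimuli.
model_activity = [0.2, 0.5, 0.9, 0.4, 0.7]
brain_response = [0.1, 0.6, 0.8, 0.5, 0.6]

# A value near 1.0 would indicate the two patterns rise and fall together.
print(round(pearson(model_activity, brain_response), 3))
```

A higher correlation across many stimuli is one simple way to quantify the claim that a model "fits" the brain, which is the spirit of the comparison the researchers describe.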

“If we’re able to understand what these language models do and how they can connect to models which do things that are more like perceiving and thinking, then that can give us more integrative models of how things work in the brain,” said Joshua Tenenbaum, another researcher on the study. “This could take us toward better artificial intelligence models, as well as giving us better models of how more of the brain works and how general intelligence emerges, than we’ve had in the past.”

Andrew Warner
Andrew Warner is a writer from Sacramento. He received his B.A. in linguistics and English from UCLA and is currently working toward an M.A. in applied linguistics at Columbia University. His writing has been published in Language Magazine, Sactown Magazine, and The Takeout.


MultiLingual Media LLC