Meta AI announces long-term study on human brain and language processing

The human brain has long been, and continues to be, a conundrum: how it developed, how it continues to evolve, and the extent of its tapped and untapped capabilities.

The same goes for artificial intelligence (AI) and machine learning (ML) models.

And just as the human brain created AI and ML models that grow increasingly sophisticated by the day, these systems are now being applied to study the human brain itself. Specifically, such studies are seeking to enhance the capabilities of AI systems and more closely model them after brain functions so that they can operate in increasingly autonomous ways.

Researchers at Meta AI have embarked on one such initiative. The research arm of Facebook’s parent company today announced a long-term study to better understand how the human brain processes language. Researchers are looking at how the brain and AI language models respond to the same spoken or written sentences.

“We’re trying to compare AI systems to the brain, quite literally,” said Jean-Rémi King, senior research scientist at Meta AI.

Spoken language, he noted, is what makes humans wholly unique, and understanding how the brain works remains an ongoing challenge. The underlying question is: “What makes humans so much more powerful or so much more efficient than these machines? We want to identify not just the similarities, but pinpoint the remaining differences.”

Brain imaging and human-level AI

Meta AI is working with NeuroSpin (CEA), a Paris-based research center for innovation in brain imaging, and the French National Institute for Research in Digital Science (INRIA). The work is part of Meta AI’s broader focus on human-level AI that can learn with little to no human supervision.

By better understanding how the human brain processes language, the researchers hypothesize that they can glean insights that will help guide development of AI that can learn and process speech as efficiently as people do.

“It is becoming increasingly easy to develop and train and use special learning algorithms to perform a wide variety of tasks,” King said. “But these AI systems remain far away from how efficient the human brain is. What’s clear is that there is something missing from these systems to be able to understand and learn language much more efficiently, at least as efficiently as humans do. This is obviously the million-dollar question.”

In deep learning, multiple layers of neural networks work in tandem to learn. This approach has been applied in the Meta AI researchers’ work to highlight when and where perceptual representations of words and sentences are generated in the brain as a volunteer reads or listens to a story.

Over the past two years, researchers have applied deep learning techniques to public neuroimaging datasets culled from images of brain activity in magnetic resonance imaging (MRI) and computed tomography (CT) scans of volunteers. These were collected and shared by several academic institutions, including Princeton University and the Max Planck Institute for Psycholinguistics.

The team modeled thousands of these brain scans while also using a magnetoencephalography (MEG) scanner to capture brain activity every millisecond. Working with INRIA, they compared a variety of language models against the brain responses of 345 volunteers, recorded with functional magnetic resonance imaging (fMRI) as they listened to complex narratives.

The same narratives presented to the human subjects were then fed to the AI systems. “We can compare these two sets of data to see when and where they match or mismatch,” King said.
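
To make that comparison concrete, here is a minimal sketch of the kind of analysis involved: a linear “encoding model” is fit from a language model’s word-level activations to brain responses, and the match is scored with held-out correlations. The arrays below are random placeholders standing in for real fMRI recordings and embeddings, and the pipeline is a standard analysis in this line of work, not Meta AI’s exact code.

```python
# Sketch: fit a linear "encoding model" from language-model activations
# to brain responses, then score the match on held-out data.
# All data here are random placeholders, not real recordings.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_words, n_dims, n_voxels = 1000, 768, 200  # words x embedding dims x brain voxels

X = rng.standard_normal((n_words, n_dims))    # stand-in for per-word model activations
Y = rng.standard_normal((n_words, n_voxels))  # stand-in for fMRI responses to the same words

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# Ridge regression maps model activations to voxel responses.
encoder = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_tr, Y_tr)
Y_hat = encoder.predict(X_te)

# Pearson correlation per voxel between predicted and measured responses;
# the average across voxels is often reported as a "brain score".
Yh = Y_hat - Y_hat.mean(axis=0)
Yt = Y_te - Y_te.mean(axis=0)
r = (Yh * Yt).sum(axis=0) / (np.linalg.norm(Yh, axis=0) * np.linalg.norm(Yt, axis=0))
print(f"mean brain score across voxels: {r.mean():.3f}")  # near zero for random data
```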

What researchers have found so far

Researchers have already pulled out valuable insights. Notably, language models that most closely resemble brain activity are those that best predict the next word from context (such as “on a dark and stormy night…” or “once upon a time…”), King explained. Such prediction based on partially observable inputs is at the core of AI self-supervised learning (SSL).
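
As an illustration of that next-word objective, the short sketch below asks an off-the-shelf model for its next-word distribution after a familiar opening. GPT-2 and the Hugging Face transformers library are our choices for the example; the article does not say which systems the team tested.

```python
# Toy illustration of next-word prediction, the objective King describes.
# GPT-2 is an arbitrary, publicly available choice for the example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prompt = "It was a dark and stormy"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probability distribution over the next token, given the context so far.
next_token_probs = logits[0, -1].softmax(dim=-1)
top = next_token_probs.topk(5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r:>12}  p={p.item():.3f}")  # ' night' should rank near the top
```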

Still, specific regions of the brain anticipate words and ideas far ahead in time; language models, by contrast, are typically trained to predict only the very next word, which limits their ability to anticipate complex ideas, plots and narratives.

“[Humans] systematically predict what is going to come next,” King said. “But it’s not just prediction at a word level, it is at a more abstract level.”

In a further contrast, the human brain can learn from a few million sentences and can continuously adapt, storing information across its trillions of synapses. AI language models, meanwhile, are trained on billions of sentences and are built with up to 175 billion parameters (artificial synapses).

King pointed out that infants are exposed to only thousands of sentences, yet quickly come to understand language. From just a few examples, for instance, children learn that “orange” can refer to both a fruit and a color, a task that still trips up modern AI systems.

“It is very clear that the AI systems of today, no matter how good or impressive they are, are extremely inefficient as well,” King said. While AI models are performing increasingly complex tasks, “it is becoming very clear that in many ways they do not understand things broadly.”

To further hone their study, Meta AI researchers and NeuroSpin are now creating an original neuroimaging dataset. This, along with code, deep learning models and research papers, will be open-sourced to help further discovery in the AI and neuroscience fields. “The idea is to provide a series of tools that will be used and capitalized on by our colleagues in academia and other areas,” King said.

By studying long-range forecasting capabilities in more depth, researchers can help improve modern AI language models, he said. Enhancing algorithms with long-range forecasts could help them correlate more closely with the brain.
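
One way to picture that gap is to probe a next-word model for longer-range forecasts, scoring the probability it assigns to the token k positions ahead rather than the immediately next one. The sketch below (again using GPT-2) is our simplified illustration of the idea, not the team’s published metric.

```python
# Rough probe of long-range forecasting: how much probability does a
# next-word model assign to the token k steps ahead? A simplified
# illustration, not the researchers' published measure.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

text = "Once upon a time there was a princess who lived in a faraway castle"
ids = tokenizer(text, return_tensors="pt").input_ids[0]

with torch.no_grad():
    log_probs = model(ids.unsqueeze(0)).logits[0].log_softmax(dim=-1)

for k in (1, 2, 4, 8):  # forecast distance, in tokens
    # Mean log-probability assigned to the token k steps ahead of each position.
    scores = [log_probs[t, ids[t + k]].item() for t in range(len(ids) - k)]
    print(f"k={k}: mean log-prob = {sum(scores) / len(scores):.2f}")
# The score typically drops sharply as k grows: next-word models forecast
# poorly beyond the immediate future, which is the gap King describes.
```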

King emphasized: “What is clear now is that these systems can be compared to the human brain, which was not the case just a few years ago.”

He added that scientific progress requires bringing together the disciplines of neuroscience and AI; with time, he expects them to evolve much more closely and collaboratively.

“This exchange between neuroscience and AI is not just a metaphorical exchange with abstract ideas,” King said. “It’s becoming extremely concrete. We’re trying to understand what are the architectures, what are the learning principles in the brain? And we’re trying to implement these architectures and these principles into our models.”

Author: Taryn Plumb
Source: VentureBeat
