AI Research Updates

Brain uses AI-like calculations for language

December 10, 2025 · 5 min read

Summary: The human brain processes spoken language in a step-by-step sequence that closely matches the way large language models transform text. Using electrocorticography recordings of people listening to a podcast, the researchers found that early brain responses aligned with early layers of the AI, while deeper layers corresponded to later neural activity in regions such as Broca’s area.

The results challenge traditional theories of language that rely on fixed rules, instead emphasizing dynamic and contextual computation. The team also published a rich dataset linking neural signals to linguistic features, providing a powerful resource for future neuroscience research.

Key facts

  • Layered alignment: Early brain responses followed the first layers of the AI model, while deeper layers aligned with later neural activity.
  • Context rather than rules: AI-derived contextual embeddings predicted brain activity better than traditional linguistic units.
  • New resource: Researchers have published a large neurolinguistic dataset to accelerate the neuroscience of language.

Source: Hebrew University of Jerusalem

In a study published in Nature Communications, researchers led by Dr. Ariel Goldstein of the Hebrew University of Jerusalem, in collaboration with Dr. Mariano Schain of Google Research and with Professor Uri Hasson and Eric Ham of Princeton University, discovered a surprising connection between the way our brains make sense of spoken language and the way advanced AI models analyze text.

Using electrocorticography recordings of participants listening to a thirty-minute podcast, the team showed that the brain processes language in a structured sequence that reflects the layered architecture of large language models such as GPT-2 and Llama 2.

What the study revealed

When we listen to someone speak, our brain processes each incoming word through a cascade of neural calculations. Goldstein’s team found that these transformations unfold over time in a pattern parallel to the hierarchical layers of AI language models.

Early layers of AI models track simple word characteristics, while deeper layers incorporate context, tone, and meaning. The study found that human brain activity follows a similar progression: early neural responses align with earlier layers of the model, and later neural responses align with deeper layers.

This alignment was particularly clear in high-level language regions such as Broca’s area, where the peak brain response occurred later for deeper AI layers.

According to Dr. Goldstein, “What surprised us most is how well the temporal unfolding of meaning in the brain matches the sequence of transformations within large language models. Even though these systems are built very differently, the two seem to converge on a similar step-by-step build toward understanding.”
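The layer-to-latency analysis described above can be sketched as a small encoding-model simulation: fit a linear map from each layer's embeddings to neural activity at each time lag, then ask which lag each layer predicts best. The sketch below is a minimal numpy-only illustration under stated assumptions, not the authors' pipeline: the four "layers," the neural signal, and the lag structure are all fabricated stand-ins for GPT-2 activations and ECoG recordings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, n_dims, n_lags = 200, 16, 12

# Fabricated stand-ins: four "layers" of embeddings and a neural signal that,
# by construction, reflects layer k's features at a later time lag for deeper k.
layers = [rng.standard_normal((n_words, n_dims)) for _ in range(4)]
neural = 0.1 * rng.standard_normal((n_words, n_lags))
for k, emb in enumerate(layers):
    w = rng.standard_normal(n_dims)
    neural[:, 2 + 2 * k] += emb @ w       # deeper layer -> later peak lag

def ridge_predict(X, y, alpha=1.0):
    """Closed-form ridge regression, returning in-sample predictions."""
    beta = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)
    return X @ beta

def peak_lag(emb):
    """Lag at which this layer's embeddings best predict the neural signal."""
    scores = [np.corrcoef(ridge_predict(emb, neural[:, lag]), neural[:, lag])[0, 1]
              for lag in range(n_lags)]
    return int(np.argmax(scores))

peaks = [peak_lag(emb) for emb in layers]
print(peaks)  # deeper layers peak at later lags
```

In this toy setup the recovered peak lags increase with layer depth, mirroring the qualitative pattern the study reports for Broca's area; the real analysis additionally uses held-out data and many electrodes.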

Why it matters

The results suggest that artificial intelligence is not just a tool for generating text. It could also open a new window into understanding how the human brain processes meaning. For decades, scientists believed that understanding language relied on symbolic rules and rigid linguistic hierarchies.

This study challenges this view. Instead, it supports a more dynamic and statistical approach to language, in which meaning emerges gradually through layers of contextual processing.

The researchers also found that classic linguistic features such as phonemes and morphemes did not predict real-time brain activity as well as AI-derived contextual embeddings. This reinforces the idea that the brain integrates meaning in a more fluid and contextual way than previously thought.
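The contrast between symbolic features and contextual embeddings can likewise be illustrated with a toy encoding model. Everything in this sketch is fabricated for illustration: a "contextual embedding" that blends each word's vector with its two predecessors, a one-hot word-identity feature as the symbolic baseline, and a signal that depends on context by construction. Only the shape of the comparison comes from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
vocab, dim, n_tok = 10, 8, 300
word_vecs = rng.standard_normal((vocab, dim))
tokens = rng.integers(0, vocab, n_tok)

# Toy "contextual embedding": current word vector blended with its two
# predecessors, a stand-in for an LLM layer's context-sensitive representation.
ctx = np.stack([
    word_vecs[tokens[t]] + 0.7 * word_vecs[tokens[max(t - 1, 0)]]
    + 0.4 * word_vecs[tokens[max(t - 2, 0)]]
    for t in range(n_tok)
])
onehot = np.eye(vocab)[tokens]            # symbolic feature: word identity only

w = rng.standard_normal(dim)
signal = ctx @ w + 0.2 * rng.standard_normal(n_tok)   # depends on context

def cv_corr(X, y, alpha=1.0, k=5):
    """k-fold cross-validated correlation of ridge predictions with y."""
    idx = np.arange(len(y))
    preds = np.empty_like(y)
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        Xt, yt = X[train], y[train]
        beta = np.linalg.solve(Xt.T @ Xt + alpha * np.eye(X.shape[1]), Xt.T @ yt)
        preds[fold] = X[fold] @ beta
    return np.corrcoef(preds, y)[0, 1]

for name, X in [("contextual", ctx), ("one-hot word identity", onehot)]:
    print(name, round(cv_corr(X, signal), 3))
```

Because the one-hot feature cannot see the preceding words, it leaves the context-driven part of the signal unexplained, so the contextual predictor scores higher under cross-validation, which is the same logic the paper applies to phonemes and morphemes versus LLM embeddings.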

A new reference for neuroscience

To advance the field, the team made public the full dataset of neural recordings associated with linguistic features. This new resource allows scientists around the world to test competing theories about how the brain understands natural language, paving the way for computational models that more closely resemble human cognition.

Key questions answered:

Q: How is the brain’s language processing similar to AI models?

A: The brain transforms spoken language through a sequence of computations that align with progressively deeper layers of large language models.

Q: Why is this study important for understanding meaning?

A: It challenges rule-based theories of language, suggesting instead that meaning emerges through dynamic, contextual processing, similar to modern AI systems.

Q: What resource did the researchers publish?

A: A publicly available dataset combining electrocorticography recordings with linguistic features, enabling new tests of competing linguistic theories.

Editorial notes:

  • This article was edited by a Neuroscience News editor.
  • The full journal article has been reviewed.
  • Additional context added by our staff.

About this language processing and AI research news

Author: Yarden Mills
Source: Hebrew University of Jerusalem
Contact: Yarden Mills – Hebrew University of Jerusalem
Image: Credited to Neuroscience News

Original research: Open access.
“The temporal structure of natural language processing in the human brain corresponds to a layered hierarchy of large language models” by Uri Hasson et al. Nature Communications


Abstract

The temporal structure of natural language processing in the human brain corresponds to a layered hierarchy of large language models

Large language models (LLMs) provide a framework for understanding language processing in the human brain. Unlike traditional models, LLMs represent words and context via layered numerical embeddings.

Here, we demonstrate that the layer hierarchy of LLMs aligns with the temporal dynamics of language understanding in the brain.

Using electrocorticography (ECoG) data from participants listening to a 30-minute narrative, we show that deeper LLM layers correspond to later brain activity, particularly in Broca’s area and other language-related regions.

We extract contextual embeddings from GPT-2 XL and Llama-2 and use linear models to predict neural responses over time. Our results reveal a strong correlation between model depth and the brain’s temporal receptive window during comprehension.

We also compare LLM-based predictions with symbolic approaches, highlighting the advantages of deep learning models for capturing brain dynamics.

We are releasing our aligned neural and linguistic dataset as a public benchmark for testing competing theories of language processing.
