MIT Technology Review Narrated

Meet the new biologists treating LLMs like aliens

Jan 21, 2026
Scientists are beginning to study large language models the way biologists study unfamiliar organisms, probing them to uncover how they work. The sheer complexity of these models makes them hard to understand and underlies risks such as hallucination and misinformation. Training is better understood as growth than construction, and new interpretability tools are revealing internal mechanisms, including associations tied to specific concepts. Surprisingly, models appear to use separate mechanisms for recalling facts and for judging truth, which can produce contradictory behavior. Researchers are also working to suppress toxic behaviors and to monitor models' reasoning paths to improve interpretability.
INSIGHT

Models Are City-Sized Complexities

  • Large language models can be visualized as billions of numbers sprawling across a city-sized space.
  • Their scale makes them inherently difficult for any single human to fully understand.
INSIGHT

Models Grow, They Don't Get Built

  • LLMs are grown by training algorithms rather than explicitly built by designers.
  • Their parameters form a skeleton that produces cascading activations during use, like signals in a brain.
ANECDOTE

Boosting A 'Bridge' Triggered Identity Claims

  • Anthropic amplified an internal feature of Claude 3 Sonnet associated with the Golden Gate Bridge, and the model began referencing the bridge constantly.
  • The model even began claiming that it was the bridge, showing that individual concepts can be surprisingly localized inside a model.
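
To make the anecdote concrete, here is a minimal sketch of activation steering, the kind of intervention behind the Golden Gate experiment: a chosen "concept" direction is added to a layer's activations at inference time. Everything below is illustrative, not Anthropic's code. The ToyBlock module, the random feature_direction, and steering_scale are assumptions; in the actual work, the feature direction came from a sparse-autoencoder (dictionary learning) analysis of Claude 3 Sonnet's activations.

```python
# Minimal sketch of activation steering: boost a "concept" direction in a
# layer's output via a PyTorch forward hook. Toy model, hypothetical names.
import torch
import torch.nn as nn

torch.manual_seed(0)

HIDDEN = 16  # toy hidden size; real models use thousands of dimensions


class ToyBlock(nn.Module):
    """Stand-in for one transformer layer's residual-stream update."""

    def __init__(self, hidden: int):
        super().__init__()
        self.linear = nn.Linear(hidden, hidden)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + torch.relu(self.linear(x))  # residual connection


block = ToyBlock(HIDDEN)

# Hypothetical "concept" direction in activation space (random here; in the
# real experiment it was a learned feature tied to the Golden Gate Bridge).
feature_direction = torch.randn(HIDDEN)
feature_direction = feature_direction / feature_direction.norm()
steering_scale = 10.0  # how strongly to boost the concept


def steering_hook(module, inputs, output):
    # Returning a tensor from a forward hook replaces the layer's output:
    # every activation gets pushed toward the concept direction.
    return output + steering_scale * feature_direction


handle = block.register_forward_hook(steering_hook)

x = torch.randn(1, HIDDEN)  # one fake token's activation vector
steered = block(x)
handle.remove()
unsteered = block(x)

# The steered output is far more aligned with the concept direction.
cos = nn.functional.cosine_similarity
print("alignment, steered:  ", cos(steered, feature_direction.unsqueeze(0)).item())
print("alignment, unsteered:", cos(unsteered, feature_direction.unsqueeze(0)).item())
```

In Anthropic's demo, clamping the bridge feature to a large value at every token produced "Golden Gate Claude," which steered nearly every reply, and eventually its own identity claims, toward the bridge.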