a16z Podcast

Columbia CS Professor: Why LLMs Can’t Discover New Science

Oct 13, 2025
In this engaging conversation, Vishal Misra, a distinguished computer science professor at Columbia University, delves into the limitations of large language models (LLMs) in making scientific discoveries. Sharing insights from his research on retrieval-augmented generation, he argues that while LLMs have evolved rapidly, they can't fundamentally create new scientific paradigms. Co-host Martin Casado adds a technical perspective, discussing the need for new architectures in AI and why current models might be plateauing. Together, they explore the implications for artificial general intelligence.

AGI Means Creating New Science

  • Vishal Misra defines AGI as the ability to create new scientific paradigms beyond training data.
  • He says an AGI must produce genuinely new results, like relativity, rather than merely interpolate within existing knowledge.

Manifold View Explains Hallucinations

  • LLMs produce a next-token distribution and navigate a compressed Bayesian manifold derived from training data.
  • When prompts push them off that manifold, their outputs become hallucinations: confident but incorrect.

Why Chain-Of-Thought Helps

  • Chain-of-thought works because breaking tasks into steps reduces prediction entropy at each step.
  • LLMs follow seen step patterns, increasing confidence and correctness during stepwise reasoning.
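The entropy argument above can be made concrete with a toy calculation. The sketch below computes the Shannon entropy of two hypothetical next-token distributions (the numbers are illustrative, not taken from any model): a spread-out distribution standing in for a one-shot answer to a hard question, and a peaked one standing in for a single small reasoning step that matches patterns seen in training.

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a next-token probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Toy, made-up distributions for illustration only.
# One-shot answer: probability mass is spread across many plausible tokens.
one_shot = [0.25, 0.25, 0.2, 0.15, 0.1, 0.05]

# Single chain-of-thought step: mass concentrates on the familiar continuation.
stepwise = [0.9, 0.05, 0.03, 0.02]

print(entropy(one_shot))   # higher entropy: the model is uncertain
print(entropy(stepwise))   # lower entropy: a confident, on-manifold prediction
```

Lower entropy at each step is the claimed mechanism: decomposing a task into steps keeps each prediction inside well-trodden regions of the training distribution, where the model's confidence and correctness tend to coincide.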