Me, Myself, and AI

Connecting Language and (Artificial) Intelligence: Princeton’s Tom Griffiths

Jan 20, 2026
In this discussion, Tom Griffiths, a Princeton professor specializing in AI and cognitive science, dives into his book, The Laws of Thought. He explores how mathematics has historically shaped our understanding of both human and machine intelligence. Tom elaborates on the three frameworks that drive modern AI — rules, neural networks, and probability — and connects these concepts to language. He emphasizes the distinctly human skills of judgment and metacognition while discussing the limits of large language models and the future of human-AI collaboration.
INSIGHT

Math Sought To Explain The Mind

  • Early scientists tried using math to describe the mental world just like the physical world.
  • Understanding minds proved harder and required new mathematical tools beyond physics-style laws.
INSIGHT

Limits Of Rules And Symbols

  • Rules-and-symbols (logic) gave cognitive science precise hypotheses about reasoning and language structure.
  • That approach struggled to explain learning and fuzzy, real-world concepts.
INSIGHT

Neural Nets As Continuous Representations

  • Neural networks model continuous representations where concepts are regions in feature spaces.
  • They let systems learn mappings between spaces, solving learning problems symbolic logic couldn't.