The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Training Data Locality and Chain-of-Thought Reasoning in LLMs with Ben Prystawski - #673

Feb 26, 2024
Ben Prystawski, a Stanford PhD student working at the intersection of cognitive science and machine learning, shares insights on reasoning in LLMs. He discusses his recent paper, which asks whether reasoning exists in LLMs and why chain-of-thought strategies are effective. The conversation explores how locality in training data gives rise to reasoning capabilities, the nuances of optimizing prompts for better model performance, and how human experience shapes reasoning in ways that inform our understanding of artificial intelligence.
ANECDOTE

Ben's Path to AI

  • Ben Prystawski's interest in language began with learning languages at 14.
  • He realized his true passion was grammar, and he combined it with his interest in computers to pursue computational linguistics.
INSIGHT

Value of Reasoning

  • Reasoning is valuable because it helps us derive new knowledge without new external data.
  • This is demonstrated by mathematical proofs providing surprising insights.
INSIGHT

LLM Reasoning

  • LLMs display reasoning as intermediate computation, though not necessarily via human-like mechanisms.
  • Chain-of-thought prompting encourages this intermediate computation, often leading to better answers.
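The chain-of-thought prompting discussed above can be sketched as a simple prompt transformation. This is a minimal illustration, not the paper's method; the model call itself is omitted, and the function name is hypothetical, but the "let's think step by step" trigger phrase is the standard zero-shot chain-of-thought formulation:

```python
def make_cot_prompt(question: str) -> str:
    """Wrap a question in a zero-shot chain-of-thought prompt.

    Appending the step-by-step trigger encourages the model to emit
    intermediate reasoning tokens before committing to a final answer.
    """
    return f"Q: {question}\nA: Let's think step by step."


def make_direct_prompt(question: str) -> str:
    """Baseline prompt that asks for the answer directly."""
    return f"Q: {question}\nA:"


question = "If a train travels 90 miles in 1.5 hours, what is its average speed?"
print(make_direct_prompt(question))
print(make_cot_prompt(question))
```

The only difference between the two prompts is the trailing trigger phrase, which changes what the model generates: intermediate computation first, answer second.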