Ben Prystawski, a PhD student at Stanford blending cognitive science with machine learning, shares insights on LLM reasoning. He discusses his recent paper, which asks whether reasoning exists in LLMs and examines the effectiveness of chain-of-thought strategies. Delve into how locality in training data fuels reasoning capabilities, and explore the nuances of optimizing prompts for better model performance. The conversation also touches on how human experience shapes reasoning and what that suggests for artificial intelligence.
ANECDOTE
Ben's Path to AI
Ben Prystawski's interest in language began with learning languages at 14.
He realized his true passion was grammar, which he combined with his interest in computers to pursue computational linguistics.
INSIGHT
Value of Reasoning
Reasoning is valuable because it helps us derive new knowledge without new external data.
Mathematical proofs demonstrate this: they can yield surprising insights from what we already know.
INSIGHT
LLM Reasoning
LLMs display reasoning in the form of intermediate computation, though not necessarily through human-like mechanisms.
Chain-of-thought prompting encourages this intermediate computation and often leads to better answers.
The Cultural Origins of Human Cognition
Michael Tomasello
In *The Cultural Origins of Human Cognition*, Michael Tomasello bridges evolutionary theory and cultural psychology to explain the distinct cognitive abilities of humans. He argues that human cognition is rooted in capacities for shared attention, understanding intentions, and imitative learning, which drive cultural evolution and distinguish humans from other primates. The book provides a comprehensive analysis of how these cognitive capacities develop and shape human culture over time.
Sapiens
A Brief History of Humankind
Yuval Noah Harari
This book surveys the history of humankind from the Stone Age to the 21st century, focusing on Homo sapiens. It divides human history into four major parts: the Cognitive Revolution, the Agricultural Revolution, the Unification of Humankind, and the Scientific Revolution. Harari argues that Homo sapiens dominate the world due to their unique ability to cooperate in large numbers through beliefs in imagined realities such as gods, nations, money, and human rights. The book also examines the impact of human activities on the global ecosystem and speculates on the future of humanity, including the potential for genetic engineering and non-organic life.
Today we’re joined by Ben Prystawski, a PhD student in the Department of Psychology at Stanford University working at the intersection of cognitive science and machine learning. Our conversation centers on Ben’s recent paper, “Why think step by step? Reasoning emerges from the locality of experience,” which he recently presented at NeurIPS 2023. In this conversation, we start out exploring basic questions about LLM reasoning, including whether it exists, how we can define it, and how techniques like chain-of-thought reasoning appear to strengthen it. We then dig into the details of Ben’s paper, which aims to understand why thinking step-by-step is effective and demonstrates that local structure is the key property of LLM training data that enables it.
The complete show notes for this episode can be found at twimlai.com/go/673.