LessWrong (Curated & Popular)

"At 87, Pearl is still able to change his mind" by rotatingpaguro

Oct 30, 2023
Judea Pearl, the researcher known for Bayesian networks and the statistical formalization of causality, discusses why machine learning needs causal models and why purely statistics-level reasoning is not enough. The conversation covers his surprising changes of mind about causal queries and GPT's capabilities, the levels of causation in AI, and the ethical implications of the shift toward general AI.
AI Snips
INSIGHT

Causal Information in Text

  • Judea Pearl reconsidered his stance on extracting causal information from observational studies after observing large language models.
  • He found that LLMs can cite causal information present in their text data, even without having experienced the underlying events.
ANECDOTE

Firing Squad Example

  • Pearl tested GPT's causal reasoning with a firing squad example from his book.
  • After initial struggles, GPT correctly identified overdetermination, the situation where multiple sufficient causes exist for the same event (see the sketch below).
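
The overdetermination point is easiest to see in a toy structural model. Below is a minimal Python sketch of the firing-squad setup from The Book of Why; the variable names and the `model` function are my own illustration, not code from the episode. Two soldiers fire on the captain's signal, and either shot alone is sufficient to cause death.

```python
# Minimal sketch of the firing-squad example (illustrative, not from the episode):
# court order -> captain signals -> soldiers A and B both fire -> prisoner dies.

def model(court_order, a_override=None, b_override=None):
    """Structural causal model; *_override implements do(A=...) / do(B=...)."""
    captain = court_order                       # captain signals iff ordered
    a = captain if a_override is None else a_override
    b = captain if b_override is None else b_override
    death = a or b                              # either shot is sufficient
    return {"A": a, "B": b, "D": death}

actual = model(court_order=True)
# Counterfactual: had soldier A not fired, the prisoner still dies,
# because soldier B's shot is an independent sufficient cause.
counterfactual = model(court_order=True, a_override=False)

print(actual)          # {'A': True, 'B': True, 'D': True}
print(counterfactual)  # {'A': False, 'B': True, 'D': True}
# D is unchanged: the death is overdetermined, so removing one
# sufficient cause does not remove the effect.
```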
INSIGHT

Ladder of Causation and LLMs

  • Pearl acknowledges that LLMs can reach higher rungs of the ladder of causation because their text data already contains causal information (the rungs are sketched below).
  • Reinforcement learning, while enabling intervention-based learning, still falls short of true causal reasoning.
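
For readers unfamiliar with the ladder, its rungs are association (seeing), intervention (doing), and counterfactuals (imagining). The Python sketch below illustrates the first two rungs with a toy confounded model of my own choosing, not one discussed in the episode; it shows why conditioning on X and intervening on X give different answers.

```python
# Hedged sketch of the first two rungs of the ladder of causation, using a
# toy confounded model (my own illustration): Z -> X, Z -> Y, X -> Y.
import random

random.seed(0)

def sample(do_x=None):
    z = random.random() < 0.5                        # hidden confounder
    x = (random.random() < (0.9 if z else 0.1)) if do_x is None else do_x
    y = random.random() < (0.3 + 0.4 * z + 0.2 * x)  # both Z and X raise Y
    return z, x, y

N = 100_000

# Rung 1 -- association: P(Y=1 | X=1), estimated by filtering observations.
obs = [sample() for _ in range(N)]
seen = [y for _, x, y in obs if x]
print("P(Y=1 | X=1)     ~", sum(seen) / len(seen))

# Rung 2 -- intervention: P(Y=1 | do(X=1)), estimated by forcing X = 1.
forced = [sample(do_x=True) for _ in range(N)]
print("P(Y=1 | do(X=1)) ~", sum(y for *_, y in forced) / N)

# The two estimates differ because conditioning on X=1 also selects for the
# confounder Z, while do(X=1) cuts the Z -> X arrow. Rung 3 (counterfactuals,
# e.g. "would Y have occurred had X been 0 in this particular case?") needs
# the full structural model, as in the firing-squad sketch above.
```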