
Free Will, LLMs & Intelligence | Judea Pearl Ep 21 | CausalBanditsPodcast.com


NOTE

Limitations of Large Language Models and Causal Reasoning

Large language models (LLMs) face inherent limitations in learning causal structures from observational data alone. While some argue that LLMs can learn world models, others point to constraints grounded in the theory of causal inference. Work by Andrew Lampinen suggests that models can acquire active causal strategies from passive data, provided that data includes descriptions of interventions and experiments. LLMs are trained on human-produced text, which embodies causal models constructed by people; however, copying these models is not the same as learning causal structure from the data itself. The result is often a tangled amalgamation of borrowed human models, referred to here as a 'salad of associations.' This raises questions about the reliability of LLMs in causal reasoning: their outputs may be unpredictable mixtures of borrowed concepts rather than products of robust causal understanding.
