3min snip

Free Will, LLMs & Intelligence | Judea Pearl Ep 21 | CausalBanditsPodcast.com

Causal Bandits Podcast

NOTE

Limitations of Large Language Models and Causal Reasoning

Large language models (LLMs) face inherent limits when learning causal structure from observational data alone. While some argue that LLMs can learn world models, others point to constraints grounded in causal reasoning theory, under which interventional claims cannot, in general, be derived from observational data by itself. Work by Andrew Lampinen suggests that LLMs can acquire active causal strategies when their training data includes not only passive observations but also interventional data, such as descriptions of experiments. LLMs are trained on human-produced text, which embodies causal models created by individual people. Merely copying these models, however, is not the same as learning from the data itself, and LLMs tend to blend many human models into a complex amalgamation, referred to here as a 'salad of associations.' This raises questions about the reliability of LLMs in causal reasoning: they may produce unpredictable outputs from a mixture of borrowed concepts rather than from a robust causal understanding.
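The gap between observational and interventional data that the discussion turns on can be shown with a minimal simulation (the variable names and probabilities below are illustrative, not from the episode): a confounder Z drives both X and Y, so X and Y are strongly associated in passive data even though X has no causal effect on Y. Intervening on X, i.e. computing P(Y | do(X)) by severing the Z → X edge, reveals the effect is zero. A learner given only the observational samples cannot distinguish the two.

```python
import random

random.seed(0)

def observational(n=100_000):
    """Sample (X, Y) pairs from a confounded model: Z -> X, Z -> Y, no X -> Y edge."""
    data = []
    for _ in range(n):
        z = random.random() < 0.5
        x = random.random() < (0.8 if z else 0.2)  # X depends on Z
        y = random.random() < (0.8 if z else 0.2)  # Y depends on Z, not on X
        data.append((x, y))
    return data

def p_y_given_x(data, x_val):
    """Observational conditional P(Y=1 | X=x_val), estimated from samples."""
    ys = [y for x, y in data if x == x_val]
    return sum(ys) / len(ys)

def p_y_do_x(n=100_000):
    """Interventional P(Y=1 | do(X=x)): setting X by fiat cuts the Z -> X edge,
    so Y's distribution no longer depends on the chosen x at all."""
    ys = []
    for _ in range(n):
        z = random.random() < 0.5
        y = random.random() < (0.8 if z else 0.2)
        ys.append(y)
    return sum(ys) / n

obs = observational()
assoc_gap = p_y_given_x(obs, True) - p_y_given_x(obs, False)
causal_gap = p_y_do_x() - p_y_do_x()
print(f"observational gap P(Y|X=1) - P(Y|X=0): {assoc_gap:.2f}")  # large (~0.36)
print(f"interventional gap under do(X):        {causal_gap:.2f}")  # ~0
```

Analytically, P(Y=1 | X=1) = 0.8·0.8 + 0.2·0.2 = 0.68 while P(Y=1 | X=0) = 0.32, a gap of 0.36, yet P(Y=1 | do(X=x)) = 0.5 for either x. This is the standard confounding picture behind the claim that observational text alone underdetermines causal structure.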
