Limitations of Large Language Models and Causal Reasoning
Large language models (LLMs) face limits in learning causal structure from observational data alone. While some argue that LLMs can learn world models, others point to inherent constraints grounded in theories of causal reasoning. Insights from Andrew Lampinen suggest that LLMs can develop active causal strategies from passive and interventional data, including descriptions of experiments. LLMs are trained on human-produced text, and that text already embodies causal models constructed by people. Merely reproducing those models, however, is not the same as learning causality from the data itself, and LLMs often blend them into a complex amalgamation of borrowed human models, referred to here as a 'salad of associations.' This raises questions about the reliability of LLMs in causal reasoning: they may produce unpredictable outcomes from a mixture of borrowed concepts rather than from a robust causal understanding.
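As a rough illustration of why observational data alone underdetermines causal structure (a minimal sketch, not from the episode; the models and parameters are hypothetical), the snippet below builds two structural causal models, X causes Y and Y causes X, that produce the same observational correlation yet give different answers under an intervention. Distinguishing them requires exactly the interventional or experimental information that plain next-token prediction over text never forces a model to resolve.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Model A: X causes Y.  X ~ N(0, 1), Y = 0.8 * X + noise.
x_a = rng.normal(0.0, 1.0, n)
y_a = 0.8 * x_a + rng.normal(0.0, 0.6, n)

# Model B: Y causes X, with parameters chosen so the observational
# joint distribution of (X, Y) matches Model A:
# in Model A, Var(Y) = 0.64 + 0.36 = 1.0 and Cov(X, Y) = 0.8,
# so the reverse model uses the same coefficient and noise scale.
y_b = rng.normal(0.0, 1.0, n)
x_b = 0.8 * y_b + rng.normal(0.0, 0.6, n)

# Observationally, the two models look the same.
print("corr under A:", np.corrcoef(x_a, y_a)[0, 1])  # ~0.8
print("corr under B:", np.corrcoef(x_b, y_b)[0, 1])  # ~0.8

# An intervention do(X := 2) separates them.
x_int = np.full(n, 2.0)
y_int_a = 0.8 * x_int + rng.normal(0.0, 0.6, n)  # Y shifts in Model A
y_int_b = rng.normal(0.0, 1.0, n)                # Y is unchanged in Model B
print("E[Y | do(X=2)] under A:", y_int_a.mean())  # ~1.6
print("E[Y | do(X=2)] under B:", y_int_b.mean())  # ~0.0
```

A learner that only ever sees samples of (X, Y) cannot tell the two models apart, whereas descriptions of experiments in the training text can carry that missing interventional signal, which is the kind of data the episode's discussion points to.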