

Why Can't AI Make Its Own Discoveries? — With Yann LeCun
Mar 19, 2025
Yann LeCun, Chief AI Scientist at Meta and Turing Award winner, discusses the limitations of AI in making original discoveries. He explains why current AI models, despite their vast access to knowledge, struggle with true innovation, and why genuine discovery requires a deeper understanding of the world. LeCun highlights key differences between human reasoning and AI capabilities, emphasizing the need for more advanced architectures. The conversation also touches on the importance of open-source innovation and the potential pitfalls for investors in the AI landscape.
LLMs and Scientific Discovery
- Large language models (LLMs) haven't made new scientific discoveries despite access to vast knowledge.
- Even with memorized information, they struggle to make the kinds of novel connections and deductions that humans can.
AI's Inability to Question
- Current AI models excel at providing known answers but lack the ability to question established knowledge.
- True innovation requires challenging assumptions, a skill LLMs haven't developed yet.
DeepSeek's Clever Trick
- Yann LeCun recalls a DeepSeek example where it generated seemingly insightful observations on the human condition.
- On closer inspection, the output turned out to be regurgitated from existing texts such as Sapiens.