The launch of R1, an AI model by the Chinese startup DeepSeek, recently sent shockwaves through the technology world. R1 is a “reasoning” model, the most cutting-edge type of large language model (LLM), and it performs about as well as the best-in-class Western models at a fraction of the training cost. Like other LLMs, though, it still lacks many of the skills and kinds of intelligence that human brains possess. For one, “reasoning” models still have a very limited understanding of the physical world in which they exist.
Our guest today wants to overcome these hurdles. Yann LeCun, chief AI scientist at Meta and a professor at New York University, thinks LLMs are not the answer if we want truly useful personal assistants, humanoid robots and driverless cars in the future. For machines to interact more intelligently with the real world, he is fundamentally rethinking how AI models are built and trained.
This week, along with six other pioneers of machine learning, Professor LeCun was awarded the Queen Elizabeth Prize for Engineering. He joins Alok Jha, The Economist’s science and technology editor.
For more on this topic, check out our series on the science that built the AI revolution, as well as our episodes on artificial general intelligence.
Transcripts of our podcasts are available via economist.com/podcasts.
Listen to what matters most, from global politics and business to science and technology—subscribe to Economist Podcasts+.
For more information about how to access Economist Podcasts+, please visit our FAQs page or watch our video explaining how to link your account.