In this episode of the Profound Podcast, I speak with Erik J. Larson, author of The Myth of Artificial Intelligence, about the speculative nature and real limitations of AI, particularly in relation to achieving Artificial General Intelligence (AGI). Larson delves into the philosophical and scientific misunderstandings surrounding AI, challenging the dominant narrative that AGI is just around the corner. Drawing on his experience in the field, Larson explains why much of the AI hype lacks an empirical foundation. He emphasizes the limits of current AI models, particularly their reliance on inductive reasoning, which, though powerful, is insufficient for achieving human-like intelligence.
Larson discusses how the field of AI has historically blended speculative futurism with genuine technological advances, often fueled by financial incentives rather than scientific rigor. He highlights how this approach has fostered misconceptions about AI’s capabilities, especially in the context of AGI. Turning to philosophical theories of inference, Larson introduces deductive, inductive, and abductive reasoning, explaining how current AI systems fall short because of their over-reliance on inductive methods. The conversation touches on the challenges of abduction (inference to the best explanation, the "broken" form of reasoning humans often use) and the difficulty of replicating it in AI systems.
Throughout the discussion, we explore the social and ethical implications of AI, including concerns about data limitations, the dangers of synthetic data, and the looming “data wall” that could stall future AI progress. We also consider broader societal impacts, such as how the misuse of and over-dependence on AI might erode innovation and human intelligence.