
Digital Disruption with Geoff Nielson | How AI Will Save Humanity: Creator of The Last Invention Explains
When intelligence becomes abundant, what happens to humanity’s purpose?
Andy Mills, the co-founder of The New York Times’ The Daily and creator of The Last Invention, joins us on this episode of Digital Disruption.
Andy is a reporter, editor, podcast producer, and co-founder of Longview. His most recent series, The Last Invention, explores the AI revolution, from Alan Turing’s early ideas to today’s fierce debates between accelerationists, doomers, and those focused on building the technology safely. Before that, he co-created The Daily at The New York Times and produced acclaimed documentary series including Rabbit Hole, Caliphate, and The Witch Trials of J.K. Rowling. A former fundamentalist Christian from Louisiana and Illinois, Andy now champions curiosity, skepticism, and the transformative power of listening to people with different perspectives; those values shape his award-winning journalism across politics, terrorism, culture wars, technology, and science.
Andy sits down with Geoff to break down the real debate shaping the future of AI. From the “doomers” warning of existential risk, to the accelerationists racing toward AGI, to the “scouts” focused on building the technology safely, Andy maps out the three major AI camps influencing policy, economics, and the future of human intelligence. This conversation explores why some researchers fear AGI, why others believe it will save humanity, how job loss and automation could reshape society, and why 2025 is becoming an “AI 101 moment” for the public. Andy also shares what he’s learned after years investigating OpenAI, Anthropic, xAI, and the people behind the AGI race.
If you want clarity on AGI, existential risk, the future of work, and what it all means for humanity, this is an episode you won’t want to miss.
In this episode:
00:00 Intro
01:00 The three camps of AI: doom, acceleration, scouts
05:00 Why skeptics aren’t driving the AI debate
07:00 Job loss, productivity & “good” vs. “bad” disruption
09:00 Existential risk & why scientists are sounding alarms
12:00 The origins of doomers and accelerationists
17:00 How AI debates escalated after ChatGPT
22:00 Why 2025 is an AI “101 moment” for the public
24:00 The tech stack wars: OpenAI, Anthropic, xAI
28:00 Why leaders joined the AI race
30:00 The accelerationist mindset
33:00 Contrarians, symbolists & the forgotten history of AI
39:00 Big Tech, branding & why AI CEOs avoid open conflict
42:00 The closed group chats of AI’s elite builders
46:00 Sci-Fi narratives vs. real-world intelligence risks
52:00 The AI bubble & why adoption is unlike any tech before
01:00:00 Are we entering a Wright-Brothers-to-moon-landing era?
01:10:00 What AGI means for capitalism, work & purpose
01:18:00 Why public debate needs to start now
01:20:00 What happens next
Connect with Andy:
Website: https://www.andymills.work/about
Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
