
For Humanity: An AI Risk Podcast
What We Lose When AI Makes Choices for Us | For Humanity #76
What if the greatest danger of AI isn’t extinction — but the quiet loss of our ability to think and choose for ourselves? In this episode of For Humanity, John sits down with journalist and author Jacob Ward (CNN, PBS, Al Jazeera; The Loop) to unpack the most under-discussed risk of artificial intelligence: decision erosion.
Jacob explains why AI doesn’t need to become sentient to be dangerous — it only needs to be convenient. Drawing on neuroscience, behavioral psychology, and real-world reporting, he shows how systems designed to “help” us are slowly pushing humans onto cognitive autopilot.
Together, they explore:
* Why AI threatens near-term human agency more than long-term sci-fi extinction
* How Google Maps offers a chilling preview of AI’s effect on the human brain
* The difference between fast thinking and slow thinking — and why AI exploits it
* Why persuasive AI may outperform humans politically and psychologically
* How profit incentives, not intelligence, are driving the most dangerous outcomes
* Why focusing only on extinction risk alienates the public — and weakens AI safety efforts
📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.
#AISafety #AIAlignment #ForHumanityPodcast #AIRisk #ForHumanity #JacobWard #AIandSociety #ArtificialIntelligence #HumanAgency #TechEthics #AIResponsibility
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
