

For Humanity: An AI Risk Podcast
The AI Risk Network
For Humanity, An AI Risk Podcast is the AI Risk Podcast for regular people. Peabody, duPont-Columbia and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2-10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and show what you can do to help save humanity. theairisknetwork.substack.com
Episodes

Oct 30, 2023 • 50min
For Humanity, An AI Safety Podcast: Episode #1 Please Look Up
How about we choose not to just all die? Are you with me? For Humanity, An AI Safety Podcast is the AI Safety Podcast for regular people. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2-10 years. This podcast is solely about the threat of human extinction from AI. We’ll meet the heroes and villains, explore the issues and ideas, and show what we can do to help save humanity.

Oct 25, 2023 • 2min
For Humanity, An AI Safety Podcast: Episode #1 Trailer
For Humanity, An AI Safety Podcast is the AI Safety Podcast for regular people. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2-10 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and show what you can do to help save humanity. The makers of AI have no idea how to control their technology or why it does what it does. And yet they keep making it faster and stronger. In episode one we introduce the two biggest unsolved problems in AI safety: alignment and interpretability. This podcast is your wake-up call, and a real-time, unfolding plan of action.


