
For Humanity: An AI Safety Podcast
For Humanity, An AI Safety Podcast is the AI Safety Podcast for regular people. Peabody, duPont-Columbia and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2-10 years. This podcast is solely about the threat of human extinction from AGI. We'll name and meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Latest episodes

Nov 20, 2023 • 2min
Dr. Roman Yampolskiy Interview, Part 1: For Humanity, An AI Safety Podcast Episode #4 TRAILER
In Episode #4 Part 1, TRAILER: John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Lab at the University of Louisville and a renowned AI safety researcher.
Among the many topics discussed in this episode:
-why more average people aren't involved and upset about AI safety
-how frontier AI capabilities workers go to work every day knowing their work risks human extinction, and then go back the next day
-how we can talk to our kids about these dark, existential issues
-what if the AI safety researchers concerned about human extinction from AI are just somehow wrong?
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Nov 15, 2023 • 28min
The Interpretability Problem: For Humanity, An AI Safety Podcast Episode #3
Episode #3: The Interpretability Problem. In this episode we'll hear from AI Safety researchers including Eliezer Yudkowsky, Max Tegmark, Connor Leahy, and many more discussing how current AI systems are black boxes: no one has any clue how they work inside.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Nov 13, 2023 • 1min
The Interpretability Problem: For Humanity, An AI Safety Podcast Episode #3 Trailer
This is the trailer for Episode #3: The Interpretability Problem. In this episode we'll hear from AI Safety researchers including Eliezer Yudkowsky, Max Tegmark, Connor Leahy, and many more discussing how current AI systems are black boxes: no one has any clue how they work inside.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
#AI #airisk #alignment #interpretability #doom #aisafety #openai #anthropic #eliezeryudkowsky #maxtegmark #connorleahy

Nov 8, 2023 • 34min
For Humanity, An AI Safety Podcast Episode #2: The Alignment Problem
Did you know the makers of AI have no idea how to control their technology? They have no clue how to align it with human goals, values and ethics. You know, stuff like, don't kill humans.
This is the AI safety podcast for all people, no tech background required. We focus only on the threat of human extinction from AI.
In Episode #2, The Alignment Problem, host John Sherman explores how alarmingly far AI safety researchers are from finding any way to control AI systems, much less their superintelligent children, who will arrive soon enough.

Nov 6, 2023 • 2min
For Humanity, An AI Safety Podcast: Episode #2, The Alignment Problem, Trailer
Did you know the makers of AI have no idea how to control their technology, while they admit it has the power to cause human extinction? In For Humanity: An AI Safety Podcast, Episode #2, The Alignment Problem, we look into the fact that no one has any clue how to align an AI system with human values, ethics and goals. Such as: don't kill all the humans, for example. Episode #2 drops Wednesday; this is the trailer.

Oct 30, 2023 • 50min
For Humanity, An AI Safety Podcast: Episode #1 Please Look Up
How bout we choose not to just all die? Are you with me?
For Humanity, An AI Safety Podcast is the AI Safety Podcast for regular people. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2-10 years. This podcast is solely about the threat of human extinction from AI. We'll meet the heroes and villains, explore the issues and ideas, and learn what we can do to help save humanity.

Oct 25, 2023 • 2min
For Humanity, An AI Safety Podcast: Episode #1 Trailer
For Humanity, An AI Safety Podcast is the AI Safety Podcast for regular people. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2-10 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
The makers of AI have no idea how to control their technology or why it does what it does. And yet they keep making it faster and stronger. In episode one we introduce the two biggest unsolved problems in AI safety: alignment and interpretability.
This podcast is your wake-up call, and a real-time, unfolding plan of action.