
For Humanity: An AI Safety Podcast
For Humanity, An AI Safety Podcast is the AI Safety Podcast for regular people. Peabody, duPont-Columbia and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2-10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
Latest episodes

Jan 2, 2024 • 2min
Veteran Marine vs. AGI, For Humanity, An AI Safety Podcast: Episode #9 TRAILER, Sean Bradley Interview
Do you believe the big AI companies when they tell you their work could kill every last human on earth? You are not alone. You are part of a growing general public that opposes unaligned AI capabilities development.
In Episode #9 TRAILER, we meet Sean Bradley, a Veteran Marine who served his country for six years, including as a helicopter door gunner. Sean left the service as a sergeant and now lives in San Diego where he is married, working and in college. Sean is a viewer of For Humanity and a member of our growing community of the AI risk aware.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast, is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
RESOURCES:
More on the little robot:
https://themessenger.com/tech/rob-rob...

Dec 22, 2023 • 39min
"AI's Top 3 Doomers" For Humanity, An AI Safety Podcast: Episode #8
Who are the most dangerous "doomers" in AI? It's the people bringing the doom threat to the world, not the people calling them out for it.
In Episode #8, host John Sherman points fingers and lays blame. How is it possible we're actually discussing a zero-humans-on-earth future? Meet the people making it happen, the real doomers.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast, is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years.
This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
#samaltman #darioamodei #yannlecun #ai #aisafety

Dec 21, 2023 • 2min
"AI's Top 3 Doomers" For Humanity, An AI Safety Podcast: Episode #8 TRAILER
Who are the most dangerous "doomers" in AI? It's the people bringing the doom threat to the world, not the people calling them out for it.
In Episode #8 TRAILER, host John Sherman points fingers and lays blame. How is it possible we're actually discussing a zero-humans-on-earth future? Meet the people making it happen, the real doomers.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast, is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years.
This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
#samaltman #darioamodei #yannlecun #ai #aisafety

Dec 14, 2023 • 53min
"Moms Talk AI Extinction Risk" For Humanity, An AI Safety Podcast: Episode #7
You've heard all the tech experts. But what do regular moms think about AI and human extinction?
In Episode #7, "Moms Talk AI Extinction Risk," host John Sherman moves the AI Safety debate from the tech world to the real world.
30-something tech dudes believe they somehow have our authorization to toy with killing our children. And our children's yet unborn children too. They do not have this authorization.
So what do regular moms think of all this? Watch and find out.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast, is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.

Dec 13, 2023 • 3min
"Moms Talk AI Extinction Risk" For Humanity, An AI Safety Podcast: Episode #7 TRAILER
You've heard all the tech experts. But what do regular moms think about AI and human extinction?
In our Episode #7 TRAILER, "Moms Talk AI Extinction Risk" host John Sherman moves the AI Safety debate from the tech world to the real world.
30-something tech dudes believe they somehow have our authorization to toy with killing our children. And our children's yet unborn children too. They do not have this authorization.
So what do regular moms think of all this? Watch and find out.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast, is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.

Dec 6, 2023 • 44min
"Team Save Us vs Team Kill Us" For Humanity, An AI Safety Podcast Episode #6: The Munk Debate
In Episode #6, "Team Save Us vs. Team Kill Us," host John Sherman weaves together highlights and analysis of The Munk Debate on AI Safety to show the case for and against AI as a human extinction risk.
The debate took place in Toronto in June 2023, and it remains entirely current and relevant today and stands alone as one of the most well-produced, well-argued debates on AI Safety anywhere. All of the issues debated remain unsolved. All of the threats debated only grow in urgency.
In this Munk Debate, you’ll meet two teams: Max Tegmark and Yoshua Bengio on Team Save Us (John’s title not theirs), and Yann Lecun and Melanie Mitchell on Team Kill Us (they’re called pro/con in the debate, Kill v Save is all John). Host John Sherman adds in some current events and colorful analysis (and language) throughout.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. Let’s call it facts and analysis.
For Humanity: An AI Safety Podcast, is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
RESOURCES
THE MUNK DEBATES: https://munkdebates.com
Max Tegmark
➡️X: https://twitter.com/tegmark
➡️Max's Website: https://space.mit.edu/home/tegmark
➡️Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/...
➡️Future of Life Institute: https://futureoflife.org
Yoshua Bengio
➡️Website: https://yoshuabengio.org/
Melanie Mitchell
➡️Website: https://melaniemitchell.me/
➡️X: https://x.com/MelMitchell1?s=20
Yann Lecun
➡️Google Scholar: https://scholar.google.com/citations?...
➡️X: https://x.com/ylecun?s=20
#AI #AISAFETY #AIRISK #OPENAI #ANTHROPIC #DEEPMIND #HUMANEXTINCTION #YANNLECUN #MELANIEMITCHELL #MAXTEGMARK #YOSHUABENGIO

Dec 3, 2023 • 2min
Team Save Us vs Team Kill Us: For Humanity, An AI Safety Podcast Episode #6: The Munk Debate TRAILER
Want to see the most important issue in human history, extinction from AI, robustly debated, live and in person? It doesn’t happen nearly often enough.
In our Episode #6 TRAILER, "Team Save Us vs. Team Kill Us," John Sherman weaves together highlights and analysis of The Munk Debate on AI Safety to show the case for and against AI as a human extinction risk. The debate took place in June 2023, and it remains entirely current and relevant today and stands alone as one of the most well-produced, well-argued debates on AI Safety anywhere. All of the issues debated remain unsolved. All of the threats debated only grow in urgency.
In this Munk Debate, you’ll meet two teams: Max Tegmark and Yoshua Bengio on Team Save Us (John’s title not theirs), and Yann Lecun and Melanie Mitchell on Team Kill Us (they’re called pro/con in the debate, Kill v Save is all John). Host John Sherman adds in some current events and colorful analysis (and language) throughout.
This is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. Let’s call it facts and analysis.
For Humanity: An AI Safety Podcast, is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
RESOURCES
THE MUNK DEBATES: https://munkdebates.com
Max Tegmark
➡️X: https://twitter.com/tegmark
➡️Max's Website: https://space.mit.edu/home/tegmark
➡️Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/...
➡️Future of Life Institute: https://futureoflife.org
Yoshua Bengio
➡️Website: https://yoshuabengio.org/
Melanie Mitchell
➡️Website: https://melaniemitchell.me/
➡️X: https://x.com/MelMitchell1?s=20
Yann Lecun
➡️Google Scholar: https://scholar.google.com/citations?...
➡️X: https://x.com/ylecun?s=20
#AI #AISAFETY #AIRISK #OPENAI #ANTHROPIC #DEEPMIND #HUMANEXTINCTION #YANNLECUN #MELANIEMITCHELL #MAXTEGMARK #YOSHUABENGIO

Nov 27, 2023 • 41min
Dr. Roman Yampolskiy Interview, Part 2: For Humanity, An AI Safety Podcast Episode #5
In Episode #5 Part 2: John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Lab at the University of Louisville, and renowned AI safety researcher.
Among the many topics discussed in this episode:
-what is at the core of AI safety risk skepticism
-why AI safety research leaders themselves are so all over the map
-why journalism is failing so miserably to cover AI safety appropriately
-the drastic step the federal government could take to really slow Big AI down
For Humanity: An AI Safety Podcast, is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
ROMAN YAMPOLSKIY RESOURCES
➡️Roman Yampolskiy's Twitter: https://twitter.com/romanyam
➡️Roman's YouTube Channel: https://www.youtube.com/c/RomanYampolskiy
➡️Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/...
➡️Roman on Medium: https://romanyam.medium.com/
#ai #aisafety #airisk #humanextinction #romanyampolskiy #samaltman #openai #anthropic #deepmind

Nov 26, 2023 • 3min
Dr. Roman Yampolskiy Interview, Part 2: For Humanity, An AI Safety Podcast Episode #5 TRAILER
In Episode #5 Part 2, TRAILER: John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Lab at the University of Louisville, and renowned AI safety researcher.
Among the many topics discussed in this episode:
-what is at the core of AI safety risk skepticism
-why AI safety research leaders themselves are so all over the map
-why journalism is failing so miserably to cover AI safety appropriately
-the drastic step the federal government could take to really slow Big AI down
For Humanity: An AI Safety Podcast, is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
ROMAN YAMPOLSKIY RESOURCES
➡️Roman Yampolskiy's Twitter: https://twitter.com/romanyam
➡️Roman's YouTube Channel: https://www.youtube.com/c/RomanYampol...
➡️Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/...
➡️Roman on Medium: https://romanyam.medium.com/
#ai #aisafety #airisk #humanextinction #romanyampolskiy #samaltman #openai #anthropic #deepmind

Nov 22, 2023 • 35min
Dr. Roman Yampolskiy Interview, Part 1: For Humanity, An AI Safety Podcast Episode #4
In Episode #4, Part 1: John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Lab at the University of Louisville, and renowned AI safety researcher.
Among the many topics discussed in this episode:
-why more average people aren't more involved and upset about AI safety
-how frontier AI capabilities workers go to work every day knowing their work risks human extinction, then come back the next day
-how we can talk to our kids about these dark, existential issues
-what if the AI safety researchers concerned about human extinction are just somehow wrong?
For Humanity: An AI Safety Podcast, is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.