
For Humanity: An AI Safety Podcast
For Humanity, An AI Safety Podcast is the AI Safety Podcast for regular people. Peabody, duPont-Columbia and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2-10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Latest episodes

Apr 15, 2024 • 5min
Episode #24 TRAILER - “YOU can help save the world from AI Doom” For Humanity: An AI Safety Podcast
In episode #24, host John Sherman and Nonlinear Co-founder Kat Woods discuss the critical need for prioritizing AI safety in the face of developing superintelligent AI. She compares the challenge to the Titanic's course towards an iceberg, stressing the difficulty of convincing people of the urgency. Woods argues that AI safety is a matter of both altruism and self-preservation. She uses human-animal relations to illustrate the potential consequences of a disparity in intelligence between humans and AI. She notes a positive shift in the perception of AI risks, from fringe to mainstream concern, and shares a personal anecdote from her time in Africa, which informed her views on the universal aversion to death and the importance of preventing harm. Woods's realization of the increasing probability of near-term AI risks further emphasizes the immediate need for action on AI safety.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
Nonlinear: https://www.nonlinear.org/
Best Account on Twitter: AI Notkilleveryoneism Memes
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 3pm EST
/ discord
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS

Apr 10, 2024 • 2h 2min
Episode #23 - “AI Acceleration Debate” For Humanity: An AI Safety Podcast
AI Risk-Realist John Sherman and Accelerationist Paul Leszczynski debate AI accelerationism, existential risks, and AI alignment with human values. They discuss the philosophy of accelerationism, human conditioning's influence on AI understanding, and the potential consequences of AI safety efforts. The podcast delves into the existential threat of human extinction from AGI, exploring the worst-case scenario of AI killing all humans.

Apr 8, 2024 • 5min
Episode #23 TRAILER - “AI Acceleration Debate” For Humanity: An AI Safety Podcast
Suicide or Salvation? In the Episode #23 TRAILER, AI Risk-Realist John Sherman and Accelerationist Paul Leszczynski debate AI accelerationism and the existential risks and benefits of AI, questioning the AI safety movement and discussing the concept of AI as humanity's child. They ponder whether AI should align with human values and the potential consequences of such alignment. Paul suggests that AI safety efforts could inadvertently lead to the very dangers they aim to prevent. The conversation touches on the philosophy of accelerationism and the influence of human conditioning on our understanding of AI.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
TIMESTAMPS:
Is AI an existential threat to humanity? (00:00:00) Debate on the potential risks of AI and its impact on humanity.
The AI safety movement (00:00:42) Discussion on the perception of AI safety as a religion and the philosophy of accelerationism.
Human conditioning and perspectives on AI (00:02:01) Exploration of how human conditioning shapes perspectives on AI and the concept of AGI as a human creation.
Aligning AI and human values (00:04:24) Debate on the dangers of aligning AI with human ideologies and the potential implications for humanity.
RESOURCES:
Paul’s Youtube Channel: Accel News Network
Best Account on Twitter: AI Notkilleveryoneism Memes
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 3pm EST
/ discord
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS

Apr 3, 2024 • 39min
Episode #22 - “Sam Altman: Unelected, Unvetted, Unaccountable” For Humanity: An AI Safety Podcast
In Episode #22, host John Sherman critically examines Sam Altman's role as CEO of OpenAI, focusing on the ethical and safety challenges of AI development. The discussion critiques Altman's lack of public accountability and the risks his decisions pose to humanity. Concerns are raised about the governance of AI, the potential for AI to cause harm, and the need for safety measures and regulations. The episode also explores the societal impact of AI, the possibility of AI affecting the physical world, and the importance of public awareness and engagement in AI risk discussions. Overall, the episode emphasizes the urgency of responsible AI development and the crucial role of oversight.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
Vanity Fair Gushes in 2015
Business Insider: Sam Altman’s Act May Be Wearing Thin
Oprah and Maya Angelou
Best Account on Twitter: AI Notkilleveryoneism Memes
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 3pm EST
/ discord
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
Timestamps:
The man who holds the power (00:00:00) Discussion about Sam Altman's power and its implications for humanity.
The safety crisis (00:01:11) Concerns about safety in AI technology and the need for protection against potential risks.
Sam Altman's decisions and vision (00:02:24) Examining Sam Altman's role, decisions, and vision for AI technology and its impact on society.
Sam Altman's actions and accountability (00:04:14) Critique of Sam Altman's actions and accountability regarding the release of AI technology.
Reflections on getting fired (00:11:01) Sam Altman's reflections and emotions after getting fired from OpenAI's board.
Silencing of concerns (00:19:25) Discussion about the silencing of individuals concerned about AI safety, particularly Ilya Sutskever.
Relationship with Elon Musk (00:20:08) Sam Altman's sentiments and hopes regarding his relationship with Elon Musk amidst tension and legal matters.
Legal implications of AI technology (00:22:23) Debate on the fairness of training AI under copyright law and its legal implications.
The value of data (00:22:32) Sam Altman discusses the compensation for valuable data and its use.
Safety concerns (00:23:41) Discussion on the process for ensuring safety in AI technology.
Broad definition of safety (00:24:24) Exploring the various potential harms and impacts of AI, including technical, societal, and economic aspects.
Lack of trust and control (00:27:09) Sam Altman's admission about the power and control over AGI and the need for governance.
Public apathy towards AI risk (00:31:49) Addressing the common reasons for public inaction regarding AI risk awareness.
Celebration of life (00:34:20) A personal reflection on the beauty of music and family, with a message about the celebration of life.
Conclusion (00:38:25) Closing remarks and a preview of the next episode.

Apr 1, 2024 • 2min
“Sam Altman: Unelected, Unvetted, Unaccountable” For Humanity: An AI Safety Podcast Episode #22 TRAILER
In episode #22, host John Sherman critically examines Sam Altman's role as CEO of OpenAI, focusing on the ethical and safety challenges of AI development. The discussion critiques Altman's lack of public accountability and the risks his decisions pose to humanity. Concerns are raised about the governance of AI, the potential for AI to cause harm, and the need for safety measures and regulations. The episode also explores the societal impact of AI, the possibility of AI affecting the physical world, and the importance of public awareness and engagement in AI risk discussions. Overall, the episode emphasizes the urgency of responsible AI development and the crucial role of oversight.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 3pm EST
/ discord
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS

Mar 27, 2024 • 1h 14min
“Why AI Killing You Isn’t On The News” For Humanity: An AI Safety Podcast Episode #21
Interview starts at 20:10
Some highlights of John’s news career start at 9:14
In Episode #21, “Why AI Killing You Isn’t On The News” (Casey Clark Interview), host John Sherman and WJZY-TV News Director Casey Clark explore the significant underreporting of AI's existential risks in the media. They recount a disturbing incident where AI bots infiltrated a city council meeting, spewing hateful messages. The conversation delves into the challenges of conveying the complexities of artificial general intelligence to the public and the media's struggle to present such abstract concepts compellingly. They predict job losses as the first major AI-related news story to break through and speculate on the future of AI-generated news anchors, emphasizing the need for human reporters in the field.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 3pm EST
/ discord
See more of John’s Talk in Philly:
https://x.com/ForHumanityPod/status/1772449876388765831?s=20
FOLLOW DAVID SHAPIRO ON YOUTUBE!
David Shapiro - YouTube
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS

Mar 25, 2024 • 4min
“Why AI Killing You Isn’t On The News” TRAILER For Humanity: An AI Safety Podcast Episode #21
In Episode #21 TRAILER “Why AI Killing You Isn’t On The News” Casey Clark Interview, John Sherman interviews WJZY-TV News Director Casey Clark about TV news coverage of AI existential risk.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 3pm EST / discord

Mar 20, 2024 • 1h 49min
“AI Risk Realist vs. Coding Cowboy” For Humanity: An AI Safety Podcast Episode #20
In Episode #20, “AI Safety Debate: Risk Realist vs Coding Cowboy,” John Sherman debates AI risk with lifelong coder and current Chief AI Officer Mark Tellez. The full conversation covers questions like: Can AI systems be contained to the digital world? Should we build data centers with explosives lining the walls, just in case? Are the AI CEOs just big liars? Mark believes we are on a safe course, and that when that changes we will have time to react. John disagrees. What follows is a candid and respectful exchange of ideas.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Community Note: So, after much commentary, I have done away with the Doom Rumble during the trailers. I like(d) it, I think it adds some drama, but the people have spoken and it is dead. RIP Doom Rumble, 2023-2024. Also I had a bit of a head cold at the time of some of the recording and sound a little nasal in the open and close, my apologies lol, but a few sniffles can’t stop this thing!!
RESOURCES:
Time Article on the New Report:
AI Poses Extinction-Level Risk, State-Funded Report Says | TIME
John's Upcoming Talk in Philadelphia!
It is open to the public, you will need to make a free account at meetup.com
https://www.meetup.com/philly-net/eve...
FOLLOW DAVID SHAPIRO ON YOUTUBE!
David Shapiro - YouTube
Dave Shapiro’s New Video where he talks about For Humanity
AGI: What will the first 90 days be like? And more VEXING questions from the audience!
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
Pause AI
Join the Pause AI Weekly Discord Thursdays at 3pm EST
/ discord

Mar 18, 2024 • 4min
“AI Risk Realist vs. Coding Cowboy” TRAILER For Humanity: An AI Safety Podcast Episode #20
In the Episode #20 TRAILER, “AI Safety Debate: Risk Realist vs Coding Cowboy,” John Sherman debates AI risk with a lifelong coder and current Chief AI Officer. The full conversation covers questions like: Can AI systems be contained to the digital world? Should we build data centers with explosives lining the walls, just in case? Are the AI CEOs just big liars? Mark believes we are on a safe course, and that when that changes we will have time to react. John disagrees. What follows is a candid and respectful exchange of ideas.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Community Note: So after much commentary I have done away with the Doom Rumble during the trailers. I like(d) it, I think it adds some drama, but the people have spoken and it is dead. RIP Doom Rumble, 2023-2024. Also I had a bit of a head cold at the time of some of the recording and sound a little nasal in the open and close, my apologies lol, but a few sniffles can’t stop this thing!!
RESOURCES:
Time Article on the New Report:
AI Poses Extinction-Level Risk, State-Funded Report Says | TIME
FOLLOW DAVID SHAPIRO ON YOUTUBE!
David Shapiro - YouTube
Dave Shapiro’s New Video where he talks about For Humanity
AGI: What will the first 90 days be like? And more VEXING questions from the audience!
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
Pause AI

Mar 13, 2024 • 1h 41min
“David Shapiro AI-Risk Interview” For Humanity: An AI Safety Podcast Episode #19
Discussion on the dangers of AI surpassing human intelligence, exploring societal implications and the need for consent. Delving into risks of AI and ensuring a positive future, including building a digital super organism. Reflecting on the future impact of advanced AI on society, job automation, and economic shifts. Navigating the complexities of AI development, international cooperation, and geopolitical concerns.