

For Humanity: An AI Safety Podcast
The AI Risk Network
For Humanity, An AI Safety Podcast is the AI Safety Podcast for regular people. Peabody, duPont-Columbia and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2-10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity. theairisknetwork.substack.com
Episodes

Jan 27, 2025 • 1h 43min
Protecting Our Kids From AI Risk | Episode #58
Host John Sherman interviews Tara Steele, Director of The Safe AI For Children Alliance, about her work to protect children from AI risks such as deepfakes, her concern about AI causing human extinction, and what we can do about all of it.

FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1/MONTH: https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10/MONTH: https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25/MONTH: https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100/MONTH: https://buy.stripe.com/aEU007bVp7fAfcI5km
You can also donate any amount one time.

Get Involved!
EMAIL JOHN: forhumanitypodcast@gmail.com
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/about

RESOURCES:
Bengio/Ng Davos video: https://www.youtube.com/watch?v=w5iuHJh3_Gk&t=8s
Stuart Russell video: https://www.youtube.com/watch?v=KnDY7ABmsds&t=5s
Al Green video (watch all 39 minutes, then replay): https://youtu.be/SOrHdFXfXds?si=s_nlDdDpYN0RR_Yc
Check out our partner channel, Lethal Intelligence AI: https://lethalintelligence.ai
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE: / @doomdebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK: https://stephenhansonart.bigcartel.co...
22 Word Statement from the Center for AI Safety: https://www.safe.ai/work/statement-on...
Best account on Twitter: AI Notkilleveryoneism Memes / aisafetymemes

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Jan 13, 2025 • 1h 40min
2025 AI Risk Preview | For Humanity: An AI Risk Podcast | Episode #57
Max Winga, an AI Safety Research Engineer from Conjecture, dives into pressing concerns about AI risks as we approach 2025. He discusses the imminent advent of advanced AI agents and the ethical implications of military collaboration with AI technology. Winga reflects on his shift from aspiring astronaut to advocating for AI safety after recognizing its potential threats. The conversation highlights urgent needs for better governance, ethical considerations in AI development, and the chilling prospects of rogue AI collaborations. A thought-provoking dialogue on the future of humanity and technology.

Dec 19, 2024 • 1h 14min
AGI Goes To Washington | For Humanity: An AI Risk Podcast | Episode #56
In Episode #56, host John Sherman travels to Washington DC with Felix De Simone and Louis Berman of Pause AI to lobby House and Senate staffers for AI regulation. We unpack what we saw and heard as we presented AI risk to the people who have the power to make real change.

SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/about
EMAIL JOHN: forhumanitypodcast@gmail.com
Check out our partner channel, Lethal Intelligence AI: https://lethalintelligence.ai

Nov 25, 2024 • 2h 25min
Connor Leahy Interview | Helping People Understand AI Risk | Episode #54
In Episode #54, John Sherman interviews Connor Leahy, CEO of Conjecture. (The full interview starts at 00:06:46.)

EMAIL JOHN: forhumanitypodcast@gmail.com
Check out Lethal Intelligence AI: https://lethalintelligence.ai / @lethal-intelligence-clips

Nov 19, 2024 • 1h 42min
Human Augmentation Incoming | The Coming Age Of Humachines | Episode #53
In Episode #53, John Sherman interviews Michael DB Harvey, author of The Age of Humachines. The discussion covers the coming spectre of humans implanting digital devices in their own bodies to try to compete with AI.

Nov 19, 2024 • 1h 18min
AI Risk Update | One Year of For Humanity | Episode #52
In Episode #52, host John Sherman looks back on the first year of For Humanity. Select shows are featured, as well as a very special celebration of life at the end.

Oct 23, 2024 • 1h 6min
AI Risk Funding | Big Tech vs. Small Safety | Episode #51
In Episode #51, host John Sherman talks with Tom Barnes, an Applied Researcher with Founders Pledge, about the reality of AI risk funding, and about the need for emergency planning for AI to be much more robust and detailed than it is now. We are currently woefully underprepared.

Learn more about Founders Pledge: https://www.founderspledge.com/

No celebration of life this week! YouTube finally got me with a copyright flag; I had to edit the song out.

THURSDAY NIGHTS--LIVE FOR HUMANITY COMMUNITY MEETINGS--8:30PM EST
Join Zoom Meeting: https://storyfarm.zoom.us/j/816517210...
Passcode: 829191

Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhu...
EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AGI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE: / @doomdebates
Join the Pause AI Weekly Discord, Thursdays at 2pm EST
Max Winga’s “A Stark Warning About AI Extinction”
For Humanity theme music by Josef Ebner. YouTube: / @jpjosefpictures Website: https://josef.pictures
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK: https://stephenhansonart.bigcartel.co...
22 Word Statement from the Center for AI Safety: https://www.safe.ai/work/statement-on...
Best account on Twitter: AI Notkilleveryoneism Memes / aisafetymemes

Oct 21, 2024 • 6min
AI Risk Funding | Big Tech vs. Small Safety | Episode #51 TRAILER
In the Episode #51 trailer, host John Sherman talks with Tom Barnes, an Applied Researcher with Founders Pledge, about the reality of AI risk funding, and about the need for emergency planning for AI to be much more robust and detailed than it is now. We are currently woefully underprepared.

Learn more about Founders Pledge: https://www.founderspledge.com/
EMAIL JOHN: forhumanitypodcast@gmail.com

Oct 21, 2024 • 1h 19min
Accurately Predicting Doom | What Insight Can Metaculus Reveal About AI Risk? | Episode #50
In Episode #50, host John Sherman talks with Deger Turan, CEO of Metaculus, about what his prediction market reveals about the AI future we are all heading towards.

Learn more: www.metaculus.com

THURSDAY NIGHTS--LIVE FOR HUMANITY COMMUNITY MEETINGS--8:30PM EST
Join Zoom Meeting: https://storyfarm.zoom.us/j/816517210...
Passcode: 829191

Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhu...
EMAIL JOHN: forhumanitypodcast@gmail.com

Oct 14, 2024 • 5min
Accurately Predicting Doom | What Insight Can Metaculus Reveal About AI Risk? | Episode #50 TRAILER
In the Episode #50 trailer, host John Sherman talks with Deger Turan, CEO of Metaculus, about what his prediction market reveals about the AI future we are all heading towards.

LEARN MORE--AND JOIN STOP AI: www.stopai.info
Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhu...
EMAIL JOHN: forhumanitypodcast@gmail.com