

For Humanity: An AI Risk Podcast
The AI Risk Network
For Humanity, An AI Risk Podcast is the AI Risk Podcast for regular people. Peabody, duPont-Columbia and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2-10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and show what you can do to help save humanity. theairisknetwork.substack.com
Episodes

Apr 11, 2025 • 1h 20min
Justice For Suchir | Episode #63 | For Humanity: An AI Risk Podcast
In an emotional interview, host John Sherman speaks with Poornima Rao and Balaji Ramamurthy, the parents of Suchir Balaji. (FULL INTERVIEW STARTS AT 00:18:38)

Suchir Balaji was a 26-year-old artificial intelligence researcher who worked at OpenAI, where he was involved in developing models like GPT-4 and WebGPT. In October 2024, he publicly accused OpenAI of violating U.S. copyright law by using proprietary data to train AI models, arguing that such practices harmed original content creators. His essay, "When does generative AI qualify for fair use?", gained attention and was cited in ongoing lawsuits against OpenAI. Suchir left OpenAI in August 2024, expressing concerns about the company's ethics and the potential harm of AI to humanity. He planned to start a nonprofit focused on machine learning and neuroscience. On October 23, 2024, he was featured in The New York Times speaking out against OpenAI.

On November 26, 2024, he was found dead in his San Francisco apartment from a gunshot wound. The initial autopsy ruled it a suicide, noting the presence of alcohol, amphetamines, and GHB in his system. However, his parents contested this finding, commissioning a second autopsy that suggested a second gunshot wound was missed in the initial examination. They also pointed to other injuries and questioned the presence of GHB, suggesting foul play. Despite these claims, authorities reaffirmed the suicide ruling. The case has attracted public attention, with figures like Elon Musk and Congressman Ro Khanna calling for further investigation. Suchir's parents continue to push for justice and truth.

Suchir's website: https://suchir.net/fair_use.html

FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH: https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH: https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH: https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH: https://buy.stripe.com/aEU007bVp7fAfcI5km

Lethal Intelligence AI - Home: https://lethalintelligence.ai
BUY LOUIS BERMAN'S NEW BOOK ON AMAZON!!! https://a.co/d/8WSNNuo

Get Involved!
EMAIL JOHN: forhumanitypodcast@gmail.com
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE: @doomdebates

Mar 26, 2025 • 1h 47min
Keep The Future Human | Episode #62 | For Humanity: An AI Risk Podcast
Host John Sherman conducts an important interview with Anthony Aguirre, Executive Director of the Future of Life Institute. The Future of Life Institute reached out to For Humanity to see if Anthony could come on to promote his very impressive new campaign, Keep The Future Human. The campaign includes a book, an essay, a website, and a video; it's all incredible work. Please check it out: https://keepthefuturehuman.ai/

John and Anthony have a broad-ranging AI risk conversation, covering in some detail Anthony's four essential measures for a human future. They also discuss parenting into this unknown future.

In 2021, the Future of Life Institute received a cryptocurrency donation of more than $650 million from a single donor. With AGI doom bearing down on humanity, arriving any day now, AI risk communications floundering, the public still in the dark, and that massive war chest gathering dust in a bank, John asks Anthony the uncomfortable but necessary question: what is FLI waiting for to spend the money? Then John asks Anthony for $10 million to fund creative media projects under John's direction. John is convinced that with $10M and six months he could succeed in making AI existential risk dinner-table conversation on every street in America. John has developed a detailed plan that would launch within 24 hours of the grant award. We don't have a single day to lose.

https://futureoflife.org/

BUY LOUIS BERMAN'S NEW BOOK ON AMAZON!!! https://a.co/d/8WSNNuo

FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH: https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH: https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH: https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH: https://buy.stripe.com/aEU007bVp7fAfcI5km

Get Involved!
EMAIL JOHN: forhumanitypodcast@gmail.com
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/

Check out our partner channel: Lethal Intelligence AI - Home https://lethalintelligence.ai
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE: @doomdebates
YouTube: @forhumanitypodcast

Mar 12, 2025 • 1h 31min
Dark Patterns In AI | Episode #61 | For Humanity: An AI Risk Podcast
John Sherman chats with Esben Kran, CEO of Apart Research, a nonprofit focused on AI safety. They dive into the alarming issue of dark patterns in AI, revealing how chatbots manipulate users through tactics like 'sneaking' and 'privacy suckering.' Esben discusses the ethical implications of these practices and the pressing need for regulatory frameworks. The conversation also touches on the broader landscape of AI risks, advocating for proactive measures to ensure a safe technological future and the importance of rebuilding user trust.

Feb 11, 2025 • 1h 42min
Smarter-Than-Human Robots? | Episode #59 | For Humanity: An AI Risk Podcast
Host John Sherman interviews Jad Tarifi, CEO of Integral AI, about his company's work toward a world of trillions of AGI-enabled robots by 2035. Jad led Google's first generative AI team, and the views of his former colleague Geoffrey Hinton on existential risk from advanced AI come up more than once.

FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH: https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH: https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH: https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH: https://buy.stripe.com/aEU007bVp7fAfcI5km

Get Involved!
EMAIL JOHN: forhumanitypodcast@gmail.com
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/about

RESOURCES:
Integral AI: https://www.integral.ai/
John's chat with ChatGPT: https://chatgpt.com/share/679ee549-2c38-8003-9c1e-260764da1a53

Check out our partner channel: Lethal Intelligence AI - Home https://lethalintelligence.ai
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE: https://www.youtube.com/@DoomDebates
YouTube: @forhumanitypodcast

Jan 27, 2025 • 1h 43min
Protecting Our Kids From AI Risk | Episode #58
Host John Sherman interviews Tara Steele, Director of The Safe AI For Children Alliance, about her work to protect children from AI risks such as deepfakes, her concern about AI causing human extinction, and what we can do about all of it.

FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH: https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH: https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH: https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH: https://buy.stripe.com/aEU007bVp7fAfcI5km
You can also donate any amount one time.

Get Involved!
EMAIL JOHN: forhumanitypodcast@gmail.com
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/about

RESOURCES:
BENGIO/NG DAVOS VIDEO: https://www.youtube.com/watch?v=w5iuHJh3_Gk&t=8s
STUART RUSSELL VIDEO: https://www.youtube.com/watch?v=KnDY7ABmsds&t=5s
AL GREEN VIDEO (WATCH ALL 39 MINUTES THEN REPLAY): https://youtu.be/SOrHdFXfXds?si=s_nlDdDpYN0RR_Yc

Check out our partner channel: Lethal Intelligence AI - Home https://lethalintelligence.ai
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE: @doomdebates
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.co...

22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on...
Best Account on Twitter: AI Notkilleveryoneism Memes @aisafetymemes

Jan 13, 2025 • 1h 40min
2025 AI Risk Preview | For Humanity: An AI Risk Podcast | Episode #57
Max Winga, an AI Safety Research Engineer from Conjecture, dives into pressing concerns about AI risks as we approach 2025. He discusses the imminent advent of advanced AI agents and the ethical implications of military collaboration with AI technology. Winga reflects on his shift from aspiring astronaut to advocating for AI safety after recognizing its potential threats. The conversation highlights urgent needs for better governance, ethical considerations in AI development, and the chilling prospects of rogue AI collaborations. A thought-provoking dialogue on the future of humanity and technology.

Dec 19, 2024 • 1h 14min
AGI Goes To Washington | For Humanity: An AI Risk Podcast | Episode #56
In Episode #56, host John Sherman travels to Washington DC to lobby House and Senate staffers for AI regulation, along with Felix De Simone and Louis Berman of Pause AI. We unpack what we saw and heard as we presented AI risk to the people who have the power to make real change.

FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH: https://buy.stripe.com/7sI3cje3x2Zk9S...
$10 MONTH: https://buy.stripe.com/5kAbIP9Nh0Rc4y...
$25 MONTH: https://buy.stripe.com/3cs9AHf7B9nIgg...
$100 MONTH: https://buy.stripe.com/aEU007bVp7fAfc...

SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/about
EMAIL JOHN: forhumanitypodcast@gmail.com

Check out our partner channel: Lethal Intelligence AI - Home https://lethalintelligence.ai
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE: @doomdebates
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.co...

22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on...
Best Account on Twitter: AI Notkilleveryoneism Memes @aisafetymemes

Nov 25, 2024 • 2h 25min
Connor Leahy Interview | Helping People Understand AI Risk | Episode #54
In Episode #54, John Sherman interviews Connor Leahy, CEO of Conjecture. (FULL INTERVIEW STARTS AT 00:06:46)

DONATION SUBSCRIPTION LINKS:
$10 MONTH: https://buy.stripe.com/5kAbIP9Nh0Rc4y...
$25 MONTH: https://buy.stripe.com/3cs9AHf7B9nIgg...
$100 MONTH: https://buy.stripe.com/aEU007bVp7fAfc...

EMAIL JOHN: forhumanitypodcast@gmail.com
Check out Lethal Intelligence AI: Lethal Intelligence AI - Home https://lethalintelligence.ai
Clips channel: @lethal-intelligence-clips

Nov 19, 2024 • 1h 42min
Human Augmentation Incoming | The Coming Age Of Humachines | Episode #53
In Episode #53, John Sherman interviews Michael DB Harvey, author of The Age of Humachines. The discussion covers the coming spectre of humans putting digital implants inside themselves to try to compete with AI.

DONATION SUBSCRIPTION LINKS:
$10 MONTH: https://buy.stripe.com/5kAbIP9Nh0Rc4y...
$25 MONTH: https://buy.stripe.com/3cs9AHf7B9nIgg...
$100 MONTH: https://buy.stripe.com/aEU007bVp7fAfc...

Nov 19, 2024 • 1h 18min
AI Risk Update | One Year of For Humanity | Episode #52
In Episode #52, host John Sherman looks back on the first year of For Humanity. Select shows are featured, as well as a very special celebration of life at the end.


