
For Humanity: An AI Safety Podcast

Latest episodes

Apr 22, 2025 • 1h 42min

Seventh Grader vs AI Risk | Episode #64 | For Humanity: An AI Risk Podcast

In Episode #64, host John Sherman interviews seventh grader Dylan Pothier, his mom Bridget, and his teacher Renee DiPietro. Dylan is an award-winning student author who is concerned about AI risk. (FULL INTERVIEW STARTS AT 00:33:34)

Sam Altman/Chris Anderson @ TED: https://www.youtube.com/watch?v=5MWT_doo68k

Check out our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home: https://lethalintelligence.ai

FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km

BUY LOUIS BERMAN’S NEW BOOK ON AMAZON!!! https://a.co/d/8WSNNuo

Get Involved!
EMAIL JOHN: forhumanitypodcast@gmail.com
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES ON YOUTUBE: https://www.youtube.com/@DoomDebates
Apr 11, 2025 • 1h 20min

Justice For Suchir | Episode #63 | For Humanity: An AI Risk Podcast

In an emotional interview, host John Sherman interviews Poornima Rao and Balaji Ramamurthy, the parents of Suchir Balaji. (FULL INTERVIEW STARTS AT 00:18:38)

Suchir Balaji was a 26-year-old artificial intelligence researcher who worked at OpenAI. He was involved in developing models like GPT-4 and WebGPT. In October 2024, he publicly accused OpenAI of violating U.S. copyright laws by using proprietary data to train AI models, arguing that such practices harmed original content creators. His essay, "When does generative AI qualify for fair use?", gained attention and was cited in ongoing lawsuits against OpenAI.

Suchir left OpenAI in August 2024, expressing concerns about the company's ethics and the potential harm of AI to humanity. He planned to start a nonprofit focused on machine learning and neuroscience. On October 23, 2024, he was featured in the New York Times speaking out against OpenAI.

On November 26, 2024, he was found dead in his San Francisco apartment from a gunshot wound. The initial autopsy ruled it a suicide, noting the presence of alcohol, amphetamines, and GHB in his system. However, his parents contested this finding, commissioning a second autopsy that suggested a second gunshot wound was missed in the initial examination. They also pointed to other injuries and questioned the presence of GHB, suggesting foul play. Despite these claims, authorities reaffirmed the suicide ruling. The case has attracted public attention, with figures like Elon Musk and Congressman Ro Khanna calling for further investigation.

Suchir’s parents continue to push for justice and truth.

Suchir’s website: https://suchir.net/fair_use.html

FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km

Lethal Intelligence AI - Home: https://lethalintelligence.ai

BUY LOUIS BERMAN’S NEW BOOK ON AMAZON!!! https://a.co/d/8WSNNuo

Get Involved!
EMAIL JOHN: forhumanitypodcast@gmail.com
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES ON YOUTUBE: https://www.youtube.com/@DoomDebates
Mar 26, 2025 • 1h 47min

Keep The Future Human | Episode #62 | For Humanity: An AI Risk Podcast

Host John Sherman conducts an important interview with Anthony Aguirre, Executive Director of the Future of Life Institute. The Future of Life Institute reached out to For Humanity to see if Anthony could come on to promote his very impressive new campaign, Keep The Future Human. The campaign includes a book, an essay, a website, and a video; it’s all incredible work. Please check it out: https://keepthefuturehuman.ai/

John and Anthony have a broad-ranging AI risk conversation, covering in some detail Anthony’s four essential measures for a human future. They also discuss parenting into this unknown future.

In 2021, the Future of Life Institute received a donation in cryptocurrency of more than $650 million from a single donor. With AGI doom bearing down on humanity, arriving any day now, AI risk communications floundering, the public still in the dark, and that massive war chest gathering dust in a bank, John asks Anthony the uncomfortable but necessary question: what is FLI waiting for to spend the money? Then John asks Anthony for $10 million to fund creative media projects under John’s direction. John is convinced that with $10M and six months, he could succeed in making AI existential risk dinner table conversation on every street in America. John has developed a detailed plan that would launch within 24 hours of the grant award. We don’t have a single day to lose.

https://futureoflife.org/

BUY LOUIS BERMAN’S NEW BOOK ON AMAZON!!! https://a.co/d/8WSNNuo

FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km

Get Involved!
EMAIL JOHN: forhumanitypodcast@gmail.com
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/

Check out our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home: https://lethalintelligence.ai

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES ON YOUTUBE: https://www.youtube.com/@DoomDebates

Explore our other video content on YouTube, where you'll find more insights into AI risk along with relevant social media links.
YouTube: https://www.youtube.com/@forhumanitypodcast
Mar 12, 2025 • 1h 31min

Dark Patterns In AI | Episode #61 | For Humanity: An AI Risk Podcast

John Sherman chats with Esben Kran, CEO of Apart Research, a nonprofit focused on AI safety. They dive into the alarming issue of dark patterns in AI, revealing how chatbots manipulate users through tactics like 'sneaking' and 'privacy suckering.' Esben discusses the ethical implications of these practices and the pressing need for regulatory frameworks. The conversation also touches on the broader landscape of AI risks, advocating for proactive measures to ensure a safe technological future and the importance of rebuilding user trust.
Feb 28, 2025 • 1h 43min

AI Risk Rising | Episode #60 | For Humanity: An AI Risk Podcast

Join Joep Meindertsma, founder of Pause AI Global and advocate for AI safety, as he discusses the urgent need for effective communication strategies in addressing AI risks. He delves into the shortcomings of recent AI summits and emphasizes the importance of collective action for ethical governance, much as the world once mobilized around nuclear technology. The conversation also highlights the alarming biases within AI models and the emotional toll on activists fighting for regulation, all while calling for stronger political engagement to safeguard our future against advanced AI threats.
Feb 11, 2025 • 1h 42min

Smarter-Than-Human Robots? | Episode #59 | For Humanity: An AI Risk Podcast

Host John Sherman interviews Jad Tarifi, CEO of Integral AI, about his company's work to try to create a world of trillions of AGI-enabled robots by 2035. Jad was a leader on Google's first generative AI team, and his take on his former colleague Geoffrey Hinton's views on existential risk from advanced AI comes up more than once.

FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km

Get Involved!
EMAIL JOHN: forhumanitypodcast@gmail.com
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/about

RESOURCES:
Integral AI: https://www.integral.ai/
John's chat with ChatGPT: https://chatgpt.com/share/679ee549-2c38-8003-9c1e-260764da1a53

Check out our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home: https://lethalintelligence.ai

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES ON YOUTUBE: https://www.youtube.com/@DoomDebates

To learn more about smarter-than-human robots, visit our YouTube channel. In this video, we cover the following topics: AI, AI risk, AI safety, robots, humanoid robots, AGI.

Explore our other video content on YouTube, where you'll find more insights into AI risk along with relevant social media links.
YouTube: https://www.youtube.com/@forhumanitypodcast
Jan 27, 2025 • 1h 43min

Protecting Our Kids From AI Risk | Episode #58

Host John Sherman interviews Tara Steele, Director of The Safe AI For Children Alliance, about her work to protect children from AI risks such as deepfakes, her concern about AI causing human extinction, and what we can do about all of it.

FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km
You can also donate any amount one time.

Get Involved!
EMAIL JOHN: forhumanitypodcast@gmail.com
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/about

RESOURCES:
BENGIO/NG DAVOS VIDEO: https://www.youtube.com/watch?v=w5iuHJh3_Gk&t=8s
STUART RUSSELL VIDEO: https://www.youtube.com/watch?v=KnDY7ABmsds&t=5s
AL GREEN VIDEO (WATCH ALL 39 MINUTES, THEN REPLAY): https://youtu.be/SOrHdFXfXds?si=s_nlDdDpYN0RR_Yc

Check out our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home: https://lethalintelligence.ai

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES ON YOUTUBE: https://www.youtube.com/@DoomDebates

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.co...

22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on...

Best account on Twitter: AI Notkilleveryoneism Memes (@aisafetymemes)

To learn more about protecting our children from AI risks such as deepfakes, visit our YouTube channel. In this video, we cover the following topics: AI, AI risk, AI safety.
Jan 13, 2025 • 1h 40min

2025 AI Risk Preview | For Humanity: An AI Risk Podcast | Episode #57

Max Winga, an AI Safety Research Engineer from Conjecture, dives into pressing concerns about AI risks as we approach 2025. He discusses the imminent advent of advanced AI agents and the ethical implications of military collaboration with AI technology. Winga reflects on his shift from aspiring astronaut to advocating for AI safety after recognizing its potential threats. The conversation highlights urgent needs for better governance, ethical considerations in AI development, and the chilling prospects of rogue AI collaborations. A thought-provoking dialogue on the future of humanity and technology.
Dec 19, 2024 • 1h 14min

AGI Goes To Washington | For Humanity: An AI Risk Podcast | Episode #56

In Episode #56, host John Sherman travels to Washington, DC to lobby House and Senate staffers for AI regulation, along with Felix De Simone and Louis Berman of Pause AI. We unpack what we saw and heard as we presented AI risk to the people who have the power to make real change.

FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9S...
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y...
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIgg...
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfc...

SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/about
EMAIL JOHN: forhumanitypodcast@gmail.com

Check out our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home: https://lethalintelligence.ai

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES ON YOUTUBE: https://www.youtube.com/@DoomDebates

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.co...

22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on...

Best account on Twitter: AI Notkilleveryoneism Memes (@aisafetymemes)
Dec 5, 2024 • 1h 32min

AI Risk Special | "Near Midnight in Suicide City" | Episode #55

In a special episode of For Humanity: An AI Risk Podcast, host John Sherman travels to San Francisco. Episode #55, "Near Midnight in Suicide City," is a set of short pieces from our trip out west, where we met with Pause AI, Stop AI, and Liron Shapira, and stopped by OpenAI, among other events. Big, huge, massive thanks to Beau Kershaw, Director of Photography, and my biz partner and best friend, who made this journey with me through the work side and the emotional side of this. The work is beautiful and the days were wet and long and heavy. Thank you, Beau.

SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/about

FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y...
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIgg...
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfc...

EMAIL JOHN: forhumanitypodcast@gmail.com

Check out our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home: https://lethalintelligence.ai
Clips channel: @lethal-intelligence-clips
