For Humanity: An AI Risk Podcast

The AI Risk Network
Apr 10, 2024 • 2h 2min

Episode #23 - “AI Acceleration Debate” For Humanity: An AI Safety Podcast

AI Risk-Realist John Sherman and Accelerationist Paul Leszczynski debate AI accelerationism, existential risks, and AI alignment with human values. They discuss the philosophy of accelerationism, human conditioning's influence on AI understanding, and the potential consequences of AI safety efforts. The podcast delves into the existential threat of human extinction from AGI, exploring the worst-case scenario of AI killing all humans.
Apr 8, 2024 • 5min

Episode #23 TRAILER - “AI Acceleration Debate” For Humanity: An AI Safety Podcast

Suicide or salvation? In the Episode #23 TRAILER, AI risk-realist John Sherman and accelerationist Paul Leszczynski debate AI accelerationism and the existential risks and benefits of AI, questioning the AI safety movement and discussing the concept of AI as humanity’s child. They ponder whether AI should align with human values and what the consequences of such alignment might be. Paul suggests that AI safety efforts could inadvertently lead to the very dangers they aim to prevent. The conversation touches on the philosophy of accelerationism and the influence of human conditioning on our understanding of AI.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

TIMESTAMPS:
Is AI an existential threat to humanity? (00:00:00) Debate on the potential risks of AI and its impact on humanity.
The AI safety movement (00:00:42) Discussion on the perception of AI safety as a religion and the philosophy of accelerationism.
Human conditioning and perspectives on AI (00:02:01) Exploration of how human conditioning shapes perspectives on AI and the concept of AGI as a human creation.
Aligning AI and human values (00:04:24) Debate on the dangers of aligning AI with human ideologies and the potential implications for humanity.

RESOURCES:
Paul’s YouTube Channel: Accel News Network
Best Account on Twitter: AI Notkilleveryoneism Memes

JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 3pm EST / discord
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Apr 3, 2024 • 39min

Episode #22 - “Sam Altman: Unelected, Unvetted, Unaccountable” For Humanity: An AI Safety Podcast

In Episode #22, host John Sherman critically examines Sam Altman’s role as CEO of OpenAI, focusing on the ethical and safety challenges of AI development. The discussion critiques Altman’s lack of public accountability and the risks his decisions pose to humanity. Concerns are raised about the governance of AI, the potential for AI to cause harm, and the need for safety measures and regulations. The episode also explores the societal impact of AI, the possibility of AI affecting the physical world, and the importance of public awareness and engagement in AI risk discussions. Overall, the episode emphasizes the urgency of responsible AI development and the crucial role of oversight.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
Vanity Fair Gushes in 2015
Business Insider: Sam Altman’s Act May Be Wearing Thin
Oprah and Maya Angelou
Best Account on Twitter: AI Notkilleveryoneism Memes

JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 3pm EST / discord
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS

TIMESTAMPS:
The man who holds the power (00:00:00) Discussion about Sam Altman’s power and its implications for humanity.
The safety crisis (00:01:11) Concerns about safety in AI technology and the need for protection against potential risks.
Sam Altman’s decisions and vision (00:02:24) Examining Sam Altman’s role, decisions, and vision for AI technology and its impact on society.
Sam Altman’s actions and accountability (00:04:14) Critique of Sam Altman’s actions and accountability regarding the release of AI technology.
Reflections on getting fired (00:11:01) Sam Altman’s reflections and emotions after getting fired from OpenAI’s board.
Silencing of concerns (00:19:25) Discussion about the silencing of individuals concerned about AI safety, particularly Ilya Sutskever.
Relationship with Elon Musk (00:20:08) Sam Altman’s sentiments and hopes regarding his relationship with Elon Musk amidst tension and legal matters.
Legal implications of AI technology (00:22:23) Debate on the fairness of training AI under copyright law and its legal implications.
The value of data (00:22:32) Sam Altman discusses the compensation for valuable data and its use.
Safety concerns (00:23:41) Discussion on the process for ensuring safety in AI technology.
Broad definition of safety (00:24:24) Exploring the various potential harms and impacts of AI, including technical, societal, and economic aspects.
Lack of trust and control (00:27:09) Sam Altman’s admission about the power and control over AGI and the need for governance.
Public apathy towards AI risk (00:31:49) Addressing the common reasons for public inaction regarding AI risk awareness.
Celebration of life (00:34:20) A personal reflection on the beauty of music and family, with a message about the celebration of life.
Conclusion (00:38:25) Closing remarks and a preview of the next episode.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Apr 1, 2024 • 2min

“Sam Altman: Unelected, Unvetted, Unaccountable” For Humanity: An AI Safety Podcast Episode #22 TRAILER

In the Episode #22 TRAILER, host John Sherman critically examines Sam Altman’s role as CEO of OpenAI, focusing on the ethical and safety challenges of AI development. The discussion critiques Altman’s lack of public accountability and the risks his decisions pose to humanity. Concerns are raised about the governance of AI, the potential for AI to cause harm, and the need for safety measures and regulations. The episode also explores the societal impact of AI, the possibility of AI affecting the physical world, and the importance of public awareness and engagement in AI risk discussions. Overall, the episode emphasizes the urgency of responsible AI development and the crucial role of oversight.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 3pm EST / discord
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Mar 27, 2024 • 1h 14min

“Why AI Killing You Isn’t On The News” For Humanity: An AI Safety Podcast Episode #21

Interview starts at 20:10
Some highlights of John’s news career start at 9:14

In Episode #21, “Why AI Killing You Isn’t On The News” (Casey Clark interview), host John Sherman and WJZY-TV News Director Casey Clark explore the significant underreporting of AI’s existential risks in the media. They recount a disturbing incident in which AI bots infiltrated a city council meeting, spewing hateful messages. The conversation delves into the challenges of conveying the complexities of artificial general intelligence to the public and the media’s struggle to present such abstract concepts compellingly. They predict job losses as the first major AI-related news story to break through and speculate on the future of AI-generated news anchors, emphasizing the need for human reporters in the field.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 3pm EST / discord
See more of John’s talk in Philly: https://x.com/ForHumanityPod/status/1772449876388765831?s=20
FOLLOW DAVID SHAPIRO ON YOUTUBE!
David Shapiro - YouTube
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Mar 25, 2024 • 4min

“Why AI Killing You Isn’t On The News” TRAILER For Humanity: An AI Safety Podcast Episode #21

In the Episode #21 TRAILER, “Why AI Killing You Isn’t On The News” (Casey Clark interview), John Sherman interviews WJZY-TV News Director Casey Clark about TV news coverage of AI existential risk.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 3pm EST / discord

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Mar 20, 2024 • 1h 49min

“AI Risk Realist vs. Coding Cowboy” For Humanity: An AI Safety Podcast Episode #20

In Episode #20, “AI Safety Debate: Risk Realist vs. Coding Cowboy,” John Sherman debates AI risk with lifelong coder and current Chief AI Officer Mark Tellez. The full show conversation covers questions like: can AI systems be contained to the digital world, should we build data centers with explosives lining the walls just in case, and are the AI CEOs just big liars? Mark believes we are on a safe course, and that when that changes we will have time to react. John disagrees. What follows is a candid and respectful exchange of ideas.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Community Note: So, after much commentary, I have done away with the Doom Rumble during the trailers. I like(d) it, I think it adds some drama, but the people have spoken and it is dead. RIP Doom Rumble, 2023-2024. Also, I had a bit of a head cold at the time of some of the recording and sound a little nasal in the open and close, my apologies lol, but a few sniffles can’t stop this thing!!

RESOURCES:
Time Article on the New Report: AI Poses Extinction-Level Risk, State-Funded Report Says | TIME
John’s Upcoming Talk in Philadelphia! It is open to the public; you will need to make a free account at meetup.com
https://www.meetup.com/philly-net/eve...
FOLLOW DAVID SHAPIRO ON YOUTUBE!
David Shapiro - YouTube
Dave Shapiro’s new video where he talks about For Humanity: AGI: What will the first 90 days be like? And more VEXING questions from the audience!
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 3pm EST / discord

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Mar 18, 2024 • 4min

“AI Risk Realist vs. Coding Cowboy” TRAILER For Humanity: An AI Safety Podcast Episode #20

In the Episode #20 TRAILER, “AI Safety Debate: Risk Realist vs. Coding Cowboy,” John Sherman debates AI risk with lifelong coder and current Chief AI Officer Mark Tellez. The full show conversation covers questions like: can AI systems be contained to the digital world, should we build data centers with explosives lining the walls just in case, and are the AI CEOs just big liars? Mark believes we are on a safe course, and that when that changes we will have time to react. John disagrees. What follows is a candid and respectful exchange of ideas.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Community Note: So, after much commentary, I have done away with the Doom Rumble during the trailers. I like(d) it, I think it adds some drama, but the people have spoken and it is dead. RIP Doom Rumble, 2023-2024. Also, I had a bit of a head cold at the time of some of the recording and sound a little nasal in the open and close, my apologies lol, but a few sniffles can’t stop this thing!!

RESOURCES:
Time Article on the New Report: AI Poses Extinction-Level Risk, State-Funded Report Says | TIME
FOLLOW DAVID SHAPIRO ON YOUTUBE!
David Shapiro - YouTube
Dave Shapiro’s new video where he talks about For Humanity: AGI: What will the first 90 days be like? And more VEXING questions from the audience!
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS
Pause AI

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Mar 13, 2024 • 1h 41min

“David Shapiro AI-Risk Interview” For Humanity: An AI Safety Podcast Episode #19

Discussion on the dangers of AI surpassing human intelligence, exploring societal implications and the need for consent. Delving into risks of AI and ensuring a positive future, including building a digital super organism. Reflecting on the future impact of advanced AI on society, job automation, and economic shifts. Navigating the complexities of AI development, international cooperation, and geopolitical concerns.
Mar 11, 2024 • 7min

“David Shapiro AI-Risk Interview” For Humanity: An AI Safety Podcast Episode #19 TRAILER

In the Episode #19 TRAILER, “David Shapiro Interview,” John talks with AI/tech YouTube star David Shapiro. David has several successful YouTube channels; his main channel (link below, go follow him!), with more than 140k subscribers, is a constant source of new AI, AGI, and post-labor-economy video content. Dave does a great job breaking things down. But a lot of Dave’s content is about a post-AGI future, and this podcast’s main concern is that we won’t get there, cuz AGI will kill us all first. So this show is a two-part conversation: first, about whether we can live past AGI, and second, about the issues we’d face in a world where humans and AGIs are co-existing. In this trailer, Dave gets to the edge of giving his p(doom).

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

FOLLOW DAVID SHAPIRO ON YOUTUBE!
https://youtube.com/@DaveShap?si=o_USH-v0fDyo23fm

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
