

For Humanity: An AI Safety Podcast
The AI Risk Network
For Humanity, An AI Safety Podcast is the AI Safety Podcast for regular people. Peabody, duPont-Columbia and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2-10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity. theairisknetwork.substack.com
Episodes

Mar 27, 2024 • 1h 14min
“Why AI Killing You Isn’t On The News” For Humanity: An AI Safety Podcast Episode #21
Interview starts at 20:10. Some highlights of John’s news career start at 9:14.

In Episode #21, “Why AI Killing You Isn’t On The News,” host John Sherman and WJZY-TV News Director Casey Clark explore the significant underreporting of AI’s existential risks in the media. They recount a disturbing incident where AI bots infiltrated a city council meeting, spewing hateful messages. The conversation delves into the challenges of conveying the complexities of artificial general intelligence to the public and the media’s struggle to present such abstract concepts compellingly. They predict job losses as the first major AI-related news story to break through and speculate on the future of AI-generated news anchors, emphasizing the need for human reporters in the field.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 3pm EST / discord
See more of John’s Talk in Philly: https://x.com/ForHumanityPod/status/1772449876388765831?s=20
FOLLOW DAVID SHAPIRO ON YOUTUBE!
David Shapiro - YouTube
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Mar 25, 2024 • 4min
“Why AI Killing You Isn’t On The News” TRAILER For Humanity: An AI Safety Podcast Episode #21
In Episode #21 TRAILER, “Why AI Killing You Isn’t On The News,” John Sherman interviews WJZY-TV News Director Casey Clark about TV news coverage of AI existential risk.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 3pm EST / discord

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Mar 20, 2024 • 1h 49min
“AI Risk Realist vs. Coding Cowboy” For Humanity: An AI Safety Podcast Episode #20
In Episode #20, “AI Safety Debate: Risk Realist vs Coding Cowboy,” John Sherman debates AI risk with lifelong coder and current Chief AI Officer Mark Tellez. The full conversation covers questions like: Can AI systems be contained to the digital world? Should we build data centers with explosives lining the walls, just in case? Are the AI CEOs just big liars? Mark believes we are on a safe course, and that when that changes we will have time to react. John disagrees. What follows is a candid and respectful exchange of ideas.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Community Note: After much commentary, I have done away with the Doom Rumble during the trailers. I like(d) it, I think it adds some drama, but the people have spoken and it is dead. RIP Doom Rumble, 2023-2024. Also, I had a bit of a head cold at the time of some of the recording and sound a little nasal in the open and close, my apologies lol, but a few sniffles can’t stop this thing!!

RESOURCES:
Time Article on the New Report: AI Poses Extinction-Level Risk, State-Funded Report Says | TIME
John’s Upcoming Talk in Philadelphia! It is open to the public; you will need to make a free account at meetup.com: https://www.meetup.com/philly-net/eve...
FOLLOW DAVID SHAPIRO ON YOUTUBE!
David Shapiro - YouTube
Dave Shapiro’s new video where he talks about For Humanity: AGI: What will the first 90 days be like? And more VEXING questions from the audience!
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 3pm EST / discord

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Mar 18, 2024 • 4min
“AI Risk Realist vs. Coding Cowboy” TRAILER For Humanity: An AI Safety Podcast Episode #20
In Episode #20 TRAILER, “AI Safety Debate: Risk Realist vs Coding Cowboy,” John Sherman debates AI risk with a lifelong coder and current Chief AI Officer. The full show conversation covers questions like: Can AI systems be contained to the digital world? Should we build data centers with explosives lining the walls, just in case? Are the AI CEOs just big liars? Mark believes we are on a safe course, and that when that changes we will have time to react. John disagrees. What follows is a candid and respectful exchange of ideas.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Community Note: After much commentary, I have done away with the Doom Rumble during the trailers. I like(d) it, I think it adds some drama, but the people have spoken and it is dead. RIP Doom Rumble, 2023-2024. Also, I had a bit of a head cold at the time of some of the recording and sound a little nasal in the open and close, my apologies lol, but a few sniffles can’t stop this thing!!

RESOURCES:
Time Article on the New Report: AI Poses Extinction-Level Risk, State-Funded Report Says | TIME
FOLLOW DAVID SHAPIRO ON YOUTUBE!
David Shapiro - YouTube
Dave Shapiro’s new video where he talks about For Humanity: AGI: What will the first 90 days be like? And more VEXING questions from the audience!
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS
Pause AI

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Mar 13, 2024 • 1h 41min
“David Shapiro AI-Risk Interview” For Humanity: An AI Safety Podcast Episode #19
A discussion of the dangers of AI surpassing human intelligence, exploring societal implications and the need for consent; the risks of AI and how to ensure a positive future, including building a digital superorganism; the future impact of advanced AI on society, job automation, and economic shifts; and the complexities of AI development, international cooperation, and geopolitical concerns.

Mar 11, 2024 • 7min
“David Shapiro AI-Risk Interview” For Humanity: An AI Safety Podcast Episode #19 TRAILER
In Episode #19 TRAILER, “David Shapiro Interview,” John talks with AI/tech YouTube star David Shapiro. David has several successful YouTube channels; his main channel (link below, go follow him!), with more than 140k subscribers, is a constant source of new AI, AGI, and post-labor-economy video content. Dave does a great job breaking things down. But a lot of Dave’s content is about a post-AGI future, and this podcast’s main concern is that we won’t get there, because AGI will kill us all first. So this show is a two-part conversation: first, about whether we can live past AGI, and second, about the issues we’d face in a world where humans and AGIs are co-existing. In this trailer, Dave gets to the edge of giving his p(doom).

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

FOLLOW DAVID SHAPIRO ON YOUTUBE!
https://youtube.com/@DaveShap?si=o_USH-v0fDyo23fm

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Feb 28, 2024 • 1h 32min
"AI Risk=Jenga" For Humanity, An AI Safety Podcast Episode #17, Liron Shapira Interview
In Episode #17, “AI Risk = Jenga,” John talks with tech CEO and AI risk activist Liron Shapira about a broad range of AI risk topics centered around existential risk. Liron likens AI risk to a game of Jenga: there are a finite number of pieces, and each one you pull out leaves you one closer to collapse. He says something like Sora, seemingly just a video innovation, could actually end all life on earth.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Resources:
Liron’s YouTube Channel: https://youtube.com/@liron00?si=cqIo5...
More on rationalism: https://www.lesswrong.com/
More on California State Senate Bill SB-1047: https://leginfo.legislature.ca.gov/fa... and https://thezvi.substack.com/p/on-the-...
Warren Wolf: https://youtu.be/OZDwzBnn6uc?si=o5BjlRwfy7yuIRCL

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Feb 26, 2024 • 3min
"AI Risk=Jenga" For Humanity, An AI Safety Podcast #17 TRAILER, Liron Shapira Interview
In Episode #17 TRAILER, “AI Risk = Jenga,” John talks with tech CEO and AI risk activist Liron Shapira about a broad range of AI risk topics centered around existential risk. Liron likens AI risk to a game of Jenga: there are a finite number of pieces, and each one you pull out leaves you one closer to collapse. He explains how something like Sora, seemingly just a video tool, is actually a significant, real Jenga piece and could end all life on earth.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Feb 21, 2024 • 44min
"AI Risk-Denier Down" For Humanity, An AI Safety Podcast Episode #16
In Episode #16, “AI Risk-Denier Down,” things get weird. This show did not have to be like this. Our guest in Episode #16 is Timothy Lee, a computer scientist and journalist who founded and runs understandingai.org. Tim has written about AI risk many times, including these two recent essays:
https://www.understandingai.org/p/why...
https://www.understandingai.org/p/why...
Tim was not prepared to discuss this work, which is when things started to get off the rails.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

MY QUESTIONS FOR TIM (We didn’t even get halfway through lol. YouTube won’t let me put all of them, so I’m just putting the second essay questions.)

OK, let’s get into your second essay, “Why I’m not afraid of superintelligent AI taking over the world,” from 11/15/23.
-You find chess a striking example of how AI will not take over the world.
-But I’d like to talk about AI safety researcher Steve Omohundro’s take on chess.
-He says if you had an unaligned AGI you asked to get better at chess, it would first break into other servers to steal computing power so it would be better at chess. Then, when you discover this and try to stop it by turning it off, it sees your turning it off as a threat to its improving at chess, so it murders you.
-Where is he wrong?
-You wrote: “Think about a hypothetical graduate student. Let’s say that she was able to reach the frontiers of physics knowledge after reading 20 textbooks. Could she have achieved a superhuman understanding of physics by reading 200 textbooks? Obviously not. Those extra 180 textbooks contain a lot of words, they don’t contain very much knowledge she doesn’t already have. So too with AI systems. I suspect that on many tasks, their performance will start to plateau around human-level performance. Not because they ‘run out of data,’ but because they reached the frontiers of human knowledge.”
-In this you seem to assume that any one human is capable of mastering all the knowledge in a subject area better than any AI, because you seem to believe that one human is capable of holding ALL of the knowledge available on a given subject.
-This is ludicrous to me. You think humans are far too special.
-AN AGI WILL HAVE READ EVERY BOOK EVER WRITTEN. MILLIONS OF BOOKS. ACTIVELY CROSS-REFERENCING ACROSS EVERY DISCIPLINE.
-How could any human possibly compete with an AGI system that never sleeps and can read every word ever written in any language? No human could ever do this.
-Are you saying humans are the most perfect vessels of knowledge consumption possible in the universe?
-A human who has read 1,000 books in one area can compete for knowledge with an AGI that has read millions of books in thousands of areas? Really?
-You wrote: “AI safetyists assume that all problems can be solved with the application of enough brainpower. But for many problems, having the right knowledge matters more. And a lot of economically significant knowledge is not contained in any public data set. It’s locked up in the brains and private databases of millions of individuals and organizations spread across the economy and around the world.”
-Why do you assume an unaligned AGI would not raid every private database on earth in a very short time and take in all this knowledge you find so special?
-Does this claim rest on the security protocols of the big AI companies?
-Security protocols, even at OpenAI, are seen to be highly vulnerable to large-scale nation-state hacking. If China could hack into OpenAI, an AGI could surely hack into anything. An AGI’s ability to spot and exploit vulnerabilities in human-written code is widely predicted.
-Let’s see if we can leave this conversation with a note of agreement. Is there anything you think we can agree on?

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Feb 20, 2024 • 3min
"AI Risk-Denier Down" For Humanity, An AI Safety Podcast Episode #16 TRAILER
In Episode #16 TRAILER, “AI Risk-Denier Down,” things get weird. This show did not have to be like this. Our guest in Episode #16 is Timothy Lee, a computer scientist and journalist who founded and runs understandingai.org. Tim has written about AI risk many times, including these two recent essays:
https://www.understandingai.org/p/why-im-not-afraid-of-superintelligent
https://www.understandingai.org/p/why-im-not-worried-about-ai-taking
Tim was not prepared to discuss this work, which is when things started to get off the rails.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com