
For Humanity: An AI Safety Podcast
For Humanity, An AI Safety Podcast is the AI Safety Podcast for regular people. Peabody, duPont-Columbia, and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2-10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Latest episodes

Mar 11, 2024 • 7min
“David Shapiro AI-Risk Interview” For Humanity: An AI Safety Podcast Episode #19 TRAILER
In Episode #19 TRAILER, “David Shapiro Interview,” John talks with AI/Tech YouTube star David Shapiro. David has several successful YouTube channels; his main channel (link below, go follow him!), with more than 140k subscribers, is a constant source of new AI, AGI, and post-labor-economy video content. Dave does a great job breaking things down.
But a lot of Dave’s content is about a post-AGI future. And this podcast’s main concern is that we won’t get there, because AGI will kill us all first. So this show is a two-part conversation: first, about whether we can live past AGI, and second, about the issues we’d face in a world where humans and AGIs are co-existing. In this trailer, Dave gets to the edge of giving his p(doom).
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
FOLLOW DAVID SHAPIRO ON YOUTUBE!
https://youtube.com/@DaveShap?si=o_USH-v0fDyo23fm

Mar 6, 2024 • 1h 34min
“Worse Than Extinction, CTO vs. S-Risk” For Humanity, An AI Safety Podcast Episode #18
In Episode #18, “Worse Than Extinction, CTO vs. S-Risk,” Louis Berman Interview, John talks with tech CTO Louis Berman about a broad range of AI risk topics centered around existential risk. The conversation goes to the darkest corner of the AI risk debate: S-risk, or suffering risk.
This episode has a lot in it that is very hard to hear. And say. The tech CEOs are spinning visions of abundance and utopia for the public. Someone needs to fill in the full picture of the realm of possibilities, no matter how hard it is to hear.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
John's Upcoming Talk in Philadelphia!
It is open to the public; you will need to make a free account at meetup.com.
https://www.meetup.com/philly-net/events/298710679/
Excellent background on S-risk, with supporting links: https://80000hours.org/problem-profiles/s-risks/
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.gg/pVMWjddaW7

Mar 4, 2024 • 4min
“Worse Than Extinction, CTO vs. S-Risk” TRAILER For Humanity, An AI Safety Podcast Episode #18
In Episode #18 TRAILER, “Worse Than Extinction, CTO vs. S-Risk,” Louis Berman Interview, John talks with tech CTO Louis Berman about a broad range of AI risk topics centered around existential risk. The conversation goes to the darkest corner of the AI risk debate: S-risk, or suffering risk.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Feb 28, 2024 • 1h 32min
"AI Risk=Jenga" For Humanity, An AI Safety Podcast Episode #17, Liron Shapira Interview
In Episode #17, "AI Risk=Jenga," Liron Shapira Interview, John talks with tech CEO and AI Risk Activist Liron Shapira about a broad range of AI risk topics centered around existential risk. Liron likens AI risk to a game of Jenga, where there are a finite number of pieces, and each one you pull out leaves you one closer to collapse. He says something like Sora, seemingly just a video innovation, could actually end all life on earth.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Resources:
Liron's YouTube Channel:
https://youtube.com/@liron00?si=cqIo5...
More on rationalism:
https://www.lesswrong.com/
More on California State Senate Bill SB-1047:
https://leginfo.legislature.ca.gov/fa...
https://thezvi.substack.com/p/on-the-...
Warren Wolf
https://youtu.be/OZDwzBnn6uc?si=o5BjlRwfy7yuIRCL

Feb 26, 2024 • 3min
"AI Risk=Jenga" For Humanity, An AI Safety Podcast #17 TRAILER, Liron Shapira Interview
In Episode #17 TRAILER, "AI Risk=Jenga," Liron Shapira Interview, John talks with tech CEO and AI Risk Activist Liron Shapira about a broad range of AI risk topics centered around existential risk. Liron likens AI risk to a game of Jenga, where there are a finite number of pieces, and each one you pull out leaves you one closer to collapse. He explains how something like Sora, seemingly just a video tool, is actually a significant, real Jenga piece that could end all life on earth.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Feb 21, 2024 • 44min
"AI Risk-Denier Down" For Humanity, An AI Safety Podcast Episode #16
In Episode #16, AI Risk Denier Down, things get weird.
This show did not have to be like this. Our guest in Episode #16 is Timothy Lee, a computer scientist and journalist who founded and runs understandingai.org. Tim has written about AI risk many times, including these two recent essays:
https://www.understandingai.org/p/why-im-not-afraid-of-superintelligent
https://www.understandingai.org/p/why-im-not-worried-about-ai-taking
Tim was not prepared to discuss this work, and that is when things started to go off the rails.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
MY QUESTIONS FOR TIM (We didn’t even get halfway through, lol. YouTube won’t let me put all of them, so I'm just putting the second-essay questions.)
OK, let's get into your second essay, "Why I'm not afraid of superintelligent AI taking over the world," from 11/15/23.
-You find chess a striking example of how AI will not take over the world.
-But I’d like to talk about AI safety researcher Steve Omohundro’s take on chess.
-He says if you had an unaligned AGI you asked to get better at chess, it would first break into other servers to steal computing power so it would be better at chess. Then when you discover this and try to stop it by turning it off, it sees your turning it off as a threat to its improving at chess, so it murders you.
-Where is he wrong?
-You wrote: “Think about a hypothetical graduate student. Let’s say that she was able to reach the frontiers of physics knowledge after reading 20 textbooks. Could she have achieved a superhuman understanding of physics by reading 200 textbooks? Obviously not. Those extra 180 textbooks contain a lot of words, they don’t contain very much knowledge she doesn’t already have. So too with AI systems. I suspect that on many tasks, their performance will start to plateau around human-level performance. Not because they “run out of data,” but because they reached the frontiers of human knowledge.”
-In this you seem to assume that any one human is capable of mastering all of the knowledge in a subject area better than any AI, because you seem to believe that one human is capable of holding ALL of the knowledge available on a given subject.
-This is ludicrous to me. You think humans are far too special.
-AN AGI WILL HAVE READ EVERY BOOK EVER WRITTEN. MILLIONS OF BOOKS. ACTIVELY CROSS-REFERENCING ACROSS EVERY DISCIPLINE.
-How could any human possibly compete with an AGI system that never sleeps and can read every word ever written in any language? No human could ever do this.
-Are you saying humans are the most perfect vessels of knowledge consumption possible in the universe?
-A human who has read 1,000 books in one area can compete for knowledge with an AGI that has read millions of books in thousands of areas? Really?
-You wrote: “AI safetyists assume that all problems can be solved with the application of enough brainpower. But for many problems, having the right knowledge matters more. And a lot of economically significant knowledge is not contained in any public data set. It’s locked up in the brains and private databases of millions of individuals and organizations spread across the economy and around the world.”
-Why do you assume an unaligned AGI would not raid every private database on earth in a very short time and take in all this knowledge you find so special?
-Does this claim rest on the security protocols of the big AI companies?
-Security protocols, even at OpenAI, are seen to be highly vulnerable to large-scale nation-state hacking. If China could hack into OpenAI, an AGI could surely hack into anything. An AGI’s ability to spot and exploit vulnerabilities in human-written code is widely predicted.
-Let's see if we can leave this conversation with a note of agreement. Is there anything you think we can agree on?

Feb 20, 2024 • 3min
"AI Risk-Denier Down" For Humanity, An AI Safety Podcast Episode #16 TRAILER
In Episode #16 TRAILER, AI Risk Denier Down, things get weird.
This show did not have to be like this. Our guest in Episode #16 is Timothy Lee, a computer scientist and journalist who founded and runs understandingai.org. Tim has written about AI risk many times, including these two recent essays:
https://www.understandingai.org/p/why-im-not-afraid-of-superintelligent
https://www.understandingai.org/p/why-im-not-worried-about-ai-taking
Tim was not prepared to discuss this work, and that is when things started to go off the rails.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Feb 19, 2024 • 1h
"AI Risk Super Bowl I: Conner vs. Beff" For Humanity, An AI Safety Podcast Episode #15
In Episode #15, AI Risk Super Bowl I: Connor vs. Beff, Highlights and Post-Game Analysis, John takes a look at the recent debate on the Machine Learning Street Talk podcast between AI safety hero Connor Leahy and Acceleration cult leader Beff Jezos, aka Guillaume Verdon. The epic three-hour debate took place on 2/2/24.
With a mix of highlights and analysis, John, with Beff’s help, reveals the truth about the e/acc movement: it’s anti-human at its core.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Resources:
Machine Learning Street Talk - YouTube
Full Debate, e/acc Leader Beff Jezos vs Doomer Connor Leahy
How Guillaume Verdon Became BEFF JEZOS, Founder of e/acc
Guillaume Verdon: Beff Jezos, E/acc Movement, Physics, Computation & AGI | Lex Fridman Podcast #407
Next week’s guest Timothy Lee’s Website and related writing:
https://www.understandingai.org/
https://www.understandingai.org/p/why...
https://www.understandingai.org/p/why...

Feb 12, 2024 • 2min
"AI Risk Super Bowl I: Conner vs. Beff" For Humanity, An AI Safety Podcast Episode #15 TRAILER
In Episode #15 TRAILER, AI Risk Super Bowl I: Connor vs. Beff, Highlights and Post-Game Analysis, John takes a look at the recent debate on the Machine Learning Street Talk podcast between AI safety hero Connor Leahy and Acceleration cult leader Beff Jezos, aka Guillaume Verdon. The epic three-hour debate took place on 2/2/24.
With a mix of highlights and analysis, John, with Beff’s help, reveals the truth about the e/acc movement: it’s anti-human at its core.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Feb 7, 2024 • 1h 17min
"Pause AI or Die" For Humanity: An AI Safety Podcast Episode #14, Joep Meindertsma Interview
In Episode #14, John interviews Joep Meindertsma, Founder of Pause AI, a global AI safety policy and protest organization. Pause AI was behind the first-ever AI safety protests on the planet.
John and Joep talk about what's being done, how it all feels, how it all might end, and even broach the darkest corner of all of this: suffering risk. This conversation embodies a spirit this movement needs: we can be upbeat and positive as we talk about the darkest subjects possible. It's not "optimism" to race to build suicide machines, but it is optimism to assume the best, and to believe we can and must succeed no matter what the odds.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Resources:
https://pauseai.info/
https://discord.gg/pVMWjddaW7
Sample Letter to Elected Leaders:
Dear XXXX-
I'm a constituent of yours; I have lived in your district for X years. I'm writing today because I am gravely concerned about the existential threat to humanity from artificial intelligence. It is the most important issue in human history; nothing else is close.
Have you read the 22-word Statement on AI Risk, released by the Center for AI Safety on 5/30/23 and signed by Sam Altman and all the big AI CEOs? It reads: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Do you believe them? If so, what are you doing to prevent human extinction? If not, why don't you believe them?
Many of the most prominent AI safety researchers say the default outcome, if we do not make major changes right now, is that AI will kill every living thing on earth within 1-50 years. This is not science fiction or hyperbole. This is our current status quo.
It's like a pharma company claiming it has a drug that can cure all diseases, even though the drug hasn't been through any clinical trials and may also kill anyone who takes it. Then, with no oversight or regulation, the company puts the new drug in the public water supply.
Big AI is making tech that its makers openly admit they cannot control, do not fully understand, and that could kill us all. Their resources are 99:1 on making the tech stronger and faster, not safer. And yet they move forward, daily, with no oversight or regulation.
I am asking you to become a leader in AI safety. Many policy ideas could help, and you could help them become law: things like liability reform so AI companies are liable for harm, hard caps on compute power, and tracking and reporting of the locations of all chips above a certain capability level.
I'd like to discuss this with you or someone from your office over the phone or a Zoom. Would that be possible?
Thanks very much.
XXXXXX
Address
Phone