
For Humanity: An AI Safety Podcast
For Humanity: An AI Safety Podcast is the AI Safety Podcast for regular people. Peabody, duPont-Columbia and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans in as soon as 2-10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Latest episodes

Feb 5, 2024 • 3min
"Pause AI or Die" For Humanity: An AI Safety Podcast Episode #14 TRAILER, Joep Meindertsma Interview
In the Episode #14 TRAILER, John interviews Joep Meindertsma, founder of Pause AI, a global AI safety policy and protest organization. Pause AI was behind the first-ever AI Safety protests on the planet.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable, probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Resources:
https://pauseai.info/
https://discord.com/channels/1100491867675709580/@home
Sample Letter to Elected Leaders:
Dear XXXX-
I'm a constituent of yours; I have lived in your district for X years. I'm writing today because I am gravely concerned about the existential threat to humanity from Artificial Intelligence. It is the most important issue in human history; nothing else is close.
Have you read the 22-word statement on AI risk released by the Center for AI Safety on 5/30/23 and signed by Sam Altman and all the big AI CEOs? It reads: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Do you believe them? If so, what are you doing to prevent human extinction? If not, why don't you believe them?
Most prominent AI safety researchers say the default outcome, if we do not make major changes right now, is that AI will kill every living thing on earth within 1-50 years. This is not science fiction or hyperbole. This is our current status quo.
It's like a pharma company announcing a drug it says can cure all diseases, even though the drug hasn't been through any clinical trials and may also kill anyone who takes it. Then, with no oversight or regulation, the company puts the new drug in the public water supply.
Big AI is making tech that its makers openly admit they cannot control and do not understand, and that could kill us all. Their resources are 99:1 on making the tech stronger and faster, not safer. And yet they move forward, daily, with no oversight or regulation.
I am asking you to become a leader in AI safety. Many policy ideas could help, and you could help them become law. Things like liability reform so AI companies are liable for harm, hard caps on compute power, and tracking and reporting of all chip locations at a certain level.
I'd like to discuss this with you or someone from your office over the phone or a Zoom. Would that be possible?
Thanks very much.
XXXXXX
Address
Phone

Jan 30, 2024 • 1h 40min
"Uncontrollable AI" For Humanity: An AI Safety Podcast, Episode #13 , Darren McKee Interview
In Episode #13, “Uncontrollable AI,” John Sherman interviews Darren McKee, author of Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World.
In this episode, Darren starts off on an optimistic note by saying AI Safety is winning. You don’t often hear it, but Darren says the world has moved on AI Safety with greater speed, focus and real promise than most in the AI community thought possible.
Apologies for the laggy cam on Darren!
Darren’s book is an excellent resource; like this podcast, it is intended for the general public.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable, probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Resources:
Darren’s Book
https://www.amazon.com/Uncontrollable...
My Dad's Favorite Messiah Recording (3:22-6:55 only, lol!)
https://www.youtube.com/watch?v=lFjQ7...
Sample letter/email to an elected official:
Dear XXXX-
I'm a constituent of yours; I have lived in your district for X years. I'm writing today because I am gravely concerned about the existential threat to humanity from Artificial Intelligence. It is the most important issue in human history; nothing else is close.
Have you read the 22-word statement on AI risk released by the Center for AI Safety on 5/30/23 and signed by Sam Altman and all the big AI CEOs? It reads: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Do you believe them? If so, what are you doing to prevent human extinction? If not, why don't you believe them?
Most prominent AI safety researchers say the default outcome, if we do not make major changes right now, is that AI will kill every living thing on earth within 1-50 years. This is not science fiction or hyperbole. This is our current status quo.
It's like a pharma company announcing a drug it says can cure all diseases, even though the drug hasn't been through any clinical trials and may also kill anyone who takes it. Then, with no oversight or regulation, the company puts the new drug in the public water supply.
Big AI is making tech that its makers openly admit they cannot control and do not understand, and that could kill us all. Their resources are 99:1 on making the tech stronger and faster, not safer. And yet they move forward, daily, with no oversight or regulation.
I am asking you to become a leader in AI safety. Many policy ideas could help, and you could help them become law. Things like liability reform so AI companies are liable for harm, hard caps on compute power, and tracking and reporting of all chip locations at a certain level.
I'd like to discuss this with you or someone from your office over the phone or a Zoom. Would that be possible?
Thanks very much.
XXXXXX
Address
Phone

Jan 29, 2024 • 2min
"Uncontrollable AI" For Humanity: An AI Safety, Podcast Episode #13, Author Darren McKee Interview
In the Episode #13 TRAILER for “Uncontrollable AI,” John Sherman interviews Darren McKee, author of Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World.
In this trailer, Darren starts off on an optimistic note by saying AI Safety is winning. You don’t often hear it, but Darren says the world has moved on AI Safety with greater speed, focus and real promise than most in the AI community thought possible.
Darren’s book is an excellent resource; like this podcast, it is intended for the general public.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable, probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Resources:
Darren’s Book

Jan 25, 2024 • 1h 40min
"AI Risk Debate" For Humanity: An AI Safety Podcast Episode #12 Theo Jaffee Interview
In Episode #12, we have our first For Humanity debate! John talks with Theo Jaffee, a fast-rising AI podcaster who is a self-described “techno-optimist.” The debate covers a wide range of topics in AI risk.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable, probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Resources
Theo’s YouTube Channel : https://youtube.com/@theojaffee8530?si=aBnWNdViCiL4ZaEg
Glossary: First-pass definitions by ChatGPT-4. I asked it to give answers simple enough for an elementary school student to understand (lol, I find this helpful often!)
Reinforcement Learning from Human Feedback (RLHF):
Definition: RLHF, or Reinforcement Learning from Human Feedback, is like teaching a computer to make decisions by giving it rewards when it does something good and telling it what's right when it makes a mistake. It's a way for computers to learn and get better at tasks with the help of guidance from humans, just like how a teacher helps students learn. So, it's like teamwork between people and computers to make the computer really smart!
Model Weights
Definition: Model weights are like the special numbers that help a computer understand and remember things. Imagine it's like a recipe book, and these weights are the amounts of ingredients needed to make a cake. When the computer learns new things, these weights get adjusted so that it gets better at its job, just like changing the recipe to make the cake taste even better! So, model weights are like the secret ingredients that make the computer really good at what it does.
Foom/Fast Take-off:
Definition: "AI fast take-off" or "foom" refers to the idea that artificial intelligence (AI) could become super smart and powerful really quickly. It's like imagining a computer getting super smart all of a sudden, like magic! Some people use the word "foom" to talk about the possibility of AI becoming super intelligent in a short amount of time. It's a bit like picturing a computer going from learning simple things to becoming incredibly smart in the blink of an eye! Foom comes from cartoons, it’s the sound a super hero makes in comic books when they burst off the ground into flight.
Gradient Descent: Gradient descent is like a treasure hunt for the best way to do something. Imagine you're on a big hill with a metal detector, trying to find the lowest point. The detector beeps louder when you're closer to the lowest spot. In gradient descent, you adjust your steps based on these beeps to reach the lowest point on the hill, and in the computer world, it helps find the best values for a task, like making a robot walk smoothly or a computer learn better. (See the short code sketch after this glossary for a concrete picture.)
Orthogonality: Orthogonality is like making sure things are independent and don't mess each other up. Think of a chef organizing ingredients on a table: if each ingredient has its own space and doesn't mix with others, it's easier to work. In computers, orthogonality means keeping different parts separate, so changing one thing doesn't accidentally affect something else. It's like having a well-organized kitchen where each tool has its own place, making it easy to cook without chaos!
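To make the glossary's "treasure hunt" picture of gradient descent concrete, here is a minimal sketch in Python. It is illustrative only: the bowl-shaped function f(x) = (x - 3)^2, the learning rate, and the step count are assumptions chosen for the example, not anything discussed in the episode.

# Minimal gradient descent sketch: find the x that minimizes f(x) = (x - 3)^2.
# The "beeps" in the glossary analogy correspond to the slope (gradient):
# the steeper the slope, the bigger the step we take downhill.

def f(x):
    return (x - 3) ** 2          # a simple bowl with its lowest point at x = 3

def grad_f(x):
    return 2 * (x - 3)           # derivative of f: tells us which way is downhill

def gradient_descent(x0=0.0, learning_rate=0.1, steps=50):
    x = x0
    for _ in range(steps):
        x = x - learning_rate * grad_f(x)   # step in the direction opposite the slope
    return x

if __name__ == "__main__":
    best_x = gradient_descent()
    print(f"Found minimum near x = {best_x:.4f}, f(x) = {f(best_x):.6f}")
    # Prints a value very close to x = 3, the bottom of the "hill".

Running it prints a value very close to x = 3, the bottom of the hill. The same loop, scaled up to millions of model weights, is how the "secret ingredient" numbers in the Model Weights entry get adjusted during training.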

Jan 22, 2024 • 5min
"AI Risk Debate" For Humanity: An AI Safety Podcast Episode #12 Theo Jaffee Interview TRAILER
In the Episode #12 TRAILER, we have our first For Humanity debate! John talks with Theo Jaffee, a fast-rising AI podcaster who is a self-described “techno-optimist.” The debate covers a wide range of topics in AI risk.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable, probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Resources
Theo’s YouTube Channel : https://youtube.com/@theojaffee8530?s...

Jan 17, 2024 • 1h 19min
"Artist vs. AI Risk" For Humanity: An AI Safety Podcast Episode #11 Stephen Hanson Interview
In Episode #11, we meet Stephen Hanson, a painter and digital artist from Northern England. Stephen first became aware of AI risk in December 2022, and has spent 12+ months carrying the weight of it all. John and Steve talk about what it's like to have a family and how to talk to them about AI risk, what the future holds, and what we, the AI Risk Realists, can do to change the future while keeping our sanity at the same time.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable, probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Resources:
STEVE'S ART! stephenhansonart.bigcartel.com
Get ahead for next week and check out Theo Jaffee's Youtube Channel:
https://youtube.com/@theojaffee8530?s...

Jan 16, 2024 • 2min
"Artist vs. AI Risk" For Humanity: An AI Safety Podcast Episode #11 Stephen Hanson Interview TRAILER
In the Episode #11 TRAILER, we meet Stephen Hanson, a painter and digital artist from Northern England. Stephen first became aware of AI risk in December 2022, and has spent 12+ months carrying the weight of it all. John and Steve talk about what it's like to have a family and how to talk to them about AI risk, what the future holds, and what we, the AI Risk Realists, can do to change the future while keeping our sanity at the same time.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable, probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Resources:
STEVE'S ART! stephenhansonart.bigcartel.com

Jan 10, 2024 • 28min
"Eliezer Yudkowsky's 2024 Doom Update" For Humanity: An AI Safety Podcast, Episode #10
In Episode #10, AI Safety Research icon Eliezer Yudkowsky updates his AI doom predictions for 2024. After For Humanity host John Sherman tweeted at Eliezer, he revealed new timelines and predictions for 2024. Be warned, this is a heavy episode. But there is some hope and a laugh at the end.
Most important among them, he believes:
-Humanity no longer has 30-50 years to solve the alignment and interpretability problems; our broken processes just won't allow it
-Human augmentation is the only viable path for humans to compete with AGIs
-We have ONE YEAR, THIS YEAR, 2024, to mount a global WW2-style response to the extinction risk of AI.
-This battle is EASIER to win than WW2 :)
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable, probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Jan 8, 2024 • 2min
"Eliezer Yudkowsky's 2024 Doom Update" For Humanity: An AI Safety Podcast, Episode #10 Trailer
In Episode #10 TRAILER, AI Safety Research icon Eliezer Yudkowsky updates his AI doom predictions for 2024. After For Humanity host John Sherman tweeted at Eliezer, he revealed new timelines and predictions for 2024.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable, probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Jan 3, 2024 • 1h 7min
"Veteran Marine vs. AGI" For Humanity: An AI Safety Podcast, Episode #9, Sean Bradley Interview
Do you believe the big AI companies when they tell you their work could kill every last human on earth? You are not alone. You are part of a growing general public that opposes unaligned AI capabilities development.
In Episode #9 , we meet Sean Bradley, a Veteran Marine who served his country for six years, including as a helicopter door gunner. Sean left the service as a sergeant and now lives in San Diego where he is married, working and in college. Sean is a viewer of For Humanity and a member of our growing community of the AI risk aware.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable, probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
More on the little robot:
https://themessenger.com/tech/rob-rob...