

For Humanity: An AI Risk Podcast
The AI Risk Network
For Humanity, An AI Risk Podcast is the AI Risk Podcast for regular people. Peabody, duPont-Columbia and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2-10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity. theairisknetwork.substack.com
Episodes

Jan 17, 2024 • 1h 19min
"Artist vs. AI Risk" For Humanity: An AI Safety Podcast Episode #11 Stephen Hanson Interview
In Episode #11, we meet Stephen Hanson, a painter and digital artist from Northern England. Stephen first became aware of AI risk in December 2022 and has spent 12+ months carrying the weight of it all. John and Steve talk about what it’s like to have a family and how to talk to them about AI risk, what the future holds, and what we, the AI Risk Realists, can do to change the future while keeping our sanity at the same time.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Resources:
STEVE'S ART! stephenhansonart.bigcartel.com
Get ahead for next week and check out Theo Jaffee's Youtube Channel: https://youtube.com/@theojaffee8530?s...

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Jan 16, 2024 • 2min
"Artist vs. AI Risk" For Humanity: An AI Safety Podcast Episode #11 Stephen Hanson Interview TRAILER
In Episode #11 Trailer, we meet Stephen Hanson, a painter and digital artist from Northern England. Stephen first became aware of AI risk in December 2022 and has spent 12+ months carrying the weight of it all. John and Steve talk about what it’s like to have a family and how to talk to them about AI risk, what the future holds, and what we, the AI Risk Realists, can do to change the future while keeping our sanity at the same time.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Resources:
STEVE'S ART! stephenhansonart.bigcartel.com

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Jan 10, 2024 • 28min
"Eliezer Yudkowsky's 2024 Doom Update" For Humanity: An AI Safety Podcast, Episode #10
In Episode #10, AI Safety Research icon Eliezer Yudkowsky updates his AI doom predictions for 2024. After For Humanity host John Sherman tweeted at Eliezer, he revealed new timelines and predictions for 2024. Be warned, this is a heavy episode. But there is some hope and a laugh at the end.

Most important among them, he believes:
-Humanity no longer has 30-50 years to solve the alignment and interpretability problems; our broken processes just won’t allow it
-Human augmentation is the only viable path for humans to compete with AGIs
-We have ONE YEAR, THIS YEAR, 2024, to mount a global WW2-style response to the extinction risk of AI
-This battle is EASIER to win than WW2 :)

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Jan 8, 2024 • 2min
"Eliezer Yudkowsky's 2024 Doom Update" For Humanity: An AI Safety Podcast, Episode #10 Trailer
In Episode #10 TRAILER, AI Safety Research icon Eliezer Yudkowsky updates his AI doom predictions for 2024. After For Humanity host John Sherman tweeted at Eliezer, he revealed new timelines and predictions for 2024.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Jan 3, 2024 • 1h 7min
Veteran Marine vs. AGI, For Humanity An AI Safety Podcast: Episode #9 Sean Bradley Interview
Do you believe the big AI companies when they tell you their work could kill every last human on earth? You are not alone. You are part of a growing general public that opposes unaligned AI capabilities development.

In Episode #9, we meet Sean Bradley, a veteran Marine who served his country for six years, including as a helicopter door gunner. Sean left the service as a sergeant and now lives in San Diego, where he is married, working and in college. Sean is a viewer of For Humanity and a member of our growing community of the AI risk aware.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
More on the little robot: https://themessenger.com/tech/rob-rob...

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Jan 2, 2024 • 2min
Veteran Marine vs. AGI, For Humanity An AI Safety Podcast: Episode #9 TRAILER Sean Bradley Interview
Do you believe the big AI companies when they tell you their work could kill every last human on earth? You are not alone. You are part of a growing general public that opposes unaligned AI capabilities development.

In Episode #9 TRAILER, we meet Sean Bradley, a veteran Marine who served his country for six years, including as a helicopter door gunner. Sean left the service as a sergeant and now lives in San Diego, where he is married, working and in college. Sean is a viewer of For Humanity and a member of our growing community of the AI risk aware.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
More on the little robot: https://themessenger.com/tech/rob-rob...

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Dec 22, 2023 • 39min
"AI's Top 3 Doomers" For Humanity, An AI Safety Podcast: Episode #8
Who are the most dangerous "doomers" in AI? It's the people bringing the doom threat to the world, not the people calling them out for it.

In Episode #8, host John Sherman points fingers and lays blame. How is it possible we're actually discussing a zero-humans-on-earth future? Meet the people making it happen, the real doomers.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

#samaltman #darioamodei #yannlecun #ai #aisafety

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Dec 21, 2023 • 2min
"AI's Top 3 Doomers" For Humanity, An AI Safety Podcast: Episode #8 TRAILER
Who are the most dangerous "doomers" in AI? It's the people bringing the doom threat to the world, not the people calling them out for it.

In Episode #8 TRAILER, host John Sherman points fingers and lays blame. How is it possible we're actually discussing a zero-humans-on-earth future? Meet the people making it happen, the real doomers.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

#samaltman #darioamodei #yannlecun #ai #aisafety

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Dec 14, 2023 • 53min
"Moms Talk AI Extinction Risk" For Humanity, An AI Safety Podcast: Episode #7
You've heard all the tech experts. But what do regular moms think about AI and human extinction?

In Episode #7, "Moms Talk AI Extinction Risk," host John Sherman moves the AI Safety debate from the tech world to the real world. 30-something tech dudes believe they somehow have our authorization to toy with killing our children. And our children’s yet unborn children too. They do not have this authorization.

So what do regular moms think of all this? Watch and find out.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Dec 13, 2023 • 3min
"Moms Talk AI Extinction Risk" For Humanity, An AI Safety Podcast: Episode #7 TRAILER
You've heard all the tech experts. But what do regular moms think about AI and human extinction?

In our Episode #7 TRAILER, "Moms Talk AI Extinction Risk," host John Sherman moves the AI Safety debate from the tech world to the real world. 30-something tech dudes believe they somehow have our authorization to toy with killing our children. And our children’s yet unborn children too. They do not have this authorization.

So what do regular moms think of all this? Watch and find out.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com


