

For Humanity: An AI Risk Podcast
The AI Risk Network
For Humanity, An AI Risk Podcast is the AI Risk Podcast for regular people. Peabody, duPont-Columbia and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2-10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity. theairisknetwork.substack.com
Episodes

Sep 11, 2024 • 1h 24min
Episode #45: “AI Risk And Child Psychology” For Humanity: An AI Risk Podcast
In Episode #45, host John Sherman talks with Dr. Mike Brooks, a psychologist focusing on kids and technology. The conversation is broad-ranging, touching on parenting, happiness and screens, the need for human unity, and the psychology of humans facing an ever more unknown future.

FULL INTERVIEW STARTS AT (00:05:28)

Mike’s book: Tech Generation: Raising Balanced Kids in a Hyper-Connected World
An article from Mike in Psychology Today: The Happiness Illusion: Facing the Dark Side of Progress

Find Dr. Brooks on social media:
LinkedIn: https://www.linkedin.com/in/dr-mike-brooks-b1164120
X/Twitter: https://x.com/drmikebrooks
YouTube: https://www.youtube.com/@connectwithdrmikebrooks
TikTok: https://www.tiktok.com/@connectwithdrmikebrooks?lang=en
Instagram: https://www.instagram.com/drmikebrooks/?hl=en
Facebook

Chris Gerrby’s Twitter: https://x.com/ChrisGerrby

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE:
https://pauseai.info/local-organizing

Please Donate Here To Help Promote For Humanity:
https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.

RESOURCES:

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

Join the Pause AI Weekly Discord, Thursdays at 2pm EST:
https://discord.com/invite/pVMWjddaW7

Max Winga’s “A Stark Warning About Extinction”:
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

22 Word Statement from Center for AI Safety:
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Sep 9, 2024 • 7min
Episode #45 TRAILER: “AI Risk And Child Psychology” For Humanity: An AI Risk Podcast
In Episode #45 TRAILER, host John Sherman talks with Dr. Mike Brooks, a psychologist focusing on kids and technology. The conversation is broad-ranging, touching on parenting, happiness and screens, the need for human unity, and the psychology of humans facing an ever more unknown future.

Mike’s book: Tech Generation: Raising Balanced Kids in a Hyper-Connected World
An article from Mike in Psychology Today: The Happiness Illusion: Facing the Dark Side of Progress

Sep 4, 2024 • 1h 31min
Episode #44: “AI P-Doom Debate: 50% vs 99.999%” For Humanity: An AI Risk Podcast
In Episode #44, host John Sherman brings back friends of For Humanity Dr. Roman Yampolskiy and Liron Shapira. Roman is an influential AI Safety researcher, thought leader, and Associate Professor at the University of Louisville. Liron is a tech CEO and host of the excellent Doom Debates podcast. Roman famously holds a 99.999% p-doom; Liron has a nuanced 50%. John starts out at 75%, unrelated to their numbers. Where are you? Did Roman or Liron move you in their direction at all? Let us know in the comments!

BUY ROMAN’S NEW BOOK ON AMAZON:
https://a.co/d/fPG6lOB

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

Sep 2, 2024 • 8min
Episode #44 Trailer: “AI P-Doom Debate: 50% vs 99.999%” For Humanity: An AI Risk Podcast
In Episode #44 Trailer, host John Sherman brings back friends of For Humanity Dr. Roman Yampolskiy and Liron Shapira. Roman is an influential AI Safety researcher, thought leader, and Associate Professor at the University of Louisville. Liron is a tech CEO and host of the excellent Doom Debates podcast. Roman famously holds a 99.999% p-doom; Liron has a nuanced 50%. John starts out at 75%, unrelated to their numbers. Where are you? Did Roman or Liron move you in their direction at all? Watch the full episode and let us know in the comments.

BUY ROMAN’S NEW BOOK ON AMAZON:
https://a.co/d/fPG6lOB

Aug 26, 2024 • 8min
Episode #43 TRAILER: “So what exactly is the good case for AI?” For Humanity: An AI Risk Podcast
In Episode #43 TRAILER, host John Sherman talks with DevOps engineer Aubrey Blackburn about the vague, elusive case the big AI companies and accelerationists make for a good AI future.

Aug 21, 2024 • 1h 23min
Episode #42: “Actors vs. AI” For Humanity: An AI Risk Podcast
In Episode #42, host John Sherman talks with actor Erik Passoja about AI’s impact on Hollywood, the fight to protect people’s digital identities, and the vibes in LA about existential risk.

Aug 19, 2024 • 3min
Episode #42 TRAILER: “Actors vs. AI” For Humanity: An AI Risk Podcast
In Episode #42 Trailer, host John Sherman talks with actor Erik Passoja about AI’s impact on Hollywood, the fight to protect people’s digital identities, and the vibes in LA about existential risk.

Aug 14, 2024 • 49min
Episode #41 “David Brooks: Dead Wrong on AI” For Humanity: An AI Risk Podcast
In Episode #41, host John Sherman begins with a personal message to David Brooks of the New York Times. Brooks wrote an article titled “Many People Fear AI: They Shouldn’t,” and in full candor it pissed John off quite a bit. During this episode, John and Doom Debates host Liron Shapira go line by line through David Brooks’s 7/31/24 piece in the New York Times.

Aug 12, 2024 • 9min
Episode #41 TRAILER “David Brooks: Dead Wrong on AI” For Humanity: An AI Risk Podcast
In Episode #41 TRAILER, host John Sherman previews the full show with a personal message to David Brooks of the New York Times. Brooks wrote something that, in full candor, pissed John off quite a bit. During the full episode, John and Doom Debates host Liron Shapira go line by line through David Brooks’s 7/31/24 piece in the New York Times.

Jul 31, 2024 • 1h 23min
Episode #39 “Did AI-Risk Just Get Partisan?” For Humanity: An AI Risk Podcast
In Episode #39, host John Sherman talks with Matthew Taber, founder, advocate, and expert in AI-risk legislation. The conversation starts out with the various state AI laws that are coming up and moves into the shifting political landscape around AI-risk legislation in America in July 2024.

Timestamps:
GOP’s AI Regulation Stance (00:00:41)
Welcome to Episode 39 (00:01:41)
Trump’s Assassination Attempt (00:03:41)
Partisan Shift in AI Risk (00:04:09)
Matthew Taber’s Background (00:06:32)
Tennessee’s “ELVIS” Law (00:13:55)
Bipartisan Support for ELVIS (00:15:49)
California’s Legislative Actions (00:18:58)
Overview of California Bills (00:20:50)
Lobbying Influence in California (00:23:15)
Challenges of AI Training Data (00:24:26)
The Original Sin of AI (00:25:19)
Congress and AI Regulation (00:27:29)
Investigations into AI Companies (00:28:48)
The New York Times Lawsuit (00:29:39)
Political Developments in AI Risk (00:30:24)
GOP Platform and AI Regulation (00:31:35)
Local vs. National AI Regulation (00:32:58)
Public Awareness of AI Regulation (00:33:38)
Engaging with Lawmakers (00:41:05)
Roleplay Demonstration (00:43:48)
Legislative Frameworks for AI (00:46:20)
Coalition Against AI Development (00:49:28)
Understanding AI Risks in Hollywood (00:51:00)
Generative AI in Film Production (00:53:32)
Impact of AI on Authenticity in Entertainment (00:56:14)
The Future of AI-Generated Content (00:57:31)
AI Legislation and Political Dynamics (01:00:43)
Partisan Issues in AI Regulation (01:02:22)
Influence of Celebrity Advocacy on AI Legislation (01:04:11)
Understanding Legislative Processes for AI Bills (01:09:23)
Presidential Approach to AI Regulation (01:11:47)
State-Level Initiatives for AI Legislation (01:14:09)
State vs. Congressional Regulation (01:15:05)
Engaging Lawmakers (01:15:29)
YouTube Video Views Explanation (01:15:37)
Algorithm Challenges (01:16:48)
Celebration of Life (01:18:08)
Final Thoughts and Call to Action (01:19:13)


