For Humanity: An AI Safety Podcast

The AI Risk Network
Oct 8, 2024 • 1h 10min

What Is The Origin Of AI Safety? | AI Safety Movement | Episode #48

In Episode #48, host John Sherman talks with Pause AI US Founder Holly Elmore about the limiting origins of the AI safety movement. Polls show 60-80% of the public are opposed to building artificial superintelligence. So why is the movement to stop it still so small? The roots of the AI safety movement have a lot to do with it. Holly and John explore the present-day issues created by the movement’s origins.

Let’s build community! Live For Humanity Zoom Community Meeting, Thursdays at 8:30pm EST (explanation during the full show).
USE THIS LINK: https://storyfarm.zoom.us/j/88987072403
PASSCODE: 789742

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE:
https://pauseai.info/local-organizing

Please Donate Here To Help Promote For Humanity:
https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

JOIN THE FIGHT, help Pause AI!!!!
Pause AI: https://pauseai.info

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

Join the Pause AI Weekly Discord, Thursdays at 2pm EST:
https://discord.com/invite/pVMWjddaW7

Max Winga’s “A Stark Warning About AI Extinction”:
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/@jpjosefpictures
Website: https://josef.pictures

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

22 Word Statement from Center for AI Safety:
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

*************************

Welcome! In today’s video, we delve into the vital aspects of the AI safety movement and explore the origin of AI safety. This video covers the origin of AI safety and the following topics: AI safety, AI safety research, and Eliezer’s insights on AI safety research.

Discover more of our video content on the origin of AI safety. You’ll find additional insights on this topic along with relevant social media links.
YouTube: https://www.youtube.com/@forhumanitypodcast

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Sep 30, 2024 • 8min

AI Safety's Limiting Origins: For Humanity, An AI Risk Podcast, Episode #48 Trailer

In Episode #48 Trailer, host John Sherman talks with Pause AI US Founder Holly Elmore about the limiting origins of the AI safety movement. Polls show 60-80% of the public are opposed to building artificial superintelligence. So why is the movement to stop it still so small? The roots of the AI safety movement have a lot to do with it. Holly and John explore the present-day issues created by the movement’s origins.

Let’s build community! Live For Humanity Zoom Community Meeting, Thursdays at 8:30pm EST (explanation during the full show).
USE THIS LINK: https://storyfarm.zoom.us/j/88987072403

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE:
https://pauseai.info/local-organizing

Please Donate Here To Help Promote For Humanity:
https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

JOIN THE FIGHT, help Pause AI!!!!
Pause AI: https://pauseai.info

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

Join the Pause AI Weekly Discord, Thursdays at 2pm EST:
https://discord.com/invite/pVMWjddaW7

Max Winga’s “A Stark Warning About Extinction”:
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

22 Word Statement from Center for AI Safety:
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Sep 25, 2024 • 1h 20min

Episode #47: “Can AI Be Controlled?” For Humanity: An AI Risk Podcast

In Episode #47, host John Sherman talks with Buck Shlegeris, CEO of Redwood Research, a non-profit working on technical AI risk challenges. The discussion includes Buck’s thoughts on the new OpenAI o1-preview model, but centers on two questions: is there a way to control AI models before alignment is achieved, if it ever is, and how would the system that’s supposed to save the world actually work if an AI lab caught a model scheming?

Check out these links to Buck’s writing on these topics:
https://redwoodresearch.substack.com/p/the-case-for-ensuring-that-powerful
https://redwoodresearch.substack.com/p/would-catching-your-ais-trying-to

Senate Hearing:
https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-insiders-perspectives

Harry Mack’s YouTube Channel:
https://www.youtube.com/channel/UC59ZRYCHev_IqjUhremZ8Tg

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE:
https://pauseai.info/local-organizing

Please Donate Here To Help Promote For Humanity:
https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

JOIN THE FIGHT, help Pause AI!!!!
Pause AI: https://pauseai.info

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

Join the Pause AI Weekly Discord, Thursdays at 2pm EST:
https://discord.com/invite/pVMWjddaW7

Max Winga’s “A Stark Warning About Extinction”:
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

22 Word Statement from Center for AI Safety:
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Sep 25, 2024 • 5min

Episode #47 Trailer: “Can AI Be Controlled?” For Humanity: An AI Risk Podcast

In Episode #47 Trailer, host John Sherman talks with Buck Shlegeris, CEO of Redwood Research, a non-profit working on technical AI risk challenges. The discussion includes Buck’s thoughts on the new OpenAI o1-preview model, but centers on two questions: is there a way to control AI models before alignment is achieved, if it ever is, and how would the system that’s supposed to save the world actually work if an AI lab caught a model scheming?

Check out these links to Buck’s writing on these topics:
https://redwoodresearch.substack.com/p/the-case-for-ensuring-that-powerful
https://redwoodresearch.substack.com/p/would-catching-your-ais-trying-to

Senate Hearing:
https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-insiders-perspectives

Harry Mack’s YouTube Channel:
https://www.youtube.com/channel/UC59ZRYCHev_IqjUhremZ8Tg

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE:
https://pauseai.info/local-organizing

Please Donate Here To Help Promote For Humanity:
https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

JOIN THE FIGHT, help Pause AI!!!!
Pause AI: https://pauseai.info

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

Join the Pause AI Weekly Discord, Thursdays at 2pm EST:
https://discord.com/invite/pVMWjddaW7

Max Winga’s “A Stark Warning About Extinction”:
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

22 Word Statement from Center for AI Safety:
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Sep 18, 2024 • 1h 17min

Episode #46: “Is AI Humanity’s Worthy Successor?” For Humanity: An AI Risk Podcast

In Episode #46, host John Sherman talks with Daniel Faggella, Founder and Head of Research at Emerj Artificial Intelligence Research. Dan has been speaking out about AI risk for a long time, but he comes at it from a different perspective than many. Dan thinks we need to talk about how we can make AGI, and whatever comes after it, humanity’s worthy successor.

More About Daniel Faggella:
https://danfaggella.com/

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE:
https://pauseai.info/local-organizing

Please Donate Here To Help Promote For Humanity:
https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

JOIN THE FIGHT, help Pause AI!!!!
Pause AI: https://pauseai.info

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

Join the Pause AI Weekly Discord, Thursdays at 2pm EST:
https://discord.com/invite/pVMWjddaW7

Max Winga’s “A Stark Warning About Extinction”:
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

22 Word Statement from Center for AI Safety:
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Sep 16, 2024 • 6min

Episode #46 Trailer: “Is AI Humanity’s Worthy Successor?” For Humanity: An AI Risk Podcast

In Episode #46 Trailer, host John Sherman talks with Daniel Faggella, Founder and Head of Research at Emerj Artificial Intelligence Research. Dan has been speaking out about AI risk for a long time, but he comes at it from a different perspective than many. Dan thinks we need to talk about how we can make AGI, and whatever comes after it, humanity’s worthy successor.

More About Daniel Faggella:
https://danfaggella.com/

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE:
https://pauseai.info/local-organizing

Please Donate Here To Help Promote For Humanity:
https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

JOIN THE FIGHT, help Pause AI!!!!
Pause AI: https://pauseai.info

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

Join the Pause AI Weekly Discord, Thursdays at 2pm EST:
https://discord.com/invite/pVMWjddaW7

Max Winga’s “A Stark Warning About Extinction”:
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

22 Word Statement from Center for AI Safety:
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Sep 11, 2024 • 1h 24min

Episode #45: “AI Risk And Child Psychology” For Humanity: An AI Risk Podcast

In Episode #45, host John Sherman talks with Dr. Mike Brooks, a psychologist focusing on kids and technology. The conversation is broad-ranging, touching on parenting, happiness and screens, the need for human unity, and the psychology of humans facing an ever more unknown future.

FULL INTERVIEW STARTS AT 00:05:28

Mike’s book: Tech Generation: Raising Balanced Kids in a Hyper-Connected World
An article from Mike in Psychology Today: The Happiness Illusion: Facing the Dark Side of Progress

Find Dr. Brooks on Social Media:
LinkedIn: https://www.linkedin.com/in/dr-mike-brooks-b1164120
X/Twitter: https://x.com/drmikebrooks
YouTube: https://www.youtube.com/@connectwithdrmikebrooks
TikTok: https://www.tiktok.com/@connectwithdrmikebrooks?lang=en
Instagram: https://www.instagram.com/drmikebrooks/?hl=en

Chris Gerrby’s Twitter: https://x.com/ChrisGerrby

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE:
https://pauseai.info/local-organizing

Please Donate Here To Help Promote For Humanity:
https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

JOIN THE FIGHT, help Pause AI!!!!
Pause AI: https://pauseai.info

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

Join the Pause AI Weekly Discord, Thursdays at 2pm EST:
https://discord.com/invite/pVMWjddaW7

Max Winga’s “A Stark Warning About Extinction”:
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

22 Word Statement from Center for AI Safety:
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Sep 9, 2024 • 7min

Episode #45 Trailer: “AI Risk And Child Psychology” For Humanity: An AI Risk Podcast

In Episode #45 Trailer, host John Sherman talks with Dr. Mike Brooks, a psychologist focusing on kids and technology. The conversation is broad-ranging, touching on parenting, happiness and screens, the need for human unity, and the psychology of humans facing an ever more unknown future.

Mike’s book: Tech Generation: Raising Balanced Kids in a Hyper-Connected World
An article from Mike in Psychology Today: The Happiness Illusion: Facing the Dark Side of Progress

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE:
https://pauseai.info/local-organizing

Please Donate Here To Help Promote For Humanity:
https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

JOIN THE FIGHT, help Pause AI!!!!
Pause AI: https://pauseai.info

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

Join the Pause AI Weekly Discord, Thursdays at 2pm EST:
https://discord.com/invite/pVMWjddaW7

Max Winga’s “A Stark Warning About Extinction”:
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

22 Word Statement from Center for AI Safety:
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Sep 4, 2024 • 1h 31min

Episode #44: “AI P-Doom Debate: 50% vs 99.999%” For Humanity: An AI Risk Podcast

In Episode #44, host John Sherman brings back friends of For Humanity Dr. Roman Yampolskiy and Liron Shapira. Roman is an influential AI safety researcher, thought leader, and Associate Professor at the University of Louisville. Liron is a tech CEO and host of the excellent Doom Debates podcast. Roman famously holds a 99.999% p-doom; Liron holds a nuanced 50%. John starts out at 75%, unrelated to their numbers. Where are you? Did Roman or Liron move you in their direction at all? Let us know in the comments!

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE:
https://pauseai.info/local-organizing

Please Donate Here To Help Promote For Humanity:
https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

BUY ROMAN’S NEW BOOK ON AMAZON:
https://a.co/d/fPG6lOB

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

JOIN THE FIGHT, help Pause AI!!!!
Pause AI: https://pauseai.info

Join the Pause AI Weekly Discord, Thursdays at 2pm EST:
https://discord.com/invite/pVMWjddaW7

Max Winga’s “A Stark Warning About Extinction”:
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

22 Word Statement from Center for AI Safety:
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Sep 2, 2024 • 8min

Episode #44 Trailer: “AI P-Doom Debate: 50% vs 99.999%” For Humanity: An AI Risk Podcast

In Episode #44 Trailer, host John Sherman brings back friends of For Humanity Dr. Roman Yampolskiy and Liron Shapira. Roman is an influential AI safety researcher, thought leader, and Associate Professor at the University of Louisville. Liron is a tech CEO and host of the excellent Doom Debates podcast. Roman famously holds a 99.999% p-doom; Liron holds a nuanced 50%. John starts out at 75%, unrelated to their numbers. Where are you? Did Roman or Liron move you in their direction at all? Watch the full episode and let us know in the comments.

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE:
https://pauseai.info/local-organizing

Please Donate Here To Help Promote For Humanity:
https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

BUY ROMAN’S NEW BOOK ON AMAZON:
https://a.co/d/fPG6lOB

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

JOIN THE FIGHT, help Pause AI!!!!
Pause AI: https://pauseai.info

Join the Pause AI Weekly Discord, Thursdays at 2pm EST:
https://discord.com/invite/pVMWjddaW7

Max Winga’s “A Stark Warning About Extinction”:
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

22 Word Statement from Center for AI Safety:
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
