For Humanity: An AI Risk Podcast

The AI Risk Network
Sep 25, 2024 • 1h 20min

Episode #47: “Can AI Be Controlled?” For Humanity: An AI Risk Podcast

In Episode #47, host John Sherman talks with Buck Shlegeris, CEO of Redwood Research, a non-profit working on technical AI risk challenges. The discussion includes Buck’s thoughts on the new OpenAI o1-preview model, but centers on two questions: can AI models be controlled before alignment is achieved, if it can be achieved at all, and how would the system that’s supposed to save the world actually work if an AI lab caught a model scheming?

Check out Buck’s writing on these topics:
https://redwoodresearch.substack.com/p/the-case-for-ensuring-that-powerful
https://redwoodresearch.substack.com/p/would-catching-your-ais-trying-to

Senate Hearing: https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-insiders-perspectives

Harry Mack’s YouTube Channel: https://www.youtube.com/channel/UC59ZRYCHev_IqjUhremZ8Tg

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE: https://pauseai.info/local-organizing

Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it’s not opinion either. It is a long-form public service announcement. The show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as two years. This podcast is solely about the threat of human extinction from AGI.
We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
JOIN THE FIGHT, help Pause AI!!!! https://pauseai.info
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES ON YOUTUBE!! https://www.youtube.com/@DoomDebates
Join the Pause AI Weekly Discord Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”: https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner. YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety, “Statement on AI Risk”: https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes, https://twitter.com/AISafetyMemes

Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
Sep 25, 2024 • 5min

Episode #47 Trailer : “Can AI Be Controlled?“ For Humanity: An AI Risk Podcast

In Episode #47 Trailer, host John Sherman talks with Buck Shlegeris, CEO of Redwood Research, a non-profit working on technical AI risk challenges. The discussion includes Buck’s thoughts on the new OpenAI o1-preview model, but centers on two questions: can AI models be controlled before alignment is achieved, if it can be achieved at all, and how would the system that’s supposed to save the world actually work if an AI lab caught a model scheming?

Check out Buck’s writing on these topics:
https://redwoodresearch.substack.com/p/the-case-for-ensuring-that-powerful
https://redwoodresearch.substack.com/p/would-catching-your-ais-trying-to
Sep 18, 2024 • 1h 17min

Episode #46: “Is AI Humanity’s Worthy Successor?“ For Humanity: An AI Risk Podcast

In Episode #46, host John Sherman talks with Daniel Faggella, Founder and Head of Research at Emerj Artificial Intelligence Research. Dan has been speaking out about AI risk for a long time, but comes at it from a different perspective than many: he thinks we need to talk about how to make AGI, and whatever comes after it, humanity’s worthy successor.

More About Daniel Faggella: https://danfaggella.com/
Sep 16, 2024 • 6min

Episode #46 Trailer: “Is AI Humanity’s Worthy Successor?” For Humanity: An AI Risk Podcast

In Episode #46 Trailer, host John Sherman talks with Daniel Faggella, Founder and Head of Research at Emerj Artificial Intelligence Research. Dan has been speaking out about AI risk for a long time, but comes at it from a different perspective than many: he thinks we need to talk about how to make AGI, and whatever comes after it, humanity’s worthy successor.

More About Daniel Faggella: https://danfaggella.com/
Sep 11, 2024 • 1h 24min

Episode #45: “AI Risk And Child Psychology” For Humanity: An AI Risk Podcast

In Episode #45, host John Sherman talks with Dr. Mike Brooks, a psychologist focusing on kids and technology. The conversation is broad-ranging, touching on parenting, happiness and screens, the need for human unity, and the psychology of humans facing an ever more unknown future.

FULL INTERVIEW STARTS AT (00:05:28)

Mike’s book: Tech Generation: Raising Balanced Kids in a Hyper-Connected World
An article from Mike in Psychology Today: The Happiness Illusion: Facing the Dark Side of Progress

Find Dr. Brooks on social media (LinkedIn | X/Twitter | YouTube | TikTok | Instagram | Facebook):
https://www.linkedin.com/in/dr-mike-brooks-b1164120
https://x.com/drmikebrooks
https://www.youtube.com/@connectwithdrmikebrooks
https://www.tiktok.com/@connectwithdrmikebrooks?lang=en
https://www.instagram.com/drmikebrooks/?hl=en

Chris Gerrby’s Twitter: https://x.com/ChrisGerrby
Sep 9, 2024 • 7min

Episode #45 TRAILER: “AI Risk And Child Psychology” For Humanity: An AI Risk Podcast

In Episode #45 Trailer, host John Sherman talks with Dr. Mike Brooks, a psychologist focusing on kids and technology. The conversation is broad-ranging, touching on parenting, happiness and screens, the need for human unity, and the psychology of humans facing an ever more unknown future.

Mike’s book: Tech Generation: Raising Balanced Kids in a Hyper-Connected World
An article from Mike in Psychology Today: The Happiness Illusion: Facing the Dark Side of Progress
Sep 4, 2024 • 1h 31min

Episode #44: “AI P-Doom Debate: 50% vs 99.999%” For Humanity: An AI Risk Podcast

In Episode #44, host John Sherman brings back friends of For Humanity Dr. Roman Yampolskiy and Liron Shapira. Roman is an influential AI safety researcher, thought leader, and Associate Professor at the University of Louisville. Liron is a tech CEO and host of the excellent Doom Debates podcast. Roman famously holds a 99.999% p(doom); Liron holds a more nuanced 50%. John starts out at 75%, unrelated to their numbers. Where are you? Did Roman or Liron move you in their direction at all? Let us know in the comments!
BUY ROMAN’S NEW BOOK ON AMAZON: https://a.co/d/fPG6lOB
Sep 2, 2024 • 8min

Episode #44 Trailer: “AI P-Doom Debate: 50% vs 99.999%” For Humanity: An AI Risk Podcast

In Episode #44 Trailer, host John Sherman brings back friends of For Humanity Dr. Roman Yampolskiy and Liron Shapira. Roman is an influential AI safety researcher, thought leader, and Associate Professor at the University of Louisville. Liron is a tech CEO and host of the excellent Doom Debates podcast. Roman famously holds a 99.999% p(doom); Liron holds a more nuanced 50%. John starts out at 75%, unrelated to their numbers. Where are you? Did Roman or Liron move you in their direction at all? Watch the full episode and let us know in the comments.
Aug 26, 2024 • 8min

Episode #43 TRAILER: “So what exactly is the good case for AI?” For Humanity: An AI Risk Podcast

In Episode #43 Trailer, host John Sherman talks with DevOps engineer Aubrey Blackburn about the vague, elusive case the big AI companies and accelerationists make for a good AI future.
Aug 21, 2024 • 1h 23min

Episode #42: “Actors vs. AI” For Humanity: An AI Risk Podcast

In Episode #42, host John Sherman talks with actor Erik Passoja about AI’s impact on Hollywood, the fight to protect people’s digital identities, and the vibes in LA about existential risk.
