
For Humanity: An AI Safety Podcast

Latest episodes

Aug 21, 2024 • 1h 23min

Episode #42: “Actors vs. AI” For Humanity: An AI Risk Podcast

In Episode #42, host John Sherman talks with actor Erik Passoja about AI’s impact on Hollywood, the fight to protect people’s digital identities, and the vibes in LA about existential risk.

Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it’s not opinion either. It is a long-form public service announcement. The show strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans; no tech background required. The show focuses solely on the threat of human extinction from AGI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as two years. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7

Max Winga’s “A Stark Warning About Extinction”: https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

22-Word Statement from the Center for AI Safety: Statement on AI Risk | CAIS, https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes, https://twitter.com/AISafetyMemes
Aug 19, 2024 • 3min

Episode #42 TRAILER: “Actors vs. AI” For Humanity: An AI Risk Podcast

In Episode #42 Trailer, host John Sherman talks with actor Erik Passoja about AI’s impact on Hollywood, the fight to protect people’s digital identities, and the vibes in LA about existential risk. (Donation, contact, and resource links appear in the full episode’s notes above.)
Aug 14, 2024 • 49min

Episode #41 “David Brooks: Dead Wrong on AI” For Humanity: An AI Risk Podcast

In Episode #41, host John Sherman begins with a personal message to David Brooks of the New York Times. Brooks wrote an article titled “Many People Fear AI: They Shouldn’t,” and in full candor it pissed John off quite a bit. During this episode, John and Doom Debates host Liron Shapira go line by line through Brooks’s 7/31/24 piece in the New York Times.
Aug 12, 2024 • 9min

Episode #41 TRAILER “David Brooks: Dead Wrong on AI” For Humanity: An AI Risk Podcast

In Episode #41 TRAILER, host John Sherman previews the full show with a personal message to David Brooks of the New York Times. Brooks wrote an article that, in full candor, pissed John off quite a bit. During the full episode, John and Doom Debates host Liron Shapira go line by line through Brooks’s 7/31/24 piece in the New York Times.
Aug 7, 2024 • 1h 31min

Episode #40 “Surviving Doom” For Humanity: An AI Risk Podcast

In Episode #40, host John Sherman talks with James Norris, CEO of Upgradable and a longtime AI safety proponent. James has been concerned about AI x-risk for 26 years. He now lives in Bali, where he has become an expert in prepping for a very different world after a warning shot or other major AI-related disaster, and he’s helping others do the same. James shares his insight and expertise in helping others find a way to survive and rebuild after a post-AGI disaster or warning shot.
Aug 5, 2024 • 6min

Episode #40 TRAILER “Surviving Doom” For Humanity: An AI Risk Podcast

In Episode #40 TRAILER, host John Sherman talks with James Norris, CEO of Upgradable and a longtime AI safety proponent. James has been concerned about AI x-risk for 26 years. He now lives in Bali, where he has become an expert in prepping for a very different world after a warning shot or other major AI-related disaster, and he’s helping others do the same.

Timestamps:

Prepping Perspectives (00:00:00): How to characterize preparedness efforts, ranging from common sense to doomsday prepping.
Personal Experience in Emergency Management (00:00:06): The speaker shares his background in emergency management and the Red Cross, reflecting on past preparation efforts.
Vision of AGI and Societal Collapse (00:00:58): Potential outcomes of AGI development and societal disruptions, including chaos and extinction.
Geopolitical Safety in the Philippines (00:02:14): Living in the Philippines as a safer option during global conflicts and crises.
Self-Reliance and Supply Chain Concerns (00:03:15): The importance of self-reliance and being off-grid to mitigate risks from supply chain breakdowns.
Escaping Potential Threats (00:04:11): The plausibility of escaping threats posed by advanced AI, and the implications of being tracked.
Nuclear Threats and Personal Safety (00:05:34): Speculation on the potential for nuclear conflict while maintaining a sense of safety in the Philippines.
Jul 31, 2024 • 1h 23min

Episode #39 “Did AI-Risk Just Get Partisan?” For Humanity: An AI Risk Podcast

In Episode #39, host John Sherman talks with Matthew Taber, founder, advocate, and expert in AI-risk legislation. The conversation starts out with the various state AI laws that are coming up, then moves into the shifting political landscape around AI-risk legislation in America in July 2024.

Timestamps:

GOP's AI Regulation Stance (00:00:41)
Welcome to Episode 39 (00:01:41)
Trump's Assassination Attempt (00:03:41)
Partisan Shift in AI Risk (00:04:09)
Matthew Taber's Background (00:06:32)
Tennessee's "ELVIS" Law (00:13:55)
Bipartisan Support for ELVIS (00:15:49)
California's Legislative Actions (00:18:58)
Overview of California Bills (00:20:50)
Lobbying Influence in California (00:23:15)
Challenges of AI Training Data (00:24:26)
The Original Sin of AI (00:25:19)
Congress and AI Regulation (00:27:29)
Investigations into AI Companies (00:28:48)
The New York Times Lawsuit (00:29:39)
Political Developments in AI Risk (00:30:24)
GOP Platform and AI Regulation (00:31:35)
Local vs. National AI Regulation (00:32:58)
Public Awareness of AI Regulation (00:33:38)
Engaging with Lawmakers (00:41:05)
Roleplay Demonstration (00:43:48)
Legislative Frameworks for AI (00:46:20)
Coalition Against AI Development (00:49:28)
Understanding AI Risks in Hollywood (00:51:00)
Generative AI in Film Production (00:53:32)
Impact of AI on Authenticity in Entertainment (00:56:14)
The Future of AI-Generated Content (00:57:31)
AI Legislation and Political Dynamics (01:00:43)
Partisan Issues in AI Regulation (01:02:22)
Influence of Celebrity Advocacy on AI Legislation (01:04:11)
Understanding Legislative Processes for AI Bills (01:09:23)
Presidential Approach to AI Regulation (01:11:47)
State-Level Initiatives for AI Legislation (01:14:09)
State vs. Congressional Regulation (01:15:05)
Engaging Lawmakers (01:15:29)
YouTube Video Views Explanation (01:15:37)
Algorithm Challenges (01:16:48)
Celebration of Life (01:18:08)
Final Thoughts and Call to Action (01:19:13)
Jul 29, 2024 • 4min

Episode #39 Trailer “Did AI-Risk Just Get Partisan?” For Humanity: An AI Risk Podcast

In Episode #39 Trailer, host John Sherman talks with Matthew Taber, founder, advocate, and expert in AI-risk legislation. The conversation addresses the shifting political landscape around AI-risk legislation in America in July 2024.

Timestamps:

Republican Party's AI Regulation Stance (00:00:41): The GOP platform aims to eliminate existing AI regulations, reflecting a shift in political dynamics.
Bipartisanship in AI Issues (00:01:21): AI starts as a bipartisan concern but quickly becomes a partisan issue amid political maneuvering.
Tech Companies' Frustration with Legislation (00:01:55): Major tech companies express dissatisfaction with California's AI bills, indicating a push for regulatory rollback.
Public Sentiment vs. Party Platform (00:02:42): The discrepancy between the GOP platform on AI and average voter opinions, highlighting a disconnect in priorities.
Polling on AI Regulation (00:03:26): Polling shows strong public support for AI regulation, raising questions about political implications and citizen engagement.
Jul 24, 2024 • 1h 20min

Episode #38 “France vs. AGI” For Humanity: An AI Risk Podcast

In Episode #38, host John Sherman talks with Maxime Fournes, founder of Pause AI France. With the third AI “Safety” Summit coming up in Paris in February 2025, they examine France’s role in AI safety, revealing France to be among the very worst countries when it comes to taking AI risk seriously. How deep is madman Yann LeCun’s influence in French society and government? And would France even join an international treaty? The conversation covers the potential for international treaties on AI safety, the psychological factors influencing public perception, and the power dynamics shaping AI’s future.

Timestamps:

Concerns about AI Risks in France (00:00:00)
Optimism in AI Solutions (00:01:15)
Introduction to the Episode (00:01:51)
Max Winga's Powerful Clip (00:02:29)
AI Safety Summit Context (00:04:20)
Personal Journey into AI Safety (00:07:02)
Commitment to AI Risk Work (00:21:33)
France's AI Sacrifice (00:21:49)
Impact of Efforts (00:21:54)
Existential Risks and Choices (00:22:12)
Underestimating Impact (00:22:25)
Researching AI Risks (00:22:34)
Weak Counterarguments (00:23:14)
Existential Dread Theory (00:23:56)
Global Awareness of AI Risks (00:24:16)
France's AI Leadership Role (00:25:09)
AI Policy in France (00:26:17)
Influential Figures in AI (00:27:16)
EU Regulation Sabotage (00:28:18)
Committee's Risk Perception (00:30:24)
Concerns about France's AI Development (00:32:03)
International AI Treaties (00:32:36)
Sabotaging AI Safety Summit (00:33:26)
Quality of France's AI Report (00:34:19)
Misleading Risk Analyses (00:36:06)
Comparison to Historical Innovations (00:39:33)
Rhetoric and Misinformation (00:40:06)
Existential Fear and Rationality (00:41:08)
Position of AI Leaders (00:42:38)
Challenges of Volunteer Management (00:46:54)
Jul 22, 2024 • 7min

Episode #38 TRAILER “France vs. AGI” For Humanity: An AI Risk Podcast

In Episode #38 TRAILER, host John Sherman talks with Maxime Fournes, founder of Pause AI France. With the third AI “Safety” Summit coming up in Paris in February 2025, they examine France’s role in AI safety, revealing France to be among the very worst countries when it comes to taking AI risk seriously. How deep is madman Yann LeCun’s influence in French society and government? And would France even join an international treaty?

Timestamps:

Trust in AI Awareness in France (00:00:00): France is uninformed about AI risks compared to other countries with AI labs.
International Treaty Concerns (00:00:46): Speculation on France's reluctance to sign an international AI safety treaty.
Personal Reflections on AI Risks (00:00:57): The speaker reflects on the dilemma of believing in AI risks and choosing between action and enjoyment.
Underestimating Impact (00:01:13): The tendency of people to underestimate their potential impact on global issues.
Researching AI Risks (00:01:50): The speaker shares his journey of researching AI risks and finding weak counterarguments.
Critique of Counterarguments (00:02:23): The absurdity of opposing views on AI risks and their societal implications.
Existential Dread and Rationality (00:02:42): The connection between existential fear and irrationality in discussions about AI safety.
Shift in AI Safety Focus (00:03:17): Concerns about the diminishing focus on AI safety in upcoming summits.
Quality of AI Strategy Report (00:04:11): Criticism of a recent French AI strategy report and plans to respond critically.
Optimism about AI Awareness (00:05:04): Belief that understanding among key individuals can resolve AI safety issues.
Power Dynamics in AI Decision-Making (00:05:38): The disproportionate influence of a small group on global AI decisions.
Cultural Perception of Impact (00:06:01): Reflection on societal beliefs that inhibit individual agency in effecting change.
