In Episode #38, host John Sherman talks with Maxime Fournes, Founder of Pause AI France. With the third AI “Safety” Summit coming up in Paris in February 2025, we examine France’s role in AI safety, revealing France to be among the very worst countries when it comes to taking AI risk seriously. How deep is madman Yann LeCun’s influence in French society and government? And would France even join an international treaty? The conversation covers the potential for international treaties on AI safety, the psychological factors influencing public perception, and the power dynamics shaping AI’s future.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on Earth.
For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
RESOURCES:
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
TIMESTAMPS:
**Concerns about AI Risks in France (00:00:00)**
**Optimism in AI Solutions (00:01:15)**
**Introduction to the Episode (00:01:51)**
**Max Winga's Powerful Clip (00:02:29)**
**AI Safety Summit Context (00:04:20)**
**Personal Journey into AI Safety (00:07:02)**
**Commitment to AI Risk Work (00:21:33)**
**France's AI Sacrifice (00:21:49)**
**Impact of Efforts (00:21:54)**
**Existential Risks and Choices (00:22:12)**
**Underestimating Impact (00:22:25)**
**Researching AI Risks (00:22:34)**
**Weak Counterarguments (00:23:14)**
**Existential Dread Theory (00:23:56)**
**Global Awareness of AI Risks (00:24:16)**
**France's AI Leadership Role (00:25:09)**
**AI Policy in France (00:26:17)**
**Influential Figures in AI (00:27:16)**
**EU Regulation Sabotage (00:28:18)**
**Committee's Risk Perception (00:30:24)**
**Concerns about France's AI Development (00:32:03)**
**International AI Treaties (00:32:36)**
**Sabotaging AI Safety Summit (00:33:26)**
**Quality of France's AI Report (00:34:19)**
**Misleading Risk Analyses (00:36:06)**
**Comparison to Historical Innovations (00:39:33)**
**Rhetoric and Misinformation (00:40:06)**
**Existential Fear and Rationality (00:41:08)**
**Position of AI Leaders (00:42:38)**
**Challenges of Volunteer Management (00:46:54)**