
For Humanity: An AI Safety Podcast
For Humanity, An AI Safety Podcast is the AI safety podcast for regular people. Peabody, duPont-Columbia, and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within 2 to 10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Latest episodes

Sep 2, 2024 • 1h 16min
Episode #43: “So what exactly is the good case for AI?” For Humanity: An AI Risk Podcast
In Episode #43, host John Sherman talks with DevOps Engineer Aubrey Blackburn about the vague, elusive case that the big AI companies and accelerationists make for a good AI future.
LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE
https://pauseai.info/local-organizing
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it’s not opinion either. It is a long-form public service announcement. The show simply strings together existing facts and underscores the unthinkable probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

Sep 2, 2024 • 8min
Episode #44 Trailer: “AI P-Doom Debate: 50% vs 99.999%” For Humanity: An AI Risk Podcast
In Episode #44 Trailer, host John Sherman brings back friends of For Humanity Dr. Roman Yampolskiy and Liron Shapira. Roman is an influential AI safety researcher, thought leader, and Associate Professor at the University of Louisville. Liron is a tech CEO and host of the excellent Doom Debates podcast. Roman famously holds a 99.999% p(doom); Liron holds a nuanced 50%. John starts out at 75%, independent of their numbers. Where are you? Did Roman or Liron move you in their direction at all? Watch the full episode and let us know in the comments.
LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE
https://pauseai.info/local-organizing
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it’s not opinion either. It is a long-form public service announcement. The show simply strings together existing facts and underscores the unthinkable probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
BUY ROMAN’S NEW BOOK ON AMAZON
https://a.co/d/fPG6lOB
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

Aug 26, 2024 • 8min
Episode #43 TRAILER: “So what exactly is the good case for AI?” For Humanity: An AI Risk Podcast
In Episode #43 TRAILER, host John Sherman talks with DevOps Engineer Aubrey Blackburn about the vague, elusive case that the big AI companies and accelerationists make for a good AI future.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it’s not opinion either. It is a long-form public service announcement. The show simply strings together existing facts and underscores the unthinkable probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

Aug 21, 2024 • 1h 23min
Episode #42: “Actors vs. AI” For Humanity: An AI Risk Podcast
In Episode #42, host John Sherman talks with actor Erik Passoja about AI’s impact on Hollywood, the fight to protect people’s digital identities, and the vibes in LA about existential risk.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it’s not opinion either. It is a long-form public service announcement. The show simply strings together existing facts and underscores the unthinkable probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

Aug 19, 2024 • 3min
Episode #42 TRAILER: “Actors vs. AI” For Humanity: An AI Risk Podcast
In Episode #42 Trailer, host John Sherman talks with actor Erik Passoja about AI’s impact on Hollywood, the fight to protect people’s digital identities, and the vibes in LA about existential risk.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it’s not opinion either. It is a long-form public service announcement. The show simply strings together existing facts and underscores the unthinkable probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

Aug 14, 2024 • 49min
Episode #41 “David Brooks: Dead Wrong on AI” For Humanity: An AI Risk Podcast
In Episode #41, host John Sherman begins with a personal message to David Brooks of the New York Times. Brooks wrote an article titled “Many People Fear AI: They Shouldn’t,” and, in full candor, it pissed John off quite a bit. During this episode, John and Doom Debates host Liron Shapira go line by line through David Brooks’s 7/31/24 piece in the New York Times.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it’s not opinion either. It is a long-form public service announcement. The show simply strings together existing facts and underscores the unthinkable probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

Aug 12, 2024 • 9min
Episode #41 TRAILER “David Brooks: Dead Wrong on AI” For Humanity: An AI Risk Podcast
In Episode #41 TRAILER, host John Sherman previews the full show with a personal message to David Brooks of the New York Times. Brooks wrote something that, in full candor, pissed John off quite a bit. During the full episode, John and Doom Debates host Liron Shapira go line by line through David Brooks’s 7/31/24 piece in the New York Times.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it’s not opinion either. It is a long-form public service announcement. The show simply strings together existing facts and underscores the unthinkable probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

Aug 7, 2024 • 1h 31min
Episode #40 “Surviving Doom” For Humanity: An AI Risk Podcast
In Episode #40, host John Sherman talks with James Norris, CEO of Upgradable and longtime AI safety proponent. James has been concerned about AI x-risk for 26 years. He now lives in Bali, where he has become an expert in preparing for a very different world after a warning shot or other major AI-related disaster, and he’s helping others do the same. James shares his insight, long-time awareness, and expertise in helping others find a way to survive, and rebuild after, a post-AGI disaster or warning shot.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it’s not opinion either. It is a long-form public service announcement. The show simply strings together existing facts and underscores the unthinkable probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

Aug 5, 2024 • 6min
Episode #40 TRAILER “Surviving Doom” For Humanity: An AI Risk Podcast
In Episode #40 TRAILER, host John Sherman talks with James Norris, CEO of Upgradable and longtime AI safety proponent. James has been concerned about AI x-risk for 26 years. He now lives in Bali, where he has become an expert in preparing for a very different world after a warning shot or other major AI-related disaster, and he’s helping others do the same.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it’s not opinion either. It is a long-form public service announcement. The show simply strings together existing facts and underscores the unthinkable probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
Timestamps
Prepping Perspectives (00:00:00): Discussion of how to characterize preparedness efforts, ranging from common sense to doomsday prepping.
Personal Experience in Emergency Management (00:00:06): The speaker shares his background in emergency management and the Red Cross, reflecting on past preparation efforts.
Vision of AGI and Societal Collapse (00:00:58): Exploration of potential outcomes of AGI development and societal disruptions, including chaos and extinction.
Geopolitical Safety in the Philippines (00:02:14): Consideration of living in the Philippines as a safer option during global conflicts and crises.
Self-Reliance and Supply Chain Concerns (00:03:15): The importance of self-reliance and living off-grid to mitigate risks from supply chain breakdowns.
Escaping Potential Threats (00:04:11): Discussion of the plausibility of escaping threats posed by advanced AI and the implications of being tracked.
Nuclear Threats and Personal Safety (00:05:34): Speculation on the potential for nuclear conflict while maintaining a sense of safety in the Philippines.

Jul 31, 2024 • 1h 23min
Episode #39 “Did AI-Risk Just Get Partisan?” For Humanity: An AI Risk Podcast
In Episode #39, host John Sherman talks with Matthew Taber, founder, advocate, and expert in AI-risk legislation. The conversation starts out with the various state AI laws that are coming up and moves into the shifting political landscape around AI-risk legislation in America in July 2024.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it’s not opinion either. It is a long-form public service announcement. The show simply strings together existing facts and underscores the unthinkable probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
Timestamps
**GOP's AI Regulation Stance (00:00:41)**
**Welcome to Episode 39 (00:01:41)**
**Trump's Assassination Attempt (00:03:41)**
**Partisan Shift in AI Risk (00:04:09)**
**Matthew Tabor's Background (00:06:32)**
**Tennessee's "ELVIS" Law (00:13:55)**
**Bipartisan Support for ELVIS (00:15:49)**
**California's Legislative Actions (00:18:58)**
**Overview of California Bills (00:20:50)**
**Lobbying Influence in California (00:23:15)**
**Challenges of AI Training Data (00:24:26)**
**The Original Sin of AI (00:25:19)**
**Congress and AI Regulation (00:27:29)**
**Investigations into AI Companies (00:28:48)**
**The New York Times Lawsuit (00:29:39)**
**Political Developments in AI Risk (00:30:24)**
**GOP Platform and AI Regulation (00:31:35)**
**Local vs. National AI Regulation (00:32:58)**
**Public Awareness of AI Regulation (00:33:38)**
**Engaging with Lawmakers (00:41:05)**
**Roleplay Demonstration (00:43:48)**
**Legislative Frameworks for AI (00:46:20)**
**Coalition Against AI Development (00:49:28)**
**Understanding AI Risks in Hollywood (00:51:00)**
**Generative AI in Film Production (00:53:32)**
**Impact of AI on Authenticity in Entertainment (00:56:14)**
**The Future of AI-Generated Content (00:57:31)**
**AI Legislation and Political Dynamics (01:00:43)**
**Partisan Issues in AI Regulation (01:02:22)**
**Influence of Celebrity Advocacy on AI Legislation (01:04:11)**
**Understanding Legislative Processes for AI Bills (01:09:23)**
**Presidential Approach to AI Regulation (01:11:47)**
**State-Level Initiatives for AI Legislation (01:14:09)**
**State vs. Congressional Regulation (01:15:05)**
**Engaging Lawmakers (01:15:29)**
**YouTube Video Views Explanation (01:15:37)**
**Algorithm Challenges (01:16:48)**
**Celebration of Life (01:18:08)**
**Final Thoughts and Call to Action (01:19:13)**