

For Humanity: An AI Safety Podcast
John Sherman
For Humanity: An AI Safety Podcast is the AI Safety Podcast for regular people. Peabody, duPont-Columbia and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2-10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Episodes

Aug 19, 2024 • 3min
Episode #42 TRAILER: “Actors vs. AI” For Humanity: An AI Risk Podcast
In Episode #42 Trailer, host John Sherman talks with actor Erik Passoja about AI’s impact on Hollywood, the fight to protect people’s digital identities, and the vibes in LA about existential risk.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

Aug 14, 2024 • 49min
Episode #41 “David Brooks: Dead Wrong on AI” For Humanity: An AI Risk Podcast
In Episode #41, host John Sherman begins with a personal message to David Brooks of the New York Times. Brooks wrote an article titled “Many People Fear AI: They Shouldn’t,” and, in full candor, it pissed John off quite a bit. During this episode, John and Doom Debates host Liron Shapira go line by line through David Brooks’s 7/31/24 piece in the New York Times.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

Aug 12, 2024 • 9min
Episode #41 TRAILER “David Brooks: Dead Wrong on AI” For Humanity: An AI Risk Podcast
In Episode #41 TRAILER, host John Sherman previews the full show with a personal message to David Brooks of the New York Times. Brooks wrote something that, in full candor, pissed John off quite a bit. During the full episode, John and Doom Debates host Liron Shapira go line by line through David Brooks’s 7/31/24 piece in the New York Times.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

Jul 31, 2024 • 1h 23min
Episode #39 “Did AI-Risk Just Get Partisan?” For Humanity: An AI Risk Podcast
In Episode #39, host John Sherman talks with Matthew Taber, founder, advocate, and expert in AI-risk legislation. The conversation starts out with the various state AI laws coming up and moves into the shifting political landscape around AI-risk legislation in America in July 2024.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
Timestamps
**GOP's AI Regulation Stance (00:00:41)**
**Welcome to Episode 39 (00:01:41)**
**Trump's Assassination Attempt (00:03:41)**
**Partisan Shift in AI Risk (00:04:09)**
**Matthew Taber's Background (00:06:32)**
**Tennessee's "ELVIS" Law (00:13:55)**
**Bipartisan Support for ELVIS (00:15:49)**
**California's Legislative Actions (00:18:58)**
**Overview of California Bills (00:20:50)**
**Lobbying Influence in California (00:23:15)**
**Challenges of AI Training Data (00:24:26)**
**The Original Sin of AI (00:25:19)**
**Congress and AI Regulation (00:27:29)**
**Investigations into AI Companies (00:28:48)**
**The New York Times Lawsuit (00:29:39)**
**Political Developments in AI Risk (00:30:24)**
**GOP Platform and AI Regulation (00:31:35)**
**Local vs. National AI Regulation (00:32:58)**
**Public Awareness of AI Regulation (00:33:38)**
**Engaging with Lawmakers (00:41:05)**
**Roleplay Demonstration (00:43:48)**
**Legislative Frameworks for AI (00:46:20)**
**Coalition Against AI Development (00:49:28)**
**Understanding AI Risks in Hollywood (00:51:00)**
**Generative AI in Film Production (00:53:32)**
**Impact of AI on Authenticity in Entertainment (00:56:14)**
**The Future of AI-Generated Content (00:57:31)**
**AI Legislation and Political Dynamics (01:00:43)**
**Partisan Issues in AI Regulation (01:02:22)**
**Influence of Celebrity Advocacy on AI Legislation (01:04:11)**
**Understanding Legislative Processes for AI Bills (01:09:23)**
**Presidential Approach to AI Regulation (01:11:47)**
**State-Level Initiatives for AI Legislation (01:14:09)**
**State vs. Congressional Regulation (01:15:05)**
**Engaging Lawmakers (01:15:29)**
**YouTube Video Views Explanation (01:15:37)**
**Algorithm Challenges (01:16:48)**
**Celebration of Life (01:18:08)**
**Final Thoughts and Call to Action (01:19:13)**

Jul 29, 2024 • 4min
Episode #39 Trailer “Did AI-Risk Just Get Partisan?” For Humanity: An AI Risk Podcast
In Episode #39 Trailer, host John Sherman talks with Matthew Taber, Founder, advocate and expert in AI-risk legislation. The conversation addresses the shifting political landscape around AI-risk legislation in America in July 2024.
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
RESOURCES:
Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
Timestamps
Republican Party's AI Regulation Stance (00:00:41) The GOP platform aims to eliminate existing AI regulations, reflecting a shift in political dynamics.
Bipartisanship in AI Issues (00:01:21) AI is initially a bipartisan concern, but quickly becomes a partisan issue amidst political maneuvering.
Tech Companies' Frustration with Legislation (00:01:55) Major tech companies express dissatisfaction with California's AI bills, indicating a push for regulatory rollback.
Public Sentiment vs. Party Platform (00:02:42) Discrepancy between the GOP platform on AI and average voter opinions, highlighting a disconnect in priorities.
Polling on AI Regulation (00:03:26) Polling shows strong public support for AI regulation, raising questions about political implications and citizen engagement.

Jul 22, 2024 • 7min
Episode #38 TRAILER “France vs. AGI” For Humanity: An AI Risk Podcast
In Episode #38 TRAILER, host John Sherman talks with Maxime Fournes, Founder, Pause AI France. With the third AI “Safety” Summit coming up in Paris in February 2025, we examine France’s role in AI safety, revealing France to be among the very worst when it comes to taking AI risk seriously. How deep is madman Yann LeCun’s influence in French society and government? And would France even join an international treaty?
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
RESOURCES:
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
TIMESTAMPS
Trust in AI Awareness in France (00:00:00) Discussion on France being uninformed about AI risks compared to other countries with AI labs.
International Treaty Concerns (00:00:46) Speculation on France's reluctance to sign an international AI safety treaty.
Personal Reflections on AI Risks (00:00:57) Speaker reflects on the dilemma of believing in AI risks and choosing between action or enjoyment.
Underestimating Impact (00:01:13) The tendency of people to underestimate their potential impact on global issues.
Researching AI Risks (00:01:50) Speaker shares their journey of researching AI risks and finding weak counterarguments.
Critique of Counterarguments (00:02:23) Discussion on the absurdity of opposing views on AI risks and societal implications.
Existential Dread and Rationality (00:02:42) Connection between existential fear and irrationality in discussions about AI safety.
Shift in AI Safety Focus (00:03:17) Concerns about the diminishing focus on AI safety in upcoming summits.
Quality of AI Strategy Report (00:04:11) Criticism of a recent French AI strategy report and plans to respond critically.
Optimism about AI Awareness (00:05:04) Belief that understanding among key individuals can resolve AI safety issues.
Power Dynamics in AI Decision-Making (00:05:38) Discussion on the disproportionate influence of a small group on global AI decisions.
Cultural Perception of Impact (00:06:01) Reflection on societal beliefs that inhibit individual agency in effecting change.

Jul 17, 2024 • 1h 21min
Episode #37 “Christianity vs. AGI” For Humanity: An AI Risk Podcast
In Episode #37, host John Sherman talks with writer Peter Biles. Peter is a Christian who often writes from that perspective. He is a prolific fiction writer and has written stories and essays for a variety of publications. He was born and raised in Ada, Oklahoma, and is a contributing writer and editor for Mind Matters. The conversation centers on the intersection of Christianity and AGI, exploring questions like: What is the role of faith in a world where no one works? And could religions unite to oppose AGI?
Some of Peter Biles’s related writing:
https://mindmatters.ai/2024/07/ai-is-becoming-a-mass-tool-of-persuasion/
https://mindmatters.ai/2022/10/technology-as-the-new-god-before-whom-all-others-bow/
https://substack.com/@peterbiles
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com
This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
RESOURCES:
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
Matt Andersen - 'Magnolia' (JJ Cale Cover) LIVE at SiriusXM
JJ Cale Magnolia Flagstaff, AZ 2004
TIMESTAMPS:
**Christianity versus AGI (00:00:39)**
**Concerns about AI (00:02:45)**
**Christianity and Technology (00:05:30)**
**Interview with Peter Biles (00:11:09)**
**Effects of Social Media (00:18:03)**
**Religious Perspective on AI (00:23:57)**
**The implications of AI on Christian faith (00:24:05)**
**The Tower of Babel metaphor (00:25:09)**
**The role of humans as sub-creators (00:27:23)**
**The impact of AI on human culture and society (00:30:33)**
**The limitations of AI in storytelling and human connection (00:32:33)**
**The intersection of faith and AI in a future world (00:41:35)**
**Religious Leaders and AI (00:45:34)**
**Human Exceptionalism (00:46:51)**
**Interfaith Dialogue and AI (00:50:26)**
**Religion and Abundance (00:53:42)**
**Apocalyptic Language and AI (00:58:26)**
**Hope in Human-Oriented Culture (01:04:32)**
**Worshipping AI (01:07:55)**
**Religion and AI (01:08:17)**
**Celebration of Life (01:09:49)**

Jul 15, 2024 • 9min
Episode #37 Trailer “Christianity vs. AGI” For Humanity: An AI Risk Podcast
In Episode #37 Trailer, host John Sherman talks with writer Peter Biles. Peter is a Christian who often writes from that perspective. He is a prolific fiction writer and has written stories and essays for a variety of publications. He was born and raised in Ada, Oklahoma, and is a contributing writer and editor for Mind Matters. The conversation centers on the intersection of Christianity and AGI, exploring questions like: What is the role of faith in a world where no one works? And could religions unite to oppose AGI?
Some of Peter Biles’s related writing:
https://mindmatters.ai/2024/07/ai-is-becoming-a-mass-tool-of-persuasion/
https://mindmatters.ai/2022/10/technology-as-the-new-god-before-whom-all-others-bow/
https://substack.com/@peterbiles
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
RESOURCES:
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
TIMESTAMPS:
The impact of technology on human dignity (00:00:00) The speaker discusses the potential negative impact of technology on human dignity and the divine image.
The embodiment of souls and human dignity (00:01:00) The speaker emphasizes the spiritual nature of human beings and the importance of human dignity, regardless of religion or ethnicity.
The concept of a "sand god" and technological superiority (00:02:09) The conversation explores the cultural and religious implications of creating an intelligence superior to humans and the reference to a "sand god."
The Tower of Babel and technology (00:03:25) The speaker references the story of the Tower of Babel from the book of Genesis and its metaphorical implications for technological advancements and human hubris.
The impact of AI on communication and storytelling (00:05:26) The discussion delves into the impersonal nature of AI in communication and storytelling, highlighting the absence of human intention and soul.
Human nature, materialism, and work (00:07:38) The conversation explores the deeper understanding of human nature, the restlessness of humans, and the significance of work and creativity.

Jul 10, 2024 • 1h 25min
Episode #36 “The AI Risk Investigators: Inside Gladstone AI, Part 2” For Humanity: An AI Risk Podcast
In Episode #36, host John Sherman talks with Jeremie and Eduard Harris, CEO and CTO of Gladstone AI. Gladstone AI is the private company working most closely with the US government on assessing AI risk. The Gladstone Report, published in February, was the first public acknowledgment by the US government, in any form, of the reality of AI risk. These are two very important people doing incredibly important work. The full interview lasts more than 2 hours and is broken into two shows; this is the second of the two.
Gladstone AI Action Plan
https://www.gladstone.ai/action-plan
TIME MAGAZINE ON THE GLADSTONE REPORT
https://time.com/6898967/ai-extinction-national-security-risks-report/
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
RESOURCES:
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
TIMESTAMPS:
**The whistleblower's concerns (00:00:00)**
**Introduction to the podcast (00:01:09)**
**The urgency of addressing AI risk (00:02:18)**
**The potential consequences of falling behind in AI (00:04:36)**
**Transitioning to working on AI risk (00:06:33)**
**Engagement with the State Department (00:08:07)**
**Project assessment and public visibility (00:10:10)**
**Motivation for taking on the detective work (00:13:16)**
**Alignment with the government's safety culture (00:17:03)**
**Potential government oversight of AI labs (00:20:50)**
**The whistleblowers' concerns (00:21:52)**
**Shifting control to the government (00:22:47)**
**Elite group within the government (00:24:12)**
**Government competence and allocation of resources (00:25:34)**
**Political level and tech expertise (00:27:58)**
**Challenges in government engagement (00:29:41)**
**State department's engagement and assessment (00:31:33)**
**Recognition of government competence (00:34:36)**
**Engagement with frontier labs (00:35:04)**
**Whistleblower insights and concerns (00:37:33)**
**Whistleblower motivations (00:41:58)**
**Engagements with AI Labs (00:42:54)**
**Emotional Impact of the Work (00:43:49)**
**Workshop with Government Officials (00:44:46)**
**Challenges in Policy Implementation (00:45:46)**
**Expertise and Insights (00:49:11)**
**Future Engagement with US Government (00:50:51)**
**Flexibility of Private Sector Entity (00:52:57)**
**Impact on Whistleblowing Culture (00:55:23)**
**Key Recommendations (00:57:03)**
**Security and Governance of AI Technology (01:00:11)**
**Obstacles and Timing in Hardware Development (01:04:26)**
**The AI Lab Security Measures (01:04:50)**
**Nvidia's Stance on Regulations (01:05:44)**
**Export Controls and Governance Failures (01:07:26)**
**Concerns about AGI and Alignment (01:13:16)**
**Implications for Future Generations (01:16:33)**
**Personal Transformation and Mental Health (01:19:23)**
**Starting a Nonprofit for AI Risk Awareness (01:21:51)**

Jul 8, 2024 • 6min
Episode #36 Trailer “The AI Risk Investigators: Inside Gladstone AI, Part 2” For Humanity: An AI Risk Podcast
In Episode #36 Trailer, host John Sherman talks with Jeremie and Eduard Harris, CEO and CTO of Gladstone AI. Gladstone AI is the private company working most closely with the US government on assessing AI risk. The Gladstone Report, published in February, was the first public acknowledgment by the US government, in any form, of the reality of AI risk. These are two very important people doing incredibly important work. The full interview lasts more than 2 hours and is broken into two shows; this is the second of the two.
Gladstone AI Action Plan
https://www.gladstone.ai/action-plan
TIME MAGAZINE ON THE GLADSTONE REPORT
https://time.com/6898967/ai-extinction-national-security-risks-report/
SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
RESOURCES:
BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
TIMESTAMPS:
The assignment from the State Department (00:00:00) Discussion about the task given by the State Department team regarding the assessment of safety and security in frontier AI and advanced AI systems.
Transition to detective work (00:00:30) The transition to a detective-like approach in gathering information and engaging with whistleblowers and clandestine meetings.
Assessment of the AI safety community (00:01:05) A critique of the lack of action orientation and proactive approach in the AI safety community.
Engagement with the Department of Defense (DoD) (00:02:57) Discussion about the engagement with the DoD, its existing safety culture, and the organizations involved in testing and evaluations.
Shifting control to the government (00:03:54) Exploration of the need to shift control to the government and regulatory level for effective steering of the development of AI technology.
Concerns about weaponization and loss of control (00:04:45) A discussion about concerns regarding weaponization and loss of control in AI labs and the need for more ambitious recommendations.