
For Humanity: An AI Safety Podcast

Episode #31 - “Trucker vs. AGI” For Humanity: An AI Risk Podcast

Jun 5, 2024
01:15:53

In Episode #31, John Sherman interviews Leighton, a 29-year-old American truck driver, about his concerns over artificial intelligence and human extinction. They discuss the urgency of raising awareness about AI risks, the potential job displacement in industries like trucking, and the geopolitical implications of AI advancements. Leighton shares his plans to start a podcast and possibly use filmmaking to engage the public in AI safety discussions. Despite skepticism from others, they stress the importance of community and dialogue in understanding and mitigating AI threats, with Leighton highlighting the risk of a "singleton event" and ethical concerns in AI development.

Full Interview Starts at (00:10:18)

Please Donate Here To Help Promote For Humanity

https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it’s not opinion either. It is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable, probable outcome: the end of all life on Earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as two years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Timestamps

- Leighton's Introduction (00:00:00)

- Introduction to the Podcast (00:02:19)

- Power of the First Followers (00:03:24)

- Leighton's Concerns about AI (00:08:49)

- Leighton's Background and AI Awareness (00:11:11)

- Challenges in Spreading Awareness (00:14:18)

- Distrust of Government and Family Involvement (00:23:20)

- Government Imperfections (00:25:39)

- AI Impact on National Security (00:26:45)

- AGI Decision-Making (00:28:14)

- Government Oversight of AGI (00:29:32)

- Geopolitical Tension and AI (00:31:51)

- Job Loss and AGI (00:37:20)

- AI, Mining, and Space Race (00:38:02)

- Public Engagement and AI (00:44:34)

- Philosophical Perspective on AI (00:49:45)

- The Existential Threat of AI (00:51:05)

- Geopolitical Tensions and AI Risks (00:52:05)

- AI's Potential for Global Dominance (00:53:48)

- Ethical Concerns and AI Welfare (01:01:21)

- Preparing for AI Risks (01:03:02)

- The Challenge of Raising Awareness (01:06:42)

- A Hopeful Outlook (01:08:28)

RESOURCES:

Leighton’s Podcast on YouTube:

https://www.youtube.com/@UrNotEvenBasedBro

JOIN THE FIGHT, help Pause AI!!!!

Pause AI

Join the Pause AI Weekly Discord Thursdays at 2pm EST


https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety

Statement on AI Risk | CAIS

https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes 

https://twitter.com/AISafetyMemes
