
For Humanity: An AI Safety Podcast

Episode #36: “The AI Risk Investigators: Inside Gladstone AI, Part 2”

Jul 10, 2024
01:25:28

In Episode #36, host John Sherman talks with Jeremie and Edouard Harris, CEO and CTO of Gladstone AI. Gladstone AI is the private company working most closely with the US government on assessing AI risk. The Gladstone Report, published in February, was the US government's first public acknowledgment, in any form, of the reality of AI risk. These are two very important people doing incredibly important work. The full interview runs more than two hours and has been split into two shows; this is the second of the two.

Gladstone AI Action Plan

https://www.gladstone.ai/action-plan

TIME MAGAZINE ON THE GLADSTONE REPORT

https://time.com/6898967/ai-extinction-national-security-risks-report/

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!

https://www.youtube.com/@DoomDebates

 

Please Donate Here To Help Promote For Humanity

https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on Earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as two years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

For Humanity Theme Music by Josef Ebner

Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg

Website: https://josef.pictures

RESOURCES:

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!

https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

JOIN THE FIGHT, help Pause AI!!!!

Pause AI

Join the Pause AI Weekly Discord Thursdays at 2pm EST


https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety

Statement on AI Risk | CAIS

https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes 

https://twitter.com/AISafetyMemes

TIMESTAMPS:

**The whistleblower's concerns (00:00:00)**

**Introduction to the podcast (00:01:09)**

**The urgency of addressing AI risk (00:02:18)**

**The potential consequences of falling behind in AI (00:04:36)**

**Transitioning to working on AI risk (00:06:33)**

**Engagement with the State Department (00:08:07)**

**Project assessment and public visibility (00:10:10)**

**Motivation for taking on the detective work (00:13:16)**

**Alignment with the government's safety culture (00:17:03)**

**Potential government oversight of AI labs (00:20:50)**

**The whistleblowers' concerns (00:21:52)**

**Shifting control to the government (00:22:47)**

**Elite group within the government (00:24:12)**

**Government competence and allocation of resources (00:25:34)**

**Political level and tech expertise (00:27:58)**

**Challenges in government engagement (00:29:41)**

**State Department's engagement and assessment (00:31:33)**

**Recognition of government competence (00:34:36)**

**Engagement with frontier labs (00:35:04)**

**Whistleblower insights and concerns (00:37:33)**

**Whistleblower motivations (00:41:58)**

**Engagements with AI Labs (00:42:54)**

**Emotional Impact of the Work (00:43:49)**

**Workshop with Government Officials (00:44:46)**

**Challenges in Policy Implementation (00:45:46)**

**Expertise and Insights (00:49:11)**

**Future Engagement with US Government (00:50:51)**

**Flexibility of Private Sector Entity (00:52:57)**

**Impact on Whistleblowing Culture (00:55:23)**

**Key Recommendations (00:57:03)**

**Security and Governance of AI Technology (01:00:11)**

**Obstacles and Timing in Hardware Development (01:04:26)**

**The AI Lab Security Measures (01:04:50)**

**Nvidia's Stance on Regulations (01:05:44)**

**Export Controls and Governance Failures (01:07:26)**

**Concerns about AGI and Alignment (01:13:16)**

**Implications for Future Generations (01:16:33)**

**Personal Transformation and Mental Health (01:19:23)**

**Starting a Nonprofit for AI Risk Awareness (01:21:51)**

