
Big Technology Podcast

What the Ex-OpenAI Safety Employees Are Worried About — With William Saunders and Lawrence Lessig

Jul 3, 2024
Former OpenAI Superalignment team member William Saunders and Harvard Law School professor Lawrence Lessig discuss safety concerns within the AI community about OpenAI. They touch on the 'Right to Warn' policy, parallels between AI development and historical engineering projects, and the need to prioritize safety over rapid product development.
48:13

Podcast summary created with Snipd AI

Quick takeaways

  • Former OpenAI employees raised concerns that the company prioritizes product launches over safety work, creating ethical risks.
  • Whistleblower protection and regulatory oversight are essential to ensure accountability and responsible AI development.

Deep dives

Concerns about OpenAI's Trajectory and Prioritization of Safety

A former member of OpenAI's Superalignment team voiced concerns about the company's trajectory, asking whether its approach more closely resembles the Apollo program or the Titanic. Despite OpenAI's stated mission of building safe and beneficial AGI, Saunders saw a shift toward prioritizing product launches over safety, which led to his resignation.
