Big Technology Podcast

What the Ex-OpenAI Safety Employees Are Worried About — With William Saunders and Lawrence Lessig

Jul 3, 2024
William Saunders, a former member of OpenAI's Superalignment team, expresses concern that the company's rapid development is overshadowing safety. Joining him is Lawrence Lessig, a Harvard Law professor advocating for whistleblower protections in AI. They discuss the troubling culture within OpenAI and the proposed 'Right to Warn' policy, which would let insiders raise safety issues without fear of retaliation. The conversation draws on historical analogies for modern tech dilemmas and emphasizes the urgent need for transparency and proactive measures to prevent potential disasters in AI development.
ANECDOTE

OpenAI vs. Titanic Analogy

  • William Saunders compared OpenAI's trajectory to the Titanic.
  • He felt that leadership prioritized product releases over safety, in contrast to the safety-first engineering culture of NASA's Apollo program.
ANECDOTE

Manhattan Project Analogy

  • Saunders chose the Titanic analogy to highlight preventable safety failures.
  • He also compared AI's potential impact to the Manhattan Project, where scientists' initially good intentions shifted as the work progressed.
INSIGHT

OpenAI's Dual Nature

  • OpenAI functions as both a research and product company.
  • Saunders' concern stems from the company's product-focused approach, despite its stated research goals and the potentially world-altering impact of AGI.