
What the Ex-OpenAI Safety Employees Are Worried About — With William Saunders and Lawrence Lessig
Big Technology Podcast
00:00
Intro
This chapter explores a former OpenAI Superalignment team member's concerns that the company prioritizes rapid development over sound safety practices. Contrasting the Apollo program with the Titanic, it underscores the urgency of addressing the risks of artificial general intelligence.