What the Ex-OpenAI Safety Employees Are Worried About — With William Saunders and Lawrence Lessig

Big Technology Podcast

Intro

This chapter explores a former OpenAI Superalignment team member's concerns that the company prioritizes rapid development over sound safety practices. Contrasting the Apollo program's careful risk management with the Titanic's overconfidence, it underscores the urgency of addressing the risks of artificial general intelligence.
