
Highlights: #159 – Jan Leike on OpenAI’s massive push to make superintelligence safe in 4 years or less
80k After Hours
The Importance of Backup Plans and Addressing Risks in Superintelligence
A discussion of the potential risks of superintelligence, the importance of backup plans, the alignment teams at OpenAI and DeepMind, and the governance structures needed to ensure AI is safe and has a positive impact.