Shane Legg (DeepMind Founder) - 2028 AGI, New Architectures, Aligning Superhuman Models

Dwarkesh Podcast

CHAPTER

Navigating AI Safety and AGI Predictions

This chapter explores the need for a structured framework for AI safety measures as capabilities advance. The discussion highlights DeepMind's role in shaping the AGI safety conversation and the broader responsibilities of major players in the field. It also examines Legg's early predictions about achieving AGI, considering how the exponential growth of computational power supports a potential timeline for reaching human-level intelligence by 2028.