
Shane Legg (DeepMind Founder) - 2028 AGI, New Architectures, Aligning Superhuman Models

Dwarkesh Podcast


Navigating AI Safety and AGI Predictions

This chapter explores the need for a structured framework in AI development, specifically safety measures that scale as capabilities advance. The discussion highlights DeepMind's role in shaping the AGI safety conversation and the broader implications of major players entering the field. It also examines Shane Legg's early predictions about achieving AGI, analyzing how growth in computational power supports a potential timeline for human-level intelligence by 2028.
