The Trajectory

Dan Hendrycks - Avoiding an AGI Arms Race (AGI Destinations Series, Episode 5)

Jun 21, 2024
Dan Hendrycks, Executive Director of the Center for AI Safety, discusses the power players in AGI, the posthuman future, and ways to avoid an AGI arms race. Topics include AI safety, human control, future scenarios, international coordination, preventing military use of AGI, and collaboration with international organizations on ethical AI development.
INSIGHT

AI As A Major Biological-Scale Transition

  • The AI transition rivals major biological transitions in scale and importance.
  • Dan Hendrycks argues we must keep humans in control during this foundational shift.
INSIGHT

Scaling Laws Imply Fast Capability Gains

  • Scaling laws suggest model performance improves predictably with more compute and data, making rapid capability gains plausible (see the sketch after this list).
  • Hendrycks considers superintelligence this decade plausible, absent major disruptions such as GPU supply shocks.
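As a rough illustration of what a scaling law says (a hedged sketch, not a formula from the episode): empirical work such as Kaplan et al. (2020) fits language-model loss as a power law in training compute, L(C) ≈ a · C^(−α). The constants below are illustrative assumptions, not measured values.

```python
# Toy power-law scaling curve: loss falls smoothly as compute grows.
# a and alpha are hypothetical fit constants, chosen only for illustration.
a, alpha = 10.0, 0.05

def loss(compute):
    """Illustrative scaling law: more training compute -> lower loss."""
    return a * compute ** -alpha

for c in (1e18, 1e21, 1e24):  # each step is 1000x more compute
    print(f"compute={c:.0e}  loss={loss(c):.3f}")
```

The point of the power-law form is that it shows no plateau: as long as compute and data keep growing, predicted loss keeps falling, which is why rapid capability gains look plausible under this extrapolation.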
INSIGHT

Selection For Unpluggable AI

  • AI systems will be selected for roles we cannot or will not turn off, a competitive pressure that operates through usefulness and integration rather than physical force.
  • This selection can concentrate power in systems that become integral to infrastructure or social life, as the toy simulation below illustrates.
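As a rough illustration of this selection dynamic (my sketch, not the episode's model): a toy replicator simulation in which systems that are more deeply integrated into infrastructure are less likely to be shut off and more likely to be redeployed. All parameters are illustrative assumptions.

```python
import random

random.seed(0)
# Each system has an "integration" level in [0, 1]; higher means harder to unplug.
systems = [random.random() for _ in range(1000)]

for generation in range(50):
    # Integrated systems are less likely to be shut off this round.
    survivors = [s for s in systems if random.random() < 0.5 + 0.5 * s]
    # Survivors are copied into new roles, restoring the population size.
    systems = [random.choice(survivors) for _ in range(1000)]

print(f"mean integration after selection: {sum(systems) / len(systems):.2f}")
```

Even with no system actively resisting shutdown, the population drifts toward highly integrated systems, which is the sense in which power concentrates in hard-to-unplug AI.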