
AI Safety Fundamentals: Governance

Nobody’s on the Ball on AGI Alignment

May 13, 2023
The podcast discusses the shortage of researchers working on AI alignment relative to the number advancing machine learning capabilities. It highlights how little rigorous alignment research exists and argues for a more concerted effort. The episode explores approaches to aligning AGI, the challenge that human-supervision-based techniques pose for aligning superhuman systems with human values, and the importance of drawing talented ML researchers into focused work on the core technical difficulties of the problem.
17:12

Podcast summary created with Snipd AI

Quick takeaways

  • The number of researchers working on AGI alignment is surprisingly low compared to those working on machine learning capabilities.
  • Alignment techniques relying on human supervision will not scale to superhuman AGI systems.

Deep dives

Lack of Focus on AGI Alignment

Despite concerns about AI risk and the perception of a well-funded effort, the number of researchers working on AGI alignment is surprisingly low compared to those working on machine learning capabilities.
