#158 – Holden Karnofsky on how AIs might take over even if they're no smarter than humans, and his 4-part playbook for AI risk

80,000 Hours Podcast

NOTE

The Evaluations Project: Assessing Risks of AI Systems

The Alignment Research Center is evaluating the risks and dangers posed by AI systems, and is developing proto-standards and proto-expectations for containing those systems and ensuring safety.

Designing evaluations and standards for AI systems is a challenging task with many intricate details and considerations.

Auditor access to AI models, and an understanding of those models' limitations, are important factors in evaluating their dangerous capabilities.

Defining standards and determining when AI systems become more powerful and dangerous is complex and multi-dimensional.

Rather than following a prescriptive standard, companies may need to propose their own tests and criteria for determining when their AI systems become too dangerous to scale further.
