24 - Superalignment with Jan Leike

AXRP - the AI X-risk Research Podcast

NOTE

Automated Alignment Researcher's Objective

The objective of the automated alignment researcher is to solve the overarching problem of aligning superintelligence. This involves first aligning a roughly human-level system, then tackling the additional problems that arise as intelligence levels advance.
