#159 – Jan Leike on OpenAI's massive push to make superintelligence safe in 4 years or less

80,000 Hours Podcast
Aim for Human-Level Alignment: An Achievable Goal in Alignment Research

Rather than worrying about aligning GPT-20, focus on aligning GPT-5, then collaborate with GPT-5 to align GPT-6, and so on. This makes the starting goal more achievable: study GPT-5 empirically and fine-tune it on alignment data so it can assist with alignment research.

