
#159 – Jan Leike on OpenAI's massive push to make superintelligence safe in 4 years or less

80,000 Hours Podcast


The Impact of Reinforcement Learning on AI Alignment

This chapter explores the crucial role of Reinforcement Learning from Human Feedback (RLHF) in AI alignment, highlighting its practical applications and commercial significance. It also examines advances in the interpretability of vision models and the remaining challenges in ensuring safety and alignment.

