
#159 – Jan Leike on OpenAI's massive push to make superintelligence safe in 4 years or less

80,000 Hours Podcast

The Impact of Reinforcement Learning on AI Alignment

This chapter explores the central role of Reinforcement Learning from Human Feedback (RLHF) in AI alignment, highlighting its practical applications and commercial significance. It also examines advances in the interpretability of vision models and the remaining challenges in ensuring safety and alignment.
