#23 - How to actually become an AI alignment researcher, according to Dr Jan Leike

80,000 Hours Podcast

Designing Objective Functions and AI Safety

This chapter explores the work of a researcher at DeepMind who focuses on designing objective functions for AI systems and making machine learning more robust. The speakers discuss AI's potential to help solve global problems, as well as the risks that accompany new technology and the need to ensure AI systems are aligned with human intentions.

