
#47 - Catherine Olsson & Daniel Ziegler on the fast path into high-impact ML engineering roles

80,000 Hours Podcast


How Do You Learn the Reward Function From Human Preferences?

Daniel Ziegler: What is the main agenda there that you're contributing to? Yes, there was a NIPS paper last year called Deep Reinforcement Learning from Human Preferences. And so now what we're trying to do is take that idea and add some other, richer mechanisms for learning from human feedback. So we're building agents which can sort of speak in natural language themselves, trying to scale those up and move in the direction of solving more real tasks.
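The core idea of the paper mentioned above is to fit a reward model so that segments a human preferred get higher predicted return, using a Bradley-Terry style probability over pairs. Below is a minimal, hypothetical sketch of that training loop with a linear reward model and simulated human labels; the feature dimensions, learning rate, and the `segment_return` / `update` helpers are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def segment_return(w, segment):
    """Predicted return of a trajectory segment under a linear reward r(s) = w . s.
    (A linear model stands in for the neural net used in practice.)"""
    return sum(w @ s for s in segment)

def preference_prob(w, seg_a, seg_b):
    """Bradley-Terry probability that a human prefers seg_a over seg_b."""
    ra, rb = segment_return(w, seg_a), segment_return(w, seg_b)
    return 1.0 / (1.0 + np.exp(rb - ra))

def update(w, seg_a, seg_b, pref_a, lr=0.1):
    """One gradient step on the cross-entropy loss over the preference label."""
    p = preference_prob(w, seg_a, seg_b)
    grad = pref_a - p                       # d(-loss)/d(ra - rb)
    feat_diff = sum(seg_a) - sum(seg_b)     # d(ra - rb)/dw for the linear model
    return w + lr * grad * feat_diff

# Toy setup: the "true" (hidden) reward cares only about the first feature.
true_w = np.array([1.0, 0.0])
w = np.zeros(2)
for _ in range(200):
    seg_a = [rng.normal(size=2) for _ in range(3)]
    seg_b = [rng.normal(size=2) for _ in range(3)]
    # Simulated human label: prefers the segment with higher true return.
    pref_a = float(segment_return(true_w, seg_a) > segment_return(true_w, seg_b))
    w = update(w, seg_a, seg_b, pref_a)

print(w)  # the learned weights should emphasize the first feature
```

In the real system the reward model is a neural network, the comparisons come from actual human raters, and an RL agent is trained against the learned reward in a loop; this sketch only shows the preference-fitting step.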
