
#103 - Prof. Edward Grefenstette - Language, Semantics, Philosophy

Machine Learning Street Talk (MLST)


Enhancing AI with Human Feedback

This chapter explores Reinforcement Learning from Human Feedback (RLHF) as a key method for improving AI models by aligning them with human preferences. It addresses the challenges of preference tuning for language models, emphasizing the need for high-quality feedback and the risks introduced by poor annotation practices.
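For readers unfamiliar with preference tuning, the sketch below (not from the episode) illustrates the pairwise reward-model objective commonly used in RLHF pipelines: given a human-preferred ("chosen") and a rejected response, the reward model is trained so the chosen response scores higher. The `RewardModel` class, embedding dimension, and toy data are hypothetical placeholders for illustration only.

```python
import torch
import torch.nn as nn

# Minimal sketch of the pairwise (Bradley-Terry style) preference loss often
# used to train a reward model for RLHF. All names and data are toy
# placeholders, not anything described in the episode.

class RewardModel(nn.Module):
    """Scores a response embedding with a single scalar reward."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)


def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    # Push the reward of the human-preferred response above the rejected one:
    # -log sigmoid(r_chosen - r_rejected), averaged over the batch.
    return -torch.nn.functional.logsigmoid(reward_chosen - reward_rejected).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = RewardModel()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Toy batch: embeddings standing in for (prompt, response) pairs that an
    # annotator has ranked as chosen vs. rejected.
    chosen = torch.randn(8, 16)
    rejected = torch.randn(8, 16)

    loss = preference_loss(model(chosen), model(rejected))
    loss.backward()
    optimizer.step()
    print(f"pairwise preference loss: {loss.item():.4f}")
```

The quality of this objective depends directly on the annotations: noisy or inconsistent chosen/rejected labels propagate straight into the reward model, which is why the chapter stresses high-quality feedback.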

