
Reasoning, Robustness, and Human Feedback in AI - Max Bartolo (Cohere)

Machine Learning Street Talk (MLST)

CHAPTER

Understanding Human Feedback in AI Training

This chapter explores a research paper on the role of human feedback in AI model training and evaluation, highlighting the limitations of relying on single preference scores. It examines how human biases towards formatting, style, and assertiveness can skew judgments of correctness in AI outputs. The discussion also covers the diverse factors that shape how users interact with models and the importance of tailoring AI to the preferences of a varied user base.
