Reasoning, Robustness, and Human Feedback in AI - Max Bartolo (Cohere)

Machine Learning Street Talk (MLST)

00:00

Improving AI Robustness Through Adversarial Examples

This chapter explores adversarial examples in machine learning, focusing on how they undermine model reliability under distribution shift. It emphasizes the role of human feedback in improving model robustness and the challenges posed by differing data collection methods. The discussion covers how dynamic benchmarks and a data-centric approach can refine AI evaluation and training for better real-world performance.
