
Reasoning, Robustness, and Human Feedback in AI - Max Bartolo (Cohere)

Machine Learning Street Talk (MLST)

CHAPTER

Improving AI Robustness Through Adversarial Examples

This chapter explores adversarial examples in machine learning and their impact on model reliability under distribution shift. It emphasizes the role of human feedback in improving model robustness and the challenges posed by differing data collection methods, and it discusses how dynamic benchmarks and a data-centric approach can refine AI evaluation and training for better performance in real-world applications.
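As background for the discussion, the sketch below illustrates what an adversarial example is using the fast gradient sign method (FGSM), a standard textbook technique; it is not a method described in the episode. The tiny linear model, random input, and `epsilon` budget are all hypothetical, and PyTorch is assumed to be available.

```python
# Minimal FGSM sketch: perturb an input in the direction that increases the
# model's loss, producing a candidate adversarial example.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical stand-in classifier: a single linear layer over 10 features.
model = torch.nn.Linear(10, 2)
x = torch.randn(1, 10, requires_grad=True)  # an arbitrary input
y = torch.tensor([1])                       # an arbitrary "true" label

# Forward/backward pass to get the gradient of the loss w.r.t. the input.
loss = F.cross_entropy(model(x), y)
loss.backward()

# FGSM step: move the input along the sign of the input gradient.
epsilon = 0.25                              # perturbation budget (illustrative)
x_adv = (x + epsilon * x.grad.sign()).detach()

# The perturbed input may now be classified differently from the original.
print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

In practice, as the chapter notes, robustness work of this kind extends beyond gradient-based perturbations to adversarial data collected with humans in the loop and evaluated on dynamic benchmarks.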
