

On Adversarial Training & Robustness with Bhavna Gopal
May 8, 2024
Bhavna Gopal, a PhD candidate at Duke with research experience at top tech companies, uncovers the world of adversarial training and AI robustness. She explains how adversarial attacks threaten AI model integrity, especially in sensitive fields like healthcare and law. The conversation touches on the challenges of evaluating model performance and the ethical ramifications of AI deployment. Also discussed are the complexities of self-driving cars and the importance of interpretability in ensuring public trust in AI technologies.
Chapters
Intro
00:00 • 2min
Adversarial Attacks and AI Robustness
02:13 • 15min
AI Trust and Self-Driving Cars
17:19 • 4min
Exploring Adversarial Training and Accuracy Metrics in Machine Learning
21:02 • 2min
Understanding Language Models: Challenges and Explainability
23:25 • 2min
The Intricacies of Intuition and Recognition
25:02 • 2min
Navigating AI's Explainability and Trust in Critical Fields
27:14 • 17min