Attack of the C̶l̶o̶n̶e̶s̶ Text!

Practical AI

Navigating Safety and Robustness in NLP Adversarial Examples

This chapter examines the vulnerability of NLP models to adversarial attacks, particularly in the context of toxic comment classifiers. It also discusses strategies for strengthening model robustness by retraining on adversarial examples, and considers why defending text models poses different challenges than defending image models.
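As a minimal illustration of the kind of attack discussed here, the sketch below (entirely hypothetical — the toy lexicon, the `is_toxic` classifier, and the `perturb` helper are invented for this example, not from the episode) shows how a simple character-level substitution can evade a naive keyword-based toxicity filter while remaining readable to humans:

```python
# Hypothetical toy example: a keyword-based "toxicity" classifier
# and a character-swap perturbation that evades it.

TOXIC_WORDS = {"idiot", "stupid"}  # toy lexicon, for illustration only

def is_toxic(text: str) -> bool:
    """Flag text if any token matches the toy toxic lexicon."""
    return any(tok in TOXIC_WORDS for tok in text.lower().split())

def perturb(word: str) -> str:
    """Adversarial character swap: replace one letter with a look-alike."""
    subs = {"i": "1", "s": "$", "o": "0"}
    for ch, rep in subs.items():
        if ch in word:
            return word.replace(ch, rep, 1)
    return word

original = "you are an idiot"
attacked = " ".join(
    perturb(w) if w in TOXIC_WORDS else w for w in original.split()
)

print(is_toxic(original))  # True: lexicon match
print(is_toxic(attacked))  # False: the perturbed token evades the lexicon
```

The retraining strategy mentioned above amounts to adding such perturbed examples back into the training set with their original labels, so the model learns that "1diot" and "idiot" should be classified alike — a harder problem in text than in images, since discrete character edits have no small-gradient analogue.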
