Special: Defeating AI Defenses (with Nicholas Carlini and Nathan Labenz)

Future of Life Institute Podcast

Navigating Adversarial Landscapes

This chapter explores the complexities of adversarial attacks on machine learning models, drawing insights from a recent study on obfuscated activations. It examines the nuanced relationship between model training, human perception, and adversarial examples, questioning the effectiveness of model sparsity and compression in enhancing robustness. The discussion emphasizes the need for resilient AI systems that adapt to vulnerabilities through human oversight and memory mechanisms.
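
For listeners unfamiliar with the attacks discussed in this chapter, the sketch below shows one of the simplest ways to craft an adversarial example: the Fast Gradient Sign Method (FGSM), which perturbs an input in the direction that increases the model's loss. This is an illustrative PyTorch sketch, not code from the study discussed in the episode; the model, the epsilon budget, and the input/label tensors are all assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft an adversarial example with the Fast Gradient Sign Method.

    model: any differentiable classifier (assumed, not from the episode)
    x: input batch in [0, 1], y: true labels, epsilon: perturbation budget
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon,
    # then clamp back to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

Stronger attacks of the kind the episode covers iterate this step many times or optimize in a model's latent space, but the core idea, following the gradient of the loss with respect to the input, is the same.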
