Matjaz Leonardis | Interpretability and Security of AI Models

Foresight Institute Radio

Exploring the Security and Interpretability of AI Models with Hidden Backdoors

The chapter explores the security challenges posed by backdoors in AI models, how adversarial examples can manipulate machine learning models, and the implications for AI interpretability and robustness.
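The adversarial examples mentioned above can be illustrated with a minimal sketch (not from the episode): the Fast Gradient Sign Method applied to a toy logistic classifier. All weights, inputs, and the epsilon value here are invented for illustration; real attacks target deep networks, but the mechanism, perturbing each input feature in the direction that increases the loss, is the same.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical toy linear classifier; weights are made up for illustration.
w = [2.0, -1.5]
b = 0.0

def predict(x):
    """Probability that x belongs to class 1."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return sigmoid(z)

def fgsm(x, y, eps):
    """Fast Gradient Sign Method: move each feature a step of size eps
    in the sign of the loss gradient, increasing the logistic loss."""
    p = predict(x)
    grad = [(p - y) * wi for wi in w]  # dL/dx for cross-entropy loss
    return [xi + eps * math.copysign(1.0, g) for xi, g in zip(x, grad)]

x = [0.3, 0.5]                 # clean input, predicted class 0 (p < 0.5)
x_adv = fgsm(x, y=0, eps=0.1)  # small perturbation flips it to class 1
```

With these made-up numbers, a perturbation of only 0.1 per feature is enough to flip the prediction, which is the robustness concern the chapter raises.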
