
Erik Jones on Automatically Auditing Large Language Models

The Inside View


Exploring Adversarial Attacks and Auditing Language Models

This chapter explores the vulnerabilities of language models to adversarial attacks and the importance of systematic studies for understanding their limitations. It highlights jailbreaking techniques and discusses a research paper centered on automatically auditing these models to uncover hidden behaviors.

