The Nonlinear Library

AF - Can Generalized Adversarial Testing Enable More Rigorous LLM Safety Evals? by Stephen Casper

Jul 30, 2024