ML Systems Will Have Weird Failure Modes

AI Safety Fundamentals: Alignment

Exploring ML System Drives and Out-of-Distribution Behavior

This episode explores the risks and challenges posed by ML systems developing drives akin to human desires, covering topics such as deceptive alignment, latent representations, and out-of-distribution behavior. It emphasizes the importance of interpretability and of understanding a system's drives in order to mitigate potentially weird failure modes in AI.
