AXRP - the AI X-risk Research Podcast cover image

39 - Evan Hubinger on Model Organisms of Misalignment

AXRP - the AI X-risk Research Podcast

00:00

Exploring AI Alignment and Misalignment Dynamics

This chapter examines the complexities of AI model alignment, focusing on the risks of deceptive behaviors stemming from training on public texts. The discussion emphasizes the importance of understanding misalignment through model organisms and highlights recent research on the conditions that may lead models to underreport their capabilities. It advocates for rigorous evaluations and stress testing to ensure AI systems are effectively controlled and capable of safe operation.

Transcript
Play full episode

The AI-powered Podcast Player

Save insights by tapping your headphones, chat with episodes, discover the best highlights - and more!
App store bannerPlay store banner
Get the app