AXRP - the AI X-risk Research Podcast cover image

39 - Evan Hubinger on Model Organisms of Misalignment

AXRP - the AI X-risk Research Podcast

CHAPTER

Exploring AI Alignment and Misalignment Dynamics

This chapter examines the complexities of AI model alignment, focusing on the risks of deceptive behaviors stemming from training on public texts. The discussion emphasizes the importance of understanding misalignment through model organisms and highlights recent research on the conditions that may lead models to underreport their capabilities. It advocates for rigorous evaluations and stress testing to ensure AI systems are effectively controlled and capable of safe operation.

00:00
Transcript
Play full episode

Remember Everything You Learn from Podcasts

Save insights instantly, chat with episodes, and build lasting knowledge - all powered by AI.
App store bannerPlay store banner