39 - Evan Hubinger on Model Organisms of Misalignment

AXRP - the AI X-risk Research Podcast

Exploring Model Organisms for AI Alignment Research

This chapter explores the importance of model organisms in AI alignment research, showing how they can be used to test interventions against different threat models. It also stresses the need both to evaluate alignment techniques and to raise public awareness of the risks posed by AI misalignment.
