
38.5 - Adrià Garriga-Alonso on Detecting AI Scheming
AXRP - the AI X-risk Research Podcast
00:00
Intro
This chapter features a discussion from the Bay Area Alignment Workshop centered on machine learning research, with an emphasis on safety and interpretability. Adrià Garriga-Alonso shares insights from his work in mechanistic interpretability and reflects on how the workshop will shape his future research.