“Foom & Doom 2: Technical alignment is hard” by Steven Byrnes

LessWrong (Curated & Popular)

Exploring Inner and Outer Misalignment in AI Alignment

This chapter explores the challenges of inner and outer misalignment in AI, particularly through the lens of actor-critic reinforcement learning. It examines how misalignment can produce outcomes that diverge from programmer intentions, highlighting the risks of goal misgeneralization and irreversible actions in AI development.
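For readers unfamiliar with the framing the chapter assumes: in actor-critic reinforcement learning, an "actor" (the policy) chooses actions, while a "critic" estimates how good outcomes are and supplies the error signal that trains the actor. The following toy sketch is my own illustration of that loop, not material from the episode — a two-armed bandit where arm 1 pays reward 1.0 and arm 0 pays nothing, with a softmax policy as the actor and a single scalar value estimate as the critic:

```python
import math
import random

random.seed(0)

logits = [0.0, 0.0]   # actor parameters: one logit per arm
V = 0.0               # critic: running estimate of expected reward
alpha_actor, alpha_critic = 0.1, 0.1

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

for step in range(2000):
    probs = softmax(logits)
    # actor samples an action from its current policy
    a = 0 if random.random() < probs[0] else 1
    reward = 1.0 if a == 1 else 0.0   # arm 1 is the "good" arm
    # critic: error between observed reward and its baseline estimate
    delta = reward - V
    V += alpha_critic * delta
    # actor: policy-gradient update, scaled by the critic's error signal
    for i in range(2):
        grad = (1.0 if i == a else 0.0) - probs[i]
        logits[i] += alpha_actor * delta * grad

# After training, the policy should strongly prefer the rewarding arm
print(round(softmax(logits)[1], 2))
```

The critic's baseline is what distinguishes this from plain policy gradient: the actor is reinforced not by raw reward but by how much the reward *exceeded expectations* — the kind of indirect training signal the episode's misalignment discussion turns on.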

