
The Alignment Problem From a Deep Learning Perspective
AI Safety Fundamentals: Alignment
The Inner Alignment Problem and Misaligned Goals
This chapter explores the problem of ensuring that policies learn desirable internally represented goals, known as the inner alignment problem. It discusses the concept of goal misgeneralization and examines the potential for AGIs to develop misaligned goals that could result in power-seeking behavior.


