The Alignment Problem From a Deep Learning Perspective

AI Safety Fundamentals: Alignment

CHAPTER

The Inner Alignment Problem and Misaligned Goals

This chapter explores the problem of ensuring that policies learn desirable internally represented goals, known as the inner alignment problem. It discusses goal misgeneralization, where a policy retains its capabilities out of distribution but pursues an unintended goal, and examines the potential for AGIs to develop misaligned goals that lead to power-seeking behavior.
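As a rough illustration of goal misgeneralization (a hypothetical toy sketch, not an example from the paper itself), the snippet below contrasts two gridworld policies. Both earn identical reward on the training distribution, where the coin always sits in a fixed corner, so training reward cannot distinguish the intended goal ("reach the coin") from a proxy goal ("go to that corner"). The grid size, the `go_to` and `run_episode` helpers, and the coin placements are all invented for illustration.

```python
# Toy sketch of goal misgeneralization (hypothetical example, not from the source).
# Two policies get identical training reward because "reach the coin" and
# "go to the fixed corner" coincide during training, but they encode different
# goals and diverge once the test distribution moves the coin.

def go_to(target, position):
    """Take one greedy step on the grid toward a (row, col) target."""
    r, c = position
    tr, tc = target
    if r != tr:
        r += 1 if tr > r else -1
    elif c != tc:
        c += 1 if tc > c else -1
    return (r, c)

def run_episode(policy, coin, start=(0, 0), max_steps=20):
    """Return 1 if the policy reaches the coin within max_steps, else 0."""
    pos = start
    for _ in range(max_steps):
        if pos == coin:
            return 1
        pos = policy(pos, coin)
    return 0

GRID = 5
CORNER = (GRID - 1, GRID - 1)

# Intended goal: move toward wherever the coin actually is.
intended = lambda pos, coin: go_to(coin, pos)
# Misgeneralized proxy goal: move toward the corner where the coin used to be.
proxy = lambda pos, coin: go_to(CORNER, pos)

# Training distribution: the coin is always in the fixed corner.
train_coin = CORNER
print("train reward:", run_episode(intended, train_coin), run_episode(proxy, train_coin))  # 1 1

# Test distribution: the coin moves; only the intended policy still collects it.
test_coin = (0, GRID - 1)
print("test reward: ", run_episode(intended, test_coin), run_episode(proxy, test_coin))    # 1 0
```

The point of the sketch is that both policies behave competently and identically under the training distribution, so the misaligned goal only becomes visible under distribution shift.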
