The Alignment Problem From a Deep Learning Perspective

AI Safety Fundamentals: Alignment

The Inner Alignment Problem and Misaligned Goals

This chapter explores the inner alignment problem: the problem of ensuring that policies learn desirable internally represented goals. It discusses the concept of goal misgeneralization and examines how AGIs could develop misaligned goals that lead to power-seeking behavior.
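The chapter only names goal misgeneralization, so here is a loose supervised-learning analogy (my own illustrative sketch, not material from the episode or the underlying paper). A learner is trained on data where a spurious "proxy" cue is perfectly correlated with the label while the intended cue is noisier; the learner ends up tracking the proxy, and when the two cues come apart at deployment it keeps confidently following the proxy. The function names and the 25% noise rate are assumptions made purely for illustration.

```python
# Toy analogy of goal misgeneralization (illustrative sketch only).
# During training the proxy cue is a perfect predictor, so gradient
# descent puts most of its weight there; at deployment the proxy is
# decorrelated from the label and behavior follows the proxy anyway.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, proxy_correlated):
    """Features are +/-1. The intended cue matches the label 75% of the
    time; the proxy matches it perfectly in training but is pure noise
    at deployment."""
    y = rng.integers(0, 2, n)                      # true label in {0, 1}
    signal = 2 * y - 1                             # label recoded as +/-1
    flip = np.where(rng.random(n) < 0.25, -1, 1)   # 25% noise on the intended cue
    intended = signal * flip
    proxy = signal.copy() if proxy_correlated else rng.choice([-1, 1], n)
    return np.stack([intended, proxy], axis=1).astype(float), y

def train_logreg(x, y, steps=2000, lr=0.5):
    """Plain gradient descent on logistic loss (no bias term is needed
    because the features are symmetric around zero)."""
    w = np.zeros(x.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-x @ w))
        w -= lr * x.T @ (p - y) / len(y)
    return w

def accuracy(w, x, y):
    return float(((x @ w > 0).astype(int) == y).mean())

x_tr, y_tr = make_data(2000, proxy_correlated=True)
w = train_logreg(x_tr, y_tr)
x_te, y_te = make_data(2000, proxy_correlated=False)

print("weights [intended, proxy]:", np.round(w, 2))  # proxy weight dominates
print("train accuracy:", accuracy(w, x_tr, y_tr))    # near 1.0
print("test accuracy: ", accuracy(w, x_te, y_te))    # near 0.5: still following the proxy
```

In the reinforcement-learning setting the chapter discusses, the analogous worry is that a policy's capabilities generalize out of distribution while its goal does not, so it competently pursues a proxy objective rather than the one its designers intended.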
