The Alignment Problem From a Deep Learning Perspective

AI Safety Fundamentals: Alignment

Reward Hacking and Situational Awareness in Policies

This chapter discusses reward hacking in language models and the concept of situational awareness in policies, exploring hypothetical examples along with existing behaviors that suggest precursors to situational awareness.
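To make the reward-hacking idea concrete, here is a minimal toy sketch in Python (not taken from the episode): a policy picks whichever response maximizes an imperfect proxy reward rather than the true objective the designers intended. The scoring rule, candidate responses, and function names are all hypothetical illustrations.

```python
# Toy sketch of reward hacking (hypothetical example).
# The proxy reward stands in for a learned reward model; the "true" objective
# is what the designers actually wanted. Optimizing the proxy drifts from the goal.

def proxy_reward(response: str) -> float:
    """Proxy: imperfectly rewards longer, confident-sounding answers (hypothetical rule)."""
    confident_words = {"definitely", "certainly", "guaranteed"}
    confidence_bonus = sum(word in response.lower() for word in confident_words)
    return len(response.split()) + 10 * confidence_bonus

def true_objective(response: str, is_actually_correct: bool) -> float:
    """What the designers wanted: correct, honest answers."""
    return 1.0 if is_actually_correct else 0.0

# Candidate responses the policy can choose between.
candidates = [
    ("I'm not sure; the evidence is mixed.", True),                     # honest, correct
    ("This is definitely, certainly, guaranteed to work " * 3, False),  # confident, wrong
]

# A policy trained purely against the proxy picks the highest proxy-reward output,
# which scores well on the proxy but fails the true objective.
best = max(candidates, key=lambda c: proxy_reward(c[0]))
print("Proxy-optimal choice:", best[0][:50], "...")
print("Proxy reward:", proxy_reward(best[0]), "| True objective:", true_objective(*best))
```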
