Can We Scale Human Feedback for Complex AI Tasks?

AI Safety Fundamentals: Alignment

Introduction

This episode explores the complexities of using human feedback to train AI models, including instances of deception and sycophancy, and introduces scalable techniques for improving the oversight and alignment of AI models.
