
Learning From Human Preferences

AI Safety Fundamentals: Alignment


Learning From Human Preferences on the OpenAI Blog

Learning from human feedback can do better than reinforcement learning with the normal reward function. Our algorithm's performance is only as good as the human evaluators' intuition about what behaviors look correct. We think techniques like this are a step towards safe AI systems capable of learning human-centric goals. If you're interested in working on problems like this, please join us.
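The method behind the blog post learns a reward model from pairwise human comparisons of short trajectory segments, then trains a policy against that learned reward with standard reinforcement learning. As a rough illustration of the preference-learning step, here is a minimal sketch of a Bradley-Terry style comparison loss; the class and function names, tensor shapes, and network sizes are assumptions for illustration, not code from the post.

```python
# Minimal sketch of reward learning from pairwise human preferences,
# in the spirit of "Deep RL from Human Preferences" (Christiano et al., 2017).
# All names and shapes here are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps an (observation, action) pair to a scalar reward estimate."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

def preference_loss(reward_model, seg_a, seg_b, prefs):
    """Bradley-Terry comparison loss.

    seg_a, seg_b: (obs, act) tensors of shape (batch, timesteps, dim) for two
                  trajectory segments shown to the human.
    prefs:        (batch,) floats, 1.0 if the human preferred segment A, 0.0 if B.
    The probability that A is preferred is modeled as proportional to the
    exponentiated sum of predicted rewards over A.
    """
    obs_a, act_a = seg_a
    obs_b, act_b = seg_b
    sum_a = reward_model(obs_a, act_a).sum(dim=1)  # total predicted reward of A
    sum_b = reward_model(obs_b, act_b).sum(dim=1)  # total predicted reward of B
    logits = sum_a - sum_b                         # log-odds that A is preferred
    return nn.functional.binary_cross_entropy_with_logits(logits, prefs)
```

In the full pipeline, the reward model is refit periodically on the growing set of human comparisons while the policy is trained against its current reward predictions.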
