
Learning From Human Preferences

AI Safety Fundamentals: Alignment


Learning From Human Preferences on the OpenAI Blog

Agents trained on human feedback can sometimes outperform reinforcement learning with the environment's normal reward function. Our algorithm's performance is only as good as the human evaluators' intuition about what behaviors look correct. We think techniques like this are a step towards safe AI systems capable of learning human-centric goals. If you're interested in working on problems like this, please join us.
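The approach summarized above fits a reward model to pairwise human comparisons of trajectory segments and then optimizes a policy against that learned reward. Below is a minimal, hypothetical sketch of the reward-model step using a Bradley-Terry style preference loss; the network shape, the fixed-size segment encoding, and the synthetic comparison data are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumption: each trajectory segment is summarized as a
# fixed-size feature vector; the real system sums per-frame rewards over clips).
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores a trajectory segment; trained so preferred segments score higher."""
    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, segment: torch.Tensor) -> torch.Tensor:
        # segment: (batch, obs_dim) feature summary of a clip -> scalar score
        return self.net(segment).squeeze(-1)

def preference_loss(r_a: torch.Tensor, r_b: torch.Tensor,
                    prefer_a: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry / logistic loss: P(A preferred) = sigmoid(r_a - r_b)."""
    return nn.functional.binary_cross_entropy_with_logits(r_a - r_b, prefer_a)

# Toy training loop on synthetic comparisons (illustrative only).
obs_dim = 8
model = RewardModel(obs_dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    seg_a, seg_b = torch.randn(32, obs_dim), torch.randn(32, obs_dim)
    # Pretend the human evaluator prefers the segment with the larger first feature.
    prefer_a = (seg_a[:, 0] > seg_b[:, 0]).float()
    loss = preference_loss(model(seg_a), model(seg_b), prefer_a)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the full method, the learned reward model stands in for the environment's reward during reinforcement learning, and fresh human comparisons are collected as the policy improves.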
