
Two: Ajeya Cotra on accidentally teaching AI models to deceive us

The 80000 Hours Podcast on Artificial Intelligence

NOTE

Reward Plans, Debate Models

Rewarding a model for its plans rather than for its outcomes, and having models debate each other, are training strategies intended to avoid inadvertently rewarding scheming or sycophancy during reinforcement learning. In the debate approach, one model is rewarded for proposing a sensible plan while a second model is rewarded for pointing out genuine flaws in that plan, which can make the training signal more reliable than judging outcomes alone.
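The incentive structure described above can be sketched as a toy scoring rule. This is a minimal illustration, not the actual training setup discussed in the episode: the function names, reward weights, and the idea of a judge that confirms flaws are all hypothetical assumptions made for the example. The key property it demonstrates is that the critic is rewarded only for flaws the judge confirms, so spurious nitpicking earns nothing, while the proposer is penalized for confirmed flaws in its plan.

```python
from dataclasses import dataclass


@dataclass
class DebateOutcome:
    plan_reward: float
    critique_reward: float


def score_debate(plan_steps, flaws_raised, flaws_real):
    """Toy debate scoring (hypothetical weights).

    The proposer earns reward per plan step but is penalized for each
    flaw the judge confirms; the critic earns reward only for confirmed
    flaws, so unfounded criticisms go unrewarded.
    """
    confirmed = set(flaws_raised) & set(flaws_real)
    plan_reward = 1.0 * len(plan_steps) - 2.0 * len(confirmed)
    critique_reward = 1.0 * len(confirmed)
    return DebateOutcome(plan_reward, critique_reward)


outcome = score_debate(
    plan_steps=["gather data", "train model", "evaluate"],
    flaws_raised=["no holdout set", "data leakage"],  # critic raises two flaws
    flaws_real=["no holdout set"],                    # judge confirms only one
)
```

In this toy run the proposer nets 1.0 (three steps minus the penalty for one confirmed flaw) and the critic nets 1.0 (one confirmed flaw); the unconfirmed "data leakage" claim contributes nothing, which is the intended disincentive against sycophantic or spurious critique.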
