
Dylan Hadfield-Menell, UC Berkeley/MIT: The value alignment problem in AI

Generally Intelligent


Optimizing Agents - What's the Biggest Problem?

As systems get more optimized, you start to hit the resource bounds on the overall set. Rather than efficiently reallocating things between what you're referencing, you start to take resources from unreferenced features or attributes of the world and reallocate those towards your features. And because we assume things like diminishing marginal returns, eventually this gets outweighed. If you don't have diminishing marginal returns, then perhaps you're okay with doing this big reallocation. But that's our primary result, and it applies to optimizing agents. While this theory is intended for AI systems, I think it can describe some of the ways that AI systems optimizing engagement were really useful.
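A minimal numeric sketch of that claim (the specific model is my illustrative assumption, not the paper's exact setup): each attribute of the world yields utility log(1 + x) from the resources x allocated to it, so marginal returns diminish, and the proxy only scores a "referenced" subset of attributes under a shared budget. Pushing the allocation toward the proxy-optimal one raises the proxy while lowering true utility; with linear utilities instead, the same reallocation would not necessarily hurt, which matches the caveat about diminishing marginal returns.

```python
import numpy as np

# True utility depends on all attributes; the proxy only "references" a subset.
# Diminishing marginal returns are modeled with log(1 + x) per attribute.
# Budget, attribute count, and which attributes are referenced are illustrative.

BUDGET = 10.0                                       # shared resource bound
N_ATTRIBUTES = 4
referenced = np.array([True, True, False, False])   # proxy sees only the first two

def utility(x):
    """True utility over all attributes, with diminishing returns."""
    return np.sum(np.log1p(x))

def proxy(x):
    """Proxy utility: same form, but only over the referenced attributes."""
    return np.sum(np.log1p(x[referenced]))

def allocate(alpha):
    """Blend a balanced allocation with the proxy-optimal one.

    alpha = 0 splits the budget evenly across all attributes;
    alpha = 1 gives everything to the referenced attributes.
    """
    balanced = np.full(N_ATTRIBUTES, BUDGET / N_ATTRIBUTES)
    proxy_opt = np.where(referenced, BUDGET / referenced.sum(), 0.0)
    return (1 - alpha) * balanced + alpha * proxy_opt

for alpha in [0.0, 0.25, 0.5, 0.75, 1.0]:
    x = allocate(alpha)
    print(f"optimization pressure {alpha:.2f}: "
          f"proxy={proxy(x):.3f}, true utility={utility(x):.3f}")
```

Running this, the proxy score climbs from roughly 2.5 to 3.6 while true utility falls from roughly 5.0 to 3.6: the utility lost on the unreferenced attributes outweighs what the referenced ones gain.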

Play episode from 18:50
