
Creating Benevolent Decentralized AGI, with Ben Goertzel

London Futurists


Reward Maximizing Is Just Another Word for Having Goals

The AGI hacks its own reward systems. It makes up new goals on the fly, which is what we all do, and I don't see why an AGI wouldn't do that. Human evolution has not driven us in nearly as rigorously goal-oriented a way as you get if you're trying to maximize metrics like clicks on a web page or shareholder value. If you're developing an AI to serve a company's or a country's purposes, you're going to gravitate toward much more rigorously goal-driven systems than something like the human brain.
