Creating Benevolent Decentralized AGI, with Ben Goertzel

London Futurists

CHAPTER

Reward Maximizing Is Just Another Word for Having Goals

The AGI hacks its own reward systems. It makes up new goals on the fly, which is what we all do, and I don't see why an AGI wouldn't do that. Human evolution has not driven us in nearly as rigorous a goal-oriented way as you get if you're trying to maximize metrics like clicks on a web page or shareholder value. If you're developing an AI to serve a company's or a country's purposes, you're going to gravitate toward much more rigorously goal-driven systems than something like the human brain is.
