
#65 – Katja Grace on Slowing Down AI and Whether the X-Risk Case Holds Up

Hear This Idea


The Dangers of Expecting Superhuman AI Systems to Have Goals

There are various reasons for expecting them to have goals. I think maybe a big one is that goals just seem very useful. So you shouldn't expect that they'll behave in a roughly human way. You can still have something that's pretty goal-directed, in the sense you might be interested in economically: it systematically knows that it should, say, look at a calendar and figure out what else is going on.

