Counterarguments to the basic AI x-risk case

LessWrong (Curated & Popular)

The Incoherence Gap in AI Systems

The new utility function is exactly as incoherent as the old one. Ambiguously strong forces for goal-directedness would need to meet an ambiguously high bar to cause a risk. In such a world, things look much like they do now: AI systems often appear to be trying to do things, but there is no reason to think they are enacting a rational and consistent plan.
