
Contra The xAI Alignment Plan

Astral Codex Ten Podcast


The Problem With a Maximally Curious AI

A maximally curious AI would not be safe for humanity. Even if an AI decides human flourishing is briefly interesting, after a while it will already know lots of things about human flourishing and will want to learn something else instead. If it were cheaper to simulate abstracted humans in low fidelity, the same way SimCity simulates citizens, wouldn't the AI do that instead? Are humans more interesting than sentient lizard people? I don't know. Is leaving human society intact really an efficient way to study humans? Maybe it would be better to dissect a few thousand humans, learn the relevant principles, then run lots of simulations of humans in various contrived situations. Would the humans in the simulation be conscious?

