
Why I Am Not (As Much Of) A Doomer (As Some People)
Astral Codex Ten Podcast
The Case for Optimism
Some intermediate AIs that we're trusting to solve our problems for us are actually sleeper agents. The next generation of AIs will replicate the same alignment bugs that produced the previous generation, all the way up to the world-killer. If we wait for them to fail in ways that put the world on high alert, we will wait in vain. Failing in an obvious way is stupid and doesn't achieve any plausible goals. Maybe we'll be very lucky, and some AI will have a purely cognitive bug that makes it do a stupid thing which kills lots of people, but not everyone. Or maybe that won't happen, and AIs will have only motivational bugs, which will make them act like model citizens.