183 – Let’s Think About Slowing Down AI

The Bayesian Conspiracy

CHAPTER

The Uncertainty of the Risk

I think it's entirely likely that alignment is easier to reach than some people are saying. The thing is, we don't know exactly how long it takes, right? So for every additional year we have to work on it, we increase our chances of getting to good alignment. The whole not-knowing thing just really sucks. It's a genuine dilemma where both ends are shitty, because if you accelerate, then, at least in the Yudkowskian frame, you doom humanity; you get extinction if you're lucky.
