Bonus: Preventing an AI-Related Catastrophe

Hear This Idea

Arguments Against Working on AI Risk

In principle, research into preventing power-seeking in AIs could succeed, but at the moment we don't actually know how to prevent this power-seeking behavior. Joseph Carlsmith thinks there's only about a 40% chance that building advanced planning AI systems will be both feasible and desirable. And it's likely to be much harder to develop aligned systems that don't seek power than to develop misaligned systems that do. We'll also look at objections that we think are less persuasive, and give some reasons why.
