
Ep. 129: Applying the 'security mindset' to AI and x-risk | Jeffrey Ladish

FUTURATI PODCAST

CHAPTER

The Importance of a Hope for AI

I think this works a little bit if what happens with AI is that we end up hitting big bottlenecks, these big barriers, and scaling doesn't continue to work. I don't really buy that inside view at all, but from an outside view I could say, sure, let's say we can't achieve superintelligence for a really long time, or even strong AGI for a really long time. Then I'm like, sure, if we can't achieve that, then, you know, GPT-5 might be very disruptive, but I don't think it kills everyone.
