FUTURATI PODCAST

Ep. 129: Applying the 'security mindset' to AI and x-risk | Jeffrey Ladish

The Importance of a Hope for AI

I think this works a little bit if what happens with AI is that we end up hitting big bottlenecks, these big barriers, and scaling doesn't continue to work. I don't really buy this inside view at all, but from an outside view I could say, sure. Let's say we can't achieve superintelligence for a really long time, or even strong AGI for a really long time. Then, sure, if we can't achieve that, you know, GPT-5 might be very disruptive, but I don't think it kills everyone.
