3min chapter


Ep. 129: Applying the 'security mindset' to AI and x-risk | Jeffrey Ladish


CHAPTER

The Importance of a Hope for AI

I think this works a little bit if what happens with AI is that we end up hitting these big bottlenecks, these big barriers, and scaling doesn't continue to work. I don't really buy this inside view at all, but from an outside view I could say, sure. Let's say we can't achieve superintelligence for a really long time, or even strong AGI for a really long time. Then I'm like, sure, if we can't achieve that. You know, GPT-5 might be very disruptive, but I don't think it kills everyone.

00:00
