2min chapter

Anders Sandberg | Post-Scarcity Civilizations & Cognitive Enhancement

The Foresight Institute Podcast

CHAPTER

Is There a Possible Alignment With AGI?

David Wheeler says he has a proof that there is no possibility of alignment in the general case, even for slow takeoff. The real deep question, I think, is: do you get something like the fast takeoff scenario, where alignment has to be essentially perfect or everything is lost? We don't know. But if we look at the AGIs that humans can actually make, and the conditions they are put under when they work in the environment of the Earth's surface and so on, then in that case I think we should get good enough outcomes. There are some visionaries who say: don't worry about alignment with AGI, because we will become the AGI. Don't worry about robots taking over the world; we will be augmented and enhanced.
