2min chapter

Geoffrey Miller: Evolutionary Psychology, Polyamorous Relationships, and Effective Altruism — #26

CHAPTER

Why Should We Slow Down AI Alignment Research?

I've always thought about AI alignment — you know, of course you can't prove any theorems about this — but it just seemed very, very implausible that it was a solvable problem. I think there's a certain culture in AI alignment that is very nerdy, a bit Aspergery, and worships formalization. If you seriously want to align with humans, you have to align with them as they are, not as you want to abstractify them into being.

00:00
