
Geoffrey Miller: Evolutionary Psychology, Polyamorous Relationships, and Effective Altruism — #26

Manifold
Why Should We Slow Down AI Alignment Research?

"I've always thought AI alignment (you know, of course you can't prove any theorems about this) just seemed very, very implausible as a solvable problem. I think there's a certain culture in AI alignment that is very nerdy and a bit Aspergery and worships formalization," he says. "If you seriously want to align with humans, you have to align with them as they are, not as you want to abstractify them into being."
