
Geoffrey Miller: Evolutionary Psychology, Polyamorous Relationships, and Effective Altruism — #26
Manifold
Why Should We Slow Down AI Alignment Research?
I've always thought, and of course you can't prove any theorems about this, that it just seemed very, very implausible that AI alignment was a solvable problem. I think there's a certain culture in AI alignment that is very nerdy, a bit Aspergery, and worships formalization. If you seriously want to align with humans, you have to align with them as they are, not as you want to abstractify them into being.