
20 - 'Reform' AI Alignment with Scott Aaronson
AXRP - the AI X-risk Research Podcast
The Dangers of Secrecy in AI Governance
I see democracy as a terrible form of human organization, except for all of the alternatives that have been tried. I am scared by someone unilaterally deciding what goals AI should have and ought to pursue. One of the things that caused me to stay at arm's length from the orthodox AI alignment community was the constant emphasis on secrecy — because publishing your progress is seen as a way, you know, to just cause acceleration risk.