
#64 – Michael Aird on Strategies for Reducing AI Existential Risk

Hear This Idea


The Top 5 Theories of Victory

People were excited about increasing the security and monitoring of very large compute clusters globally. Another thing would be increasing the extent to which people at top corporate labs in democracies believe that AGI, or similarly advanced AI, poses massive risks. There are also various obvious ways that could potentially be helpful: generally, if the people building the very dangerous thing know that it might be very dangerous, that seems potentially helpful, though not necessarily sufficient.
