2min chapter


#64 – Michael Aird on Strategies for Reducing AI Existential Risk

Hear This Idea

CHAPTER

The Top 5 Theories of Victory

People were excited about increasing the security and monitoring of very large compute clusters globally. Another thing would be increasing the extent to which people in top corporate labs in democracies believe that AGI, or similarly advanced AI, poses massive risks. There are various obvious ways that could potentially be helpful: if the people building the very dangerous thing know that it might be very dangerous, that seems potentially helpful, but not necessarily sufficient.

