22-minute chapter

565: AGI: The Apocalypse Machine

Super Data Science: ML & AI Podcast with Jon Krohn

CHAPTER

Exploring AI Safety and AGI Risks

This chapter makes the case for taking Artificial General Intelligence (AGI) safety seriously, examining the risks posed by AGI that surpasses human intelligence and the importance of ensuring such systems are benevolent. It explores the founders' motivations for addressing AGI risk, calling for broader global awareness and for institutions dedicated to managing it. The chapter closes with advice on becoming an AI safety expert, recommending books, organizations, and papers as entry points into AI safety research and alignment programs.
