
The Wright Show
The case for alarm about artificial intelligence (Robert Wright & Holly Elmore)
May 8, 2025
In this discussion, Holly Elmore, Executive Director of Pause AI US and AI safety advocate, makes the case for a pause in AI development. She shares insights from her newsletter and explains why accelerating AI poses serious risks. The conversation traces the historical shift in attitudes toward AI and the dangers of unchecked advancement. Elmore emphasizes the importance of international collaboration, critiques the commercialization of AI organizations, and closes with a call to action urging listeners to consider the societal impacts of rapid AI evolution.
01:00:00
Episode notes
Podcast summary created with Snipd AI
Quick takeaways
- The movement calls for a pause in AI advancement so that safety and ethical frameworks can be put in place before capabilities are pushed further.
- Public awareness of AI risks is growing, fueling demand for government regulation and more cautious development in the face of rapid technological progress.
Deep dives
The Vision of Pausing AI Development
The movement aims to slow the advancement of artificial intelligence so that safety and ethical considerations can be addressed before further capabilities are developed. The initial proposal was to pause large-scale AI training runs once they exceeded a certain threshold of floating-point operations. This pause is envisioned as a necessary step to assess the implications and potential risks of extremely powerful AI, out of concern that rapid advances could outpace societal preparedness and understanding. The objective is to create room for society to deliberate on what a future with AI should look like, rather than being thrust down an unconsidered path of technological evolution.