

For Humanity: An AI Safety Podcast
The AI Risk Network
For Humanity, An AI Safety Podcast is the AI Safety Podcast for regular people. Peabody, duPont-Columbia and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2-10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity. theairisknetwork.substack.com
Episodes

Jun 5, 2024 • 1h 16min
Episode #31 - “Trucker vs. AGI” For Humanity: An AI Risk Podcast
In Episode #31, John Sherman interviews Leighton, a 29-year-old American truck driver, about his concerns over human extinction and artificial intelligence. They discuss the urgency of raising awareness about AI risks, the potential job displacement in industries like trucking, and the geopolitical implications of AI advancements. Leighton shares his plans to start a podcast and possibly use filmmaking to engage the public in AI safety discussions. Despite skepticism from others, they stress the importance of community and dialogue in understanding and mitigating AI threats, with Leighton highlighting the risk of a "singleton event" and ethical concerns in AI development.

Full Interview Starts at (00:10:18)

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Timestamps:
- Leighton's Introduction (00:00:00)
- Introduction to the Podcast (00:02:19)
- Power of the First Followers (00:03:24)
- Leighton's Concerns about AI (00:08:49)
- Leighton's Background and AI Awareness (00:11:11)
- Challenges in Spreading Awareness (00:14:18)
- Distrust of Government and Family Involvement (00:23:20)
- Government Imperfections (00:25:39)
- AI Impact on National Security (00:26:45)
- AGI Decision-Making (00:28:14)
- Government Oversight of AGI (00:29:32)
- Geopolitical Tension and AI (00:31:51)
- Job Loss and AGI (00:37:20)
- AI, Mining, and Space Race (00:38:02)
- Public Engagement and AI (00:44:34)
- Philosophical Perspective on AI (00:49:45)
- The existential threat of AI (00:51:05)
- Geopolitical tensions and AI risks (00:52:05)
- AI's potential for global dominance (00:53:48)
- Ethical concerns and AI welfare (01:01:21)
- Preparing for AI risks (01:03:02)
- The challenge of raising awareness (01:06:42)
- A hopeful outlook (01:08:28)

RESOURCES:
Leighton's Podcast on YouTube: https://www.youtube.com/@UrNotEvenBasedBro
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Jun 3, 2024 • 5min
Episode #31 TRAILER - “Trucker vs. AGI” For Humanity: An AI Risk Podcast
In the Episode #31 trailer, John Sherman interviews Leighton, a 29-year-old American truck driver, about his concerns over human extinction and artificial intelligence.

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Timestamps:
- The challenge of keeping up (00:00:00) - Discussion about the difficulty of staying informed amidst busy lives and the benefit of using podcasts to keep up.
- The impact of social media bubbles (00:01:22) - Exploration of how social media algorithms create bubbles and the challenge of getting others to pay attention to important information.
- Geopolitical implications of technological advancements (00:02:00) - Discussion about the potential implications of technological advancements, particularly in relation to artificial intelligence and global competition.
- Potential consequences of nationalizing AGI (00:04:21) - Speculation on the potential consequences of nationalizing artificial general intelligence and the potential use of a pandemic to gain a competitive advantage.

RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

May 29, 2024 • 1h 40min
Episode #30 - “Dangerous Days At Open AI” For Humanity: An AI Risk Podcast
Exploration of AI safety competence at OpenAI and the shift from AI Safety to AI Risk. Challenges in achieving superalignment, unethical behavior in powerful organizations, and navigating AI ethics and regulation. Risks of AI biothreats, uncertainties in AI development, and debates on the limits of human versus AI intelligence.

May 27, 2024 • 3min
Episode #30 TRAILER - “Dangerous Days At Open AI” For Humanity: An AI Risk Podcast
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

In Episode #30, John Sherman interviews Professor Olle Häggström on a wide range of AI risk topics. At the top of the list is the super-instability and the super-exodus from OpenAI's superalignment team following the resignations of Jan Leike and Ilya Sutskever.

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

May 22, 2024 • 1h 7min
Episode #29 - “Drop Everything To Stop AGI” For Humanity: An AI Safety Podcast
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

The world is waking up to the existential danger of unaligned AGI. But we are racing against time. Some heroes are stepping up, people like this week's guest, Chris Gerrby. Chris was successful in organizing people against AI in Sweden. In early May he left Sweden, moved to England, and is now spending 14 hours a day, 7 days a week, working to stop AGI. Learn how he plans to grow Pause AI as its new Chief Growth Officer, and his thoughts on how to make the case for pausing AI.

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Timestamps:
- Dropping Everything to Stop AGI (00:00:00) - Chris Gerrby's dedication to working 14 hours a day to pause AI and the challenges he faces.
- OpenAI's Recent Events (00:01:11)
- Pause AI and Chris Gerrby's Involvement (00:05:28)
- Chris Gerrby's Journey and Involvement in AI Safety (00:06:44)
- Coping with the Dark Outlook of AI Risk (00:19:02)
- Beliefs About AGI Timeline (00:24:06)
- The pandemic risk (00:25:30)
- Losing control of AGI (00:26:40)
- Stealth control and treacherous turn (00:28:38)
- Relocation and intense work schedule (00:30:20)
- Growth strategy for Pause AI (00:33:39)
- Marketing and public relations (00:35:35)
- Tailoring communications and gaining members (00:39:41)
- Challenges in communicating urgency (00:44:36)
- Path to growth for Pause AI (00:48:51)
- Joining the Pause AI community (00:49:57)
- Community involvement and support (00:50:33)
- Pause AI's role in the AI landscape (00:51:22)
- Maintaining work-life balance (00:53:47)
- Adapting personal goals for the cause (00:55:50)
- Probability of achieving a pause in AI development (00:57:50)
- Finding hope in personal connections (01:00:24)

RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

May 20, 2024 • 3min
Episode #29 TRAILER - “Drop Everything To Stop AGI” For Humanity: An AI Safety Podcast
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

The world is waking up to the existential danger of unaligned AGI. But we are racing against time. Some heroes are stepping up, people like this week's guest, Chris Gerrby. Chris was successful in organizing people against AI in Sweden. In early May he left Sweden, moved to England, and is now spending 14 hours a day, 7 days a week, working to stop AGI. Learn how he plans to grow Pause AI as its new Chief Growth Officer, and his thoughts on how to make the case for pausing AI.

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Timestamps:
- Tailoring Communication (00:00:51) - The challenge of convincing others about the importance of a cause and the need to tailor communications to different audiences.
- Audience Engagement (00:02:13) - Discussion on tailoring communication strategies to different audiences, including religious people, taxi drivers, and artists.

RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

May 15, 2024 • 1h 30min
Episode #28 - “AI Safety Equals Emergency Preparedness” For Humanity: An AI Safety Podcast
Full Interview Starts At: (00:09:54)

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

BIG IDEA ALERT: This week's show has something really big and really new. What if AI Safety didn't have to carve out a new space in government, and could instead fit into already existing budgets? Emergency Preparedness, in the post-9/11 era, is a massively well-funded area of federal and state government here in the US. There are agencies, organizations and big budgets already created to fund the prevention of and recovery from disasters of all kinds: asteroids, pandemics, climate-related, terrorist-related; the list goes on and on.

This week's guest, AI policy researcher Akash Wasil, has had more than 80 meetings with congressional staffers about AI existential risk. In Episode #28, he goes over his framing of AI Safety as Emergency Preparedness, the US vs. China race dynamic, and the vibes on Capitol Hill about AI risk. What does Congress think of AI risk?

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

JOIN THE PAUSE AI PROTEST MONDAY MAY 13TH
https://pauseai.info/2024-may

TIMESTAMPS:
- Emergency Preparedness in AI (00:00:00)
- Introduction to the Podcast (00:02:49)
- Discussion on AI Risk and Disinformation (00:06:27)
- Engagement with Lawmakers and Policy Development (00:09:54)
- Control AI's Role in AI Risk Awareness (00:19:00)
- Engaging with congressional offices (00:25:00)
- Establishing AI emergency preparedness office (00:32:35)
- Congressional focus on AI competitiveness (00:37:55)
- Expert opinions on AI risks (00:40:38)
- Commerce vs. national security (00:42:41)
- US AI Safety Institute's placement (00:46:33)
- Expert concerns and raising awareness (00:50:34)
- Influence of protests on policy (00:57:00)
- Public opinion on AI regulation (01:02:00)
- Silicon Valley Culture vs. DC Culture (01:05:44)
- International Cooperation and Red Lines (01:12:34)
- Eliminating Race Dynamics in AI Development (01:19:56)
- Government Involvement for AI Development (01:22:16)
- Compute-Based Licensing Proposal (01:24:18)
- AI Safety as Emergency Preparedness (01:27:43)
- Closing Remarks (01:29:09)

RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

May 13, 2024 • 10min
Episode #28 TRAILER - “AI Safety Equals Emergency Preparedness” For Humanity: An AI Safety Podcast
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

BIG IDEA ALERT: This week's show has something really big and really new. What if AI Safety didn't have to carve out a new space in government, and could instead fit into already existing budgets? Emergency Preparedness, in the post-9/11 era, is a massively well-funded area of federal and state government here in the US. There are agencies, organizations and big budgets already created to fund the prevention of and recovery from disasters of all kinds: asteroids, pandemics, climate-related, terrorist-related; the list goes on and on.

This week's guest, AI policy researcher Akash Wasil, has had more than 80 meetings with congressional staffers about AI existential risk. In the Episode #28 trailer, he goes over his framing of AI Safety as Emergency Preparedness, the US vs. China race dynamic, and the vibes on Capitol Hill about AI risk. What does Congress think of AI risk?

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

JOIN THE PAUSE AI PROTEST MONDAY MAY 13TH
https://pauseai.info/2024-may

TIMESTAMPS:
- The meetings with congressional staffers (00:00:00) - Akash discusses his experiences and strategies for engaging with congressional staffers and policymakers regarding AI risks and national security threats.
- Understanding AI risks and national security (00:00:14) - Akash highlights the interest and enthusiasm among policymakers to learn more about AI risks, particularly in the national security space.
- Messaging and communication strategies (00:01:09) - Akash emphasizes the importance of making less intuitive threat models understandable and getting the time of day from congressional offices.
- Emergency preparedness in AI risk (00:02:45) - Akash introduces the concept of emergency preparedness in the context of AI risk and its relevance to government priorities.
- Preparedness approach to uncertain events (00:04:17) - Akash discusses the preparedness approach to dealing with uncertain events and the significance of having a playbook in place.
- Prioritizing AI in national security (00:06:08) - Akash explains the strategic prioritization of engaging with key congressional offices focused on AI in the context of national security.
- Policymaker concerns and China's competitiveness (00:07:03) - Akash addresses the predominant concern among policymakers about China's competitiveness in AI and its impact on national security.
- AI development and governance safeguards (00:08:15) - Akash emphasizes the need to raise awareness about AI research and development misalignment and loss-of-control threats in the context of China's competitiveness.

RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

May 8, 2024 • 1h 20min
Episode #27 - “1800 Mile AGI Protest Road Trip” For Humanity: An AI Safety Podcast
Please Donate Here To Help Promote This Show
https://www.paypal.com/paypalme/forhumanitypodcast

In Episode #27, host John Sherman interviews Jon Dodd and Rev. Trevor Bingham of the World Pause Coalition about their recent road trip to San Francisco to protest outside the gates of OpenAI headquarters. A group of six people drove 1800 miles to be there. We hear firsthand what happens when OpenAI employees meet AI risk realists.

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

JOIN THE PAUSE AI PROTEST MONDAY MAY 13TH
https://pauseai.info/2024-may

TIMESTAMPS:
- The protest at OpenAI (00:00:00) - Discussion on the non-violent protest at the OpenAI headquarters and the response from the employees.
- The Road Trip to Protest (00:09:31) - Description of the road trip to San Francisco for a protest at OpenAI, including a video of the protest and interactions with employees.
- Formation of the World Pause Coalition (00:15:07) - Introduction to the World Pause Coalition and its mission to raise awareness about AI and superintelligence.
- Challenges and Goals of Protesting (00:18:31) - Exploration of the challenges and goals of protesting AI risks, including education, government pressure, and environmental impact.
- The smaller countries' stakes (00:22:53) - Highlighting the importance of smaller countries' involvement in AI safety negotiations and protests.
- San Francisco protest (00:25:29) - Discussion about the experience and impact of the protest at the OpenAI headquarters in San Francisco.
- Interactions with OpenAI workers (00:26:56) - Insights into the interactions with OpenAI employees during the protest, including their responses and concerns.
- Different approaches to protesting (00:41:33) - Exploration of peaceful protesting as the preferred approach, contrasting with more extreme methods used by other groups.
- Embrace Safe AI (00:43:47) - Discussion about finding a position for the company that aligns with concerns about AI and the need for safe AI.
- Suffering Risk (00:48:24) - Exploring the concept of suffering risk associated with superintelligence and the potential dangers of AGI.
- Religious Leaders' Role (00:52:39) - Discussion on the potential role of religious leaders in raising awareness and mobilizing support for AI safety.
- Personal Impact of AI Concerns (01:03:52) - Reflection on the personal weight of understanding AI risks and maintaining hope for a positive outcome.
- Finding Catharsis in Taking Action (01:08:12) - How taking action to help feels cathartic and alleviates the weight of the issue.
- Weighing the Impact on Future Generations (01:09:18) - The heavy burden of concern for future generations and the motivation to act for their benefit.

RESOURCES:
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

May 6, 2024 • 2min
Episode #27 Trailer - “1800 Mile AGI Protest Road Trip” For Humanity: An AI Safety Podcast
Please Donate Here To Help Promote This Show
https://www.paypal.com/paypalme/forhumanitypodcast

In the Episode #27 trailer, host John Sherman interviews Jon Dodd and Rev. Trevor Bingham of the World Pause Coalition about their recent road trip to San Francisco to protest outside the gates of OpenAI headquarters. A group of six people drove 1800 miles to be there. We hear firsthand what happens when OpenAI employees meet AI risk realists.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com