Episode #28 - “AI Safety Equals Emergency Preparedness” For Humanity: An AI Safety Podcast
Full Interview Starts At: (00:09:54)
Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast
BIG IDEA ALERT: This week’s show has something really big and really new. What if AI Safety didn’t have to carve out a new space in government? What if it could fit into already existing budgets? Emergency Preparedness, in the post-9/11 era, is a massively well-funded area of federal and state government here in the US. There are agencies, organizations, and big budgets already in place to fund prevention of and recovery from disasters of all kinds: asteroids, pandemics, climate disasters, terrorist attacks, the list goes on and on.
This week’s guest, AI policy researcher Akash Wasil, has had more than 80 meetings with congressional staffers about AI existential risk. In Episode #28, he lays out his framing of AI Safety as Emergency Preparedness, the US vs. China race dynamic, and the vibes on Capitol Hill about AI risk. What does Congress think of AI risk?
This podcast is not journalism. But it’s not opinion either. It’s a long-form public service announcement. The show simply strings together the existing facts and underscores the unthinkable but probable outcome: the end of all life on Earth.
For Humanity: An AI Safety Podcast is the accessible AI Safety podcast for all humans; no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as two years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
JOIN THE PAUSE AI PROTEST MONDAY, MAY 13TH
https://pauseai.info/2024-may
TIMESTAMPS:
**Emergency Preparedness in AI (00:00:00)**
**Introduction to the Podcast (00:02:49)**
**Discussion on AI Risk and Disinformation (00:06:27)**
**Engagement with Lawmakers and Policy Development (00:09:54)**
**Control AI's Role in AI Risk Awareness (00:19:00)**
**Engaging with Congressional Offices (00:25:00)**
**Establishing an AI Emergency Preparedness Office (00:32:35)**
**Congressional Focus on AI Competitiveness (00:37:55)**
**Expert Opinions on AI Risks (00:40:38)**
**Commerce vs. National Security (00:42:41)**
**US AI Safety Institute's Placement (00:46:33)**
**Expert Concerns and Raising Awareness (00:50:34)**
**Influence of Protests on Policy (00:57:00)**
**Public Opinion on AI Regulation (01:02:00)**
**Silicon Valley Culture vs. DC Culture (01:05:44)**
**International Cooperation and Red Lines (01:12:34)**
**Eliminating Race Dynamics in AI Development (01:19:56)**
**Government Involvement in AI Development (01:22:16)**
**Compute-Based Licensing Proposal (01:24:18)**
**AI Safety as Emergency Preparedness (01:27:43)**
**Closing Remarks (01:29:09)**
RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes