
For Humanity: An AI Safety Podcast

Latest episodes

May 20, 2024 • 3min

Episode #29 TRAILER - “Drop Everything To Stop AGI” For Humanity: An AI Safety Podcast

Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

The world is waking up to the existential danger of unaligned AGI. But we are racing against time. Some heroes are stepping up, people like this week’s guest, Chris Gerrby. Chris was successful in organizing people against AI in Sweden. In early May he left Sweden, moved to England, and is now spending 14 hours a day, 7 days a week working to stop AGI. Learn how he plans to grow Pause AI as its new Chief Growth Officer, and hear his thoughts on how to make the case for pausing AI.

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Timestamps:
Tailoring Communication (00:00:51) The challenge of convincing others about the importance of a cause and the need to tailor communications to different audiences.
Audience Engagement (00:02:13) Discussion on tailoring communication strategies to different audiences, including religious people, taxi drivers, and artists.

RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes
May 15, 2024 • 1h 30min

Episode #28 - “AI Safety Equals Emergency Preparedness” For Humanity: An AI Safety Podcast

Episode #28 - “AI Safety Equals Emergency Preparedness” For Humanity: An AI Safety Podcast
Full Interview Starts At: (00:09:54)

Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

BIG IDEA ALERT: This week’s show has something really big and really new. What if AI safety didn’t have to carve out a new space in government? What if it could fit into already existing budgets? Emergency preparedness, in the post-9/11 era, is a massively well-funded area of federal and state government here in the US. There are agencies, organizations, and big budgets already created to fund the prevention of and recovery from disasters of all kinds: asteroids, pandemics, climate-related, terrorist-related, the list goes on and on. This week’s guest, AI policy researcher Akash Wasil, has had more than 80 meetings with congressional staffers about AI existential risk. In Episode #28, he goes over his framing of AI safety as emergency preparedness, the US vs. China race dynamic, and the vibes on Capitol Hill about AI risk. What does Congress think of AI risk?

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

JOIN THE PAUSE AI PROTEST MONDAY MAY 13TH https://pauseai.info/2024-may

TIMESTAMPS:
Emergency Preparedness in AI (00:00:00)
Introduction to the Podcast (00:02:49)
Discussion on AI Risk and Disinformation (00:06:27)
Engagement with Lawmakers and Policy Development (00:09:54)
Control AI's Role in AI Risk Awareness (00:19:00)
Engaging with congressional offices (00:25:00)
Establishing AI emergency preparedness office (00:32:35)
Congressional focus on AI competitiveness (00:37:55)
Expert opinions on AI risks (00:40:38)
Commerce vs. national security (00:42:41)
US AI Safety Institute's placement (00:46:33)
Expert concerns and raising awareness (00:50:34)
Influence of protests on policy (00:57:00)
Public opinion on AI regulation (01:02:00)
Silicon Valley Culture vs. DC Culture (01:05:44)
International Cooperation and Red Lines (01:12:34)
Eliminating Race Dynamics in AI Development (01:19:56)
Government Involvement for AI Development (01:22:16)
Compute-Based Licensing Proposal (01:24:18)
AI Safety as Emergency Preparedness (01:27:43)
Closing Remarks (01:29:09)

RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes
May 13, 2024 • 10min

Episode #28 TRAILER - “AI Safety Equals Emergency Preparedness” For Humanity: An AI Safety Podcast

Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

BIG IDEA ALERT: This week’s show has something really big and really new. What if AI safety didn’t have to carve out a new space in government? What if it could fit into already existing budgets? Emergency preparedness, in the post-9/11 era, is a massively well-funded area of federal and state government here in the US. There are agencies, organizations, and big budgets already created to fund the prevention of and recovery from disasters of all kinds: asteroids, pandemics, climate-related, terrorist-related, the list goes on and on. This week’s guest, AI policy researcher Akash Wasil, has had more than 80 meetings with congressional staffers about AI existential risk. In the Episode #28 trailer, he goes over his framing of AI safety as emergency preparedness, the US vs. China race dynamic, and the vibes on Capitol Hill about AI risk. What does Congress think of AI risk?

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

JOIN THE PAUSE AI PROTEST MONDAY MAY 13TH https://pauseai.info/2024-may

TIMESTAMPS:
The meetings with congressional staffers (00:00:00) Akash discusses his experiences and strategies for engaging with congressional staffers and policymakers regarding AI risks and national security threats.
Understanding AI risks and national security (00:00:14) Akash highlights the interest and enthusiasm among policymakers to learn more about AI risks, particularly in the national security space.
Messaging and communication strategies (00:01:09) Akash emphasizes the importance of making less intuitive threat models understandable and getting the time of day from congressional offices.
Emergency preparedness in AI risk (00:02:45) Akash introduces the concept of emergency preparedness in the context of AI risk and its relevance to government priorities.
Preparedness approach to uncertain events (00:04:17) Akash discusses the preparedness approach to dealing with uncertain events and the significance of having a playbook in place.
Prioritizing AI in national security (00:06:08) Akash explains the strategic prioritization of engaging with key congressional offices focused on AI in the context of national security.
Policymaker concerns and China's competitiveness (00:07:03) Akash addresses the predominant concern among policymakers about China's competitiveness in AI and its impact on national security.
AI development and governance safeguards (00:08:15) Akash emphasizes the need to raise awareness about AI research and development misalignment and loss-of-control threats in the context of China's competitiveness.

RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes
May 8, 2024 • 1h 20min

Episode #27 - “1800 Mile AGI Protest Road Trip” For Humanity: An AI Safety Podcast

Episode #27 - “1800 Mile AGI Protest Road Trip” For Humanity: An AI Safety Podcast

Please Donate Here To Help Promote This Show: https://www.paypal.com/paypalme/forhumanitypodcast

In episode #27, host John Sherman interviews Jon Dodd and Rev. Trevor Bingham of the World Pause Coalition about their recent road trip to San Francisco to protest outside the gates of OpenAI headquarters. A group of six people drove 1800 miles to be there. We hear firsthand what happens when OpenAI employees meet AI risk realists.

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

JOIN THE PAUSE AI PROTEST MONDAY MAY 13TH https://pauseai.info/2024-may

TIMESTAMPS:
The protest at OpenAI (00:00:00) Discussion on the non-violent protest at the OpenAI headquarters and the response from the employees.
The Road Trip to Protest (00:09:31) Description of the road trip to San Francisco for a protest at OpenAI, including a video of the protest and interactions with employees.
Formation of the World Pause Coalition (00:15:07) Introduction to the World Pause Coalition and its mission to raise awareness about AI and superintelligence.
Challenges and Goals of Protesting (00:18:31) Exploration of the challenges and goals of protesting AI risks, including education, government pressure, and environmental impact.
The smaller countries' stakes (00:22:53) Highlighting the importance of smaller countries' involvement in AI safety negotiations and protests.
San Francisco protest (00:25:29) Discussion about the experience and impact of the protest at the OpenAI headquarters in San Francisco.
Interactions with OpenAI workers (00:26:56) Insights into the interactions with OpenAI employees during the protest, including their responses and concerns.
Different approaches to protesting (00:41:33) Exploration of peaceful protesting as the preferred approach, contrasting with more extreme methods used by other groups.
Embrace Safe AI (00:43:47) Discussion about finding a position for the company that aligns with concerns about AI and the need for safe AI.
Suffering Risk (00:48:24) Exploring the concept of suffering risk associated with superintelligence and the potential dangers of AGI.
Religious Leaders' Role (00:52:39) Discussion on the potential role of religious leaders in raising awareness and mobilizing support for AI safety.
Personal Impact of AI Concerns (01:03:52) Reflection on the personal weight of understanding AI risks and maintaining hope for a positive outcome.
Finding Catharsis in Taking Action (01:08:12) How taking action to help feels cathartic and alleviates the weight of the issue.
Weighing the Impact on Future Generations (01:09:18) The heavy burden of concern for future generations and the motivation to act for their benefit.

RESOURCES:
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
May 6, 2024 • 2min

Episode #27 Trailer - “1800 Mile AGI Protest Road Trip” For Humanity: An AI Safety Podcast

Please Donate Here To Help Promote This Show: https://www.paypal.com/paypalme/forhumanitypodcast

In the episode #27 trailer, host John Sherman interviews Jon Dodd and Rev. Trevor Bingham of the World Pause Coalition about their recent road trip to San Francisco to protest outside the gates of OpenAI headquarters. A group of six people drove 1800 miles to be there. We hear firsthand what happens when OpenAI employees meet AI risk realists.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
May 1, 2024 • 1h 52min

Episode #26 - “Pause AI Or We All Die” Holly Elmore Interview, For Humanity: An AI Safety Podcast

Please Donate Here To Help Promote This Show: https://www.paypal.com/paypalme/forhumanitypodcast

In episode #26, host John Sherman and Pause AI US founder Holly Elmore talk about AI risk. They discuss how AI surprised everyone by advancing so fast, what it’s like for employees at OpenAI working on safety, and why it’s so hard for people to imagine what they can’t imagine.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
Azeer Azar + Connor Leahy podcast: “Debating the existential risk of AI, with Connor Leahy”
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 3pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
Apr 29, 2024 • 4min

Episode #26 TRAILER - “Pause AI Or We All Die” Holly Elmore Interview, For Humanity: An AI Safety Podcast

Please Donate Here To Help Promote This Show: https://www.paypal.com/paypalme/forhumanitypodcast

In the episode #26 TRAILER, host John Sherman and Pause AI US founder Holly Elmore talk about AI risk. They discuss how AI surprised everyone by advancing so fast, what it’s like for employees at OpenAI working on safety, and why it’s so hard for people to imagine what they can’t imagine.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

TIMESTAMPS:
The surprise of rapid progress in AI (00:00:00) A former OpenAI employee's perspective on the unexpected speed of AI development and its impact on safety.
Concerns about OpenAI's focus on safety (00:01:00) The speaker's decision to start his own company due to the lack of sufficient safety focus within OpenAI and the belief in the inevitability of advancing AI technology.
Differing perspectives on AI risks (00:01:53) Discussion about the urgency and approach to AI development, including skepticism and the limitations of human imagination in understanding AI risks.

RESOURCES:
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 3pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
Apr 24, 2024 • 1h 51min

Episode #25 - “Does The AI Safety Movement Have It All Wrong?” Dr. Émile Torres Interview, For Humanity: An AI Safety Podcast

Episode #25 - “Does The AI Safety Movement Have It All Wrong?” Dr. Émile Torres Interview, For Humanity: An AI Safety Podcast
FULL INTERVIEW STARTS AT (00:08:20)

DONATE HERE TO HELP PROMOTE THIS SHOW: https://www.paypal.com/paypalme/forhumanitypodcast

In episode #25, host John Sherman and Dr. Émile Torres explore the concept of humanity's future and the rise of artificial general intelligence (AGI) and machine superintelligence. Dr. Torres lays out their view that the AI safety movement has it all wrong on existential threat. Concerns are voiced about the potential risks of advanced AI, questioning the effectiveness of AI safety research and the true intentions of companies like OpenAI. Dr. Torres supports a full "stop AI" movement, doubting the benefits of pursuing such powerful AI technologies and highlighting the potential for catastrophic outcomes if AI systems become misaligned with human values. The discussion also touches on the urgency of solving AI control problems to avoid human extinction.

Émile P. Torres is a philosopher whose research focuses on existential threats to civilization and humanity. They have published widely in the popular press and scholarly journals, with articles appearing in the Washington Post, Aeon, Bulletin of the Atomic Scientists, Metaphilosophy, Inquiry, Erkenntnis, and Futures.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

TIMESTAMPS:
The definition of human extinction and AI Safety Podcast introduction (00:00:00)
Paul Christiano's perspective on AI risks and debate on AI safety (00:03:51)
Interview with Dr. Émile Torres on transhumanism, AI safety, and historical perspectives (00:08:17)
Challenges to AI safety concerns and the speculative nature of AI arguments (00:29:13)
AI's potential catastrophic risks and comparison with climate change (00:47:49)
Defining intelligence, AGI, and unintended consequences of AI (00:56:13)
Catastrophic risks of advanced AI and perspectives on AI safety (01:06:34)
Inconsistencies in AI predictions and the threats of advanced AI (01:15:19)
Curiosity in AGI and the ethical implications of building superintelligent systems (01:22:49)
Challenges of discussing AI safety and effective tools to convince the public (01:27:26)
Tangible harms of AI and hopeful perspectives on the future (01:37:00)
Parental instincts and the need for self-sacrifice in AI risk action (01:43:53)

RESOURCES:
THE TWO MAIN PAPERS ÉMILE LOOKS TO IN MAKING THEIR CASE:
“Against the singularity hypothesis” by David Thorstad: https://philpapers.org/archive/THOATS-5.pdf
“Challenges to the Omohundro-Bostrom framework for AI motivations” by Olle Häggström: https://www.math.chalmers.se/~olleh/ChallengesOBframeworkDeanonymized.pdf
Paul Christiano on Bankless: “How We Prevent the AIs from Killing Us with Paul Christiano”
Émile Torres’ Truthdig articles: https://www.truthdig.com/author/emile-p-torres/
Émile Torres’ latest book, Human Extinction: https://www.amazon.com/Human-Extinction-Annihilation-Routledge-Technology/dp/1032159065
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Apr 22, 2024 • 3min

Episode #25 TRAILER  - “Does The AI Safety Movement Have It All Wrong?” Dr. Émile Torres Interview, For Humanity: An AI Safety Podcast

DONATE HERE TO HELP PROMOTE THIS SHOW: https://www.paypal.com/paypalme/forhumanitypodcast

Episode #25 TRAILER - “Does The AI Safety Movement Have It All Wrong?” Dr. Émile Torres Interview, For Humanity: An AI Safety Podcast

In the episode #25 TRAILER, host John Sherman and Dr. Émile Torres explore the concept of humanity's future and the rise of artificial general intelligence (AGI) and machine superintelligence. Dr. Torres lays out their view that the AI safety movement has it all wrong on existential threat. Concerns are voiced about the potential risks of advanced AI, questioning the effectiveness of AI safety research and the true intentions of companies like OpenAI. Dr. Torres supports a full "stop AI" movement, doubting the benefits of pursuing such powerful AI technologies and highlighting the potential for catastrophic outcomes if AI systems become misaligned with human values. The discussion also touches on the urgency of solving AI control problems to avoid human extinction.

Émile P. Torres is a philosopher whose research focuses on existential threats to civilization and humanity. They have published widely in the popular press and scholarly journals, with articles appearing in the Washington Post, Aeon, Bulletin of the Atomic Scientists, Metaphilosophy, Inquiry, Erkenntnis, and Futures.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

TIMESTAMPS:
Defining Humanity and Future Descendants (00:00:00) Discussion on the concept of humanity, future descendants, and the implications of artificial general intelligence (AGI) and machine superintelligence.
Concerns about AI Safety Research (00:01:11) Expressing concerns about the approach of AI safety research and skepticism about the intentions of companies like OpenAI.
Questioning the Purpose of Building Advanced AI Systems (00:02:23) Expressing skepticism about the purpose and potential benefits of building advanced AI systems and being sympathetic to the "stop AI" movement.

RESOURCES:
Émile Torres’ Truthdig articles: https://www.truthdig.com/author/emile-p-torres/
Émile Torres’ latest book: Human Extinction (Routledge Studies in the History of Science, Technology and Medicine), 1st Edition: https://www.amazon.com/Human-Extinction-Annihilation-Routledge-Technology/dp/1032159065
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 3pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
Apr 17, 2024 • 1h 22min

Episode #24 - “YOU can help save the world from AI Doom” For Humanity: An AI Safety Podcast

In episode #24, host John Sherman and Nonlinear co-founder Kat Woods discuss the critical need to prioritize AI safety in the face of developing superintelligent AI. Kat shares her personal transformation from being a skeptic to becoming an advocate for AI safety. They explore the idea that AI could pose a near-term threat rather than just a long-term concern. They also discuss the importance of prioritizing AI safety over other philanthropic endeavors and the need for talented individuals to work on this issue. Kat highlights potential ways in which AI could harm humanity, such as creating super viruses or starting a nuclear war. They address common misconceptions, including the belief that AI will need humans or that it will be human-like. Overall, the conversation emphasizes the urgency of addressing AI safety and the need for greater awareness and action.

The conversation delves into the dangers of AI and the need for AI safety. The speakers discuss the potential risks of creating superintelligent AI that could harm humanity. They highlight the ethical concerns of creating AI that could suffer and the moral responsibility we have towards these potential beings. They also discuss the importance of funding AI safety research and the need for better regulation. The conversation ends on a hopeful note, with the speakers expressing optimism about the growing awareness and concern regarding AI safety.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

TIMESTAMPS:
AI Safety Urgency (00:00:00) Emphasizing the immediate need to focus on AI safety.
Superintelligent AI World (00:00:50) Considering the impact of AI smarter than humans.
AI Safety Charities (00:02:37) The necessity for more AI safety-focused charities.
Personal AI Safety Advocacy Journey (00:10:10) Kat Woods' transformation into an AI safety advocate.
AI Risk Work Encouragement (00:16:03) Urging skilled individuals to tackle AI risks.
AI Safety's Global Impact (00:17:06) AI safety's pivotal role in global challenges.
AI Safety Prioritization Struggles (00:18:02) The difficulty of making AI safety a priority.
Wealthy Individuals and AI Safety (00:19:55) Challenges for the wealthy in focusing on AI safety.
Superintelligent AI Threats (00:23:12) Potential global dangers posed by superintelligent AI.
Limits of Imagining Superintelligent AI (00:28:02) The struggle to fully grasp superintelligent AI's capabilities.
AI Containment Risks (00:32:19) The problem of effectively containing AI.
AI's Human-Like Risks (00:33:53) Risks of AI with human-like qualities.
AI Dangers (00:34:20) Potential ethical and safety risks of AI.
AI Ethical Concerns (00:37:03) Ethical considerations in AI development.
Nonlinear's Role in AI Safety (00:39:41) Nonlinear's contributions to AI safety work.
AI Safety Donations (00:41:53) Guidance on supporting AI safety financially.
Effective Altruism and AI Safety (00:49:43) The relationship between effective altruism and AI safety.
AI Safety Complexity (00:52:12) The intricate nature of AI safety issues.
AI Superintelligence Urgency (00:53:52) The critical timing and power of AI superintelligence.
AI Safety Work Perception (00:56:06) Changing the image of AI safety efforts.
AI Safety and Government Regulation (00:59:23) The potential for regulatory influence on AI safety.
Entertainment's AI Safety Role (01:04:24) How entertainment can promote AI safety awareness.
AI Safety Awareness Progress (01:05:37) Growing recognition and response to AI safety.
AI Safety Advocacy Funding (01:08:06) The importance of financial support for AI safety advocacy.
Effective Altruists and Rationalists Views (01:10:22) The stance of effective altruists and rationalists on AI safety.
AI Risk Marketing (01:11:46) The case for using marketing to highlight AI risks.

RESOURCES:
Nonlinear: https://www.nonlinear.org/
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 3pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
