In episode #24, host John Sherman and Nonlinear co-founder Kat Woods discuss the critical need to prioritize AI safety in the face of developing superintelligent AI. Kat shares her personal transformation from skeptic to advocate for AI safety, and the two explore the idea that AI could pose a near-term threat rather than just a long-term concern.
They also discuss the importance of prioritizing AI safety over other philanthropic endeavors and the need for talented individuals to work on this issue. Kat highlights ways AI could harm humanity, such as engineering super viruses or starting a nuclear war, and they address common misconceptions, including the belief that AI will need humans or that it will be human-like.
Overall, the conversation emphasizes the urgency of addressing AI safety and the need for greater awareness and action. The speakers highlight the ethical concerns of creating AI that could suffer and the moral responsibility we have toward such potential beings, along with the importance of funding AI safety research and the need for better regulation. The conversation ends on a hopeful note, with both expressing optimism about the growing awareness of and concern for AI safety.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as two years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
TIMESTAMPS:
AI Safety Urgency (00:00:00) Emphasizing the immediate need to focus on AI safety.
Superintelligent AI World (00:00:50) Considering the impact of AI smarter than humans.
AI Safety Charities (00:02:37) The necessity for more AI safety-focused charities.
Personal AI Safety Advocacy Journey (00:10:10) Kat Woods' transformation into an AI safety advocate.
AI Risk Work Encouragement (00:16:03) Urging skilled individuals to tackle AI risks.
AI Safety's Global Impact (00:17:06) AI safety's pivotal role in global challenges.
AI Safety Prioritization Struggles (00:18:02) The difficulty of making AI safety a priority.
Wealthy Individuals and AI Safety (00:19:55) Challenges for the wealthy in focusing on AI safety.
Superintelligent AI Threats (00:23:12) Potential global dangers posed by superintelligent AI.
Limits of Imagining Superintelligent AI (00:28:02) The struggle to fully grasp superintelligent AI's capabilities.
AI Containment Risks (00:32:19) The problem of effectively containing AI.
AI's Human-Like Risks (00:33:53) Risks of AI with human-like qualities.
AI Dangers (00:34:20) Potential ethical and safety risks of AI.
AI Ethical Concerns (00:37:03) Ethical considerations in AI development.
Nonlinear's Role in AI Safety (00:39:41) Nonlinear's contributions to AI safety work.
AI Safety Donations (00:41:53) Guidance on supporting AI safety financially.
Effective Altruism and AI Safety (00:49:43) The relationship between effective altruism and AI safety.
AI Safety Complexity (00:52:12) The intricate nature of AI safety issues.
AI Superintelligence Urgency (00:53:52) The critical timing and power of AI superintelligence.
AI Safety Work Perception (00:56:06) Changing the image of AI safety efforts.
AI Safety and Government Regulation (00:59:23) The potential for regulatory influence on AI safety.
Entertainment's AI Safety Role (01:04:24) How entertainment can promote AI safety awareness.
AI Safety Awareness Progress (01:05:37) Growing recognition and response to AI safety.
AI Safety Advocacy Funding (01:08:06) The importance of financial support for AI safety advocacy.
Effective Altruists and Rationalists Views (01:10:22) The stance of effective altruists and rationalists on AI safety.
AI Risk Marketing (01:11:46) The case for using marketing to highlight AI risks.
RESOURCES:
Nonlinear: https://www.nonlinear.org/
Best Account on Twitter: AI Notkilleveryoneism Memes
JOIN THE FIGHT, help Pause AI!!!!
Join the Pause AI Weekly Discord Thursdays at 3pm EST
22 Word Statement from Center for AI Safety