
Overthink AI Safety with Shazeda Ahmed
Apr 9, 2024

Dr. Shazeda Ahmed discusses the philosophy behind AI safety, from competing visions of AI utopia and dystopia to aligning AI with human values for positive outcomes. The conversation covers ethics, AI risks such as global labor exploitation, and the need for human involvement in content moderation to prevent harmful content.
Origins of AI Safety Concerns
- The AI safety community largely emerged from the Effective Altruism and longtermism movements, which focus on speculative existential risks from superintelligent AI.
- Most fears stem from concerns about AI alignment rather than dystopian machines, and have been shaped heavily by tech philanthropy and hype.
Who Shapes AI Safety?
- AI safety experts come mainly from backgrounds in computer science, philosophy (especially utilitarianism), engineering, math, and physics.
- Funding from disgraced billionaires and philanthropies helped rapidly professionalize AI safety, despite skepticism in traditional academia.
Effective Altruism's Contrasting Faces
- Effective Altruism combines a public face that prioritizes immediate causes with a 'core' focused on speculative risks like AI-driven extinction.
- This creates internal tensions and invites the critique that the movement maintains the status quo while appearing radical.
