Overthink

AI Safety with Shazeda Ahmed

Apr 9, 2024
Dr. Shazeda Ahmed discusses the philosophy of AI safety, from visions of AI utopia versus dystopia to aligning AI with human values for positive outcomes. The conversation covers ethics, AI risks such as global labor exploitation, and the need for human involvement in content moderation to prevent harmful content.
INSIGHT

Origins of AI Safety Concerns

  • The AI safety community largely emerged from the Effective Altruism and longtermism movements, which focus on speculative existential risks from superintelligent AI.
  • Most fears stem from concerns about AI alignment rather than dystopian machines, and are heavily shaped by tech philanthropy and hype.
INSIGHT

Who Shapes AI Safety?

  • AI safety experts come mainly from backgrounds in computer science, philosophy (particularly utilitarianism), engineering, math, and physics.
  • Funding from disgraced billionaires and philanthropies helped rapidly professionalize AI safety, despite skepticism from traditional academia.
INSIGHT

Effective Altruism's Contrasting Faces

  • Effective Altruism combines a public face that prioritizes immediate causes with a 'core' focused on speculative risks like AI-driven extinction.
  • This creates internal tensions and fuels critiques that the movement maintains the status quo while appearing radical.