Dr. Shazeda Ahmed discusses the philosophy of AI safety, from visions of AI utopia versus dystopia to aligning AI with human values for positive outcomes. The conversation delves into ethics, AI risks such as global labor exploitation, and the need for human involvement in content moderation to prevent harmful content.
Duration: 57:06
INSIGHT
Origins of AI Safety Concerns
The AI safety community largely emerged from the Effective Altruism and longtermism movements, which focus on speculative existential risks from superintelligent AI.
Most of its fears stem from concerns about AI alignment rather than from dystopian machines, and they have been heavily shaped by tech philanthropy and hype.
INSIGHT
Who Shapes AI Safety?
AI safety experts come mainly from computer science, philosophy (utilitarianism), engineering, math, and physics backgrounds.
Funding from disgraced billionaires and philanthropies helped rapidly professionalize AI safety despite skepticism in traditional academia.
INSIGHT
Effective Altruism's Contrasting Faces
Effective Altruism combines a public face that prioritizes immediate causes with a 'core' focused on speculative risks like AI-driven extinction.
This creates internal tensions and invites the critique that the movement preserves the status quo while appearing radical.
Welcome your robot overlords! In episode 101 of Overthink, Ellie and David speak with Dr. Shazeda Ahmed, a specialist in AI safety, to dive into the philosophy guiding artificial intelligence. With the rise of LLMs like ChatGPT, the lofty utilitarian principles of Effective Altruism have taken the tech-world spotlight by storm. Many who work on AI safety and ethics worry about the dangers of AI, from how automation might put entire categories of workers out of a job to how future forms of AI might pose a catastrophic “existential risk” for humanity as a whole. And yet, optimistic CEOs portray AI as the beginning of an easy, technology-assisted utopia. Who is right about AI: the doomers or the utopians? Whose voices are part of the conversation in the first place? Is AI risk talk spearheaded by well-meaning experts or investor billionaires? And can philosophy guide discussions about AI toward the right thing to do?
Nick Bostrom, Superintelligence
Adrian Daub, What Tech Calls Thinking
Virginia Eubanks, Automating Inequality
Mollie Gleiberman, “Effective Altruism and the strategic ambiguity of ‘doing good’”
Matthew Jones and Chris Wiggins, How Data Happened
William MacAskill, What We Owe the Future
Toby Ord, The Precipice
Inioluwa Deborah Raji et al., “The Fallacy of AI Functionality”
Inioluwa Deborah Raji and Roel Dobbe, “Concrete Problems in AI Safety, Revisited”
Peter Singer, Animal Liberation
Amia Srinivasan, “Stop the Robot Apocalypse”