Shazeda Ahmed, a Chancellor’s Postdoctoral Fellow at UCLA, dives into AI safety's geopolitical landscape, particularly the U.S.-China relationship. She critiques the urgency surrounding AI safety and reveals how it is often fueled by anti-China sentiment. The discussion covers the implications of surveillance technologies, the complexities of AI ethics, and the intersection of corporate interests with safety efforts. Ahmed also highlights the historical influence of eugenics on current AI policies, urging more nuanced conversations that include marginalized perspectives.
Shazeda Ahmed emphasizes that the field of AI safety is often driven by a manufactured urgency linked to anti-China sentiments.
The podcast critiques Western simplifications of China's technological advancement, advocating for a more nuanced understanding of its governance structures.
Ahmed discusses the intersection between effective altruism and AI, highlighting how well-intentioned approaches can lead to unintended negative consequences.
Deep dives
The Significance of Grounded Research in Technology Politics
Understanding technology politics requires close investigation of the power dynamics that underpin new technologies. The podcast features Shazeda Ahmed, who emphasizes fieldwork experience and learning Mandarin as prerequisites for effective research on China. Her study of the social credit system reveals how Western interpretations often oversimplify the relationships between the Chinese government and technology companies. This perspective critiques how assumptions about authoritarianism can obscure the reality of a governance model built around citizen behavior and compliance.
Critique of AI Safety Narratives
The conversation delves into the concept of AI safety and the sense of urgency around existential risks posed by AI systems. Ahmed examines why certain researchers focus on the speculative dangers of advanced AI, a focus often fueled by anti-China sentiment that distorts public understanding of the technology and flattens the intricate realities of global AI development. The ongoing debate around these risks also reflects broader themes of technological competition and regulation.
The Intersection of Effective Altruism and AI Development
The podcast highlights the relationship between effective altruism and AI, discussing how values and historical contexts shape individual and institutional responses to technological advancements. Ahmed explores how those working on AI from an altruistic perspective might endorse specific approaches that could lead to unintended consequences. This approach intersects with discussions on how policies can protect people while navigating complex ethical considerations. The tension between altruism and the practicality of technology underscores the need for informed debate around societal impacts.
Revisiting Western Narratives on Chinese Technology
Ahmed critiques the simplification of Chinese technological development within Western narratives, particularly regarding social credit systems and emotion recognition technologies. By examining documentation and state systems, she identifies the cultural and legal frameworks that guide these technologies in China, contrasting them with their portrayal in Western media. This comparison reveals that various aspects of governance and technology are embedded within legal structures rather than being merely dystopian tools. Such insights encourage a deeper rethink of how technology's implications are understood globally.
Exploring the Cultural Dimensions of AI and Surveillance
The podcast discusses how cultural acceptance of surveillance technologies varies between countries, contrasting the extensive use of CCTV in the UK with the alarm expressed about China's surveillance measures. Ahmed encourages examining the broader socio-political contexts surrounding surveillance and emotion recognition technologies, while critiquing the moral high ground Western nations often claim over non-Western practices. The conversation promotes an understanding of technological governance as interconnected across geographical boundaries, and closes with a call to recognize the shared narratives that shape our collective futures with technology.
Are you tired of hearing the phrase ‘AI Safety’ and rolling your eyes? Do you also sometimes think… okay but what is technically wrong with advocating for ‘safer’ AI systems? Do you also wish we could have more nuanced conversations about China and AI?
In this episode Shazeda Ahmed goes deep on the field of AI Safety, explaining that it is a community that is propped up by its own spiral of reproduced urgency; and that so much of it is rooted in American anti-China sentiment. Read: the fear that the big scary authoritarian country will build AGI before the US does, and destroy us all.
**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**
Shazeda Ahmed is a Chancellor’s Postdoctoral fellow at the University of California, Los Angeles. Shazeda completed her Ph.D. at UC Berkeley’s School of Information in 2022, and was previously a postdoctoral research fellow at Princeton University’s Center for Information Technology Policy. She has been a research fellow at Upturn, the Mercator Institute for China Studies, the University of Toronto's Citizen Lab, Stanford University’s Human-Centered Artificial Intelligence (HAI) Institute, and NYU's AI Now Institute.
Shazeda’s research investigates relationships between the state, the firm, and society in the US-China geopolitical rivalry over AI, with implications for information technology policy and human rights. Her work draws from science and technology studies, ranging from her dissertation on the state-firm co-production of China’s social credit system, to her research on the epistemic culture of the emerging field of AI safety.