Shazeda Ahmed, a Chancellor’s Postdoctoral Fellow at UCLA, dives into AI safety's geopolitical landscape, particularly the U.S.-China relationship. She critiques the urgency surrounding AI safety and reveals how it is often fueled by anti-China sentiment. The discussion covers the implications of surveillance technologies, the complexities of AI ethics, and the intersection of corporate interests with safety efforts. Ahmed also highlights the historical influence of eugenics in shaping current AI policies, calling for more nuanced conversations that include marginalized perspectives.
ANECDOTE
Social Credit System Reality
Western media portrays China's social credit system as a dystopian plan.
In reality, it's a vague plan with unclear execution, aimed at increasing administrative law compliance.
INSIGHT
Tech Companies and State Collaboration
Chinese tech companies want to appear cooperative with state projects but protect their interests.
They aim to avoid overregulation and data sharing with rivals, similar to US companies.
ANECDOTE
Shifting Research Focus
Shazeda Ahmed's initial research plan focused on user experience of the social credit system.
She shifted focus after discovering most citizens don't interact with it or are not blacklisted.
In Superintelligence, Nick Bostrom delves into the implications of creating a superintelligence that could surpass human intelligence in all domains. He discusses potential dangers, such as the loss of human control over such powerful entities, and presents strategies to ensure that superintelligences align with human values. The book examines the 'AI control problem' and the need to endow future machine intelligence with positive values to prevent existential risks.
Are you tired of hearing the phrase ‘AI Safety’ and rolling your eyes? Do you also sometimes think… okay but what is technically wrong with advocating for ‘safer’ AI systems? Do you also wish we could have more nuanced conversations about China and AI?
In this episode, Shazeda Ahmed goes deep on the field of AI Safety, explaining that it is a community propped up by its own spiral of reproduced urgency, and that much of it is rooted in American anti-China sentiment. Read: the fear that the big scary authoritarian country will build AGI before the US does, and destroy us all.
**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**
Shazeda Ahmed is a Chancellor’s Postdoctoral Fellow at the University of California, Los Angeles. She completed her Ph.D. at UC Berkeley’s School of Information in 2022, and was previously a postdoctoral research fellow at Princeton University’s Center for Information Technology Policy. She has been a research fellow at Upturn, the Mercator Institute for China Studies, the University of Toronto's Citizen Lab, Stanford University’s Human-Centered Artificial Intelligence (HAI) Institute, and NYU's AI Now Institute.
Shazeda’s research investigates relationships between the state, the firm, and society in the US-China geopolitical rivalry over AI, with implications for information technology policy and human rights. Her work draws from science and technology studies, ranging from her dissertation on the state-firm co-production of China’s social credit system, to her research on the epistemic culture of the emerging field of AI safety.