Trust in Tech: an Integrity Institute Member Podcast

Integrity Institute
Mar 24, 2023 • 46min

Trust in Tech, Episode 13: Transparency Reports, Theatre and Power with Nima Mozhgani

Transparency is at the forefront of the discourse today, with TikTok CEO Shou Zi Chew testifying in front of Congress (March 23, 2023) and GPT-4 releasing a system card. However, it seemed both TikTok and Congress were running theatre for their own self-interests.

Nima Mozhgani works at Snap on the policy team, running its transparency reporting function. He holds a Bachelor's Degree in Economics and Political Science from Columbia University, with a specialization in Middle Eastern, South Asian, and African Studies. He also serves as Vice-Chair of the Transparency & Accountability Working Group of the Tech Coalition, an alliance of global tech companies working together to combat child sexual exploitation and abuse online.

Nima and Talha Baig, a former content moderation engineer, discuss one of the mechanisms of transparency, the transparency report: what it is and how it can hold the powerful accountable. They talk about why transparency matters and the different forms it takes, including traditional transparency reports, TikTok's transparency center, and GPT-4's system card. They also discuss algorithmic transparency, transparency in a global setting, and why advertisers love transparency reports.

Links:
TRUST framework from the Tech Coalition

Credits:
Produced by Talha Baig
Music by Zhao Shen
Special thanks to Rachel, Sean, Cass, and Sahar for their continued support

Disclaimer: The views in this episode only represent the views of the people involved in the recording of the episode. They do not represent Snap or any other entity's views.
Mar 18, 2023 • 1h 6min

Trust in Tech, Episode 12: Deepfakes, Biases and AI Hegemony with Claire Boine

Deepfakes have gained steam on video platforms including TikTok and Reels: for example, we hear Obama, Trump, and Biden ranking their favorite rappers and even playing Dungeons & Dragons. Does this technology have potentially harmful effects?

This episode features Claire Boine, an expert in AI law, in conversation with Integrity Institute member Talha Baig, a machine learning (ML) engineer. Claire is a PhD candidate in AI law at the University of Ottawa, a Research Associate at the Artificial and Natural Intelligence Toulouse Institute, and a member of the Accountable AI in a Global Context Research Chair at uOttawa. Claire also runs a nonprofit organization whose goal is to help senior professionals motivated by evidence and reason transition into high-impact fields, including AI.

We discuss how deepfakes present an asymmetrical power dynamic, and some mitigations we can put in place, including data trusts: collectives that put data back in the hands of users. We also ponder the use of simulacra to replace dead actors, and discuss whether we can resurrect dead philosophers through deep learning. Towards the end of the episode, we surmise how chatbots develop bias, and even discuss whether AI is sentient and whether that matters.

Disclaimer: The views in this episode only represent the views of the people involved in the recording of the episode. They do not represent Meta's or any other entity's views.

Links:
Sabelo Mhlambi: From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance [link]
Kevin Roose: Bing's A.I. Chat: 'I Want to Be Alive. 😈' [link]

Credits:
Produced by Talha Baig
Music by Zhao Shen
Special thanks to Rachel, Sean, Cass, and Sahar for their continued support
Mar 9, 2023 • 1h

Trust in Tech, Episode 11: The Impact of Social Media on the Past and Present: History, Hate, and Techno-imperialism

This episode features Jason Steinhauer, author of "History, Disrupted: How Social Media and the World Wide Web Have Changed the Past", and Integrity Institute member Theodora Skeadas, a public policy professional with 10 years of experience at the intersection of technology, society, and safety. Theo has worked at Twitter and Booz Allen Hamilton, and is currently president of Harvard W3D: Women in Defense, Diplomacy, and Development.

In recent years, social media has been a breeding ground for disinformation, hate speech, and the spread of harmful ideologies. Jason argues that social media has birthed a new genre of historical communication that he calls "e-history": a user-centric, instantly gratifying version of history that often avoids the true complexity of the past. Theo counters that social media and Wikipedia are non-gate-kept institutions that have allowed for the democratization of history, so both the winners and the losers write the past.

Disclaimer: The views in this episode only represent the views of the people involved in the recording of the episode. They do not represent Meta's or any other entity's views.

Links:
Jason's book: History, Disrupted: How Social Media and the World Wide Web Have Changed the Past
Jason's Substack: History Club
Harvard W3D: Women in Defense, Diplomacy and Development newsletter: Threo
All Tech is Human: website

Credits:
Produced by Talha Baig
Music by Zhao Shen
Special thanks to Rachel, Sean, Cass, and Sahar for their continued support
Mar 2, 2023 • 1h 2min

Trust in Tech, Episode 10: Counter-terrorism on Tech Platforms w/ GIFCT Director of Technology Tom Thorley

Welcome to the Trust in Tech podcast, a project by the Integrity Institute, a community-driven think tank which advances the theory and practice of protecting the social internet, powered by our community of integrity professionals.

In this episode, Institute member Talha Baig is in conversation with Tom Thorley of the Global Internet Forum to Counter Terrorism (GIFCT). The Forum was established to foster technical collaboration among member companies, advance relevant research, and share knowledge with smaller platforms.

Tom Thorley is the Director of Technology at GIFCT, where he delivers cross-platform technical solutions for GIFCT members. He worked for over a decade at the British government's signals intelligence agency, GCHQ, where he specialized in issues at the nexus of technology and human behavior. As a passionate advocate for responsible technology, Tom is a member of the board of the SF Bay Area Internet Society Chapter, is a mentor with All Tech Is Human and Coding It Forward, and also volunteers with America On Tech and DataKind.

Tom and Talha discuss the historical context behind the founding of GIFCT, the difficulties of cross-platform content moderation, and fighting terrorism over encrypted networks while maintaining human rights.

As a reminder, the views stated in this episode are not affiliated with any organization and only represent the views of the individuals. We hope you enjoy the show.

Credits:
If you enjoyed today's conversation, please share this episode with your friends so we can continue making episodes like this.
Today's episode was produced by Talha Baig
Music by Zhao Shen
Special thanks to Sahar, Cass, Rachel, and Sean for their continued support
Feb 22, 2023 • 46sec

Trust in Tech, Episode 9: Positioning Generative AI to Empower Artists

Welcome to the Trust in Tech podcast, a project by the Integrity Institute, a community-driven think tank which advances the theory and practice of protecting the social internet, powered by our community of integrity professionals.

In this episode, Institute co-founder Jeff Allen and Institute member Derek Slater discuss the Creative Commons statement in favor of generative AI. Derek is a founding partner at Proteus Strategies and, among his various hats, was formerly Google's Global Director of Information Policy.

As context: on Feb 6, 2023, Creative Commons came out with a statement in favor of generative AI, claiming: "Just as people learn from past works, generative AI is trained on previous works, analyzing past materials in order to extract underlying ideas and other information in order to build new works."

Jeff and Derek reflect on this statement, discussing how past platforms have failed and succeeded at working with creators, and musing on what the future of work could look like.

As a reminder, the views stated in this episode are not affiliated with any organization and only represent the views of the individuals. We hope you enjoy the show.

Credits:
If you enjoyed today's conversation, please share this episode with your friends so we can continue making episodes like this.
Today's episode was produced by Talha Baig
Music by Zhao Shen
Special thanks to Sahar, Cass, Rachel, and Sean for their continued support
Feb 15, 2023 • 35sec

Trust in Tech, Episode 8: Hiring and growing trust & safety teams at small companies

Welcome to the Trust in Tech podcast, a project by the Integrity Institute, a community-driven think tank which advances the theory and practice of protecting the social internet, powered by our community of integrity professionals.

In this episode, two Trust & Safety leaders discuss what it's really like to build teams at small companies. We discuss the pros and cons of working at a small company, what hiring managers look for, how small teams are structured, and career growth opportunities.

Alice Hunsberger, VP of CX at Grindr, interviews Colleen Mearn, who currently leads Trust & Safety at Clubhouse. Previously, Colleen was the Global Vertical Lead at YouTube for Harmful and Dangerous policies. In both of these roles, she has loved figuring out how to scale global policies and build high-performing teams.

Timestamps:
0:30 - Intro / Colleen's background
2:30 - Tech policy jobs
5:26 - Downsides of Big Tech
6:30 - Collaborating cross-functionally, working with product teams
9:45 - Building teams at small companies
12:30 - Types of people who succeed at small companies
16:00 - Career growth
17:00 - Growing a team, which roles to prioritize
20:45 - The hiring process at small companies
23:15 - What hiring managers at small companies look for
24:45 - Cover letter controversy
27:20 - Pivoting to Trust and Safety mid-career vs. starting as a content moderator
34:30 - Outro

Credits:
Trust in Tech is hosted by Alice Hunsberger, and produced by Talha Baig.
Edited by Alice Hunsberger.
Music by Zhao Shen.
Special thanks to Sahar Massachi, Cass Marketos, Rachel Fagen, and Sean Wang.
Jan 30, 2023 • 40min

Trust in Tech, Episode 7: XCheck — Policing the Elite of Facebook Users

Welcome to the Trust in Tech podcast, a project by the Integrity Institute, a community-driven think tank which advances the theory and practice of protecting the social internet, powered by our community of integrity professionals.

In this episode, Integrity Institute member Lauren Wagner and fellow Karan Lala discuss Meta's cross-check program and the Oversight Board's policy advisory opinion. They cover how Meta treats its most influential and important users, the history and technical details of the cross-check program, the public response to its leak, what the Oversight Board found with respect to Meta's scaled content moderation, and what the company could do to address its gaps going forward.

Lauren Wagner is a venture capitalist and a fellow at the Berggruen Institute researching trust and safety. She previously worked at Meta, where she developed product strategy to tackle misinformation at scale and built privacy-protected data-sharing products. Karan Lala is currently a J.D. candidate at the University of Chicago Law School, working at the intersection of policy and technology. He was a software engineer on Facebook's Civic Integrity team, where he led efforts to detect and enforce against abusive assets and sensitive entities in the civic space.

Timestamps:
0:00: Intro
1:36: Overview of the XCheck program
7:53: Data-sharing with the Oversight Board
11:01: XCheck around the world
12:59: The Oversight Board's findings
19:25: Public response to the leak
22:40: Recommendations and fixes
34:02: What should the future of XCheck look like?

Credits:
Trust in Tech is hosted by Alice Hunsberger, and produced by Talha Baig.
Edited by Alice Hunsberger.
Music by Zhao Shen.
Special thanks to Sahar Massachi, Cass Marketos, Rachel Fagen, and Sean Wang.
Jan 25, 2023 • 55min

Trust in Tech, Episode 6: Reconciling Capitalism & Community

Welcome to the Trust in Tech podcast, a project by the Integrity Institute, a community-driven think tank which advances the theory and practice of protecting the social internet, powered by our community of integrity professionals.

In this sixth episode, Integrity Institute member Alice Hunsberger and Community Advisor Cassandra Marketos discuss digital spaces and community building. They discuss how to live in a world where community is not the default; whether being anonymous in online spaces is a good thing; and how product design and perception can influence the legitimacy of the content and community of a product.

Cassandra "Cass" Marketos has a varied background and a diverse range of skills. She started out as a product manager for the music label Insound, then became the first employee at Kickstarter, where she worked on everything related to editorial and community. After her time there, Cass was deputy director of digital outbound during the Obama administration. She now serves as Community Advisor on the Integrity Institute staff, making our community at the Integrity Institute feel like home. Cass has launched several nonprofits, including Dollar a Day, and now builds her local community with compost.

Timestamps:
0:00: Intro
0:50: What is community?
2:30: Business and community
11:00: Being idealistic and realistic
12:50: Is Discord the future?
19:50: Anonymity in online spaces
25:00: Universal ToS is impossible
31:20: Social media as road rage
34:10: Building community in real and online life
46:00: Urban Dictionary and product design legitimizing content
51:20: Having a community advocate on your team

Credits:
Trust in Tech is hosted by Alice Hunsberger, and produced by Talha Baig.
Edited by Alice Hunsberger.
Music by Zhao Shen.
Special thanks to Sahar Massachi, Cass Marketos, Rachel Fagen, and Sean Wang.
Jan 18, 2023 • 46min

Trust in Tech, Episode 5: Keeping the Metaverse Safe

In this fifth episode, Integrity Institute members Talha Baig and Lizzy Donahue talk integrity in the metaverse. The conversation ranges from defining what the metaverse is to discussing whether it should even exist! We also discuss other fun topics, such as integrity issues with augmented reality and dating in the metaverse.

Lizzy is an experienced integrity professional who worked at Meta for seven years, where she pioneered machine learning to proactively detect suicidal intent, worked on integrity at Oculus Rift Home, and kept us safe on Horizon Worlds. On top of that, Lizzy was a "Global Social Benefit" fellow at SCU, where she won the top prize at her senior design conference for building a tool to help social enterprises train employees and customers. She is now working as a Trust and Safety engineer at Clubhouse.

Talha steps in for Alice Hunsberger as host. Talha worked at Meta for the past three years as an ML engineer on Marketplace Integrity and is currently the producer of this podcast.

Disclaimer: The views in this episode only represent the views of the people involved in the recording of the episode. They DO NOT represent Meta's or any other entity's views.

Timestamps:
0:00: Intro
1:45: What is the metaverse?
3:50: Integrity in the metaverse
6:15: Privacy in the metaverse
9:50: Should children be allowed in the metaverse?
14:30: Overwatch
18:50: Body language in the metaverse
24:50: Self-governance in the metaverse
27:45: Decentralized recording
29:45: Is the metaverse good for society?
38:10: Dating in the metaverse
40:50: Integrity for augmented reality
44:55: Credits

Credits:
Trust in Tech is hosted by Alice Hunsberger, and produced by Talha Baig.
Edited by Alice Hunsberger.
Music by Zhao Shen.
Special thanks to Sahar Massachi and Cassandra Marketos for their continued support, and to all the members of the Integrity Institute.
Dec 9, 2022 • 1h 11min

Trust in Tech, Episode 4: Preventing and Reacting to Burnout

In this episode, Integrity Institute member Alice Hunsberger talks with Institute cofounders Sahar Massachi and Jeff Allen about the issues around integrity in tech and why the Integrity Institute was founded, how to define integrity work, and why integrity teams are the true long-term growth teams of tech companies.

We take a bit of a deep dive into hate speech, talking about several reasons why it's important to remove it, and about the dreaded death spiral that can happen when platforms don't invest in integrity properly. We also discuss why building social media companies is an ethical endeavor, and the work the Integrity Institute has done to establish a code of ethics and a Hippocratic oath for integrity workers. And we touch on Jeff and Sahar's thoughts on safety regulation for the industry, the importance of initial members in defining a group's norms, the benefits of growing slowly, and why integrity workers are heroes.

Credits:
Trust in Tech is hosted by Alice Hunsberger, and produced by Talha Baig.
Edited by Alice Hunsberger.
Music by Zhao Shen.
Special thanks to Sahar Massachi and Cassandra Marketos for their continued support, and to all the members of the Integrity Institute.
