Trust in Tech: an Integrity Institute Member Podcast

Integrity Institute
Jun 2, 2023 • 1h 10min

Civic Integrity at Twitter Pre- and Post-Elon Musk: w/ Rebecca Thein and Theodora Skeadas

The acquisition of Twitter broke, well, Twitter. Around 90% of the workforce left the company, leaving shells of former teams to handle the same responsibilities. Today, we welcome two guests from Twitter’s civic integrity team.

We welcome new guest Rebecca Thein. Rebecca was a senior engineering technical program manager for Twitter’s Information Integrity team. She is also a Digital Sherlock for the Atlantic Council’s Digital Forensic Research Lab (DFRLab).

Theodora Skeadas is a returning guest from our previous episode! She managed public policy at Twitter and was recently elected as an Elected Director of the Harvard Alumni Association.

We answer the following questions on today’s episode:
- How much was the civic integrity team hurt by the acquisition?
- What are candidate labels?
- How did Twitter prioritize its elections?
- What did the org structure of Twitter look like pre- and post-acquisition?
- And finally, what is this famous Halloween party that all the ex-Twitter folks are talking about?
May 25, 2023 • 41min

Tech Policy 101: “It’s complicated!” with Pearlé Nwaezeigwe, the Yoncé of Tech Policy

This episode is a bit different – instead of getting deep into the weeds with a guest, we’re starting from the beginning. Our guest today, Pearlé Nwaezeigwe, aka the Yoncé of Tech Policy, chats with me about Tech Policy 101. I get a lot of questions from people who are fascinated by Trust & Safety and Integrity work in tech, and they want to know: what does it look like? How can I do it too? What kinds of jobs are out there? So, I thought we’d tackle some of those questions here on the podcast. Today’s episode covers the exciting topics of nipples, Lizzo, weed, and much more. And as any of us who have worked in policy would tell you, “it’s complicated.” Let me know what you think (if you want to see more of these, or fewer) – this is an experiment. (You can reach me here on LinkedIn.) — Alice Hunsberger

Links:
- Pearlé’s newsletter
- Lizzo talks about censorship and body shaming
- Oversight Board on nipples and nudity
- Grindr’s Best Practices for Gender-Inclusive Content Moderation
- TSPA curriculum: creating and enforcing policy
- All Tech is Human - Tech Policy Hub

Credits:
- Hosted and edited by Alice Hunsberger
- Produced by Talha Baig
- Music by Zhao Shen
- Special thanks to Rachel, Sean, Cass and Sahar for their continued support
May 17, 2023 • 1h 25min

The Ultimate Guide to Election Integrity! with Katie Harbath and Glenn Ellingson

It might be May 2023, but it’s never too early to start worrying about elections! 2024 is slated to be the biggest year of elections in platform history. In this episode, Katie Harbath and Glenn Ellingson join the show to prepare you for the storm of elections coming in 2024.

You may recognize Katie as the inaugural guest of Trust in Tech. Katie is an Integrity Institute Fellow and a global leader at the intersection of elections, democracy, and technology. She is Chief Executive of Anchor Change, where she helps clients think through tech policy issues. Before that, she worked at Meta for 10 years, where she built and led a 30-person team managing elections.

Glenn is an Integrity Institute member who was previously an engineering manager for Meta’s civic integrity team, and before that Head of Product Engineering for Hustle, a company which helped progressive political organizations and other nonprofit and for-profit groups forge personal relationships at scale.

Glenn and Katie led the development of the Elections Best Practices deck the Integrity Institute just shared on their website, which we discuss in the episode. We also answer some of the following questions:
- How to prioritize different elections across the world?
- What principles to adhere to when working on election integrity?
- What are the challenges of dealing with political harassment?
- How to map out the landscape of election integrity work?
- What was Cambridge Analytica, and did the scandal actually make platforms less transparent?
- And how can your company learn best practices and responsibly deal with elections?

Links:
- Election Integrity best practices deck
- Anchor Change
- A Brief History of Tech and Elections: A 26-Year Journey
- Demystifying the Cambridge Analytica Scandal Five Years Later

Disclaimer: The views in this episode only represent the views of the people involved in the recording of the episode. They do not represent Meta’s or any other entity’s views.
May 12, 2023 • 1h 17min

Transparency, Trade-offs, and Free Speech with Brandon Silverman

We live in a world where platforms influence the digital and real lives of billions of people across the world, perhaps with more influence than many governments. However, the decision-making processes around these platforms are generally opaque and obscure. This is why today’s guest — Integrity Institute Fellow Brandon Silverman — transitioned to policy advocacy for platform transparency, data sharing, and an open internet, helping regulators, lawmakers, and advocacy groups think through the best ways to set up online transparency regimes.

Brandon is the former CEO and co-founder of CrowdTangle, a social analytics tool used by tens of thousands of newsrooms, academics, researchers, fact-checkers, civil society organizations, and more to help monitor public content in real time.

Some questions we answer on today’s episode:
- What tradeoffs exist between free speech and transparency?
- How did CrowdTangle partner with civic actors across the world?
- What are Brandon’s thoughts on the leaking of the Twitter algorithm?
- What principle did CrowdTangle use when sharing access with governments?
- What metric did the CrowdTangle team optimize for?
- What does Brandon wish he could have done differently at Meta?
- And of course, how you, the listener, can help in this fight for platform transparency.

Links:
- The United States’ Approach to 'Platform' Regulation by Eric Goldman
- State Abuse of Transparency Laws and How to Stop It by Daphne Keller
- The Impression of Influence: Legislator Communication, Representation, and Democratic Accountability by Solomon Messing
- Garbage Day by Ryan Broderick

As a reminder, the views stated in this episode are not affiliated with any organization and only represent the views of the individuals. We hope you enjoy the show.
May 4, 2023 • 1h 18min

GPT-4: Eldritch abomination or intern? A discussion with OpenAI

OpenAI, creators of ChatGPT, join the show! In November 2022, ChatGPT upended the tech (and larger) world with a chatbot that passes not only the Turing test but also the bar exam. In this episode, we talk with Dave Willner and Todor Markov, integrity professionals at OpenAI, about how they make large language models safer for all.

Dave Willner is the Head of Trust and Safety at OpenAI. He previously was Head of Community Policy at both Airbnb and Facebook, where he built the teams that wrote the community guidelines and oversaw the internal policies to enforce them.

Todor Markov is a deep learning researcher at OpenAI. He builds content moderation tools for ChatGPT and GPT-4. He graduated from Stanford with a Master’s in Statistics and a Bachelor’s in Symbolic Systems.

Alice Hunsberger hosts the episode. She is the VP of Customer Experience at Grindr, where she leads customer support, insights, and trust and safety. Previously, she worked at OkCupid as Director & Global Head of Customer Experience.

Sahar Massachi is a visiting host today. He is the co-founder and Executive Director of the Integrity Institute. A past fellow of the Berkman Klein Center, Sahar is currently an advisory committee member for the Louis D. Brandeis Legacy Fund for Social Justice, a StartingBloc fellow, and a Roddenberry Fellow.

They discuss what content moderation looks like for ChatGPT, why T&S stands for Tradeoffs and Sadness, and how integrity workers can help OpenAI. They also chat about the red-teaming process for GPT-4, overlaps between platform integrity and AI integrity, their favorite GPT jailbreaks, and how moderating GPTs is basically like teaching an Eldritch Abomination.

Disclaimer: The views in this episode only represent the views of the people involved in the recording of the episode. They do not represent Meta’s or any other entity’s views.
Apr 27, 2023 • 1h 7min

Turkey’s complicated relationship with social media: how authoritarianism and platforms can clash

Turkey’s election is only three weeks away, and during the recording of this episode, the Turkish regime detained 150 people, including activists, lawyers, and journalists. Gürkan Özturan talks about this context and Turkey’s fraught relationship with the media.

Gürkan Özturan is the coordinator of Media Freedom Rapid Response at the European Centre for Press and Media Freedom. He is the former executive manager of the rights-focused independent grassroots journalism platform dokuz8news.

Talha Baig returns as host and talks to Gürkan about different intersections of social media and activism. For example, Gürkan mentions how he despises WhatsApp’s five-reshare limit, and how the removal of a government-sanctioned troll army made his life easier on Twitter.

On top of this, we learn how the Turkish government controlled the media landscape, and what happens when a social media algorithm has to comply with authoritarian regimes.

If you enjoy the episode, please subscribe and share with your friends! If you have feedback, feel free to email Talha at tbaig6@gmail.com. His LinkedIn is: https://www.linkedin.com/in/talha-baig/.

Links:
- Arushi Saxena’s episode on Digital Media Literacy

Disclaimer: The views in this episode only represent the views of the people involved in the recording of the episode.
Apr 21, 2023 • 25sec

Trust in Tech Ep 17 (Special): A window into navigating emerging technology innovation with Nick Reese

Today we are joined by professor and strategy expert Nick Reese, Deputy Director of Emerging Technology Policy at the Department of Homeland Security. He is the author of the DHS Artificial Intelligence Strategy, the Space Policy, and the Post-Quantum Cryptography Roadmap. He is also DHS’s representative at various interagency Policy Coordination Committee meetings at the White House, chaired by the National Security Council, the Office of Science and Technology Policy, and the National Space Council.

We discuss challenges involving new and emerging technologies and the complexities of navigating policy environments between the private and public sectors. Specifically, we discuss the challenges of navigating unknown technologies, trying not to stifle innovation, and how to improve public-private partnerships.

As a reminder, the views stated in this episode are not affiliated with any organization and only represent the views of the individuals. We hope you enjoy the show.
Apr 12, 2023 • 34min

Trust in Tech, Episode 16: Auntie, WHAT did you just send me?! with Arushi Saxena

Arushi Saxena was frustrated by seeing and hearing about misinformation memes in large family WhatsApp groups, so she set out to do something about it. Arushi is the Head of Policy, Partnerships, and Product Marketing at DynamoFL, and a former Senior Product Marketing Manager at Twitter. She was also a graduate fellow at the Berkman Klein Center for Internet & Society at Harvard University, focusing on disinformation.

In this episode, Alice Hunsberger chats with Arushi about what she learned while trying to combat her loved ones’ accidental misinfo sharing, and what methods work (especially in an Indian cultural context). Come away with some specific learnings about intergenerational understanding, whether people respond better to comedy or serious posts, and what inoculation theory is. Plus, we have an internal debate about whether people are basically good or not. What do you think?

Disclaimer: The views in this episode only represent the views of the individuals involved in the recording of the episode, and do not represent any company’s views.

Further reading:
- Arushi’s blog post on the EkMinute Project
- Learning to Detect Fake News: A Field Experiment to Inoculate Against Misinformation in India. Guest post by Naman Garg
- Misinformation surges amid India's COVID-19 calamity | AP News
- Psychological inoculation improves resilience against misinformation on social media | Science Advances
Apr 7, 2023 • 59min

Trust in Tech, Episode 15: Gaming the Algorithm with Hallie Stern

What is the difference between a Hollywood actor and a trust and safety professional? Not much!

In this episode, Talha Baig, an ML engineer, interviews Hallie Stern on how Hollywood actors game the algorithm, and how the mass surveillance ecosystem incentivizes niche targeting, which leads to the spread of misinformation.

Hallie is a former Hollywood actor turned integrity professional. She received her MS from NYU in Global Security, Conflict & Cybercrime, where she studied the human side of global cyber conflict and digital disorder. She now runs her own trust and safety consulting firm, Mad Mirror Media.

We discuss how to go viral on social media, the difference between data and tech literacy, and why it can feel like platforms are listening to you. We also have a huge announcement in this episode, so be sure to tune in to find out!

Disclaimer: The views in this episode only represent the views of the people involved in the recording of the episode.

Credits:
- Produced by Talha Baig
- Music by Zhao Shen
- Special thanks to Rachel, Sean, Cass and Sahar for their continued support
Mar 31, 2023 • 52min

Trust in Tech, Episode 14: Addressing Systemic Bias in Tech w/ Meredith Broussard

Bias exists all around us, and unfortunately it is present in the technologies we use today.

Today we are joined by Professor Meredith Broussard, a data journalism professor at NYU and research director at the NYU Alliance for Public Interest Technology. She is the author of a new book, More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech.

We discuss the root cause of systemic biases in tech and why the current paradigm of establishing and optimizing metrics leads to misaligned incentives in both technology companies and journalism.

Along the way, Meredith explains techno-chauvinism, its prevalence, and why computer science students are taught to think this way. We also discuss how Midjourney maps from text to image, the purpose of science fiction, and how algorithmic audits can help mitigate bias for technology companies.

As a reminder, the views stated in this episode are not affiliated with any organization and only represent the views of the individuals. We hope you enjoy the show.
