Trust in Tech: an Integrity Institute Member Podcast

Integrity Institute
Jan 11, 2024 • 55min

Dark Patterns and Photography w/ Caroline Sinders

Caroline Sinders is an ML design researcher, online harassment expert, and artist. We chat about common dark tech patterns, how to prevent them in your company, a novel way to think about your career, and how photography relates to generative AI. Sinders has worked with Facebook, Amnesty International, Intel, IBM Watson, and the Wikimedia Foundation.

We answer the following questions on today’s show:
1. What are dark tech patterns, and how do you prevent them?
2. How do you navigate multi-stakeholder groups to prevent baking in these dark patterns?
3. What is a public person?
4. What is a framework for approaching data visualization?
5. How is photography an analogue to generative AI?

This episode goes in lots of directions to cover Caroline’s varied interests. We hope you enjoy it!
Jan 1, 2024 • 1h 2min

Holiday Special: Alice and Talha Mailbag Episode!

Alice and Talha answer some listener questions, recap the year and the podcast, and discuss where they might want to take it next!
Dec 22, 2023 • 33min

Introduction to Generative AI

In this episode, Alice Hunsberger talks with Numa Dhamani and Maggie Engler, who recently co-authored a book about the power and limitations of AI tools and their impact on society, the economy, and the law. They dive deep into some of the book’s topics and discuss what writing a book was like, as well as the process of getting to publication.

You can preorder the book here, and follow Maggie and Numa on LinkedIn.
Dec 15, 2023 • 46min

How to build a Movement w/ David Jay

It seems every day we are pulled in different directions on social media, yet what we see seldom resonates with how we feel. Enter David Jay, a master at building movements, including leading that work for the Center for Humane Technology. In this episode, we learn precisely how to build a movement, and why communities are perpetually underfunded.

David Jay is an advisor of the Integrity Institute and played a pivotal role in the early days of the Institute. He is currently the founder of Relationality Labs, which hopes to make the impact of relational organizing visible so that organizers can be resourced for the strategic value they create. His past experience is wide-ranging, including founding asexuality.org and serving as chief mobilization officer for the Center for Humane Technology.

Here are some of the questions we answer on today’s show:
1. How do you create, scale, and align relationships to create a movement?
2. How do you structure stories so they resonate?
3. How do you stay alert to new movements as they emerge?
4. How do you identify leaders for the future?
5. Why is David Jay excited by the Integrity Institute and the future of integrity workers?
6. Why don’t community-based initiatives get funded at the same rate as non-community-based initiatives?

Check out David Jay’s Relationality Lab!

Disclaimer: The views in this episode only represent the views of the people involved in the recording of the episode. They do not represent any other entity’s views.
Dec 10, 2023 • 48min

The Ultimate Guide to Election Integrity Part II

Elections matter, and history has demonstrated that online platforms will find themselves grappling with these challenges whether they want to or not. The two key questions facing online platforms now, as they stare down the tsunami of global elections heading their way, are: Have they initiated an internal elections integrity program? And if so, how do they ensure the best possible preparation to safeguard democracies globally?

The Integrity Institute launched an elections integrity best practices guide, “Defining and Achieving Success in Elections Integrity.” This latest guide extends the first and provides companies, large or small, established or new on the block, concrete details as they fully implement an elections integrity program. Today on the podcast, we talk to four contributors about this guide: Glenn Ellingson, Diane Chang, Swapneel Mehta, and Eric Davis.

Also check out our first episode on elections!
Oct 4, 2023 • 27min

Creeps, Consulting, and Creating Trust

Alice Hunsberger talks to Heather Grunkemeier again, this time covering Heather’s solution for dealing with creeps at Rover from a policy and operational lens, how to measure trust, and what it’s been like for her to strike out on her own as a consultant.

Also check out our first episode with Heather, How to Find Your Place in Trust & Safety: A Story of Career Pivoting.
Sep 27, 2023 • 19min

How to Find Your Place in Trust & Safety: A Story of Career Pivoting

Alice Hunsberger talks to Heather Grunkemeier (former Program Owner of Trust & Safety at Rover, and current owner of consultancy firm Twinkle LLC) and discusses how Heather finally broke into the field of Trust & Safety after years of trying, what it was actually like for her, and what her advice is for other people in the midst of career pivots. We also touch on mental health, identity, self-worth, and how working in Trust & Safety has unique challenges (and rewards). If you liked our Burnout Episode, you may enjoy this one too. (And if you haven’t listened to it yet or read our Burnout resource guide, please check it out.)

Credits
This episode of Trust in Tech was hosted, edited, and produced by Alice Hunsberger. Music by Zhao Shen. Special thanks to the staff and members of the Integrity Institute for their continued support.
Aug 2, 2023 • 1h 24min

The Future of AI Regulation with James Alexander

On today's episode, our host Talha Baig is joined by guest James Alexander to discuss all things AI liability. The episode begins with a discussion of liability legislation, as well as some of the unique situations that copyright law has created. Later in the episode, the conversation shifts to James's experience as the first member of Wikipedia's Trust and Safety team.

Here are some of the questions we answer in today’s episode:
Who is liable for AI-generated content?
How does Section 230 affect AI?
Why does AI have no copyright?
How will negotiations play out between platforms and the companies building AI models?
Why do the Spiderman multiverse movies exist?
What did it look like to be the first trust and safety worker at Wikipedia?
What does fact-checking look like at Wikipedia?
Jul 24, 2023 • 1h 14min

Should We Have Open-Sourced Llama 2?

On today's episode, our host Talha Baig is joined by guest David Harris, who has been writing about Llama since the initial leak. The two of them begin by discussing all things Llama, from the leak to the open-sourcing of Llama 2. Later in the episode, they dive deeper into policy ideas seeking to improve AI safety and ethics.

Show Links:
David’s Guardian Article
CNN Article Quoting David
Llama 2 Release Article
Jun 15, 2023 • 41min

Happy Pride! Let’s talk about protecting the LGBTQ+ community online

What can companies do to support the LGBTQ+ community during this pride season, beyond slapping a rainbow logo on everything? Integrity Institute members Alex Leavitt and Alice Hunsberger discuss the state of LGBTQ+ safety online and off, how the queer community is unique and faces disproportionate risks, and what concrete actions platforms should be taking.

Show Links:
Human Rights Campaign declares LGBTQ state of emergency in the US
Social Media Safety Index
Digital Civility Index & Our Challenge | Microsoft Online Safety
Best Practices for Gender-Inclusive Content Moderation — Grindr Blog
Tinder - travel alert
Assessing and Mitigating Risk for the Global Grindr Community
Strengthening our policies to promote safety, security, and well-being on TikTok
Meta's LGBTQ+ Safety center
Data collection for queer minorities
