

The Tech Policy Press Podcast
Tech Policy Press
Tech Policy Press is a nonprofit media and community venture intended to provoke new ideas, debate and discussion at the intersection of technology and democracy.
You can find us at https://techpolicy.press/, where you can join the newsletter.
Episodes

Feb 18, 2024 • 35min
Ranking Content On Signals Other Than User Engagement
Today's guests are Jonathan Stray, a senior scientist at the Center for Human Compatible AI at the University of California Berkeley, and Ravi Iyer, managing director of the Neely Center at the University of Southern California's Marshall School. Both are keenly interested in what happens when platforms optimize for variables other than engagement, and whether they can in fact optimize for prosocial outcomes. With several coauthors, they recently published a paper based in large part on discussion at an 8-hour working group session featuring representatives from seven major content-ranking platforms and former employees of another major platform, as well as university and independent researchers. The authors say "there is much unrealized potential in using non-engagement signals. These signals can improve outcomes both for platforms and for society as a whole."

Feb 18, 2024 • 33min
FTC Commissioner Alvaro Bedoya on Algorithmic Fairness, Voice Cloning, and the Future
FTC Commissioner Alvaro Bedoya discusses algorithmic fairness, facial recognition, voice cloning, and the future of technology regulation. Topics include FTC actions on AI risks, international efforts on algorithmic fairness, teen mental health online, individuals' control over how technology affects them, the FTC's Voice Cloning Challenge, and the agency's fraud prevention efforts.

Feb 11, 2024 • 38min
Imagining AI Countergovernance
Blair Attard-Frost discusses AI countergovernance and explores topics such as the challenges of AI ethics and governance, the concept of the AI interregnum, the importance of explainability in AI systems, examples of AI countergovernance in practice, and the need for participatory policymaking in AI governance.

Feb 4, 2024 • 22min
Tech CEOs Face the US Senate on Child Safety
On Wednesday, January 31st, the US Senate Judiciary Committee hosted a hearing titled "Big Tech and the Online Child Sexual Exploitation Crisis." The CEOs of Meta, TikTok, X, Discord and Snap were called to the Capitol to answer questions from lawmakers on their efforts to protect children from sexual exploitation, drug trafficking, dangerous content, and other online harms. Gabby Miller reported on the hearing from New York, and Haajrah Gilani reported from Washington D.C.

Jan 28, 2024 • 36min
How to Assess AI Governance Tools
Kate Kaye, an expert on AI governance tools, discusses how faulty fixes in these tools can undermine fairness and explainability. The episode explores the involvement of large tech companies in shaping AI governance tools and the role of organizations like the OECD, and emphasizes the need to consult overlooked communities and the importance of evaluation in AI governance.

Jan 21, 2024 • 41min
How to Defend Independent Technology Research from Corporate and Political Opposition
In October 2022, a group of researchers published a manifesto establishing a Coalition for Independent Technology Research. “Society needs trustworthy, independent research to relieve the harms of digital technologies and advance the common good,” they wrote. “Research can help us understand ourselves more clearly, identify problems, hold power accountable, imagine the world we want, and test ideas for change. In a democracy, this knowledge comes from academics, journalists, civil society, and community scientists, among others. Because independent research on digital technologies is a powerful force for the common good, it also faces powerful opposition.” In the months since that document was published, that opposition has grown. From investigations in Congress to lawsuits aimed at specific researchers, there is a backlash, particularly against those who study communications and media, especially where the subjects of that research are often those most interested in advancing false and misleading claims about issues including elections and public health. Justin Hendrix, who is a member of the coalition, caught up with Brandi Geurkink, who was hired as the coalition's first Executive Director in December 2023, to discuss its priorities.

Jan 14, 2024 • 20min
Questioning OpenAI's Nonprofit Status
Today’s guest is Robert Weissman, president of the nonprofit consumer advocacy organization Public Citizen. He is the author of a letter addressed to the California Attorney General that raises significant concerns about OpenAI’s 501(c)(3) nonprofit status. The letter questions whether OpenAI has deviated from its nonprofit purposes, alleging that it may be acting under the control of its for-profit subsidiary, potentially violating its nonprofit mission. The letter raises broader issues about the future of AI and how it will be governed.

Jan 7, 2024 • 46min
Evaluating Social Media's Role in the Israel-Hamas War
Authors of the report 'Distortion by Design' discuss the role of social media platforms in shaping perceptions of the Israel-Hamas conflict. They explore content moderation, political expression, and the preservation of the historical record. The episode covers monitoring the conflict on platforms such as X, TikTok, and Telegram, and contrasts how the platform responded to the Russian invasion of Ukraine as Twitter with its approach to this conflict as X. The challenges of accessing information in Gaza and growing distrust of TikTok are also discussed.

Dec 31, 2023 • 40min
Exposing the Rotten Reality of AI Training Data
Discussion of the use of child sexual abuse imagery in AI training data, the challenges of identifying and eliminating problematic content, the implications for Stable Diffusion 1.5 models, the difficulty of addressing problematic content on hosting platforms, the AI Foundation Model Transparency Act, and future directions for generative AI and training sets.

Dec 24, 2023 • 34min
An FDA for AI?
Discussion on the need for a dedicated regulatory agency to govern AI, comparing it to the FDA model and emphasizing the importance of safety reviews and ongoing monitoring. The episode explores what FDA-style approval and oversight could look like for AI, including post-market monitoring, measures to prevent monopolies in the current AI ecosystem, and international standard setting and certification. It also considers how to assess AI risks and the potential application of banking regulation and third-party auditing.