Exploring the Intersection of Information Integrity, Race, and US Elections
Mar 10, 2024
Experts discuss combating misinformation, AI's impact on marginalized communities in elections, the historical disenfranchisement of Black voters, racial dynamics in the US, nativism, misogyny in politics, technology's role in facilitating communication, and the need for innovative tech policy and corporate accountability.
AI technology can harm marginalized communities in elections through biased algorithms, leading to voter disenfranchisement.
Efforts are underway to drive narrative change in digital journalism around events like the Black Lives Matter protests, challenging traditional media portrayals.
Investing in people over tech solutions is crucial for building trust within communities and empowering trusted messengers to foster civic responsibility.
Deep dives
Challenges of AI Technology in Elections
AI technology introduces challenges in elections by potentially harming marginalized communities through biased algorithms. The commercialization and design decisions behind AI often overlook impacts on marginalized groups. Issues like targeted chaos, phishing attacks on election officials, and bias in voter data cleanup processes can disproportionately affect marginalized communities, leading to voter disenfranchisement and to messaging saturation that exploits wedge issues.
Narrative Change Through Technology and Journalism
Efforts to drive narrative change through non-traditional mechanisms in digital journalism, such as coverage of the Black Lives Matter protests, are a focal point. The research examines the failures of mainstream media to accurately portray events like protests and racial reckonings, underscoring the need for reparative narrative change and challenging traditional journalism's portrayal of democratic activities.
Empowering Communities through Trust and Information
Focusing on investing in people rather than tech solutions, the discussion highlights the importance of real human knowledge in building trust within communities. Emphasizing existing networks of trust and reliable information systems in marginalized communities, the panelists advocate for empowering trusted messengers to foster information pathways and civic responsibility. Recognizing that people often place trust in individuals rather than institutions, the conversation stresses the need to strengthen information integrity at the community level.
Policy Responses and Solutions to AI Risks
The discussion of policy responses to AI risks considers accountability, data privacy protections, and algorithmic bias mitigation. Suggestions include implementing data privacy laws to safeguard communities against data exploitation, enforcing accountability through whistleblower protections and private rights of action, and structuring policies to address algorithmic bias and racial disinformation. Highlighting the importance of local organizing and civil rights infrastructure, the conversation underscores the need for equitable and tailored data privacy regulations.
Role of Media, Tech, and Community Oversight
The engagement with media, tech, and community oversight emphasizes the need for diverse media voices, civil rights structures within tech companies, and policy measures addressing algorithmic bias. Advocating for accountability and transparency in media representation, the dialogue suggests promoting local organizing to address data privacy concerns and to push for policy responses that prioritize democratic protection over capitalist interests. The exploration concludes by underlining strategies, such as data privacy laws, that align with democratic values and community empowerment.
At INFORMED 2024, a conference hosted by the Knight Foundation in January, one panel focused on the subject of information integrity, race, and US elections. The conversation was compelling, and the panelists agreed to reprise it for this podcast. So today we're turning over the mic to Spencer Overton, a Professor of Law at the George Washington University, and the director of the GW Law School's Multiracial Democracy Project.
He's joined by three other experts, including:
Brandi Collins-Dexter, a media and technology fellow at Harvard's Shorenstein Center, a fellow at the National Center on Race and Digital Justice, and the author of the recent book, Black Skinhead: Reflections on Blackness and Our Political Future. Brandi is developing a podcast of her own with MediaJustice that explores 1980s era media, racialized conspiracism, and politics in Chicago;
Dr. Danielle Brown, a social movement and media researcher who holds the 1855 Community and Urban Journalism professorship at Michigan State and is the founding director of the LIFT project, which is focused on mapping, networking, and resourcing trusted messengers to dismantle mis- and disinformation narratives that circulate in and about Black communities; and
Kathryn Peters, who was the inaugural executive director of the University of North Carolina's Center for Information, Technology, and Public Life and a co-founder of Democracy Works, where she built programs to help more Americans navigate how to vote. These days, she's working on a variety of projects to empower voters and address election mis- and disinformation.