

Facebook, Twitter take steps to limit the president’s false election claims
Nov 6, 2020
Adi Robertson, a senior reporter at The Verge with expertise in tech policy, joins the discussion on how social media platforms are grappling with misinformation during the election. The team delves into Twitter and Facebook's strategies for combating false claims, particularly from political figures. They analyze the effectiveness of content moderation and its repercussions on political discourse. They also touch on state legislation affecting technology, like Massachusetts' right-to-repair law, and discuss future challenges for content moderation.
Trump's TV Claims
- Trump made his claims about winning and election fraud on TV, not on social media.
- This distinction has proven pertinent as platforms struggle to contain the spread of misinformation.
TV vs. Social Media
- TV is linear, while social media platforms can add context around information.
- This makes misinformation harder to combat on TV, since social media can label or hide misleading posts but TV cannot.
Limited Reach
- Platforms are limiting the reach of misleading posts, but the effectiveness of these measures is unknown.
- Platforms would need to release data on reach limitations to show their actual impact.