Tristan Harris, co-founder of the Center for Humane Technology, discusses the harmful effects of AI in social media, including the unraveling of shared reality, the mental health crisis, polarization, and cyberbullying. He also explores the challenges of self-regulation within technology companies and the broader risks and dangers of AI. The chapter emphasizes the importance of deploying AI at a pace that ensures safety and raises concerns about the current speed of deployment.
The race for dominance in the AI industry leads to reckless deployment, prioritizing speed over safety testing.
The absence of effective regulation and coordination in AI development increases the likelihood of catastrophic outcomes.
Generative AI models like GPT pose significant risks, including the spread of misinformation, cybersecurity vulnerabilities, and the manipulation of political discourse.
Deep dives
The Race for Market Dominance and Risky Deployment
The AI industry is currently engaged in a race for dominance, with companies competing to deploy their AI technologies faster than their rivals. The goal is to attract as many users and developers to their platforms as possible, which drives the speed at which these technologies are released. The race for dominance thus becomes a race to recklessness, as companies prioritize rapid deployment over safety testing. Competitive pressure overrides concern for potential risks and externalities, increasing the likelihood of a catastrophic outcome.
The Urgent Need for Coordination and Regulation
The current approach to AI development lacks the coordination and regulation essential for ensuring safety and responsible deployment. Without an effective regulator, analogous to the FDA for drugs, companies can release AI technologies without thorough safety assessment. Urgent action is required to establish regulatory agencies that can evaluate and approve AI deployments based on comprehensive safety assessments, such as the EVALS framework. Coordination among experts in the field is crucial to determine the appropriate pace and standards for AI deployment.
Recognizing Externalities and Anticipating Risks
The rapid advancement of AI technologies, particularly generative AI models like GPT, poses significant externalities and risks. The potential misuse and unintended consequences of such technologies must not be underestimated. It is essential to anticipate the societal impact and risks associated with AI deployments, such as AI-enabled cyberattacks, synthetic media manipulation, and deceptive human interactions. Taking a proactive approach to identifying and addressing these risks is critical to preventing potential catastrophes.
Lessons from the Social Media Dilemma
The lessons learned from the social media industry highlight the importance of responsible technology deployment. Social media platforms prioritized engagement over user well-being; the AI industry must avoid repeating that mistake. Regulation should focus on optimizing AI systems toward goals that can be democratically deliberated, such as promoting a healthy information commons for democracy. Safeguarding against AI-driven societal harms requires a multidisciplinary approach that draws on the expertise of technical, legal, and ethical professionals.
The Need for Regulation and Oversight
Regulating and slowing down the deployment of AI technology is crucial to preventing negative consequences. It is important to consider the potential risks and harms of unregulated AI, particularly in the context of social media. The focus should be on ensuring that AI development and deployment align with the values of cohesion, trust, education, community, and well-being. Companies like OpenAI advocate measures such as licensing AI models and establishing international coordination and monitoring systems. However, corporate lobbyists working to protect their companies' interests pose challenges to implementing such regulations.
Threats Posed by Generative AI
Generative AI, specifically large language models, can decode and generate language and media at scale. This technology can be exploited to spread misinformation, fake news, and polarization in an unprecedented manner: manipulating political discourse, fabricating convincing arguments, and synthesizing media that appeals to individuals' biases. The risks include supercharging all the harms associated with social media, while also enabling the discovery of cybersecurity vulnerabilities and the manipulation of code. The ability to hack language poses a significant threat to democracies and demands proactive measures to protect against the spread of manipulated and biased information.
AI has already affected our society fundamentally. That effect first happened through social media. In this episode, we speak with Tristan Harris, co-founder of the Center for Humane Technology, about that first effect, and what we can expect as AI evolves.