The podcast examines the biases and dangers of today's AI systems, arguing for regulation that puts the public good first. It covers data bias, the exclusivity of large-scale AI technology, the surveillance business model, and the importance of data privacy, and it highlights the role policy could play in AI regulation and the need for counter-pressure from activists. Listeners will learn how to navigate AI headlines and understand the technology's implications without being computer scientists.
Quick takeaways
Today's artificial intelligence is shaped by biased data and controlled by a handful of corporations; protecting the public good requires accountability and resistance.
The concentration of power in the tech industry, driven by profit and growth, deepens inequality; regulation and privacy legislation are needed to counteract it.
Deep dives
The Rise of Artificial Intelligence in 2023
The year 2023 marked the mainstream rise of artificial intelligence (AI) and brought its potential dangers into public view, from the explosion of ChatGPT use to concerns about facial recognition technology. While some experts warned of an existential threat, even comparing AI to Skynet from The Terminator, Meredith Whittaker focused on the more immediate risks. She highlighted how a handful of corporations control the development of AI systems, driven by profit and growth rather than the public good, and emphasized the need for accountability, regulation, and resistance against this consolidation of power.
The Business Model and Inequality of AI
One of the main concerns about AI is the business model behind it. Large-scale AI models such as ChatGPT are expensive to build and maintain, leaving them accessible only to wealthy corporations and individuals. This concentrates power in the tech industry, where the systems serve profit and growth rather than the public good. Because these models are trained on surveillance data, they also raise concerns about data bias and the perpetuation of existing inequalities. Meredith Whittaker highlights the need for guardrails on AI use, regulation from below, and privacy legislation to counteract these problems.
The Importance of Balancing Long-Term Concerns and Short-Term Harms
The debate around AI often centers on long-term existential risks. Meredith Whittaker argues that focusing solely on these hypothetical risks downplays the more urgent, tangible harms AI is already causing, such as misidentification and biased outcomes that disproportionately affect marginalized communities. She emphasizes the short-term risks faced by low-wage workers, historically marginalized groups, and those on the brink of climate catastrophe, and calls for resistance, regulation, and building power in workplaces and communities to create a more equitable and accountable AI future.
Episode notes
While the What Next: TBD team spends some time with their families during the holidays, we revisit some of 2023’s biggest, strangest, and best stories. Regularly scheduled programming resumes in January.
Artificial intelligence, as it already exists today, draws on vast troves of surveillance data and is rife with the biases built into its algorithms, all in service of the huge corporations that develop and maintain these systems. The fight for the future doesn’t look like war with Skynet; it’s happening right now on the picket lines of the Writers Guild strike.
Guest:
Meredith Whittaker, president of the Signal Foundation, co-founder of the AI Now Institute at NYU