Dru Oja Jay is an author and web developer, currently the Executive Director of CUTV. James Steinhoff, an Assistant Professor at University College Dublin, studies the political economy of AI. Together, they dissect the AI hype cycle, exploring its exaggerated narratives and the impact on creative jobs. They also tackle the ethical dilemmas of AI, advocating for community-driven approaches to data management. Finally, they take on the apocalyptic fears that surround AI, urging a balanced perspective on its implications for labour and capitalism.
Duration: 01:06:25
AI Hype Cycle
The hype around AI, fueled by investment and a steady stream of emerging facts, drives the tech cycle. It raises concerns about job displacement, plagiarism, and the exploited labour behind AI.

Control of AI's Future
Few are asking who will control the future of AI, despite the significant changes underway. The EU's AI Act is a watershed moment in regulating the private enclosure of AI.

ChatGPT and Human Cognition
OpenAI's ChatGPT is a large language model that statistically models language through machine learning. It offers insights into human cognition and the differences in how humans and machines process information.
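To make that last point a bit more concrete, here is a deliberately tiny sketch (in Python, purely for illustration) of what "statistically modeling language" means at its crudest: count which words tend to follow which in a body of text, then generate new text by sampling from those counts. This is not OpenAI's code or anything close to how ChatGPT is built; a large language model swaps the word counts for a neural network with billions of parameters trained on vastly more data, but the basic move of predicting the next token from the statistics of past text is the same.

    import random
    from collections import defaultdict, Counter

    # A toy corpus standing in for the web-scale text an LLM is trained on.
    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    # "Training": count how often each word follows each other word (a bigram model).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    # "Generation": repeatedly sample a statistically likely next word.
    def generate(start, length=8):
        word, output = start, [start]
        for _ in range(length):
            counts = following.get(word)
            if not counts:
                break
            word = random.choices(list(counts), weights=list(counts.values()))[0]
            output.append(word)
        return " ".join(output)

    print(generate("the"))  # e.g. "the cat sat on the rug" - plausible-sounding, nothing "understood"

The output reads as fluent because it reproduces the statistics of its input, not because anything resembling comprehension is happening, which is roughly the distinction drawn in the episode between human and machine information processing.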
Dru Oja Jay is an author, organizer and web developer who currently serves as Executive Director of CUTV and Publisher of The Breach. He’s also a co-founder of the Media Co-op and Friends of Public Services. He wrote a book with Nikolas Barry-Shaw called Paved with Good Intentions: Canada's Development NGOs from Idealism to Imperialism.
James Steinhoff is an Assistant Professor and Ad Astra Fellow in the School of Information and Communication Studies at University College Dublin. His research focuses on the political economy of algorithmic technologies, data and digital labour. We talk about his stunning, insightful book Automation and Autonomy: Labour, Capital and Machines in the Artificial Intelligence Industry, which is chock full of information about the history of AI and its relationship to capitalist modes of production. I should note, too, that he co-authored a book called Inhuman Power: Artificial Intelligence and the Future of Capitalism in 2019, which is also a great book on AI.
It feels as though every other day we encounter a new angle or emerging fact around machine learning, generative AI, and the incipient market for these sorts of data-driven digital products. Whether it’s the billions of investment dollars driving the sudden boom in startups focused on applications of generative AI, concerns about automation and job loss, concerns about plagiarism now that ChatGPT has reached such saturation, or important discussions about the exploited labour force that makes ChatGPT’s core functions possible (an army of US contractors is being paid about $15 per hour to perform the pivotal work of data labeling that enables the platform), we’re being inundated with information about this supposed technological revolution. And that inundation fires up the hype cycle, further fueling investment.
Here we talk about the goals of the capitalist class in determining the future of AI. What will fragmentation of the labour force look like in the wake of this technological change? Are large language models going to replace human communicators? Does this signal a lasting shift in the market for intellectual labour? What about all of the data that is collected to drive the creation of those large language models? Can we imagine other ways of producing machine learning, outside of that massive corporate capture of our data? Whose data is it anyway?
There are lots of changes coming, there is no question. But the question too few of us are asking is: who will be in command of that change? In the EU, there is the AI Act, which Steinhoff calls a “watershed moment” in the regulation of private business and its enclosure of AI technology. Jay reminds us that, when it comes to the potential for public and democratic control of data, even though it seems like an unfair fight, we still “have to start building power somewhere.”
We also dig into fictional representations of AI. We ponder what movies like Terminator 2: Judgment Day get right in terms of AI generating its own programs—generating, as it were, its own ideas about function. Or, as James puts it, creating a situation where the “program is the output rather than the input.”
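To unpack that line a little: in conventional programming a person writes the rule and the computer applies it, while in machine learning the rule is derived from labeled examples, so the "program" comes out of the training process rather than going into it. Below is a minimal, hypothetical sketch in Python of that contrast; the task (flagging "long" messages) and the numbers are invented purely for illustration.

    # Conventional programming: the rule is the input, written by a person.
    def handwritten_rule(length):
        return 1 if length > 10 else 0

    # Machine learning in miniature: the rule is the output, derived from data.
    examples = [(3, 0), (5, 0), (7, 0), (12, 1), (15, 1), (20, 1)]  # (length, label) pairs

    def learn_threshold(data):
        lows = [x for x, y in data if y == 0]
        highs = [x for x, y in data if y == 1]
        return (max(lows) + min(highs)) / 2  # the simplest possible "training" step

    threshold = learn_threshold(examples)  # 9.5, produced by the data rather than by a programmer

    def learned_rule(length):
        return 1 if length > threshold else 0

    print(handwritten_rule(8), learned_rule(8))  # 0 0: same behaviour, different origin

Real machine learning replaces that toy "training" step with optimization over millions or billions of parameters, but the structure is the same: the behaviour is learned from data rather than specified in advance.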
Steinhoff and Jay share some insights on potential avenues of resistance, too. Not just resistance in the classical political sense, but also a kind of imaginative or intellectual resistance. They discuss their research into the history of AI, unpack the moments of “AI winter” or “AI depression” when social or technological barriers shut down the hype cycle, demystify machine learning, and talk through some of the basic facts around AI-generated art and text.