Episode 28: LLMs Are Not Human Subjects, March 4 2024
Mar 13, 2024
Alex and Emily discuss the use of large language models in social science research, critiquing the idea of replacing human subjects with LLMs. They explore algorithmic bias, concerns about monetizing text data, AI events gone wrong, fake quotes in news, and the impact of deep fakes on election integrity.
AI hype has reached social science research, where LLMs are being pitched as substitutes for human subjects.
Misuse of AI is fueling fake news and election manipulation through deepfakes.
Deep dives
Willy Wonka-Themed Event Causes Chaos with AI-Generated Content
A Willy Wonka-themed event built on AI-generated scripts and visuals descended into chaos: actors received their scripts only hours before performing, and kids were rationed jelly beans. A character named The Unknown frightened children to tears. The event exemplified the dark side of AI hype: lavish promises, cruel outcomes.
Fake Quotes Attributed to Individuals Using AI-Generated Content
Fake quotes attributed to real individuals, drawn from AI-generated content, surfaced on a news site. Upon request, the editor retracted the false quote, and the site admitted to using AI to generate the piece.
Trump Supporters Use AI-Generated Fake Images to Target Black Voters
Donald Trump supporters created and shared AI-generated images showing Trump with Black voters in an effort to encourage African Americans to vote Republican. The images contained visible distortions and AI artifacts, revealing an attempt to influence election outcomes with deepfakes.
Alex and Emily put on their social scientist hats and take on the churn of research papers suggesting that LLMs could be used to replace human labor in social science research -- or even human subjects. They argue these writings are essentially calls to fabricate data.