Richard Mathenge, a former team lead at Sama, shares his gripping experiences from training OpenAI's GPT models in Nairobi. He reveals the harsh realities of his job, including exposure to explicit content and insufficient mental health support. Mathenge discusses the toll this work took on him and his team, highlighting payment disparities and the urgent need for proper advocacy and mental health resources in tech. The conversation emphasizes the human side of AI training and the ethical dilemmas faced by those behind the technology.
ANECDOTE
Laid Off from LIDAR
Richard Mathenge started at Sama in July 2021, working on a LIDAR project for self-driving cars.
He and his colleagues were later laid off due to friction with management.
ANECDOTE
Rehired for ChatGPT
After being laid off, Mathenge was rehired for the ChatGPT project, where he led a team of ten.
Their task was to rate text against categories such as illegal, erotic, and non-erotic content.
ANECDOTE
Traumatizing Text
Mathenge describes the traumatizing nature of the text, which included graphic depictions of bestiality and child abuse.
Despite Sama's promises, workers were not given adequate counseling to cope with the exposure.
Richard Mathenge was part of a team of contractors in Nairobi, Kenya who trained OpenAI's GPT models. He did so as a team lead at Sama, an AI training company that partnered with OpenAI on the project. In this episode of Big Technology Podcast, Mathenge tells the story of his experience. During the training, he was routinely exposed to sexually explicit material and offered insufficient counseling, and some of his team members were paid as little as $1 per hour. Listen for an in-depth look at how these models are trained, and at the human side of Reinforcement Learning from Human Feedback (RLHF).
---
Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice.
Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
---
OpenAI's response:
We engaged Sama as part of our ongoing work to create safer AI systems and prevent harmful outputs. We take the mental health of our employees and our contractors very seriously. One of the reasons we first engaged Sama was because of their commitment to good practices. Our previous understanding was that wellness programs and 1:1 counseling were offered, workers could opt out of any work without penalization, exposure to explicit content would have a limit, and sensitive information would be handled by workers who were specifically trained to do so. Upon learning of Sama worker conditions in February of 2022, we immediately sought to find out more information from Sama. Sama simultaneously informed us that they were exiting the content moderation space altogether.
OpenAI paid Sama $12.50 per hour. We tried to obtain more information about worker compensation from Sama, but they never provided us with hard numbers. Sama did provide us with a study they conducted of other companies that do content moderation in that region, and shared that Sama's wages were 2-3x the competition's.