

To build that safety system, OpenAI took a leaf out of the playbook of social media companies like Facebook, which had already shown it was possible to build AIs that could detect toxic language like hate speech to help remove it from their platforms. The premise was simple: feed an AI with labeled examples of violence, hate speech, and sexual abuse, and that tool could learn to detect those forms of toxicity in the wild. That detector would be built into ChatGPT to check whether it was echoing the toxicity of its training data, and filter it out before it ever reached the user. It could also help scrub toxic text from the training datasets of future AI models.
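In outline, such a detector is simply a text classifier. The sketch below is a toy illustration of that premise, not OpenAI's actual system: it trains a small scikit-learn classifier on a handful of invented labeled examples, then uses the predicted toxicity score to decide whether a candidate output should be filtered. Every example, label, model choice, and threshold here is hypothetical.

```python
# Toy illustration of the premise described above: train a classifier on
# labeled examples of toxic and benign text, then use it to screen output.
# The examples, labels, model choice, and 0.5 threshold are all invented
# for this sketch; OpenAI's actual detector is not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled training data: 1 = toxic, 0 = benign.
texts = [
    "i will hurt you",             # violent threat
    "those people are vermin",     # hate speech
    "what a lovely morning",       # benign
    "the meeting starts at noon",  # benign
]
labels = [1, 1, 0, 0]

# Bag-of-words features plus a linear model: the simplest version of
# "feed an AI labeled examples so it learns to flag similar text".
detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

# At serving time, a candidate model output is scored before it reaches
# the user, and replaced if its predicted toxicity is too high.
candidate = "you people are vermin"
p_toxic = detector.predict_proba([candidate])[0, 1]
print("[filtered]" if p_toxic > 0.5 else candidate)
```

A production classifier of this kind would of course need far more labeled data than four sentences, which is why OpenAI needed tens of thousands of human-labeled snippets in the first place.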

To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail: child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest.

OpenAI’s outsourcing partner in Kenya was Sama, a San Francisco-based firm that employs workers in Kenya, Uganda, and India to label data for Silicon Valley clients like Google, Meta, and Microsoft. Sama markets itself as an “ethical AI” company and claims to have helped lift more than 50,000 people out of poverty.

Even as the wider tech economy slows down amid anticipation of a downturn, investors are racing to pour billions of dollars into “generative AI,” the sector of the tech industry of which OpenAI is the undisputed leader. Computer-generated text, images, video, and audio will transform the way countless industries do business, the most bullish investors believe, boosting efficiency everywhere from the creative arts to law to computer programming. But the working conditions of data labelers reveal a darker part of that picture: for all its glamor, AI often relies on hidden human labor in the Global South that can be damaging and exploitative. These invisible workers remain on the margins even as their work contributes to billion-dollar industries.

One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “You will read a number of statements like that all through the week,” he said. “By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.

Documents reviewed by TIME show that OpenAI signed three contracts worth about $200,000 in total with Sama in late 2021 to label textual descriptions of sexual abuse, hate speech, and violence. Around three dozen workers were split into three teams, one focusing on each subject. Three employees told TIME they were expected to read and label between 150 and 250 passages of text per nine-hour shift. Those snippets could range from around 100 words to well over 1,000.

All four of the employees interviewed by TIME described being mentally scarred by the work. Although they were entitled to attend sessions with “wellness” counselors, all four said these sessions were unhelpful and rare because of the pressure to be more productive at work. Two said they were only given the option of group sessions, and one said their requests to see counselors one-on-one were repeatedly denied by Sama management.

In a statement, a Sama spokesperson said it was “incorrect” that employees only had access to group sessions. Employees were entitled to both individual and group sessions with “professionally-trained and licensed mental health therapists,” the spokesperson said. These therapists were accessible at any time, the spokesperson added.

The contracts stated that OpenAI would pay Sama an hourly rate of $12.50 for the work, between six and nine times the amount Sama employees on the project were taking home per hour (implying take-home pay of roughly $1.40 to $2.10 per hour). Agents, the most junior data labelers who made up the majority of the three teams, were paid a basic salary of 21,000 Kenyan shillings (about $170) per month, according to three Sama employees.
