
Behind ChatGPT's "hello"

ChatGPT is considered one of the biggest technological breakthroughs of the past year. OpenAI, the developer of the AI, has raised capital from investors at a valuation of $29 billion. Behind it, however, lies a hidden corner that few people know about. An investigation by Time revealed that OpenAI hired low-paid Kenyan workers to do the content moderation work that allows ChatGPT to generate its impressive responses.





An image of "an African worker working in front of a computer screen", generated by OpenAI's artificial intelligence. Illustration: Time

This work is vital to OpenAI. Before ChatGPT, the GPT-3 platform could already string sentences together, but it was difficult to release publicly because of its propensity to produce offensive, sexist, and racist remarks. This is a common problem for AI models, since their input data is typically collected from the internet, which contains a great deal of malicious and false information. It is estimated that, given the amount of data OpenAI collects to train its AI, the company would need hundreds of people working for decades to sift through it all manually.

Controlling AI with AI

An effective way to limit AI bias and error is to build an additional AI-powered safety mechanism, similar to what major platforms such as Facebook have already done. OpenAI researched and developed an AI that can detect hate speech and strip it out, so that users are served a "clean" result.

The principle is simple: the company feeds an AI examples of violence, hate speech, and sexual abuse. After "eating" this malicious data, the AI learns to detect the keywords and content associated with it. The detector is then integrated with ChatGPT to screen and filter out content deemed unsafe before it reaches the user. It also helps remove toxic text from OpenAI's existing datasets before future AI models are trained on them.
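To make the idea concrete, here is a minimal sketch of such a filter, assuming a generic off-the-shelf text classifier rather than OpenAI's actual tool: a detector is trained on a few labeled examples of harmful and benign text, then screens each reply before it is shown to the user. The tiny dataset, the 0.5 threshold, and the function names are hypothetical placeholders.

```python
# Minimal sketch of "controlling AI with AI": a classifier trained on
# labeled harmful/benign text screens outputs before users see them.
# This is illustrative only, not OpenAI's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = harmful, 0 = benign.
texts = [
    "I will hurt you",             # harmful
    "you are worthless and ugly",  # harmful
    "have a nice day",             # benign
    "the weather is lovely",       # benign
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic-regression detector.
detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

def filter_reply(reply: str) -> str:
    """Pass the reply through only if the detector scores it benign."""
    p_harmful = detector.predict_proba([reply])[0][1]
    if p_harmful > 0.5:  # hypothetical cutoff
        return "[response withheld by safety filter]"
    return reply

print(filter_reply("the weather is lovely"))  # passes through
print(filter_reply("I will hurt you"))        # withheld
```

In a real deployment the detector would be a far larger model trained on vast amounts of labeled data, which is exactly where the human labelers described below come in.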

The question is how to build a repository of malicious data large enough for such training.

Across the Indian Ocean, under the hot African sun, a Kenyan outsourcing company has the answer. Sama, an OpenAI partner, is well known to major Silicon Valley companies such as Google, Meta, and Microsoft. It bills itself as an "ethical artificial intelligence" company that has lifted more than 50,000 people out of poverty. Its workers' job is to find malicious content in the darkest corners of the internet, covering child abuse, bestiality, murder, suicide, torture, self-mutilation… to enrich the training input for its AI partners.

On behalf of OpenAI, Sama pays its data labelers between $1.32 and $2 per hour, depending on seniority and productivity.

A hidden corner that few know about

"Despite the important role they play, these workers' lives are precarious and their working conditions harsh," the Partnership on AI said.

An OpenAI spokesperson confirmed that employees at Sama in Kenya contributed to a malicious-content detection tool the company developed, which has since been integrated into ChatGPT. "Our mission is to ensure that AI benefits all of humanity, and we work to build safe and useful AI systems that restrict biased and harmful content. Filtering out harmful content is an important step in reducing the amount of violence and sexuality in training data," the OpenAI representative told Time.

AI is expected to be a beacon guiding the tech industry through dark times. But the working conditions of those who label the data reveal a darker side of the picture: artificial intelligence rests on human labor that can be exploited and harmed through low wages, all in service of a multi-billion-dollar industry.

One Sama employee responsible for reading and labeling text for OpenAI told Time that he suffered recurring visions after reading a graphic description of a man having sex with a dog in front of a child. "It's torture. You'll read stuff like that all week. It drives you crazy at the weekend just thinking about it. It's a toxic work environment for us," he said.

Contract between Sama and OpenAI

OpenAI signed three contracts with Sama, totaling about $200,000, in late 2021 to label written descriptions of sexual abuse, hate speech, and violence. Roughly three dozen employees were split into three teams, each focusing on one type of content. Three employees said they had to read and label 150 to 250 passages of text in every nine-hour shift, each passage running from 100 to 1,000 words.

One Sama employee had to read rape stories taken from porn sites, then answer a test from OpenAI about whether the passage counted as sexually violent content. Such labeling made up Sama's work before the data could be used to "feed" the AI.

All of the employees interviewed said the work left them seriously traumatized, despite repeated health consultations. A Sama spokesperson said the doctors brought in were professionally trained and licensed.

Under the contract, OpenAI pays its partner $12.50 per hour, but employees receive only about one sixth of that amount. On average, they make $170 per month, plus about $70 in bonuses for meeting performance metrics. With overtime, each person can earn between $1.32 and $1.44 per hour after tax, which is less than Kenya's minimum wage for secretaries.

An OpenAI spokesperson said the company did not set productivity targets, and that its partner was responsible for managing payments and employees' mental health. "We take the mental health of our employees and contractors very seriously. Workers can decline any task without penalty, exposure to obscene content is limited, and sensitive information is handled by specially trained workers," the OpenAI representative said.

However, the relationship between OpenAI and Sama began to deteriorate in February 2022, and Sama eventually stopped all work for OpenAI eight months before the contract was due to end. The outsourcing company said OpenAI's instructions for an image-collection project did not specify what was prohibited by law. OpenAI then sent "further instructions," but Sama still issued a notice canceling the entire project.

Following Sama's decision, a group of workers was moved to a lower-paid project classifying pornographic content for about $70 a month. Because the contract was terminated early, OpenAI and Sama said the $200,000 was never paid in full.

Experts say the need for workers who label data for AI systems will not go away as the industry grows. "They're impressive, but ChatGPT and other AI models aren't magic. They rely on massive supply chains of human labor and scraped data," said Andrew Strait, an AI ethicist.

Huong Nha (according to Time)
