Content moderation is a new factory floor of exploitation – labour protections must catch up

26 June 2025 | 7 mins

Many millions of people scroll seamlessly through polished, curated content on social media platforms like TikTok and Instagram, and pose questions to AI tools like ChatGPT daily. 

But few know that behind this experience sits an invisible workforce of content moderators and data labellers, who filter, flag, and annotate immense volumes of violent and sexually explicit content to keep social media spaces safer and AI tools functional, in line with the platforms’ own standards and applicable laws.

Data labellers and content moderators are the unseen gatekeepers of our digital lives, but they often work in exploitative conditions. Equidem’s new research has documented violations of international labour standards, including International Labour Organisation (ILO) protections on fair wages, the right to unionise, and safeguards against forced labour, alongside a persistent disregard for workers’ mental health. As the NYU Stern Center notes, “Watching the worst social media has to offer leaves many moderators emotionally debilitated. Too often, they don’t get the support or benefits they need and deserve.”

Moderators are often subcontracted by multinationals through third-party outsourcing companies with offices in poorer countries, adding further layers of complexity when it comes to accountability.

This is a new, modern-day factory floor. Unless labour regulations catch up with the digital age, we risk repeating industrial-era exploitation on a global scale.

Long days, poor working conditions and opaque outsourcing

Data workers are often subjected to punishing work schedules of 18 to 20 hours a day, in blatant violation of labour laws. A content moderator in Colombia, working for a firm subcontracted by Meta and ByteDance, described having just seconds to react to each item in a daily queue of up to 1,000 images. He told Equidem, “We have to moderate a lot of cases. In a normal shift, we have to go through 700 to 1,000 cases, which means we have to accomplish an average time of 7-12 seconds to moderate each case.”

These conditions are not isolated; they are replicated across companies and regions, most prevalently in African, South Asian, and Southeast Asian countries, revealing a global pattern of exploitation.

As in many other sectors, tech giants typically operate through complex chains of subcontractors and outsourcing firms, creating legal and ethical distance from poor working conditions on the ground. For example, Meta has outsourced content moderation to firms like Sama in Kenya, where moderators have reported psychological trauma, poverty wages, and the suppression of union organising, conditions that would not be tolerated under US or European labour laws. Similarly, data labelling for AI systems is frequently offloaded to workers in the Philippines or Colombia, often under opaque contracts and with little recourse for grievances.

According to media reports, more than 140 former Facebook content moderators who worked at the Nairobi facility run by Meta’s subcontractor Sama have sued Meta and Samasource Kenya. They allege severe psychological trauma, including post-traumatic stress disorder (PTSD), anxiety, and depression, stemming from exposure to disturbing content such as necrophilia, child sexual abuse, and terrorism. The case invokes Kenyan laws on forced labour, human trafficking, modern slavery, unfair labour practices, intentional infliction of mental harm, and more. As reported by The Guardian and Time, several other major tech companies have come under scrutiny for labour rights violations in their outsourced moderation and data-labelling operations. TikTok (ByteDance) and OpenAI have subcontracted work to firms in Kenya and Colombia, where workers report poverty wages, psychological trauma, and union suppression. Firms like Google, Amazon, and Microsoft, through vendors such as Appen, have also been linked to exploitative data-labelling jobs paying as little as $1.77 per task.

This setup exploits legal loopholes across borders: lax enforcement in one country shields tech companies from accountability, while responsibility is pushed onto third-party vendors operating in regulatory grey zones.

Human cost of content moderation

Our work at Equidem on the human costs of content moderation and data labelling includes a recently published study based on interviews with 113 content moderators and data labellers across Colombia, Kenya, and the Philippines. During those interviews, my colleagues and I heard horrific stories from individuals who faced punishing work conditions, including long hours without breaks and repeated exposure to video content containing extreme violence. For some workers, the result was psychological harm and mental breakdown.

On top of this, these digital workers endure unstable employment, lack fixed salaries, and are routinely forced into unpaid overtime and mandatory unpaid training. Because pay is low and often task-based, workers are penalised financially for missing targets they consider unrealistic or for taking essential sick leave, sometimes losing their entire bonuses or facing unfair wage deductions. These exploitative practices are compounded by forced periods of unpaid unemployment, leaving workers in a constant state of economic precarity. Alongside these financial abuses, many also suffer profound psychological harm and sexual harassment. Effective grievance channels are essential for addressing and resolving these labour rights violations. But as one Kenyan data labeller told us, “There are no clear dispute resolution mechanisms with management. The only available space is talking to the supervisor, which has not been effective.”

When exploitation becomes standard practice, facilitated by weak regulation, it is no longer accidental – it is institutional. These conditions amount to a clear breach of the ILO Violence and Harassment Convention, 2019 (No. 190), which recognises such systemic and sustained harm as workplace violence. It is worth noting that the home countries of Meta, Google, Microsoft, and OpenAI (all USA) and of ByteDance (China), as well as Kenya, have not ratified the convention. In the Philippines, the convention has been in force since February 2025; in Colombia, the ratification bill has been approved by the House of Representatives and awaits presidential approval.

Regulatory frameworks are not sufficiently adapted to the digital age

While some governments have taken steps to protect these digital workers, those steps are often limited and inadequate. In 2023 and 2024, Kenyan courts heard a landmark case brought by former content moderators against Meta and its subcontractor Sama. While this reflects a degree of judicial independence, the government itself has been largely passive, and there is no systemic protection mechanism yet. A 2022 ruling by a Colombian court forced a gig economy platform to recognise some workers as employees, suggesting potential, but progress is slow and has not yet extended to outsourced digital labour.

The exploitation of content moderators can only be addressed through comprehensive and enforceable legal protections. But this is not a challenge that can be solved at the national level alone. 

Countries must begin working together to set regional and cross-border standards that can hold tech firms accountable regardless of where they operate. Regional bodies like the African Union (AU), the Association of Southeast Asian Nations (ASEAN), the South Asian Association for Regional Cooperation (SAARC), and the Caribbean Community (CARICOM) are well-positioned to lead these efforts by developing binding frameworks for digital labour protections and ethical outsourcing practices. At the same time, governments must strengthen enforcement of their own labour laws and hold local subcontractors accountable, ensuring that international firms cannot outsource harm through domestic intermediaries. 

Decisive domestic action is urgently needed on a range of issues including:

  • Enforcing existing labour laws robustly within domestic jurisdictions, especially in the digital and outsourcing sectors.

  • Creating specific legal provisions to regulate AI data work, content moderation, and algorithm-driven gig platforms.

  • Licensing and auditing local subcontractors, ensuring that those supplying digital labour to multinational tech companies are in full compliance with international standards on wages, working hours, mental health, and union rights.

  • Penalising subcontractors engaged in exploitative or opaque practices, including suspensions, fines, and public blacklisting.

  • Empowering labour inspectorates and digital ombuds offices to monitor tech-sector violations, with adequate resources and independent authority.

Binding transnational agreements and shared enforcement mechanisms are also needed to prevent companies from exploiting legal loopholes across jurisdictions.

National governments must also strengthen domestic labour laws to explicitly address the realities of outsourced digital work, ensuring strong protections for gig workers, mandatory mental health support for high-risk content roles, and strict regulation of algorithmic performance metrics that penalise human vulnerability.

France has led efforts within the European Union (EU) to reclassify gig workers as employees, particularly on transport and delivery platforms. French courts have ruled against platforms like Uber, asserting that gig workers are de facto employees entitled to benefits and protections, and France has also opened discussions on algorithmic fairness in the workplace. The EU Platform Work Directive, adopted in 2024, introduces a presumption of employment for platform workers unless proven otherwise and requires algorithmic transparency. It is not specific to outsourced content moderation, but it lays the groundwork for regulating algorithmic control and labour classification.

Governments must act collectively, engaging civil society and other stakeholders and working through the ILO, other United Nations agencies, and regional bodies, leveraging their regulatory authority to challenge a digital supply chain marked by cross-border exploitation. While processes such as the UN Treaty on Business and Human Rights, the ILO’s Discussion on Decent Work in the Platform Economy, and various national and regional due diligence laws represent steps in the right direction, they often fall short by focusing on general supply chains and overlooking the specific conditions of outsourced digital labour and AI annotation work. International organisations must facilitate the creation of binding, enforceable standards through processes led by member states to protect the rights and well-being of these workers.

Future binding frameworks must explicitly include outsourced digital labour and place obligations not just on tech multinationals but also on the domestic subcontractors through which they operate. This means guaranteeing protections for freedom of association, fair pay, mental health, algorithmic transparency, and safe grievance mechanisms. Without this, even the most well-meaning treaties risk reinforcing a two-tiered system of labour rights: one for physical industries, another for digital ones.

Tech giants must no longer be allowed to hide abuse behind subcontracting chains. The future of labour justice depends on the willingness of governments, companies, and international bodies to act. Ultimately, breaking this unjust cycle of exploitation of content moderators requires a global consensus that human dignity is non-negotiable, regardless of nationality or industry.

Eiffel Abedin is a senior investigator with Equidem, a human rights organisation that documents labour exploitation and corporate abuse in global supply chains, with a focus on the Global South.