700+ Workers Training Meta's AI Face Layoffs as Models Replace Them

More than 700 data annotators at Meta contractor Covalen in Ireland are at risk of losing their jobs — after training the very AI systems that are displacing them. Workers call it 'undignified.'

The Workers Who Built Meta's AI Are Being Replaced by It

More than 700 workers in Ireland who refine Meta's AI models have been told their jobs are at risk, according to documents obtained by WIRED. The affected workers are employed by Dublin‑based Covalen, which handles content moderation and labeling services for Meta. They were informed of the layoffs over a brief video meeting on Monday afternoon — and were not allowed to ask questions.

"We had a pretty bad feeling [before the meeting]," Nick Bennett, one of the employees on the call, told WIRED. "This has happened before."

Training Your Own Replacement

Roughly 500 of the affected workers are data annotators — people who check material generated by Meta's AI models against the company's rules barring dangerous and illegal content. Their work is literally training the AI to make the same judgments independently. "It's essentially training the AI to take over our jobs," another Covalen employee, who asked to remain anonymous, told WIRED. "We take actions as the perfect decision for the AI to emulate."

The work itself is punishing. Some annotators spend their days crafting elaborate prompts to try to bypass guardrails meant to prevent models from serving up child sexual abuse material or descriptions of suicide. "It's quite a grueling job," Bennett told WIRED. "You spend your whole day pretending to be a pedophile."

  • 700+ workers at risk: More than 700 Covalen employees in Dublin face potential layoffs, roughly 500 of whom are data annotators
  • Second round of cuts: This is the second layoff round in recent months — Covalen cut around 400 jobs in November, leading to a worker strike
  • Headcount halved: Between the two rounds, Covalen's Dublin headcount is on track to be almost halved, per the Communications Workers' Union

Meta's Official Line

Last week, Meta announced plans to cut one in 10 jobs as part of sweeping layoffs aimed at making the company more efficient. A memo reportedly indicated the layoffs were motivated by a need to increase spending on other aspects of the business — though it didn't mention AI specifically. The company recently announced plans to nearly double its spending on AI technology.

Meta spokesperson Erica Sackin told WIRED: "As we shared in March, over the next few years, Meta will be deploying more advanced AI systems to transform our approach to content enforcement and operations across our platforms, so that it delivers the safety and protection people expect. As we do that, we'll be reducing our reliance on third‑party vendors and strengthening our internal systems."

In other words: the AI these workers trained is now good enough that Meta needs fewer of them. The six‑month cooldown period preventing laid‑off workers from applying to competing Meta vendors, as WIRED reported, adds insult to injury.

Zuckerberg's 2026 AI Vision Meets Reality

In January, Meta CEO Mark Zuckerberg said, "I think that 2026 is going to be the year that AI starts to dramatically change the way that we work," as WIRED reports. For the Covalen workers, that prediction isn't a forecast — it's a layoff notice.

But the displacement pattern here is specific and instructive. Content moderation and data annotation are exactly the kind of tasks where AI models become self‑sustaining: once a model can reliably classify harmful content, you no longer need humans to do it at scale. The workers who labeled the training data literally built the system that makes them redundant. It's a pattern that's likely to repeat across every industry where human judgment is being captured as training data.

What Builders Should Take From This

For developers building AI‑powered tools, the Covalen layoffs carry a specific lesson: the data annotation phase of any AI project has a built‑in expiration date. Companies that rely on human labeling today should plan for the day when the model can label its own data — and structure their businesses accordingly.
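The dynamic can be sketched as a confidence-threshold routing loop, the pattern common to human-in-the-loop labeling pipelines. This is an illustrative toy, not Meta's actual system; the threshold, the stand-in "models," and the function names are all hypothetical.

```python
# Illustrative sketch of the self-labeling dynamic: items the model is
# confident about are auto-labeled, the rest go to human annotators.
# As the model improves on human-labeled data, the human queue shrinks.

CONFIDENCE_THRESHOLD = 0.9  # hypothetical policy threshold


def route_items(items, model_confidence):
    """Split incoming items into auto-labeled and human-review queues."""
    auto_labeled, human_queue = [], []
    for item in items:
        if model_confidence(item) >= CONFIDENCE_THRESHOLD:
            auto_labeled.append(item)
        else:
            human_queue.append(item)
    return auto_labeled, human_queue


# Stand-in classifiers: early in training the model is uncertain about
# everything; after training on human labels it clears the threshold.
early_model = lambda item: 0.5
mature_model = lambda item: 0.95

items = ["post_1", "post_2", "post_3", "post_4"]

_, early_queue = route_items(items, early_model)
_, late_queue = route_items(items, mature_model)

print(len(early_queue))  # 4 -- every item needs human review
print(len(late_queue))   # 0 -- the annotation work has expired
```

The "expiration date" in the text is the moment the mature model's confidence clears the threshold on most traffic: the same routing code that once filled the human queue now empties it.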

There's also a practical warning about the AI adoption wave that Fortune highlighted this week: the gap between AI's impact in tech and in the rest of corporate America is wide. Box CEO Aaron Levie noted that in tech, "the feedback loop is tight, the productivity gains are measurable, and the headcount math changes accordingly." Outside tech, not so much — legacy systems, fragmented data, and less technical workforces slow adoption dramatically.

  • Annotation has a shelf life: Every human‑labeling operation should plan for self‑labeling models — the transition comes faster than expected
  • The displacement pattern is clear: Tasks where human judgment becomes training data are the first to be automated — content moderation, data annotation, quality assurance
  • Tech vs. enterprise gap persists: AI displaces tech workers first because the feedback loops are tight — but the rest of the economy moves much slower

The Bigger Question

Christy Hoffman, general secretary of UNI Global Union, put it plainly: "Tech companies are treating the workers whose labor and data helped build AI as disposable," she told WIRED. "To fight back, it's absolutely critical that workers organize and demand notice about the introduction of AI, training linked to employment, and a plan for their futures. Workers should also have the right to refuse to train their AI replacements."

That last point — the right to refuse to train your replacement — is the one that should make every builder pause. The entire AI industry rests on human‑labeled data. If the people doing the labeling start refusing, the pipeline slows. If regulators start requiring companies to offer transition plans before displacing annotation workers, the economics of AI training change. This isn't just a labor story. It's a supply‑chain story that affects every AI product being built today.
