A contractor opened a clip and froze: a bathroom, an open door, a private moment recorded by a pair of smart glasses. You can feel the weight of that image even if you’ve never watched it; it lands in the chest the way bad news does. I kept thinking about who hired the person who had to watch that footage, and what they were told to say.
A contractor in Nairobi watched footage of someone using the toilet
That was one of the scenes disclosed this year by Svenska Dagbladet and Göteborgs‑Posten. The clips came from Meta’s Ray‑Ban smart glasses and were handed to data annotators in Kenya working for the outsourcing firm Sama. Their job: label the people, objects, and actions in each clip so AI models can learn to recognize them.
The task is tedious, meticulous, and—according to the reporting—harrowing. Contractors described seeing people undress, use the bathroom, watch pornography, and have sex. I don’t pretend this is abstract; you and I both know privacy violations don’t stay theoretical when they’re stored as video.
Why did Meta cut ties with Sama?
Meta told The Guardian it ended the contract because Sama “didn’t meet our standards.” That is a tidy corporate line that shifts responsibility. It’s easier to blame a vendor than to fix product design or change policy. The company also insisted users consented to human review — a claim that sounds defensive when the people doing the reviewing are seeing scenes no user would reasonably expect to be indexed by a third party.
A WhatsApp thread filled with complaints went quiet after layoffs
Workers who complained about exposure and working conditions were the ones who made the problem public. After the Swedish papers ran their piece, Meta severed the contract and Sama laid off more than 1,000 people. The layoffs came with six days’ notice, according to Oversight Lab, an advocacy group for African tech workers.
This isn’t just a personnel hiccup. It’s an ethics test. Meta removed a supplier instead of redesigning a workflow that funnels intimate, sensitive video to underpaid reviewers. Too often, companies treat whistleblowers the way they treat pests: removal is easier than repair.
Were Kenyan workers forced to review explicit footage?
The reporting says yes. These annotators sat through hours of unfiltered clips, applying labels for machine-learning training. The psychological toll was real; the job asked them to catalogue human moments that most platforms and products hide behind user privacy settings. When employees pointed this out, they were not given a solution; they were given unemployment.
A severed contract left more than a thousand people without pay
Meta’s move removed its direct link to the problem while leaving the human cost intact. Sama’s contractors lost income, benefits, and any immediate recourse. Oversight Lab is helping workers explore legal options, but legal fights take time and money that newly unemployed people don’t have.
Think of this as two images: a mirror held up to product teams, and a door slammed in the face of the workers paid to clean up the mess. The first would be accountability; the second is damage control.
What protections exist for data annotators reviewing sensitive content?
Policy is patchwork. Companies like Meta say they secure consent and review content to improve systems for Facebook, Instagram, and devices such as Ray‑Ban smart glasses. In practice, protections depend on contractors, vendors, and local labor rules. Advocacy groups, journalists, and industry watchdogs are increasingly the ones pushing for baseline standards: mental‑health support, clearer consent flows, and safer data pipelines.
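To make “safer data pipelines” concrete, here is a minimal sketch of one pattern along those lines: gate every clip behind an explicit-consent check and an on-device sensitivity classifier, so flagged footage never reaches a human annotation queue. Every name, threshold, and function below is hypothetical; this illustrates the idea, not Meta’s or Sama’s actual systems.

```python
from dataclasses import dataclass

# Hypothetical sketch only: names, thresholds, and the classifier are
# invented. The point is the ordering: screening happens BEFORE a clip
# can enter any human-review queue, not after a worker has seen it.

@dataclass
class Clip:
    clip_id: str
    frames: list                  # raw video frames; opaque for this sketch
    user_consented: bool = False  # explicit opt-in to human review

SENSITIVITY_THRESHOLD = 0.2       # deliberately low: err toward exclusion

def sensitivity_score(clip: Clip) -> float:
    """Stand-in for an on-device classifier (nudity, bathrooms, bedrooms).
    With no real model wired in, fail closed and flag everything."""
    return 1.0

def route_for_annotation(clip: Clip, review_queue: list) -> None:
    # Gate 1: no explicit consent means no human review, full stop.
    if not clip.user_consented:
        return
    # Gate 2: anything plausibly intimate is dropped before upload,
    # accepting lost training data as the price of protecting people.
    if sensitivity_score(clip) >= SENSITIVITY_THRESHOLD:
        return
    review_queue.append(clip)

queue: list = []
route_for_annotation(Clip("c1", frames=[], user_consented=True), queue)
print(len(queue))  # 0: the fail-closed default keeps the clip out
```

The ordering is the entire point: the cost of screening is paid by software before upload, not by a worker in Nairobi after the footage is already on their screen.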
I’m not neutral about the way this played out. You can either design a product that reduces the flow of intimate footage to human reviewers, or you can cut the vendor who complained. The latter is cheaper on paper and easier in press statements, but what does it buy you the next time users discover their most private moments are being labeled by strangers?
Who pays for that damage — the company that shipped the product or the people who warned everyone it was broken?