I was ten minutes into a late-night scroll when a colorful, soupy cartoon kept reappearing in my recommendations. You watch a familiar character melt into a drone and you sit up. I want you to hear what that moment means for parents, creators and platforms.
At bedtime this week, my daughter tapped a video and froze.
Google is funneling $1,000,000 (€930,000) from its AI Futures Fund accelerator into Animaj, a studio that makes AI-generated videos for kids, Bloomberg reports. Animaj will reportedly get early access to Google’s Veo video models and special insights from DeepMind — privileges that give it a head start on the same tools other creators will only see later.
I’m telling you this not to recycle a headline but because the money is strategic: it buys software access, model tuning and a closer line to Google’s research teams. That matters when the product is content aimed at young viewers.
Is Animaj creating AI-generated videos for kids?
Yes. Animaj’s own YouTube channel states it “acquires and turns iconic Kids’ IPs into global franchises using an AI-driven, digital-first, and multi-platform approach.” Co-founder Sixte de Vauplane told Bloomberg the company drove 22 billion views across its channels. Animaj previously raised €100,000,000 from Left Lane Capital and another $85,000,000 (€79,050,000) from HarbourView Equity Partners, so Google’s injection is relatively small in cash but large in platform leverage.
On the recommendation feed, I watched a string of clips that made no sense.
Last month the New York Times analyzed YouTube recommendations and found that after 15 minutes of following a popular, non-AI video, about 40% of the recommended material appeared to be AI-produced, much of it unlabeled. The result is often incoherent mush — goo being sluiced into animal shapes, animals morphing into machines — cheap spectacle replacing story and craft, like a factory spitting out cartoons.
You and I both see why that feels wrong: children learn from repetition, pattern and narrative. When algorithms prioritize novelty over coherence, the signal that shapes attention becomes noise. Google’s deal gives one company privileged access to model updates and research collaborations, which could mean more polished AI work — or a faster spread of the same sloppy outputs unless policy and editorial standards change.
Should parents be worried about AI videos on YouTube?
Worry is the wrong word; vigilance is better. Platforms such as YouTube and companies like Animaj are racing to scale. The New York Times and other outlets found many AI clips go unlabeled. You can monitor watch time, use supervised accounts and favor verified channels, but the greater risk is systemic: recommendation systems reward engagement, not quality, and that creates incentives for quantity over care.
On Bloomberg’s page, I read the co-founder saying he sees the problem too.
Sixte de Vauplane told Bloomberg that Google “knows the problem” of AI slop on YouTube. That’s an admission worth parsing: a platform acknowledging harm while investing in a company building the product that contributes to that harm is like handing a match to the person promising to put out the fire.
The market context amplifies the stakes. MoffettNathanson’s analysis, cited by The Hollywood Reporter, says YouTube quietly became the world’s largest media company, surpassing Disney’s media arm. When a platform with that scale and a commercial engine for watch time starts surfacing mass-produced AI content, you get a distribution machine that can turn rough outputs into mainstream culture.
What does Google’s investment mean for YouTube content moderation?
Practically: closer technical ties and earlier model access could let Animaj fine-tune outputs to perform better under YouTube’s ranking signals. Politically: it raises questions about conflicts of interest — a platform funding creators who receive preferential model access while policing the space they operate in. You should ask whether funding relationships create incentives that tilt moderation and discovery to favor certain formats or producers.
At my desk, I imagine a small studio doing fast edits at scale.
You and I both want safe, creative content for kids. The options available to parents, teachers and regulators are straightforward: demand transparency about AI use, require labeling, and push platforms toward stricter discovery controls for content targeting young viewers. Tools like YouTube’s supervised experiences, parental controls and third-party monitoring groups become more important as AI-generated material scales.
One last image: this moment feels like a street where every shop starts selling the same noisy toy, and you’re left choosing what your child will listen to as they grow. The market will move fast; the policies will lag. Who gets to decide what plays in that space — companies with privileged access, or the families and educators who care for kids — is the question we need to answer now.