Tubi’s Bad AI Branding Turns Users Against Recommendations

I was ten minutes into a late-night Tubi scroll when the app offered me the same three space films I’d dismissed last week. You probably know that feeling: the recommendation list that insists on being helpful but feels lazy. I closed the app and watched the comments light up—people were angry, and not just a little.

I tapped “@Tubi” inside ChatGPT and asked for a weird, cozy midnight movie.

The feature itself is simple: type a plain-language request—“a movie that feels like a fever dream but isn’t horror,” or “a thriller for tonight”—and Tubi’s ChatGPT integration returns picks from its library. On paper this is familiar territory: it’s the same promise Netflix used to make when it bragged about algorithms keeping you glued to the screen. What has shifted is the label. Tubi chose to brand routine content discovery with an AI badge and a ChatGPT app, and that label is doing more harm than good.

How does Tubi’s AI recommendation work?

Short answer: it layers modern language models on top of traditional recommendation systems. The backbone is still machine learning—user signals, watch time, ad economics—but OpenAI-style conversational interfaces now front the experience. That can feel like progress, or like a marketing sticker slapped on the same engine you were already using. When an interface talks like a friend but behaves like a commissioned salesperson, trust frays.
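To make the layering concrete, here is a minimal sketch of how a conversational front end might sit on top of a classic engagement-ranked recommender. Everything below is hypothetical: the titles, tags, scoring, and the idea of an LLM reducing a request to a tag set are illustrative assumptions, not Tubi's published architecture.

```python
# Hypothetical sketch: a conversational layer re-ranking the output of a
# traditional engagement-based recommender. Names and logic are invented.

from dataclasses import dataclass


@dataclass
class Title:
    name: str
    tags: set          # editorial/metadata tags
    watch_score: float  # engagement signal from the classic ML backbone


CATALOG = [
    Title("Moon Static", {"sci-fi", "weird", "slow"}, 0.62),
    Title("Night Diner", {"cozy", "drama", "midnight"}, 0.71),
    Title("Fever Lines", {"weird", "dreamlike", "thriller"}, 0.55),
]


def classic_recommend(catalog):
    """The pre-existing engine: rank purely by engagement signals."""
    return sorted(catalog, key=lambda t: t.watch_score, reverse=True)


def conversational_recommend(catalog, request_tags):
    """The new front end: re-rank the same candidates by how well they
    match tags extracted from a plain-language request. (An LLM doing
    the extraction is assumed and mocked here as a ready-made tag set.)"""
    def score(t):
        overlap = len(t.tags & request_tags)
        return (overlap, t.watch_score)  # request match first, engagement second
    return sorted(catalog, key=score, reverse=True)


# "a weird, cozy midnight movie" -> tags an LLM might plausibly extract
picks = conversational_recommend(CATALOG, {"weird", "cozy", "midnight"})
print([t.name for t in picks])  # ['Night Diner', 'Moon Static', 'Fever Lines']
```

The point of the sketch is the one the paragraph makes: the candidate pool and the engagement scores are untouched; only the ranking surface changes. The conversational layer is a new front door on the same engine.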

I saw a “creator-made” tag and kept scrolling.

Tubi’s plan to serve more short-form, creator-made pieces—some generated, some human—was covered by the Wall Street Journal and TechCrunch. The platform hopes that letting AI help churn out clips and scenes will win younger eyeballs away from TikTok and YouTube. But the reaction online suggests users distinguish between convenient tech and cheap substitutes. YouTube’s AI channels rack up billions of views, yet polls show people claim they don’t want AI-generated films. That contradiction is why Tubi’s messaging matters: the word “AI” now triggers suspicion as often as curiosity.

Will AI-generated content replace human creators?

I watch creators and studios react the way a neighborhood reacts to a new, flashy chain store: some adapt, some protest. AI can stitch quick clips, fill gaps, and scale short-form funnels that feed ad revenue. But when you dilute creative signals with algorithmic filler, the platform risks feeling like a mall jukebox that only plays yesterday’s hits. Audiences will notice. You will notice. And loyalties shift faster than ad dollars.

I read the Wall Street Journal headline and scrolled through X until the replies became a chorus.

Tubi is owned by Fox Corporation, the same corporate umbrella tied to Rupert Murdoch, and that context amplifies reactions. When a smaller streamer leans on AI in public, the politics and corporate ties get folded into the argument. Critics don’t only debate technical trade-offs; they suspect motive. Are these changes about user experience or a cheaper content pipeline for ad-supported growth? The answers influence whether viewers stay or jump to Netflix, Paramount+, or free, familiar feeds on YouTube.

Is Tubi owned by Fox?

Yes—Tubi operates under Fox Corporation, which colors how people read its moves. That corporate connection matters because viewers often conflate platform choices with broader media dynamics, and because brands like Netflix and YouTube are the comparison set when recommendations go wrong.

Here’s what I think: improving search and recommendations would have been quieter and more effective if Tubi had treated AI as plumbing, not a marquee. There is value in conversational discovery—OpenAI, ChatGPT and similar tools can make obscure films findable—but the moment a product feels engineered to cash in on an anxiety or novelty, the backlash grows. Gen Z uses AI, but polls indicate skepticism about its creative output, and public sentiment is a brittle currency.

Tubi’s gamble isn’t purely technical. It’s a branding problem and a timing problem. The company wants to compete with TikTok and YouTube for attention, yet it’s announcing experiments that many users interpret as downgrades. If you build recommendations that sound like a friendly assistant but behave like targeted ads, you end up sounding like a foghorn in a library.

So what should Tubi do next? Stop shouting “AI” from the marquee. Quietly refine relevance models, test small creator programs that highlight human voices, and let good recommendations do the persuasion. If the platform wants trust, give viewers fewer surprises that feel engineered and more surprises that feel earned.

Content recommendations have been around a long time. We just called it machine learning before. Will Tubi keep calling attention to the same old trick until viewers leave, or will it quietly make something better and let you discover it without the label?