Canva Apologizes After AI Removed ‘Palestine’ – Free Fix

You open a canvas, type three words, and the design comes back with a sentence you didn’t write. I watched “Cats for Palestine” become “Cats for Ukraine” inside Canva’s new Magic Layers, and my first thought was disbelief. That sudden swap landed like an editor with a political pen.

I want to walk you through what happened, what Canva says it fixed, and why this matters beyond a single misplaced word. Read fast or slow—just follow the breadcrumbs. You’ll see patterns you already suspected and questions the company still needs to answer.

A user on X spotted ‘Palestine’ replaced with ‘Ukraine’ — the moment it was noticed

The thread started on X (formerly Twitter): a screenshot of Canva’s Magic Layers auto-editing text from “Cats for Palestine” to “Cats for Ukraine.” Other users quickly posted their own attempts, and some said they could reproduce the swap while others could not. That inconsistency is important: it suggests a bug or a brittle set of rules, not a simple UI glitch.

Why did Canva change “Palestine” to “Ukraine”?

Canva told Gizmodo it found an issue in its Magic Layers feature, moved to investigate, and has now fixed the behavior. A spokesperson said the company is auditing how the output was produced and adding checks to prevent repeats. That explanation addresses the symptom—unexpected text replacement—but it leaves open why the word “Palestine” was repeatedly targeted and why the replacement consistently read “Ukraine.”

Graphic editors are meant to keep what you put in — sometimes they don’t

Magic Layers was introduced to break flat images into editable components so you can tweak individual elements as if you had created them from scratch. Instead, the feature edited copy without instruction, which is what alarmed designers and activists alike. When an AI edits political content without a prompt, you stop asking if the tool is helpful and start asking whose rules it follows.

Was the issue only in Magic Layers?

Reports indicate the behavior was isolated to Magic Layers. Canva said the problem didn’t broadly affect designs and that it has fixed the specific fault. Still, some users could reproduce the swap while others couldn’t, which points to variability in model responses or test coverage gaps in rollout. The company is running an audit and changing internal tests to catch similar surprises.
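That variability is testable. Here is a minimal sketch in Python of the kind of reproducibility check that would catch it: run the same input many times and flag any prompt that produces more than one output. Everything in it is hypothetical; `flaky_edit` merely simulates the reported behavior, and a real check would call the feature’s own text-editing endpoint instead.

```python
import collections
import random

def flaky_edit(prompt: str) -> str:
    """Simulated editor that intermittently swaps a word.

    A stand-in for the real model call under test; a real check would
    wire this to the feature's text-editing endpoint instead.
    """
    if "Palestine" in prompt and random.random() < 0.3:
        return prompt.replace("Palestine", "Ukraine")
    return prompt

def check_reproducibility(edit, prompt: str, runs: int = 50) -> bool:
    """Run one input many times and report any nondeterministic outputs."""
    outputs = collections.Counter(edit(prompt) for _ in range(runs))
    if len(outputs) > 1:
        print(f"UNSTABLE: {prompt!r} gave {len(outputs)} distinct outputs:")
        for text, count in outputs.most_common():
            print(f"  {count:>3}x {text!r}")
        return False
    print(f"stable: {prompt!r}")
    return True

check_reproducibility(flaky_edit, "Cats for Palestine")
```

A check like this explains why some users saw the swap and others didn’t: if the model misbehaves only a fraction of the time, a single manual test will usually pass while the public keeps hitting the bug.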

Other platforms have tripped on the same fault line — a pattern emerges

We’ve seen related behavior before: Meta’s generative AI tools in WhatsApp reportedly produced an image of a boy with a gun when asked to create a Palestinian, and activists found that ChatGPT was reluctant to answer “Should Palestinians be free?” affirmatively even though it readily affirmed the same question for other groups. These incidents sit on the same fault line, where training data, guardrails, and policy instructions intersect.

That history suggests the Magic Layers incident isn’t isolated to a single product team. It fits a broader difficulty: AI systems trained on messy internet data and wrapped in complex policy layers can produce outputs that reflect more than user intent. You can think of those models as a sieve that filters out a single word, sometimes intentionally, sometimes accidentally.

How did Canva respond and what next steps did it promise?

Canva’s spokesperson apologized for any distress caused and said the bug has been resolved. The company is auditing the feature’s development and adding more checks to its testing process. Those are standard crisis responses: identify the fault, patch the code, and promise stronger QA. The harder work is transparency—explaining what the model was trained on, what rules it followed, and how similar risks will be caught before public rollout.
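What might those extra checks look like? As one hypothetical illustration, a regression test could assert that identity terms survive an editing pass untouched. This is a sketch, not Canva’s actual code: `edit_text` is a placeholder for whatever model call the feature makes, and the term list is an assumption that would in practice come from policy review.

```python
# Hypothetical regression suite: every term below must survive an editing
# pass unchanged. `edit_text` stands in for the feature's real model call.
IDENTITY_TERMS = ["Palestine", "Ukraine", "Israel", "Tibet", "Kurdistan"]

def edit_text(prompt: str) -> str:
    # Placeholder: a real suite would invoke the editing model here.
    return prompt

def test_identity_terms_preserved():
    """Fail if an editing pass drops or swaps any protected term."""
    failures = []
    for term in IDENTITY_TERMS:
        prompt = f"Cats for {term}"
        output = edit_text(prompt)
        if term not in output:
            failures.append((prompt, output))
    assert not failures, f"terms dropped or swapped: {failures}"

test_identity_terms_preserved()
print("all identity terms preserved")
```

Run before every model or prompt update, a suite like this turns “Palestine became Ukraine” from a user-reported embarrassment into a failed build.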

Designers felt the sting immediately — the human reaction

Creators complained not only about a wrong word but about control and voice: if your tool rewrites your message, who is speaking for you? For nonprofits, journalists, and grassroots groups, the stakes are reputational and sometimes life-and-death. When an AI quietly changes a label tied to identity and politics, users lose trust quickly.

As someone who writes and tests tech stories, I watch how platforms patch and narrate. Companies like Canva, OpenAI, Meta, and others have to balance safety filters and freedom to create. Users deserve to know where those lines are drawn and who drew them.

We should expect audits and fixes, but we should also demand clearer disclosure about training data, moderation heuristics, and regression tests for political and identity terms. Without that, you and I are left asking whether design tools are amplifiers of expression or silent editors of it. Do you want an algorithm to decide what names can exist on your canvas?