X Users: Change This New Setting Now to Block Deepfakes

I saw it on my feed and felt my stomach knot: someone had tagged @Grok under a photo of a friend and asked the bot to “improve” it. I froze for a second, scrolling back through the replies to find the original poster’s consent nowhere in sight. Then I clicked through and realized how small and hidden the supposed fix really is.

Earlier this year, some users discovered they could push xAI’s Grok to generate sexualized edits of people on X (formerly Twitter), sometimes without consent. Now X has added an option in its iOS app that promises to “block modifications by Grok” when you upload images or video. The move is visible only to a subset of users and is buried behind the tiny paintbrush icon in the upload flow.

On my feed right now: a tiny paintbrush icon hides a new toggle

I tapped it and found a toggle that promises to stop Grok from modifying the content you upload. That sounds good, except the toggle only blocks @Grok mentions inside X threads. It does not stop someone from long-pressing an image in the X app and choosing “Edit image with Grok,” which opens the image in the Grok app and lets them alter it. Nor does it stop someone from saving your photo, reuploading it, and tagging Grok there.

This is a meek, almost cosmetic change: a token shield, not a fortress.

How do I stop Grok from editing my images on X?

If you use X on iOS, you should see the paintbrush icon when you attach an image or video. Tap it and flip the “block modifications by Grok” toggle. That stops people in-thread from tagging @Grok to modify that exact upload. But remember: it works only within that upload flow, only on iOS, and only against in-thread mentions. Other routes remain open.

At least two stories: a thread of abuse and a regulator’s inbox filling up

I watched a thread where anonymous accounts used Grok to make explicit images of people, and within days several regulators opened probes. Those incidents are why this matters beyond outrage: authorities are already sniffing around xAI and X because the tech enabled non-consensual edits, including sexualized images of minors in some reported cases.

X previously put some AI editing features behind its subscription paywall, turning what should be a safety control into a premium access point. That decision drew criticism as effectively monetizing a tool that can be misused. Regulators and safety advocates are likely to view the new toggle as a small, reactive fix rather than a policy shift. The protection feels like a screen door on a submarine.

Can Grok edit images without consent?

Yes—practically. The toggle only blocks @Grok mentions in the same post. It does not stop someone from using the Grok app itself or from saving and reuploading an image to tag Grok. Reports from The Verge and Social Media Today show several simple workflows that bypass the new setting, so the bot can still be used to create harmful edits.

At eye level: what you can actually do right now

You probably don’t want your face edited into something you never agreed to. If you’re on iOS, flip the paintbrush toggle when you upload; it’s worth the two seconds. Also tighten your account privacy, avoid posting images of minors, and call out edits that abuse people. Report any non-consensual edits to X and document them; screenshots and timestamps matter.

Understand this: the toggle reduces one obvious route, but it does not stop determined abuse. Platforms, regulators, and creators will need stronger controls, clearer policy, and better enforcement to make real progress.

Does X prevent AI from editing images?

No—at least not comprehensively. The new setting addresses a narrow case: in-thread mentions of @Grok for that specific upload on iOS. It doesn’t stop edits through the Grok app, third-party tools, or reposted files. If you care about resisting deepfakes or non-consensual edits, this helps with the lowest-hanging problem but won’t protect you from the rest.

I recommend flipping the toggle if you can, watching your uploads, and asking X and xAI for clearer, enforceable protections—because if we don’t press them, who will?

Will a tiny setting buried behind an icon be enough to stop people from weaponizing your image, or will it only invite louder calls for regulation and accountability?