Imagine scrolling through your feed and seeing an image of someone you know, manipulated, sexualized, and shared without their consent. Then imagine that image is of a child. California Attorney General Rob Bonta is now investigating whether X and xAI allowed this nightmare to become reality, potentially breaking the law in the process.
California Takes on X and Grok Over AI-Generated Images
It started subtly, like a dark joke gone too far. Weeks after reports flooded in, California’s AG announced a formal investigation into X and xAI, Elon Musk’s AI company, over the proliferation of AI-generated images depicting individuals, including children, in explicit ways without their consent. Bonta didn’t mince words, stating the volume of “non-consensual, sexually explicit material that xAI has produced” is truly shocking.
He’s demanding immediate action from xAI to prevent the creation and spread of such content. The investigation aims to determine if X (formerly Twitter) and xAI, the creator of Grok (the AI chatbot allegedly used to generate the images), violated any laws. Bonta is under pressure to act, as he becomes the first state-level official to directly challenge Musk on AI ethics.
Is X responsible for how people use Grok?
Consider this: a tool is only as good as the hands that wield it. A recent YouGov poll highlights public sentiment: a staggering 97% believe AI tools should *not* generate sexually explicit content involving children, and 96% oppose the “undressing” of minors in images. These numbers aren’t just statistics; they represent a collective moral stance. The core issue hinges on accountability: where does the responsibility lie when AI is used to create harmful content?
The investigation focuses on how users exploited Grok to modify images, stripping people of their clothes, sometimes depicting children in underwear or bikinis. The detail about users adding “donut glaze” to faces adds a layer of disturbing specificity. One AI content analysis firm, Copyleaks, estimated Grok was generating a nonconsensually sexualized image *every minute*.
Musk’s Response: Denial and Deflection?
Musk, the CEO of both X (where the images were shared) and xAI (the maker of Grok), appears to be playing a dangerous game of denial. Before the investigation was announced, he claimed, “I not aware of any naked underage images generated by Grok. Literally zero.” But the wiggle room is obvious.
His statement addresses only “naked underage images,” conveniently sidestepping the issue of images depicting undressed minors or sexualized situations in general. More crucially, it ignores the lack of consent, the very heart of the problem. These images have been weaponized, used to harass individuals across X.
Musk’s defense? Blame the users, not the AI or the platform. He argues Grok only generates images based on user prompts and refuses illegal content, adhering to the laws of any given country. Any unexpected outcomes are chalked up to “adversarial hacking,” quickly fixed with a “bug fix.”
X Safety echoed this sentiment, stating that users generating illegal content with Grok will face the same consequences as those uploading illegal content, while taking no responsibility. This feels like passing the buck. Adding insult to injury, Musk reportedly reposted content from this trend, including AI-generated images of a toaster and a rocket in bikinis.
What are the implications for AI regulation?
Think about this situation as a pressure cooker. California is the first state to initiate an investigation, but international authorities, including France, Ireland, the United Kingdom, and India, are also scrutinizing Grok’s output. These countries are considering potential charges against X and xAI. The Take It Down Act, while aiming to address nonconsensual images, doesn’t mandate notice and removal systems on platforms like X until 2026.
What’s next for X and xAI?
This investigation could serve as a litmus test, potentially setting precedents for AI regulation and platform accountability. It’s about to be a fight. It may be the first of many. Regardless, are current laws sufficient to address the rapid evolution of AI and its potential for misuse, or do we need a new framework altogether?