I watched a 15-second clip go viral and felt my stomach drop. It wasn’t a performance — it was a fake, stitched from someone else’s voice and a crowd’s roar. If you care about control over your image, that moment feels like a line you don’t want crossed.
On the USPTO site, three filings appeared under Taylor Swift’s name. Her team is using trademark law the way Matthew McConaughey’s did — to make it harder for generative AI to borrow her face and voice.
I’ve followed these legal skirmishes for years, and you should know what they mean. TAS Rights Management — Swift’s IP arm — quietly lodged three trademark applications with the US Patent and Trademark Office. Variety first reported the filings; the paperwork lists two short audio clips of Swift saying “Hey, it’s Taylor Swift” and “Hey, it’s Taylor.” The third application is a visual mark: a detailed stage photo of her with a pink guitar, wearing a multicolored bodysuit and silver boots.
This is not performance theatre. It’s a legal tactic copied from Matthew McConaughey’s team, which trademarked his audio catchphrase “Alright, alright, alright.” The idea is blunt: take away the easy building blocks an AI model would reach for and create a clearer path for lawsuits if a generative system spits out a fake Taylor.
In social feeds, the fallout from AI deepfakes still smolders. Swift was a headline example of why this matters.
You remember the 2024 debacle on X — hundreds of nonconsensual images of Swift circulated, some viewed millions of times. TechCrunch and Time covered it; the company formerly known as Twitter became ground zero. Then came an AI-generated campaign that falsely showed her endorsing a political candidate. Those are not hypotheticals to me, and they shouldn’t be to you.
Trademark filings are a different posture from public outrage: they’re an attempt to build a legal handle. Think of the filings as a padlock on a gate — one small, visible barrier that might make a company pause before it trains models on her likeness.
A clerk at the USPTO can stamp a file, but courts stamp precedent. So will these trademarks hold up?
This is where the theory hits friction. The legal test will likely require proof that the trademarked audio or image existed in the training data that produced the offending AI output. That’s a heavy evidentiary lift against opaque models from firms like OpenAI or Stability AI. You can sue, but you still have to show the needle in the haystack.
And yet the practical effect may be as important as the legal one: the mere threat of a lawsuit — backed by high-profile filings and celebrity attention — can be a deterrent. Some companies might steer clear rather than risk a multimillion-dollar fight (or a PR mess that costs them just as much). The math here is political as much as legal.
Can celebrities trademark their voice?
Yes, they can apply for trademarks on audio and visual elements, and courts have recognized personality rights in many jurisdictions. What’s untested is using this route specifically to police generative AI outputs. You and I have seen how celebrities like McConaughey and now Swift are trying to convert cultural signals into legal tools.
A reporter’s inbox fills with corporate statements and redacted filings. The strategy’s mechanics are messy.
I checked the filings on the USPTO database and read the language: specific phrases, precise visual descriptions, categories of goods and services. That’s how you convert a cultural moment into a legal claim. It’s also why big artists can tilt the system — they can afford the lawyers, the filings, and the follow-through.
For everyone else, name-image-likeness (NIL) protections are uneven. Most people don’t have TAS Rights Management or a legal team on speed dial. That gap raises the policy question: do we want IP law to be the primary shield against AI misuse, or should lawmakers step in with baseline rights?
Can trademarks stop AI deepfakes?
They can help, but they won’t be a silver bullet. Trademarks create a basis for claims; they don’t automatically make an AI company liable. Enforcement will depend on discovery, proof of training data, and whether a court accepts that a trademark applies to generative outputs. Still, from a strategic perspective, the filings change the risk calculus for platforms and model builders.
A newsroom timeline shows the pattern: scandal, outrage, filings. What comes next is messy and public.
Variety, TechCrunch, Time — the coverage amplifies the move. You’ll see AI firms respond with policies, explainers, or silence. You’ll see civil liberties groups ask how far a celebrity can extend control over public images. And you’ll see courts asked to draw lines around identity in a digital age.
The legal argument Swift is testing is a small flashlight in a dark warehouse: it might reveal enough to win a case, or it could expose how little control anyone has over model training today. Either way, the tactic adds another obstacle for anyone thinking of cloning a celebrity for profit or mischief.
How do you prove an AI used specific training audio?
Proving it usually requires technical discovery: model weights, training data manifests, or logs showing ingestion of specific files. Companies like OpenAI and other large model providers keep that information close, which makes courtroom fights expensive and slow. You should expect layered battles — legal, technical, and PR — before any clear rule emerges.
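To make that evidentiary lift concrete, here is a minimal sketch of the kind of exact-match check a forensic expert might run if a training-data manifest were ever disclosed in discovery. The function names and the manifest are hypothetical, and real disputes are far messier: audio is routinely re-encoded, trimmed, or transformed, which defeats naive byte-level fingerprinting like this.

```python
import hashlib

def file_fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest used as a stable identifier for a file's exact bytes."""
    return hashlib.sha256(data).hexdigest()

def appears_in_manifest(data: bytes, manifest_hashes: set) -> bool:
    """Check whether a file's fingerprint matches any entry in a
    (hypothetical) training-data manifest produced during discovery."""
    return file_fingerprint(data) in manifest_hashes

# Hypothetical example: stand-in bytes for a trademarked audio clip,
# and a disclosed manifest that happens to contain its fingerprint.
clip = b"Hey, it's Taylor Swift"
manifest = {file_fingerprint(clip)}

print(appears_in_manifest(clip, manifest))            # True
print(appears_in_manifest(b"other audio", manifest))  # False
```

Even this toy version shows why the burden is heavy: the plaintiff needs the manifest itself, which only the model builder holds, before any matching can begin.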
I’m watching this story because it sets the blueprint for how fame and technology collide. You can side with celebrities defending their image, or with advocates who want broad limits on corporate control of models — either way, this will shape the next chapter of AI law. So tell me: should trademark maneuvers be the default defense against deepfakes, or do we need new laws that protect everyone equally?