Do AI-Written Apologies Hold Up in Court? Judge Says No

I watched a judge stop mid-sentence and scroll back through a supposedly contrite letter. You felt the air thin when he said he had fed the text into two AI tools and recognized the pattern. The woman in the dock had set fire to a house and bitten a police officer.

I want to be direct with you: this case is a test of how we read remorse in the age of generative text. You and I both rely on small cues—tone, error, hesitation—to tell whether an apology was earned. When a machine helps you craft that apology, those cues change.

At the sentencing last week, the judge said he had ‘punched’ the text into two AI tools and suspected the letters were machine-made

That single observation shifted the whole proceeding. Judge Tom Gilbert, according to reporting by the New Zealand Herald and the New York Times, tested the apology letters and found the fingerprints of generative models: phrasing, cadence, tidy remorse. When the engine writes your sorrow, what part of you is left visible?

I want you to notice the stakes: the defendant, Michae Ngaire Win, faced charges of arson, burglary, assault and resisting police. A pre-sentence report had suggested home detention; the judge imposed 27 months in prison. The material facts are harsh. The question the judge raised was oddly intimate: was that apology hers?

On the bench he argued that a computer-generated letter does not prove genuine remorse

Here’s the practical problem: remorse matters because it predicts future behavior. A judge uses words, tone and personal detail to assess risk and rehabilitation. An AI can mimic those signals without living them. An AI apology can be a well-polished mask that reflects light but hides the face beneath it.

That doesn’t mean every AI-assisted note is fake. Sometimes a defendant will use a tool to order their thoughts, then add the messy, human details that prove ownership. Other times, the output is lifted wholesale and presented as personal. The difference is visible if you know where to look.

Can an AI-written apology affect sentencing?

Yes, because the court evaluates sincerity. If a judge believes a letter is machine-authored rather than materially owned by the person, it can reduce the weight given to mitigation. The New Zealand case shows how a single discovery can tilt a sentence away from leniency.

In the wider world, people already use ChatGPT, Bard and other tools to write for them

College students use ChatGPT for homework; professionals draft press releases and client emails with the same tools. The U.S. Copyright Office says fully AI-generated works can’t be registered, which signals how institutions treat authorship. When you let a model speak for you, you trade a slice of authenticity for convenience.

That trade has ripple effects. Employers will question where your voice stops and a model’s begins. Teachers will flag work with ChatGPT fingerprints. Judges will test letters the way the Wellington court did. Social norms are forming in real time, and the tests are public.

Is it possible to truly own something an AI wrote if you only supplied prompts?

Ownership depends on contribution. If you supply the facts, feelings and specific scenes, then edit the output heavily, you can reasonably claim authorship. If you type a few short prompts and paste the unedited prose into a court filing, the claim is weaker. You don’t get the moral credit of an apology by proxy.

I have sat through hearings like this and watched small details decide outcomes

Here’s where I ask you to be practical. If you are drafting important communications—legal letters, medical directives, or apologies delivered in court—use AI as a drafting tool, not a substitute for your voice. Add the jagged, human specifics: the names, the missteps, the concrete plan to repair harm.

Think of an AI letter as a photograph of a person rather than a painting: it can capture surface features quickly, but it won’t show the brushstrokes of a life. Courts, employers and loved ones are starting to look for those brushstrokes.

The New Zealand case casts a long shadow across professions and private life

Reporters at the New Zealand Herald and the New York Times gave the story wide circulation because it crystallizes a new question: how do we trust words in the machine age? The practical fallout will touch lawyers, teachers, HR teams and anyone who relies on written statements as evidence of intent.

I’ll be blunt: if you hand an AI a sensitive task and fail to mark it with your own errors and specifics, you’ve handed someone else the story of you. That can matter when consequences are real, and sometimes brutal.

So what should you do now? Use tools like ChatGPT, Google Bard or other assistants to draft, but then do what only you can do: add the mistakes, the odd memory, the private detail that proves you were there. Otherwise, when the letter reaches a skeptical judge, you might find your apology reads as if performed on your behalf.

When a court can tell the difference, will the rest of us be able to pretend?