I remember the moment I saw the brief: a list of cases that vanished under a quick search, each citation falling away like a mirage in a desert. You feel the floor shift when an entire argument depends on paper ghosts. The Oregon Court of Appeals didn’t wait for the mirage to dissolve on its own.
A clerk’s red pen uncovered 15 phantom citations and nine false quotes — the court called them fabricated and costly
I’ve read The Oregonian’s coverage and the court filings: Salem lawyer Bill Ghiorso submitted a brief to the Oregon Court of Appeals that included 15 fake citations and nine made-up quotations. The judges applied a penalty formula they had recently set: $500 (€460) per fake citation and $1,000 (€920) per false quotation, for an initial tally of $16,500 (€15,180). The panel capped the sanction at $10,000 (€9,200), the largest fine yet in Oregon for AI-driven fabrications.
Can AI create fake legal cases?
You probably guessed the answer before reading the ruling: yes. Generative tools—from ChatGPT and Bard to Google’s AI search responses—can produce citations that look authoritative but point to nothing. Ghiorso’s defense blamed a paralegal and Google’s AI answers, claiming the search engine confirmed the cases were “in fact real.” The court found that explanation inadequate.
A judge’s gavel recited an old obligation — attorneys must be truthful to the court
I’ve seen judges lose patience when the record is littered with inventions. The three-judge panel reminded counsel that submitting unchecked material can violate duties of professionalism, truthfulness, and candor. This ruling follows earlier Oregon and national examples: one attorney was fined $500 (€460) last month for a single fake citation, and a New Orleans panel ordered a $2,500 (€2,300) payment in February for similar misconduct, according to Reuters.
How do courts punish AI hallucinations?
Sanctions vary. Many lawyers still walk away with warnings, but courts are increasingly levying financial penalties and formal reprimands when fabrications affect the proceedings. The Oregon panel’s math of $500 per fake case and $1,000 per false quote gives litigators a predictable risk calculation. That predictability changes behavior faster than a lecture ever could.
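The panel’s arithmetic is simple enough to sketch. The rates and the $10,000 cap below come from the ruling as reported; the function name and structure are illustrative, not language from the opinion.

```python
# Rates and cap as reported in the Oregon Court of Appeals sanction.
RATE_FAKE_CITATION = 500    # dollars per fabricated citation
RATE_FALSE_QUOTE = 1_000    # dollars per false quotation
SANCTION_CAP = 10_000       # panel capped the total here

def sanction(fake_citations: int, false_quotes: int) -> int:
    """Return the capped sanction for a brief's fabrications."""
    raw = fake_citations * RATE_FAKE_CITATION + false_quotes * RATE_FALSE_QUOTE
    return min(raw, SANCTION_CAP)

# Ghiorso's brief: 15 fake citations and nine false quotations
# produce a raw tally of $16,500, capped at $10,000.
print(sanction(15, 9))  # prints 10000
```

Running the numbers this way shows why the cap mattered: the formula alone would have produced a fine more than half again as large.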
A paralegal typed questions into Google — the answers looked authoritative but were wrong
You and I both know how easy it is to ask a chatbot or search box for verification and accept the reply at face value. The judges explicitly warned that querying Google’s AI search or a chatbot is not a sufficient fact-check. Generative models can invent case names, reporter citations, and party combinations that read like precedent but have no legal existence.
I want to be clear: you can use tools like ChatGPT, Google, or Bard for drafting and brainstorming, but you must corroborate every authority in a primary source database such as FindLaw, Westlaw, or LexisNexis. Relying on an AI’s confirmation is a professional gamble with real penalties attached.
Can I trust Google to verify cases?
The short answer is no. Google’s AI summaries and other chat-based outputs can and do present invented authorities as if they were real. The court’s opinion calls that out, and the practical advice is simple: treat AI as a drafting assistant, not a citator.
A courtroom’s patience frays when a record is built on fiction — judges are starting to punish that more often
I track these rulings because they map the boundaries of acceptable behavior. There have been numerous reports of attorneys submitting documents containing AI hallucinations; some judges respond with warnings, others with fines. The Oregon sanction is a signal: the safety net is shrinking.
This moment feels like a house of cards: one false citation can topple an argument and cost an attorney thousands of dollars. If you practice law, treat the risk as you would malpractice exposure: verify every authority, cite primary sources, and document your checks.
Platforms and publications cited in the court opinion and press coverage include The Oregonian, FindLaw, and Reuters. The tools implicated include Google, ChatGPT, and other generative engines from OpenAI and Alphabet.
If you’re advising clients or supervising staff, make a checklist: confirm each citation in a primary database, keep a record of the verification step, and train support staff not to accept AI confirmations as final. That administrative friction is cheap compared with a sanction of $10,000 (€9,200).
I’ve watched capable lawyers assume an AI answer was good enough and pay for it. Will courts keep fine-tuning penalties until every brief is footnoted with human verification and a timestamped search log?