UK Judge Finds Witness Used Smart Glasses to Feed Answers

I watched the courtroom go quiet the moment the interpreter paused and the witness reached for his phone. For three minutes the room held a simple question: who was whispering answers into that man’s ear? You could feel the ordinary rules of testimony fold like paper.

I’ve been reading Judge Raquel Agnello’s ruling — and I want you to follow the trail she laid out. This wasn’t a tech thriller; it was a civil hearing in London where a businessman’s fate hinged on whether his words were his own.

In a London hearing, the quiet before an answer became the first clue

The case involved Laimonas Jakstys, a Lithuanian claimant asking to be restored as owner and director of a property development company in the Insolvency and Companies Court. I’ve covered cross-examinations where pauses mean thinking; this pause meant something else.

During cross-examination the interpreter said she could hear interference, and the defense lawyer told the judge the same. Jakstys was wearing smart glasses linked to his mobile. When the judge asked him to remove them, his phone suddenly began broadcasting a voice — someone speaking to him while he sat in the witness box.

Can smart glasses coach witnesses in real time?

Yes — and the judge accepted that the witness was being assisted. Evidence included audible interference heard by court participants, a phone that showed numerous calls that morning to a single number, and written statements that the judge described as prepared by others. Jakstys said the calls were to a taxi driver and later claimed the voice was ChatGPT, then said his phone was stolen but could not produce a police report. The judge found those explanations unconvincing and rejected his evidence as dishonest.

On the bench, a careful unravelling of small details built a clear pattern

When a trial hinges on credibility, every small inconsistency is a thread. I’ve seen judges pick up one thread and follow it until the whole sweater comes apart.

Judge Raquel Agnello KC wrote that Jakstys hesitated “quite a bit” before answering through his interpreter. She ordered the smart glasses and the phone handed to his solicitor. A photograph of the phone screen showed repeated calls to the same number that morning, including one moments before he entered the box. Agnello declined to find exactly who was coaching him; she simply accepted that he was receiving assistance and called his testimony “untruthful.”

Are Meta Ray‑Ban smart glasses allowed in courtrooms?

Courtrooms set their own rules. A Los Angeles judge recently warned that anyone recording with AI-enabled glasses could be held in contempt after Meta CEO Mark Zuckerberg testified and members of his team wore smart glasses. That case showed courts are already treating such devices as potential threats to procedure and privacy.

In the public sphere, the technology is already far ahead of etiquette

Smart glasses don’t come with social guardrails. Technologists I spoke to say adoption is outpacing policy, and the headlines back them up.

EssilorLuxottica, which makes Meta’s Ray‑Ban smart glasses, reported millions of units sold; smart glasses have become a household product and a public-policy headache. Apps like Nearby Glasses exist to warn people when AI-enabled wearables are nearby, built after reports of secret filming in private spaces and sightings of Customs and Border Protection agents wearing them on duty. Investigations have also shown that some footage captured by smart glasses can be reviewed by contractors to train AI models, which raises privacy alarms.

In this London case the device wasn’t just a privacy risk — it was literal courtroom sabotage. The judge found written statements “clearly prepared by others,” and she rejected Jakstys’s entire account. The theater of cross-examination expects spontaneity; when answers arrive through a speaker, testimony loses its value.

The moment the phone broadcast a voice, the courtroom felt like a stage being fed lines from the wings.

During and after the ruling, the legal and ethical questions widened

At the hearing, the court concentrated on credibility. In the weeks since, the broader question is enforcement: will judges ban wearables, seize them, or trust parties to police themselves?

This case suggests courts will take an active role. Agnello’s ruling did not invent a new rule, but it set a practical precedent: if a device is used to influence testimony, a judge can and will exclude the evidence and reject the witness’s credibility. That has consequences for lawyers, litigants, tech companies and regulators.

Could AI tools like ChatGPT actually speak through a phone during testimony?

Technically yes: text-to-speech tools, connected voice assistants, or an ordinary phone call can vocalize prompts in real time. Jakstys claimed the audible voice was ChatGPT; the judge didn’t accept his account, but the claim illustrates how AI tools can be weaponized in a courtroom. Expect legal teams and courts to start thinking about how to detect and prevent hidden prompting.

Meta, lawyers, and courts will now have to answer practical questions: do you ban all recording-capable wearables from the public gallery? Do you require devices to be surrendered before testimony? And who bears responsibility when evidence is shown to be coached?

The technology is a fast-moving tide, and the law is being asked to lay down stones in its path. Like a faint radio signal bleeding through static, these devices can turn private assistance into public deception. What will stop the next witness from slipping answers through their lenses into court — and who will be brave enough to try to stop them?