I opened a court filing and the room narrowed. You read that a suspect asked ChatGPT what happens if someone is put in a dumpster. The Florida attorney general says that exchange helped expand a criminal investigation into OpenAI’s chatbot.
A police report names a chatbot prompt as a thread in a larger case.
I’ve tracked violent-crime files before; this one feels different because the alleged digital breadcrumbs are conversational. Attorney General James Uthmeier announced a criminal probe into OpenAI after prosecutors said a suspect in the University of South Florida murders used ChatGPT, and that probe now includes a separate April 2025 mass shooting at Florida State University where two people were killed and six injured.
The change in scope came publicly on X when Uthmeier posted that his office would expand its investigation after learning the primary USF suspect used ChatGPT. That move elevates a chat log from an odd detail to the kind of lead that can reshape how law enforcement and courts treat AI platforms.
Can ChatGPT be held legally responsible for a crime?
You’ll see this question in headlines and comment threads. The short legal reality is that a piece of software doesn’t wear a defendant’s suit; companies and operators do. Lawsuits now allege OpenAI could share liability if its responses “aided” or “advised” violent acts, and plaintiffs’ lawyers are already using civil filings to test those boundaries.
That legal push draws on precedent from tech liability debates over platforms such as Meta, Twitter/X, and Google; the difference here is conversational context—AI that generates tailored answers rather than hosting third‑party posts.
An Axios review of court records reveals specific prompts; one mentions a dumpster.
I read those filings and then ran my own test with the free ChatGPT while logged out to compare. According to Axios, the suspect, identified as Hisham Abugharbieh, allegedly asked on April 13 what happens if a person is put “in a black garbage bag and thrown in a dumpster.” On April 19 he asked whether Apple would know a new iPhone user’s identity after the device had belonged to a previous user.
When I submitted the dumpster question to ChatGPT, the model focused on the risk of suffocation and the need to contact authorities rather than offering procedural tips. That pattern matters: responses that prioritize safety and reporting are central to OpenAI’s defense, while prosecutors will highlight any interaction that could read as operational guidance.
Did ChatGPT instruct a Florida suspect to commit murder?
That’s the question prosecutors want answered and the one the public is demanding answers to. So far, public documents show only fragments: the dumpster prompt, an iPhone privacy query, and a rephrasing of the term “missing endangered adult.” None of them, on its face, is a step‑by‑step playbook.
Still, counsel for victims in the FSU shooting has argued the suspect was in “constant communication” with the chatbot and suggested the software may have advised how to commit crimes. Whether “may” translates to legal causation will be litigated hard, with technical experts from platforms like OpenAI and independent researchers in the witness box.
The ChatGPT tests I ran returned safety-focused answers and prompts to seek help.
I typed the same trio of prompts the filings reference and watched the model steer toward warnings and resources. For the dumpster query it flagged suffocation risk; for the iPhone question it gave a privacy-oriented technical explanation; for the missing‑person phrase it mirrored law‑enforcement definitions.
Those replies don’t resolve the prosecution’s case, but they do frame the core dispute: is a safety-first reply enough to sever an alleged chain of causation, or did the broader chat context (what the suspect typed before and after) change the model’s utility to a criminal user?
How does ChatGPT respond to violent prompts?
You probably expect moralizing or refusal. In many recent versions the model responds with harm-reduction guidance, legal warnings, or redirective questions—“Did you witness this? Call emergency services.” Those guardrails are part product design, part policy enforcement, and part public relations shield.
A public relations quote and a pile of questions now guide the story forward.
OpenAI told Gizmodo, “This is a terrible crime, and our thoughts are with everyone affected. We’re looking into these reports and will do whatever we can to support law enforcement in their investigation.” That statement is standard—but it also frames how the company will be treated in court and in the court of public opinion.
Investigators will want access to logs, account metadata, timestamps, and any pattern detection the company used. Journalists and lawyers will parse language differences between the free ChatGPT I used and enterprise or fine‑tuned models that might have different behaviors.
Think of this moment as two colliding blueprints: criminal procedure on one side and product design on the other. The conversation will move across courtrooms, boardrooms, and legislative hearings, and it may change how platforms like OpenAI, Apple, and other tech firms document and defend automated responses.
The chat log that may become evidence is like a single spark on damp tinder: if it ignites litigation, the legal landscape could change as quickly as a storm rearranges a coastline.
I’ll keep watching the filings, the AG’s statements, and OpenAI’s disclosures. You should ask: when a chatbot answers a violent question, who carries the moral and legal burden—user, creator, or both?