Professor’s ChatGPT “Deleted” Work: Internet Reacts

The screen went blank. A cold sweat. Two years of research, teaching materials, emails, grant applications – all gone with a click. A German professor recently shared his tale of woe in Nature, detailing how a settings change in OpenAI’s ChatGPT vaporized a significant chunk of his academic life, and the internet responded…with laughter.

Marcel Bucher, a plant sciences professor at the University of Cologne, isn’t some Luddite. He embraced the AI revolution, subscribing to ChatGPT Plus and integrating it into his daily workflow.

“Having signed up for OpenAI’s subscription plan, ChatGPT Plus, I used it as an assistant every day — to write e-mails, draft course descriptions, structure grant applications, revise publications, prepare lectures, create exams and analyse student responses, and even as an interactive tool as part of my teaching,” wrote Bucher.

Like any responsible user, Bucher knew the limitations of large language models. However, the appeal was strong: the context-aware memory and consistent workspace felt like a genuine aid. Then came the fateful decision to tinker with data consent settings.

According to Nature:

But in August, I temporarily disabled the ‘data consent’ option because I wanted to see whether I would still have access to all of the model’s functions if I did not provide OpenAI with my data. At that moment, all of my chats were permanently deleted and the project folders were emptied — two years of carefully structured academic work disappeared. No warning appeared. There was no undo option. Just a blank page. Fortunately, I had saved partial copies of some conversations and materials, but large parts of my work were lost forever.

Bucher’s initial hope that this was a simple error quickly evaporated. Reinstalling the app, switching browsers, and even navigating the labyrinthine OpenAI support system proved fruitless. The data, it seemed, had vanished into the digital ether.

[Image: screenshot of a Bluesky post mocking the professor, featuring a tardigrade playing a tiny violin. © Bluesky]

Schadenfreude in the Age of AI

Remember when data loss was a tragedy that united us? These days, the response is different. In 2026, when AI tools are often perceived as black boxes churning out inaccuracies and questionable content, some are reveling in Bucher's misfortune. The digital town square has spoken, and it's not offering sympathy. Instead, the professor is getting roasted.

“Amazing sob story: ‘ChatGPT deleted all the work I hadn’t done’,” one Bluesky user wrote.

“Maybe next time, actually do the work you are paid to do *yourself*, instead of outsourcing it to the climate-killing, suicide-encouraging plagiarism machine,” wrote another. Some even questioned the essay’s authenticity, suggesting Bucher didn’t write it himself.

“This is the dumbest shit I’ve read in a quite a while,” a Bluesky user wrote. “(But, in his defense: there is no particular reason to assume that the guy who published this actually wrote it himself.)”

If I delete my ChatGPT account, is the data really gone?

The short answer: probably, eventually — OpenAI's published policy is that data from deleted accounts is purged within about 30 days. But that's cold comfort when years of work vanish without warning while the account is still active. It raises deeper questions about data ownership and the perceived safety of cloud-based AI tools. Bucher himself argued he was simply following institutional encouragement to integrate AI.
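The practical lesson is to keep local, timestamped copies of anything a cloud tool holds for you. ChatGPT offers a data-export feature that delivers an archive of your conversations; the sketch below (a minimal example, with `archive_export` and its paths being illustrative names, not anything from OpenAI) simply files such an export away under a UTC timestamp so successive backups never overwrite each other:

```python
import shutil
from datetime import datetime, timezone
from pathlib import Path

def archive_export(export_zip: str, backup_dir: str = "chatgpt_backups") -> Path:
    """Copy an exported data archive into a timestamped backup folder.

    export_zip -- path to the archive downloaded from the service
    backup_dir -- local folder that accumulates dated copies
    """
    src = Path(export_zip)
    dest_dir = Path(backup_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)  # create folder on first run
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dest = dest_dir / f"{stamp}_{src.name}"
    shutil.copy2(src, dest)  # copy2 preserves file timestamps/metadata
    return dest
```

Run on a schedule (cron, Task Scheduler), a habit like this turns a vendor-side deletion from a catastrophe into an inconvenience.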

Caught Between Two Worlds

We’re witnessing a clash of ideologies. On one side, institutions are pushing for AI adoption, often under the guise of inevitable progress. On the other, workers are increasingly skeptical, wary of handing over critical tasks to systems they don’t fully trust.

Bucher highlights this tension:

We are increasingly being encouraged to integrate generative AI into research and teaching. Individuals use it for writing, planning and teaching; universities are experimenting with embedding it into curricula. However, my case reveals a fundamental weakness: these tools were not developed with academic standards of reliability and accountability in mind.

If a single click can irrevocably delete years of work, ChatGPT cannot, in my opinion and on the basis of my experience, be considered completely safe for professional use.

For Bucher, the allure of AI turned into a digital quicksand, swallowing his productivity whole. The experience served as a harsh lesson: placing complete faith in these technologies can be a gamble.

What are the alternatives to ChatGPT for research and writing?

Dependence on any single tool is a risk. Consider locally run models such as Meta's Llama, or hosted alternatives from providers like Cohere or AI21 Labs. Diversifying tools and strategies, and keeping your own copies of your work, is the only real defense against algorithmic volatility.

The Future of Work, Written by AI?

It remains to be seen whether generative AI will genuinely revolutionize the workplace, or simply add another layer of complexity and potential failure points. As AI skeptics stand ready to celebrate every stumble, the underlying question is: are we building a future where humans are truly augmented, or simply automated out of meaningful work?

One thing is clear: Bucher’s experience serves as a cautionary tale. The promise of AI is seductive, but blind faith can lead to disaster. His lost data is a stark reminder that the shiny new tools are still imperfect, and perhaps, just perhaps, those “climate-killing, suicide-encouraging plagiarism machines” aren’t ready to take over just yet. The world of AI is a Wild West town where data is gold, and trust is a loaded gun. Who polices the sheriff?