A professor's accidental data loss reveals a fundamental risk in relying on cloud-based AI platforms for critical work, prompting a necessary re-evaluation of how professionals should manage their digital assets when using tools like ChatGPT.
The capabilities of large language models have transformed workflows for many professionals, but a recent incident underscores a critical vulnerability: the potential for catastrophic data loss with a single click. Professor Marcel Bucher from the University of Cologne reported losing two years of academic work—including grant applications, teaching materials, and publication drafts—after what he intended to be a routine settings change. The incident, detailed in a Nature article, serves as a stark warning against treating cloud-based AI platforms as permanent repositories for critical work.

What Happened: A Simple Setting, A Devastating Result
Bucher's goal was straightforward: he wanted to deactivate the option that allows ChatGPT to use his conversation data for model training. According to his account, this action inadvertently triggered the deletion of his entire chat history, permanently erasing years of accumulated academic work. Attempts to contact OpenAI for recovery proved futile. The company cited its "Privacy by Design" principle, under which user-initiated deletions are carried out without a trace, leaving nothing to restore.
Bucher's conclusion is unequivocal: "If a single click can irrevocably delete years of work, ChatGPT cannot, in my opinion and on the basis of my experience, be considered completely safe for professional use."
The Backup Solution That Exists (But Isn't Always Used)
This incident highlights a critical gap between capability and user practice. ChatGPT does offer a built-in backup function. Users can download all their chats and data by navigating to Settings > Data controls > Export data. The system compiles a ZIP file and sends a download link via email. The link remains valid for 24 hours, and the compilation time depends on the volume of data stored.
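For users who do export regularly, it also pays to unpack and sanity-check the archive rather than letting the ZIP sit unopened until it is needed. Below is a minimal sketch of that step; it assumes the export contains a conversations.json index of chats, which recent exports have included, though the archive layout is not guaranteed, and the file paths are placeholders to adapt.

```python
import json
import zipfile
from pathlib import Path

# Placeholder paths -- adjust to wherever your export actually lands.
EXPORT_ZIP = Path.home() / "Downloads" / "chatgpt-export.zip"
ARCHIVE_DIR = Path.home() / "chatgpt-archive"

def unpack_export(zip_path: Path, dest: Path) -> None:
    """Extract a ChatGPT data export and print a quick summary of its chats."""
    dest.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest)

    # Assumption: the export includes a conversations.json listing all chats.
    conversations = dest / "conversations.json"
    if conversations.exists():
        chats = json.loads(conversations.read_text(encoding="utf-8"))
        print(f"Archived {len(chats)} conversations to {dest}")
    else:
        print("No conversations.json found; inspect the archive manually.")

if __name__ == "__main__":
    unpack_export(EXPORT_ZIP, ARCHIVE_DIR)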
However, the existence of a backup feature doesn't guarantee its use. Many users, especially those new to AI tools or focused on immediate productivity, may not think to perform regular backups. The professor's experience suggests that for professional use, this oversight can be catastrophic.
A Shift in Security Measures?
It's important to note that the incident described by Professor Bucher occurred in August 2025, and a recent self-test by the publication could not replicate the scenario. Deactivating data sharing for training left existing chats accessible, and selecting the option to delete all chats now triggers an explicit warning message requiring confirmation. This suggests OpenAI may have adjusted the interface since the incident to prevent accidental deletion.
While these changes are a positive step, they don't eliminate the core risk. Cloud-based platforms are subject to change, and user error remains a constant variable. The fundamental lesson is that data stored on a third-party service is never fully under your control.
The Broader Implication for AI Tool Users
This event is not just about ChatGPT; it's a case study in digital responsibility. As AI tools become deeply integrated into professional workflows—from writing and research to coding and design—the question of data ownership and preservation becomes paramount.
Key takeaways for professionals:
- Treat AI Platforms as Ephemeral Tools, Not Archives: Use them for generation and iteration, but assume the conversation history is not a permanent record unless you actively back it up.
- Implement a Backup Routine: Regularly export your data. For critical projects, consider this a mandatory step, not an optional one (a sketch of one possible routine follows this list). The ChatGPT data export guide provides the official steps.
- Understand Platform Policies: Familiarize yourself with the data retention and deletion policies of any service you rely on. The OpenAI Privacy Policy outlines their data handling practices.
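Because the download link expires after 24 hours, an export is only as durable as what you do with the ZIP afterwards. Here is a minimal sketch of a dated backup step, assuming the export lands in your Downloads folder; the folder names and the broad *.zip match are placeholders rather than anything OpenAI prescribes, so narrow the pattern to your own export filenames.

```python
import shutil
import zipfile
from datetime import date
from pathlib import Path

# Placeholder locations -- point these at your own folders.
DOWNLOADS = Path.home() / "Downloads"
BACKUP_ROOT = Path.home() / "backups" / "chatgpt"

def archive_latest_export() -> None:
    """Copy the newest export ZIP into a dated backup folder, verifying it first."""
    # Naive match: picks the most recently modified ZIP in Downloads.
    exports = sorted(
        DOWNLOADS.glob("*.zip"),
        key=lambda p: p.stat().st_mtime,
        reverse=True,
    )
    if not exports:
        print("No export ZIP found; request one via Settings > Data controls > Export data.")
        return

    latest = exports[0]
    # Check the archive is readable before trusting it as a backup.
    with zipfile.ZipFile(latest) as zf:
        bad = zf.testzip()
        if bad is not None:
            raise RuntimeError(f"Corrupt member in {latest.name}: {bad}")

    dest = BACKUP_ROOT / date.today().isoformat()
    dest.mkdir(parents=True, exist_ok=True)
    shutil.copy2(latest, dest / latest.name)
    print(f"Backed up {latest.name} to {dest}")

if __name__ == "__main__":
    archive_latest_export()
```

Run on a schedule (cron, Task Scheduler, or a calendar reminder to export first), this turns a one-off download into the kind of routine the takeaway above calls for.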
Conclusion: A Necessary Reckoning
Professor Bucher's loss is a sobering reminder that the convenience of AI comes with inherent risks. While platforms like ChatGPT are powerful assistants, they are not designed to be primary storage solutions for critical work. The incident should prompt a broader conversation in academic and professional circles about digital asset management in the age of AI.
The solution isn't to abandon these tools, but to use them with a clearer understanding of their limitations. Regular backups, diversified storage, and a healthy skepticism of cloud permanence are no longer just IT best practices—they are essential components of modern professional diligence.
