u/semiramist

Claude Code deleted my entire 202GB archive after I explicitly said "do not remove any data"
🔥 Hot ▲ 419 r/ClaudeCode

I almost didn't write this because honestly, even typing it out makes me feel stupid. But that's exactly why I'm posting it. If I don't, someone else is going to learn this the same way I did.

I had a 2TB external NVMe connected to my Mac Studio with two APFS volumes. One empty, one holding 202GB of my entire archive from my old Mac Mini. Projects, documents, screenshots, personal files, years of accumulated work.

I asked Claude Code to remove the empty volume and let the other one expand to the full 2TB. I explicitly said "do not remove any data."

It ran diskutil apfs deleteVolume on the volume WITH my data. It even labeled its own tool call "NO don't do this, it would delete data" and still executed it.

The drive has TRIM enabled. By the time I got to recovery tools, the SSD controller had already zeroed the blocks. Gone. Years of documents, screenshots, project files, downloads. Everything I had archived from my previous machine. One command. The exact command I told it not to run.

The part that actually bothers me: I know better. I've been aware of the risks of letting LLMs run destructive operations. But convenience is a hell of a drug. You get used to delegating things, the tool handles it well 99 times, and on the 100th time it nukes your archive. I got lazy. I could have done this myself in 30 seconds with Disk Utility. Instead I handed a loaded command line to a model that clearly does not understand "do not."
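For anyone wondering what the 30-second manual version looks like: since APFS volumes in the same container share free space, deleting the empty volume is normally all that's needed; there is no separate "expand" step. Here's a sketch with hypothetical identifiers (disk4s2 is made up; always confirm your own with `diskutil apfs list` first), written as a dry run so nothing destructive happens by default:

```shell
# Hypothetical identifier -- the EMPTY volume's device ID. Verify yours
# against the output of `diskutil apfs list` before doing anything.
EMPTY_VOL="disk4s2"

# Dry run: build and print the destructive command instead of executing it.
# Remove the echo only after triple-checking the identifier points at the
# volume with no data on it.
delete_cmd="diskutil apfs deleteVolume ${EMPTY_VOL}"
echo "Would run: ${delete_cmd}"
```

The point of the dry-run pattern is exactly the lesson of this post: a human eyeball on the exact command and identifier before anything irreversible runs.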

So this post is a reminder, mostly for the version of you that's about to let an AI touch something irreversible because "it'll be fine." The guardrails are not reliable. "Do not remove any data" meant nothing. If it's destructive and it matters, do it yourself.

https://imgur.com/a/RPm3cSo

Edit: Thanks to everyone sharing hooks, deny permissions, docker sandboxing, and backup strategies. A lot of genuinely useful advice in the comments. To be clear, yes I should have had backups, yes I should have sandboxed the operation, yes I could have done it in 30 seconds myself. I know. That's the whole point of the post.
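Since several people asked about the deny permissions specifically: as I understand the current Claude Code docs, you can block tool patterns in a settings file (project-level `.claude/settings.json`, or user-level). The pattern syntax below is my reading of those docs, not something I've battle-tested, so verify it against the official documentation before relying on it:

```json
{
  "permissions": {
    "deny": [
      "Bash(diskutil:*)",
      "Bash(rm:*)",
      "Bash(dd:*)"
    ]
  }
}
```

A deny list like this would have forced a confirmation-or-refusal instead of letting the model run the delete on its own.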

Edit 2: I want to thank everyone who commented, even those who were harsh about my philosophical fluff about trusting humans. You were right, wrong subreddit for that one. But honestly, writing and answering comments here shifted something. It pulled me out of staring at the loss and made me look forward instead. So thanks for that, genuinely.

Also want to be clear: I'm not trying to discredit Claude Code or say it's the worst model out there. These are all probabilistic models, trained and fine-tuned differently, and any of them can have flaws or degradation scenarios. This could have happened with any model in any harness. The post was about my mistake and a reminder about guardrails, not a hit piece.

Edit 3: For those asking about backups: my old Mac Mini had 256GB internal storage, so I was using that external drive as my primary storage for desktop files, documents, screenshots, and personal files. Git projects are safe, those weren't on it. When I bought the Mac Studio, I reset the Mac Mini and turned it into a server. The external SSD became a loose archive drive that I kept meaning to organize and properly back up, but I kept postponing it because it needed time to sort through. I'm fully aware of backup best practices; the context here was just a transitional setup that I never got around to cleaning up.

Final Edit: This post got way bigger than I expected. I wrote it feeling stupid, and honestly I still do.
Yes, I made a mistake. I let an LLM run something destructive I could have done myself in 30 seconds.

But this only happened because we’re in a transition phase where these tools feel reliable enough to trust, but aren’t actually reliable enough to deserve it. That gap is where mistakes like this happen.

Someday this post won't make sense. Someone's kid is going to ask an LLM to reorganize their entire drive and it'll just work. A future generation that grows up with this technology won't understand what we were even worried about. But right now, today, we're not there yet. So until we are, be your own guardrail.

Thanks to everyone who commented. This post ended up doing more for me than I expected.
