u/XLioncc

[Three screenshot images attached]

Posted in r/zfs

A guy used Claude Code to recover nearly 90TB from a corrupted ZFS pool that data recovery companies had rejected

Originally posted by @shujueitsao on Threads

The following is the post content, translated by ChatGPT.

Note the "dollar" is TWD.


Last May, the company NAS crashed: 90TB of animation project files, three years of hard work. The hard drives weren't broken, but the ZFS metadata was corrupted and the system couldn't recognize the data. I contacted several data recovery companies, ready to pay the price of a domestic car. In the end, no one was willing to take the job; all said it was unrecoverable. It sat there for almost a year.

One day, while playing with Claude, I suddenly thought: could I just try to fix it myself? I spent less than 5,000 dollars on an HBA card, connected the 12 hard drives to a computer, installed Claude Code, and let the AI issue commands on its own. It tried ZFS historical restore points one by one, found a point in time that wasn't corrupted, and mounted the entire data pool back.

Everything was recovered. Directory structure intact, filenames intact, three years of projects. I had absolutely no Linux experience; the AI guided me through the entire process. Today, I saved the cost of a domestic car 😮‍💨
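The "historical restore points" the post describes most likely correspond to ZFS transaction groups (txgs) recorded in the pool's uberblock history, which stock OpenZFS can inspect and rewind to. A hedged sketch of how that is commonly probed — the pool name `tank`, the device path, and the txg number are placeholders, and these exact commands are my assumption, not something shown in the post:

```shell
# Dump a vdev label, and list its uberblocks with their txg numbers.
# Each uberblock is a historical "restore point" for the pool.
zdb -l /dev/sdb1
zdb -ul /dev/sdb1

# Dry-run a rewind import: -F asks ZFS to discard the last few txgs,
# and -n only reports whether the rewind would succeed.
zpool import -F -n tank

# Import read-only at a specific known-good txg (placeholder number).
# -X allows a deeper ("extreme") rewind search; readonly=on keeps the
# on-disk state untouched while the data is verified.
zpool import -o readonly=on -fFX -T 1234567 tank
```

The read-only import is the important safety property here: even an aggressive rewind attempt cannot write anything back to the pool, so failed attempts can be retried.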


The following is a ChatGPT summary of the chat messages.


Recovery is difficult; ECC errors cannot currently be corrected.

They will return the hard drives first, but backups are retained.

No recommended contacts are available locally; recovery in Taiwan is not feasible.

Consultation was done with overseas experts. They connected remotely and wrote several test programs; additional tests were also written locally.

Conclusion: recovery is unlikely.


The following is the Chinese text from the Claude Code screenshot (on Ubuntu), translated.


All background tasks have completed; data integrity is confirmed.

Main data is located in zpool1/zfs19 (88.6TB). How would you like to proceed?

Option 1: Extract selectively (using zdb)

Suitable for recovering a small number of important files

Slow; impractical for 88TB

Option 2: VM approach (recommended)

Run a QuTS hero VM via KVM/QEMU on this machine

Pass through all 12 drives to the VM

Inside the VM, use zpool import -o readonly=on to mount the pool fully

Then export data via SMB/NFS

Which option do you want to use? If you choose option 2, do you have another set of drives or enough space to store the recovered data?
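Option 2 above can be sketched with stock tools. A minimal, hedged outline assuming a libvirt/KVM guest on the Ubuntu host — the guest name `recovery-vm`, the disk IDs, and the pool name are placeholders; the post only confirms the `zpool import -o readonly=on` step:

```shell
# Attach each physical drive to the guest by stable ID
# (repeat for all 12 drives, targets vdb..vdm).
virsh attach-disk recovery-vm /dev/disk/by-id/ata-DRIVE1 vdb --config
virsh attach-disk recovery-vm /dev/disk/by-id/ata-DRIVE2 vdc --config
# ...

# Inside the guest, once all drives are visible:
zpool import                           # scan the attached disks for pools
zpool import -o readonly=on -f zpool1  # mount read-only; nothing on disk changes
zfs list -r zpool1                     # confirm zpool1/zfs19 (88.6TB) is mounted
```

With the pool mounted read-only inside the VM, the data can then be copied off over the network (SMB, NFS, or rsync) to a separate set of drives, which is why Claude Code asks whether spare capacity is available.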
