Sync is deprecating the unlimited plan I’ve been on for six years, so I need to migrate approximately 100TB of data off their platform.
To prepare, I deployed a Storinator XL60 populated with 10×32TB drives to bring the dataset on-prem. I’m fully aware of the tradeoffs versus cloud (resiliency, operational overhead, DR considerations), but given the change in service, repatriation is the current path.
I’m looking for guidance on the most efficient and reliable way to handle bulk egress at this scale before my December deadline.
Key questions:
- What’s the most effective method for transferring 100TB from Sync to on-prem storage?
- Are there recommended tools or approaches (API-driven extraction, parallelized downloads, rclone, etc.) that perform well at this scale?
- Has anyone encountered platform-side limitations with Sync (rate limiting, throttling, session instability, file size constraints)?
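On the parallelized-download point, here is the worker-pool pattern I'm considering on the client side, independent of which tool actually fetches the bytes. This is only a sketch under assumptions: `fetch` is a hypothetical stand-in for whatever retrieves one object (Sync API call, HTTP GET, or an rclone subprocess if the data can be staged behind a protocol rclone supports — as far as I know rclone has no native Sync.com backend), and the expected digests would have to come from a pre-generated manifest.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor, as_completed

def pull_one(name, fetch, expected_digest):
    """Fetch one object and verify its SHA-256 before declaring success."""
    data = fetch(name)  # hypothetical stand-in for the real download call
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected_digest:
        raise ValueError(f"checksum mismatch for {name}")
    return name, len(data)

def pull_all(manifest, fetch, workers=8):
    """manifest: {name: expected_sha256_hex}.

    Runs downloads in a bounded thread pool and returns
    (verified_names, failed_names) so failures can be retried.
    """
    ok, failed = [], []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(pull_one, n, fetch, d): n
                   for n, d in manifest.items()}
        for fut in as_completed(futures):
            try:
                name, _ = fut.result()
                ok.append(name)
            except Exception:
                failed.append(futures[fut])
    return ok, failed
```

The worker count is the knob to tune against provider-side rate limiting: start low, watch for throttling or session errors, and raise it until throughput stops scaling.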
From an infrastructure standpoint, bandwidth is not a constraint: we have dual 20Gbps business fiber circuits. The primary concern is optimizing throughput while avoiding provider-side bottlenecks and ensuring data integrity during transfer.
Any insight or lessons learned from similar large-scale cloud repatriation efforts would be appreciated.