u/Public_Jellyfish2328

Workflows for handling large bioinformatics datasets without high cloud/egress costs?

Hi all — I used to work in bioinformatics at the Broad Institute and MIT, and recently started working on a project around improving access to large public datasets.

One thing I kept running into was how much time and money go into just getting the data somewhere local (S3 egress fees in particular) before you can even start analyzing.

I’ve been experimenting with ways to access and work with these datasets in-place (without downloading), and would love to sanity check whether this is actually a pain point for others here.
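To give a rough idea of what I mean by "in-place", here's a minimal sketch using pysam to stream just one region of a BAM over HTTP instead of pulling the whole file. The bucket URL and region are made-up placeholders, not a real dataset:

```python
# Rough idea of "in-place" access: fetch only the slice you need over HTTP
# instead of downloading the whole file. URL/region below are placeholders.
import pysam

# htslib (under pysam) can read BAM/CRAM straight from an http(s)/s3 URL,
# pulling only the index plus the blocks covering the requested region.
url = "https://example-public-bucket.s3.amazonaws.com/sample.bam"

with pysam.AlignmentFile(url) as bam:
    # Only the bytes covering chr1:1,000,000-1,001,000 cross the wire.
    for read in bam.fetch("chr1", 1_000_000, 1_001_000):
        print(read.query_name, read.mapping_quality)
```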

Curious:

  • how are people currently handling large public datasets?
  • are you mostly downloading locally, or working directly in the cloud?
  • any workflows you’ve found that reduce friction/cost?

Happy to share more about what I’ve been building if useful — mainly just trying to learn from how others are approaching this.
