A way to manage data transfer to GCP (zenko.io)
12 points by dashaGurova on June 5, 2019 | 6 comments



The article implies you can't do "gsutil rsync", but you can, so I don't really understand why you wouldn't use that for syncing data.
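For reference, a minimal sketch of the sync the comment is describing, using gsutil's `rsync` subcommand. The bucket names are hypothetical, and this assumes gsutil is installed and authenticated for both sides:

```shell
# Hypothetical bucket names; assumes gsutil is installed and authenticated.
SRC="s3://example-source-bucket"
DST="gs://example-dest-bucket"

# -m: run operations in parallel  -r: recurse into directories
# -d: mirror mode (deletes extras at the destination; use with care)
CMD="gsutil -m rsync -r -d $SRC $DST"
echo "$CMD"   # print the command; run it with: eval "$CMD"
```

The `-m` flag is what parallelizes transfers across processes/threads, which is the usual answer to "gsutil is slow" complaints.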


The article mentions gsutil being slow and not resuming transfers, so maybe that's their main issue?


That is correct. gsutil will be a one-time thing. If you want constant replication, with metadata tracking and parallel workers for transfer (fast), this will be good. And I think they support AWS, Azure, and Ceph along with GCP.


Zenko supports replication from Scality's RING, AWS S3, and other S3-compatible hosts (Wasabi, CEPH, and Scality S3 Connector) to S3-compatible (Wasabi, DigitalOcean, RADOS Gateway), Azure, and GCP target clouds.

Azure sourcing is in development.


gsutil rsync is not slow (I worked at Google for years and transferred petabytes quickly with it). What does "not resuming transfers" mean? It picks up where you last ran it. Do you mean for an individual file?


If you're replicating a big bucket and the replication fails, it picks up where it broke, not from where you last ran it.
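For context on gsutil's side of this: re-running `gsutil rsync` skips objects that already match at the destination (it compares size/mtime by default, or checksums with `-c`), so resumption is at object granularity rather than restarting the whole bucket. A sketch with hypothetical bucket names, using the dry-run flag to preview what would still be copied:

```shell
# Hypothetical bucket names. Re-running rsync skips objects already present
# at the destination (size/mtime comparison; -c compares checksums instead).
# -n is a dry run: it lists what would still be copied without copying it.
SRC="s3://example-source-bucket"
DST="gs://example-dest-bucket"
CMD="gsutil -m rsync -n -r $SRC $DST"
echo "$CMD"   # drop -n (and the echo) to perform the actual resume
```

A partially transferred individual object is the weaker case: that transfer restarts from the beginning unless gsutil's resumable-upload support kicks in for it.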





