The animation studio I worked for had almost a petabyte of data. It may be cheap to buy the storage, but transferring it is costly. It's very easy to saturate an MPLS circuit with that much data; even rsync on a 10 Gbit internal connection takes a long while.
Really, Gandi should have had backups from day one. If you're hosting data, you should always have backups ready and tested from day one.
rsync went quite fine while transferring data (in the same situation as you describe), once some important bottlenecks were taken care of (not running it over SSH, disabling compression for files that don't compress well, disabling full checksums, tuning TCP sockopts, ...).
where it might leave you hanging for a long time is before the actual transfer, while it builds and compares the file lists on both the sending and receiving sides, when you have big filesystems (hundreds of millions of files).
if you have a strategy to select beforehand which files to transfer (for example from a DB which tracks what has been created or changed, fed directly from worker or production input) you have a good head start and can avoid running rsync on complete filesystems -- and rather run it on a selection, which is tiny compared to the complete project(s) most of the time.