

The Bandwidth of Sneakernet to the Cloud - lmacvittie
http://devcentral.f5.com/weblogs/macvittie/archive/2009/08/21/the-bandwidth-of-sneakernet-to-the-cloud.aspx

======
yellowbkpk
This calculation only includes getting the data physically near to the cloud.
What about uploading it into the cloud? Surely transferring 3 million GB at
around 40 MB/sec would lower the average?

Edit: My calculations say it'll take 2.7 years to read all that data if you go
at 40 MB/s. Hopefully they can parallelize that across a few dozen machines...
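A quick sanity check of that estimate, as a sketch: it assumes the 20,704-drive x 160 GB payload mentioned elsewhere in the thread, binary units, and a single 40 MB/s reader:

```python
# Time to read the full payload at 40 MB/s on one machine.
drives, gb_each = 20_704, 160          # figures from the thread
total_mib = drives * gb_each * 1024    # treating GB/MB as binary units
rate_mib_s = 40
seconds = total_mib / rate_mib_s
years = seconds / (365 * 24 * 3600)
print(f"{years:.1f} years on one machine")              # → 2.7 years

# Spread across a few dozen readers, it drops to weeks:
machines = 36
print(f"{seconds / machines / 86400:.0f} days across {machines} machines")
```

The 36-machine figure is illustrative; any "few dozen" divides the time proportionally since the reads are independent.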

~~~
fhars
You'd actually put the disks into a Sun X4500, which you can just put into a
rack at the destination site: [http://www.c0t0d0s0.org/archives/4260-Less-known-Solaris-Fea...](http://www.c0t0d0s0.org/archives/4260-Less-known-Solaris-Features-Remote-Mirror-with-AVS-Part-7-Truck-based-synchronisation.html)

------
burke
The way I see it, the bandwidth is astronomically high, with 4.6 hour latency.

Maybe they're using TCP/HiP (Highway Protocol).
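For what it's worth, the effective bandwidth of the van works out like this (a rough sketch, reusing the 20,704 x 160 GB payload from the thread and the 4.6-hour transit as the "latency"):

```python
# Effective throughput of the van: payload size over transit time.
payload_gb = 20_704 * 160        # drive count and capacity from the thread
transit_s = 4.6 * 3600           # the 4.6-hour "latency"
gbps = payload_gb / transit_s
print(f"~{gbps:.0f} GB/s")       # → ~200 GB/s
```

Around 200 GB/s sustained, which is indeed astronomically high for 2009. Just don't ask about the retransmit cost.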

------
gdp
It brings a whole new danger to the idea of a "disk crash" though, don't you
think?

~~~
kevindication
Consider also that the weight of just the hard drives is about 9,300 pounds
(7.2oz * 20704). Expect a spectacular impact if that van is even capable of
carrying such a load. ;-)

But this does raise the question, why choose 160GB drives? Even doubling it to
320GB shouldn't hurt the bottom line much. Density per dollar is going to be
very important for the Van-based Data Packet.
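The weight figure checks out, as a quick sketch (7.2 oz per drive and 20,704 drives, per the comment above):

```python
# Weight of the bare drives alone: 7.2 oz each, 20,704 drives.
OZ_PER_LB = 16
pounds = 20_704 * 7.2 / OZ_PER_LB
print(f"{pounds:,.0f} lb")       # → 9,317 lb
```

That's well past the payload rating of an ordinary cargo van, before adding enclosures and padding.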

~~~
masklinn
> Density per dollar is going to be very important for the Van-based Data
> Packet.

The dollar part actually isn't that important, given you can reuse your drives
for later transfers (TeraScale's boxes zip around, AWS sends back your storage
devices when the import is done) so unless you're hitting a density/dollar
cliff, you're probably going to go all density with TB drives and such.

FWIW, in 2002 TeraScale was already packing its sneakernet boxes with 300GB
drives (7/box).
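To put the density point in numbers, a rough sketch reusing the 20,704 x 160 GB payload from upthread ("TB drives" here taken as a nominal 1000 GB):

```python
# Drive count needed for the same payload at different capacities.
payload_gb = 20_704 * 160        # same total data as the 160 GB build
for cap_gb in (160, 320, 1000):
    n = -(-payload_gb // cap_gb)   # ceiling division
    print(f"{cap_gb:>4} GB drives: {n:,}")
```

Halving the drive count also roughly halves the weight and rack space, which matters more than the per-drive price once the drives are reusable.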

