
What’s the (real) cost of a Gigabyte in the Cloud? - jnoller
http://www.nasuni.com/news/nasuni-blog/whats-the-cost-of-a-gb-in-the-cloud/
======
patrickgzill
The wholesale costs of transferring a gigabyte of data are (worst case, buying
2Gbps or more of transit) about 1-2 cents.

This is the most that any vendor pays, and in practice they probably pay far
less than that.

For storage costs, you could take the storage capacity of a 2TB drive, triple
the cost to account for servers, RAID and other overhead, then come up with a
per-GB charge per year (drives aren't replaced every year, but then there are
power/cooling and other costs).

So a 2TB drive is, say, $200; triple that is $600. With roughly 1800GB of
formatted capacity, $600 / 1800GB ≈ 33 cents per GB per year.
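The back-of-envelope estimate above runs like this (all figures are the rough assumptions from the comment, not measured costs):

```python
# Rough per-GB/year storage cost, using the comment's assumed figures.
drive_cost = 200.0       # $ for a 2TB drive (assumed)
overhead_multiplier = 3  # servers, RAID, power/cooling, replacement (assumed)
usable_gb = 1800         # formatted capacity in GB (assumed)

total_cost = drive_cost * overhead_multiplier  # $600
cost_per_gb_year = total_cost / usable_gb      # dollars per GB per year

print(f"${cost_per_gb_year:.2f}/GB/year")  # -> $0.33/GB/year
```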

------
grandalf
I'm wondering if we'll see prices drop now that ZFS for linux is more mature.

It's pretty much the ideal FS for this sort of application and the only thing
that's been holding it back has been its tie to Solaris.

~~~
wazoox
I'm tired of this ZFS hype. ZFS for Linux is absolutely not mature. First, you
can't distribute binaries. Second, the current version is 0.5.x, which is
alpha; and my attempt to compile it was proof enough that it is, indeed, alpha
quality. Third, it's missing a rather useful feature: the POSIX interface,
i.e. the ability to mount the filesystem. Slight limitation, isn't it?

Then I can mention a couple of other problems. ZFS isn't cluster-aware at all;
in fact, it makes a terrible cluster filesystem. I know of a major storage
platform that sent back 2PB of Sun storage, because aggregating 2PB by
stacking iSCSI volumes and using RAID-Z simply cannot work and doesn't scale,
though that's precisely what Sun tried to sell them. That pretty much ensures
ZFS isn't a perfect fit for the cloud. At least not for the people actually
running it.

~~~
grandalf
interesting.

Why doesn't it scale?

~~~
wazoox
Because it's implemented as a single-system filesystem; there are no cluster
capabilities built in at the moment. The problem is that building the sort of
huge filesystems the cloud demands generally means clustering. Of course you
can go buy a DataDirect S2A 9900 and you'll have a nice 1PB filesystem in a
rack, but the problem will be back when you need to extend further.

------
rlpb
Can't Nasuni bundle files together? Given that for a 1 KB file the majority of
the time to fetch it will be latency, if small files were bundled into 10 KB
chunks (for example) then the transaction cost would go down by an order of
magnitude without affecting UX, surely? It seems unlikely that someone would
hit a large number of separate 10 KB chunks for 1 KB each time without a
significant number of cache hits.
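The bundling idea can be sketched as a greedy packer (names like `bundle_files` and `CHUNK_SIZE` are illustrative, not anything Nasuni actually exposes):

```python
# Hypothetical sketch: pack small objects into fixed-size bundles so a
# single fetch amortizes the per-request latency over many files.
CHUNK_SIZE = 10 * 1024  # 10 KB bundles, as in the example above

def bundle_files(files):
    """Greedily pack (name, data) pairs into chunks of up to CHUNK_SIZE."""
    chunks, current, size = [], [], 0
    for name, data in files:
        if size + len(data) > CHUNK_SIZE and current:
            chunks.append(current)  # current chunk is full; start a new one
            current, size = [], 0
        current.append((name, data))
        size += len(data)
    if current:
        chunks.append(current)
    return chunks

# Ten 1 KB files fit into a single 10 KB chunk: one round trip, not ten.
files = [(f"f{i}", b"x" * 1024) for i in range(10)]
print(len(bundle_files(files)))  # -> 1
```

A real implementation would also need an index mapping each file to its chunk, but the latency win is the same: one fetch per chunk instead of one per file.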

------
jimfl
Shouldn't we start talking in terms of gigabyte-hours (GB-h)? In most
applications, cloud storage needs fluctuate over time.
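The unit is easy to compute from usage samples; the readings below are made-up hourly figures, just to show the arithmetic:

```python
# Metering in gigabyte-hours: sum the GB stored in each hour, rather
# than billing a flat figure for the peak or end-of-month usage.
hourly_usage_gb = [100, 100, 500, 500, 500, 100]  # hypothetical readings

gb_hours = sum(hourly_usage_gb)            # 1800 GB-hours over 6 hours
avg_gb = gb_hours / len(hourly_usage_gb)   # 300 GB average

print(gb_hours, avg_gb)  # -> 1800 300.0
```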

