
Amazon S3 Price Reduction - timf
http://aws.typepad.com/aws/2010/11/what-can-i-say-another-amazon-s3-price-reduction.html
======
jnoller
This is fantastic. Amazon is _the_ leader, and having them drop prices is a
boon to the market as a whole. Companies that resell or add services on top of
Amazon's services see an immediate benefit, as do their customers. I can't
imagine how much money this saves companies like DropBox who build on top of
it [ _cperciva is right, it's exactly $15245/month, since they're in the 1PB+
range_ ].

I'm definitely wondering if and when we'll see 10 cents/GB - prices like that
(minus the data transfer charges - see
[http://www.nasuni.com/news/nasuni-blog/whats-the-cost-of-a-gb-in-the-cloud/](http://www.nasuni.com/news/nasuni-blog/whats-the-cost-of-a-gb-in-the-cloud/))
put it within striking range of high-availability spinning disk in your local
data center.

Disclaimer: I work for a company building on top of/reselling S3 in addition
to other providers.

~~~
cperciva
_I can't imagine how much money this saves companies like DropBox who build on
top of it._

Based on the (very safe, I think) assumption that DropBox uses at least 1 PB
of S3 storage, it will save them exactly $15245 per month.

~~~
bconway
Wouldn't a company that is such a significant user (by volume) have already
negotiated a rate of their own? It would seem silly to be in the 1+ PB range
and still be paying the going market rate.

~~~
cperciva
I would guess that the point of Amazon's pricing tiers is so they can say
"look, you're already getting a discounted rate" and not waste time
negotiating with each customer separately.

~~~
jnoller
Actually, I'm pretty sure they do offer custom pricing to some customers, but
I've never heard any details.

~~~
d2viant
I would guess they do, too. Netflix recently moved its streaming to AWS
infrastructure, and it has since been revealed that Netflix accounts for 20
percent of all internet traffic during peak times. I highly doubt Amazon
simply said "go read our webpage" in a situation like this.

------
colinhostert
What I have learned from building out the @Grooveshark infrastructure:

Their pricing on bandwidth is still 3x to 4x more than what it costs to buy
transit above the 10Gbps level, and still noticeably more expensive at the
1Gbps level. A 3-to-4x increase in bandwidth cost means millions of extra
dollars a year to run on AWS.
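
For a rough sense of the gap, here's a back-of-envelope sketch; every number
in it (the AWS transfer rate, the transit commit price, the sustained-traffic
assumption) is illustrative, not a quote:

    # Back-of-envelope: AWS data transfer out vs. raw IP transit at 10Gbps.
    # All prices are illustrative assumptions. Real transit is typically
    # billed at the 95th percentile, not on sustained traffic.

    GBPS = 10                                 # assumed sustained outbound traffic
    GB_PER_MONTH = GBPS / 8 * 30 * 24 * 3600  # gigabits/s -> gigabytes/month

    AWS_PER_GB = 0.08        # assumed blended AWS transfer-out rate, $/GB
    TRANSIT_PER_MBPS = 6.00  # assumed transit commit price, $/Mbps/month

    aws_cost = GB_PER_MONTH * AWS_PER_GB
    transit_cost = GBPS * 1000 * TRANSIT_PER_MBPS

    print(f"AWS:     ${aws_cost:,.0f}/month")          # ~$259,200
    print(f"Transit: ${transit_cost:,.0f}/month")      # ~$60,000
    print(f"Ratio:   {aws_cost / transit_cost:.1f}x")  # ~4.3x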

~~~
wmf
Do we put @ in front of all proper nouns now?

~~~
colinhostert
@yes

~~~
antonioe
@colinhostert agreed #newgrammar

------
wmf
The size of the volume discount (up to 60%) is pretty surprising. If they're
making any money at 5.5 cents/GB then they must be making a _lot_ of money at
14 cents.

~~~
psadauskas
I wonder what kinds of things people are using this for, that they even
explicitly mention a tier for 5 petabytes. That's like $275,000/month, not
counting transfers.
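
For reference, the arithmetic behind that figure (and the ~60% discount
mentioned upthread), as a quick sketch using decimal GB for round numbers:

    # 5 PB/month billed entirely at the new over-5PB rate of $0.055/GB.
    # Simplification: ignores the pricier lower tiers, so it understates a bit.
    gb = 5 * 1000 * 1000
    print(f"${gb * 0.055:,.0f}/month")   # -> $275,000/month

    # The top-tier discount relative to the 14-cent first tier:
    print(f"{1 - 0.055 / 0.14:.0%}")     # -> 61%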

~~~
cperciva
Isn't DropBox in the 5 PB+ range? I have a vague recollection of concluding
that they were in S3's top tier a few months ago, but I can't remember if it
was based on them announcing how much data they were storing, based on an
estimate from their burn rate, or based on an estimate from their number of
users.

------
wccrawford
Wait... eleven nines? Cripes!

~~~
timf
Just remember this is _durability_, not availability. Jeff states this clearly:
"If you store 10,000 objects with us, on average we may lose one of them every
10 million years or so."

Eleven nines of _availability_ would mean the service not working/responding
for only 0.0003 seconds per year ... indistinguishable from perfect.
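
Both figures fall out of the same arithmetic; here's a quick sketch of the
expected-loss math implied by eleven nines:

    # Eleven nines (99.999999999%), interpreted two ways.
    LOSS_RATE = 1e-11   # 1 - 0.99999999999, expected annual loss fraction

    # Durability: expected years until one of 10,000 objects is lost.
    print(f"{1 / (10_000 * LOSS_RATE):,.0f} years")      # -> 10,000,000

    # If it were availability: downtime per year at the same number of nines.
    print(f"{365 * 24 * 3600 * LOSS_RATE:.4f} seconds")  # -> 0.0003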

~~~
jluxenberg
I was actually thinking about this claim. It seems kind of unreasonable: the
amount of data lost should be proportional to the size of the data, not the
number of objects it is split into.

~~~
ceejayoz
I just figured they're going off an average object size stat. Data size can be
equally meaningless - a single bit of data loss might be catastrophic in a 1
KB file, or it might not be noticeable in a 10 GB file.

~~~
xtacy
I am not sure that what you claim is meaningful. A single bit of data loss is
a data loss, no matter what the file size is.

If you meant that you could fix the 1-bit error easily in the 1KB case, as you
have just 8K bits to flip through, then it makes much more sense. If you split
the big 10GB file into smaller chunks of 1KB (the granularity at which error
detection/correction is done), then the fault becomes much more manageable.
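
A minimal sketch of that idea - checksumming fixed-size chunks so a bad bit
is localized to one small chunk instead of casting doubt on the whole file
(the names and sizes here are just illustrative):

    import hashlib

    CHUNK = 1024  # 1KB chunks, per the example above

    def chunk_digests(data: bytes) -> list:
        """Checksum each fixed-size chunk independently."""
        return [hashlib.sha256(data[i:i + CHUNK]).digest()
                for i in range(0, len(data), CHUNK)]

    def bad_chunks(data: bytes, digests: list) -> list:
        """Indices of chunks whose checksum no longer matches."""
        return [i for i, d in enumerate(chunk_digests(data)) if d != digests[i]]

    original = bytes(8 * CHUNK)
    digests = chunk_digests(original)
    damaged = bytearray(original)
    damaged[3000] ^= 0x01                       # flip a single bit
    print(bad_chunks(bytes(damaged), digests))  # -> [2]: only chunk 2 needs repair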

~~~
ceejayoz
> A single bit of data loss is a data loss, no matter what the file size is.

Sure, but in a lossy JPEG, or a heavily compressed video file, a single bit in
a single frame of a two-hour movie really isn't going to matter much.

------
LabSlice
In the last couple of months they also introduced new 'Micro' EC2 instances,
billed at 2-3 cents/hr, and they ran a recent promotion giving free access to
a single Micro instance for a full year. It seems like price wars are on the
horizon for 2011. Great news for startups building on EC2 (like mine).

~~~
mikeyur
Hopefully Rackspace starts getting competitive sometime soon.

------
cvg
They even dropped the reduced redundancy storage rate.

I wonder if, with reduced redundancy plus an export to your own hardware
(using Amazon's sneakernet), you could mimic the standard durability at a
lower cost? Maybe too much work.
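
A rough sketch of how the numbers would combine, assuming the S3 copy and the
local copy fail independently; the RRS figure is AWS's stated 99.99% design
durability, and the local-copy figure is purely a guess:

    # Combined annual loss probability of two independent copies.
    p_rrs = 1e-4    # RRS: 99.99% durability -> 1e-4 annual loss chance
    p_local = 1e-3  # assumed annual loss chance of the exported local copy

    print(f"{1 - p_rrs * p_local:.5%} combined durability")  # ~99.99999%

    # Seven-ish nines: better than RRS alone, but still short of eleven.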

------
saurik
Unfortunately, almost all of my cost is of the per-request variety, and
doesn't scale per GB. :(
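
A sketch of why that stings - with the request rates as I remember them
($0.01 per 1,000 PUTs, $0.01 per 10,000 GETs) and made-up volumes, a
small-object workload barely notices the storage cut:

    # Request-heavy vs. storage-heavy: illustrative volumes, assumed rates.
    puts, gets = 50_000_000, 500_000_000  # small objects, many requests
    stored_gb = 100

    request_cost = puts / 1_000 * 0.01 + gets / 10_000 * 0.01
    storage_cost = stored_gb * 0.14       # new first-tier rate

    print(f"requests: ${request_cost:,.0f}/month")  # -> $1,000
    print(f"storage:  ${storage_cost:,.0f}/month")  # -> $14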

