Amazon S3 Price Reduction (aws.typepad.com)
141 points by timf on Nov 1, 2010 | 37 comments



This is fantastic. Amazon is the leader, and having them drop prices is a boon to the market as a whole. Companies that resell or add services on top of Amazon's services see an immediate benefit, as do their customers. I can't imagine how much money this saves companies like DropBox who build on top of it [cperciva is right, it's exactly $15,245/month, since they're in the 1 PB+ range].

I'm definitely wondering if and when we'll see 10 cents/GB - prices like that (minus the data transfer charges - see: http://www.nasuni.com/news/nasuni-blog/whats-the-cost-of-a-g...) put it within striking range of high-availability spinning disk in your local data center.

Disclaimer: I work for a company building on top of/reselling S3 in addition to other providers.


> I can't imagine how much money this saves companies like DropBox who build on top of it.

Based on the (very safe, I think) assumption that DropBox uses at least 1 PB of S3 storage, it will save them exactly $15,245 per month.


Or $182,940 per year. Depending on benefits and payroll taxes, that's somewhere between one and two very qualified engineering hires right there, for free.


This probably doesn't affect big accounts like that. Based on their pricing table, anything over 4,000 TB is the same, already-discounted price.


Wouldn't a company that is such a significant user (by volume) have already negotiated a rate of their own? It would seem silly to be in the 1+ PB range and paying the going market rate.


I would guess that the point of Amazon's pricing tiers is so they can say "look, you're already getting a discounted rate" and not waste time negotiating with each customer separately.


Actually, I'm pretty sure they do offer custom pricing to some customers, but I've never heard any details.


I would guess they do too. Netflix recently moved its streaming to AWS infrastructure, and it was revealed that Netflix accounts for 20 percent of all internet traffic during peak times. I highly doubt Amazon simply said "go read our webpage" in a situation like this.


They have to. I've never negotiated with Amazon, but their list bandwidth prices were 3-4 times more expensive than what I got negotiating with CDNs and carriers directly.

That's to say nothing of Cogent, who will sell you bandwidth at $4/Mbps; AWS's cheapest pricing (at 150 TB/mo) still comes out to about $28/Mbps, which is a joke.
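
For anyone checking that, the conversion from per-GB pricing to per-Mbps pricing is just a matter of how many GB one sustained Mbps moves in a month. A quick sketch; the $0.08/GB rate is my recollection of AWS's cheapest outbound transfer tier at the time, so treat it as an assumption:

  # Rough sketch: convert a per-GB transfer price into a monthly per-Mbps cost,
  # assuming a 30-day month at full, sustained utilization.
  # The $0.08/GB figure is an assumption (my recollection of AWS's cheapest outbound tier).

  SECONDS_PER_MONTH = 30 * 24 * 3600   # 2,592,000 seconds
  BITS_PER_GB = 8e9                    # decimal GB

  def dollars_per_mbps(price_per_gb, utilization=1.0):
      """Monthly cost of one sustained Mbps at a given $/GB transfer price."""
      gb_per_month = 1e6 * SECONDS_PER_MONTH * utilization / BITS_PER_GB
      return gb_per_month * price_per_gb

  print(dollars_per_mbps(0.08))   # ~$25.9/Mbps -- same ballpark as the $28 quoted above
  # For comparison, $4/Mbps works out to roughly $0.012/GB at full utilization.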


What I have learned from building out the @Grooveshark infrastructure:

Their pricing on bandwidth is still 3x to 4x more than what it costs to buy transit above the 10 Gbps level, and still noticeably more expensive at the 1 Gbps level. A 3x to 4x increase in bandwidth cost means millions of extra dollars a year to run on AWS.


Do we put @ in front of all proper nouns now?


Only if those proper nouns have twitter accounts. ;)


Only if we want a Named-Entity Web Spider to do the right thing.


@yes


@colinhostert agreed #newgrammar


Does that "3x to 4x more" figure take into account the associated costs of handling 10GigE drops versus having your stack on AWS? (Routing/switching gear, server hardware, rack space/power, network engineers, etc.)


Yeah, it's an all-in cost. Includes power, space, network gear (Juniper EX8200), optics, cables, and salaries of employees.


The size of the volume discount (up to 60%) is pretty surprising. If they're making any money at 5.5 cents/GB then they must be making a lot of money at 14 cents.


I think once you factor in the costs related to creating and maintaining accounts (including support and billing) they aren't making much more off the small users than large ones, percentage-wise.


I wonder what kinds of things people are using this for, that they even explicitly mention a tier for 5 petabytes. That's like $275,000/month, not counting transfers.


Isn't DropBox in the 5 PB+ range? I have a vague recollection of concluding that they were in S3's top tier a few months ago, but I can't remember if it was based on them announcing how much data they were storing, based on an estimate from their burn rate, or based on an estimate from their number of users.


I worked out the first 5 PiB to be $462,336/month at the older pricing, and $433,433.60/month at the new rate.

Edit: fat-fingered a couple of columns. New figures are $449,536 and $433,423.36, for $16,112.64/month savings, or 3.6%. All the savings come in the first 1 PiB.

https://spreadsheets.google.com/ccc?key=0AskGKJcfVVjPdFJKdTN...
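
For anyone who wants to redo that math without the spreadsheet: tiered pricing works like a progressive tax, with each slice of usage billed at its own rate. A minimal sketch; the example tiers are only roughly what I remember the new schedule looking like, so check the pricing page before trusting any output:

  # Minimal sketch of tiered (progressive) pricing, as used in the spreadsheet above.
  # The tiers below are roughly the post-reduction S3 schedule as I remember it --
  # substitute the real old and new rate cards to reproduce the parent's figures.

  def monthly_cost(gb_stored, tiers):
      """tiers: list of (tier_size_in_gb, price_per_gb); use float('inf') for the last tier."""
      cost, remaining = 0.0, gb_stored
      for size, price in tiers:
          billed = min(remaining, size)
          cost += billed * price
          remaining -= billed
          if remaining <= 0:
              break
      return cost

  new_tiers = [(1000, 0.140), (49000, 0.125), (450000, 0.110),
               (500000, 0.095), (4000000, 0.080), (float('inf'), 0.055)]

  # e.g. monthly_cost(5 * 1024**2, new_tiers) for 5 PiB counted in GiB, and
  # monthly_cost(x, new_tiers) - monthly_cost(x, old_tiers) for the savings.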


It could be a loss-leader to get high-volume S3 customers using EC2 for the free transfer.


> for the free transfer

Just in case I'm not the only person who was confused by this at first: stellar678 is referring to the free bandwidth between EC2 and S3, not the free AWS upload bandwidth (which no longer exists).

Marginally related trivia: Thanks to said free EC2-S3 bandwidth, if you want to move more than 1 MB between EC2 nodes in different availability zones, it's cheaper to PUT the data to S3 from one node and then GET and DELETE it from the other node than it is to transfer it directly.
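
A back-of-the-envelope check on that break-even, using the list prices as I remember them from the time (treat every constant here as an assumption): the direct route is billed per GB of inter-AZ transfer, while the S3 detour costs only a flat PUT + GET request fee, since in-region EC2<->S3 transfer is free and DELETEs cost nothing.

  # Back-of-the-envelope check of the S3-relay trick described above.
  # All prices are my recollection of the 2010-era list prices -- assumptions, not gospel.

  INTER_AZ_PER_GB = 0.01           # regional data transfer between availability zones, $/GB
  PUT_REQUEST     = 0.01 / 1000    # S3 PUT requests: $0.01 per 1,000
  GET_REQUEST     = 0.01 / 10000   # S3 GET requests: $0.01 per 10,000
  # DELETE requests are free, in-region EC2<->S3 transfer is free,
  # and storing the object for a few seconds is negligible.

  relay_cost = PUT_REQUEST + GET_REQUEST        # flat fee, independent of object size
  break_even_gb = relay_cost / INTER_AZ_PER_GB  # size at which the two routes cost the same

  print("break-even object size: %.2f MB" % (break_even_gb * 1000))
  # ~1.1 MB with these numbers, matching the "more than 1 MB" rule of thumb.
  # If inter-AZ transfer were billed on both the sending and receiving side, halve it.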


Wait... eleven nines? Cripes!


Just remember this is durability not availability. Jeff states this clearly: "If you store 10,000 objects with us, on average we may lose one of them every 10 million years or so."

Eleven nines of availability would mean the service not working/responding for only 0.0003 seconds per year ... indistinguishable from perfect.
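
The two statements are the same claim in different units: eleven nines of durability is a 10^-11 chance of losing any given object in a year, so across 10,000 objects you expect about 10^-7 losses per year, i.e. one loss every ~10 million years. A quick sketch:

  # How "eleven nines of durability" turns into Jeff's 10,000-objects figure.
  durability = 0.99999999999              # 99.999999999% per object, per year
  annual_loss_prob = 1 - durability       # ~1e-11
  objects = 10000

  expected_losses_per_year = objects * annual_loss_prob   # ~1e-7
  years_per_loss = 1 / expected_losses_per_year
  print(years_per_loss)                   # ~10,000,000 years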


I was actually thinking about this claim. It seems kind of unreasonable; the amount of data lost should be proportional to the size of the data, not the number of objects it is split into.


I just figured they're going off an average object size stat. Data size can be equally meaningless - a single bit of data loss might be catastrophic in a 10 GB file, or it might not be noticeable in a 1 KB file.


I am not sure if what you claim is meaningful. A single bit of data loss is a data loss, no matter what the file size is.

If you meant you could fix the 1-bit error easily in the 1KB case, as you have just 8K bits to flip through, then it makes much more sense. If you split the big 10GB file into smaller chunks of 1KB (at which error detection/correction is done), then the fault becomes much more manageable.


> A single bit of data loss is a data loss, no matter what the file size is.

Sure, but in a lossy JPEG or a heavily compressed video file, a single bit in a single frame of a two-hour movie really isn't going to matter much.


Not all bits are created equal. A one-bit change in a mail-merge data file could put a lawsuit from a Mr. Whitman in your hands.


That's not actually news, but I am still happy to see that you are impressed. Here's a blog post with more info on what this means:

http://aws.typepad.com/aws/2010/05/new-amazon-s3-reduced-red...


More importantly, did you see that you can pay less (significantly less) for fewer nines?


In the last couple of months they also introduced new 'Micro' EC2 instances, billed at 2-3 cents/hr, and ran a promotion giving free access to a single Micro instance for a full year. It seems like price wars are on the horizon for 2011. Great news for startups using EC2 (like myself).


Hopefully Rackspace starts getting competitive sometime soon.


They even dropped the reduced redundancy storage rate.

I wonder if, with reduced redundancy plus an export to your own hardware (using Amazon's sneakernet), you could mimic the standard durability at a lower cost? Maybe too much work.
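
If you assume the RRS copy and your own exported copy fail independently (a big if), the combined durability is one minus the product of the two loss probabilities. A rough sketch, where the 99.9% for the local copy is a pure guess; under these assumptions you still end up well short of the standard eleven nines:

  # Rough sketch: RRS plus a local copy, assuming (optimistically) independent failures.
  rrs_durability   = 0.9999   # RRS is advertised as designed for 99.99% durability
  local_durability = 0.999    # made-up figure for the copy on your own hardware

  combined = 1 - (1 - rrs_durability) * (1 - local_durability)
  print(combined)             # 0.9999999 -- seven nines, far short of standard S3's eleven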


Unfortunately, almost all of my cost is of the per-request variety, and is not scaling per-GB. :(



