1. This can be cheaper on AWS. We've been meaning to move to reserved instances, paying a year at a time, for a while and simply haven't done it yet.
2. Fastly has already donated CDN usage to us, but we haven't fully utilized it yet as we're (slowly) sorting out some issues between primary gem serving and the bundler APIs.
3. RubyCentral pays the bill and can afford to do so via the proceeds generated from RubyConf and RailsConf.
4. The administration is an all-volunteer effort (myself included). Because of that, paying a premium to use AWS has its advantages, because it allows more volunteers to help out given how well traveled the platform is. In the past, RubyGems was hosted on dedicated hardware within Rackspace. While this was certainly cheaper, it created administrative issues. Granted, those can be solved without using AWS, but that gets back to wanting as little friction in the administration as possible.
Any other questions?
If Rackspace can be of assistance in the future, feel free to reach out (firstname.lastname@example.org). We currently donate hosting to many open source projects, including ones in a similar space, like the Python Package Index.
I assume this was posted because it's an enormous bill :) but obviously if you're happy with it, carry on!
Edit: Found a post calling for a rubygems mirror network. Otherwise there is lots of information about setting up local mirrors of the repository.
If most of the installs are on servers, have you considered talking to server providers about setting up internal mirrors on their networks? That might save everyone a lot of bandwidth.
Of course, people shouldn't really be installing their gems from RubyGems on servers anyway. Is there any way to prod bundler into defaulting to packaging gems and doing a local install where possible, rather than downloading them every time there is a deploy (the current default)? At present you use double the bandwidth: people download once on their local machine and once on their server to update.
Fetching the RubyGems index with bundler/rubygems still takes a while every time I bundle update. Have you looked at optimising that part of the process further, say by caching older gem results? (At least it doesn't fetch a list of all gems now, but it still fetches a list of all versions of each gem, doesn't it?) The list of versions available for an old gem should not change, so you should really only need to fetch a very small list of latest versions. The memory and bandwidth usage there is still quite high.
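The immutability point could be exploited with something like the sketch below. This is hypothetical illustration, not bundler's actual code; the `fetcher` callable stands in for the real index request:

```ruby
require "json"
require "fileutils"

# Hypothetical sketch: cache each gem's version list on disk, since
# releases for old versions never change; only uncached gems hit the
# network. `fetcher` stands in for the real index request.
def versions_for(gem_name, cache_dir:, fetcher:)
  path = File.join(cache_dir, "#{gem_name}.json")
  return JSON.parse(File.read(path)) if File.exist?(path)

  versions = fetcher.call(gem_name)   # network fetch happens only once
  FileUtils.mkdir_p(cache_dir)
  File.write(path, JSON.generate(versions))
  versions
end
```

A real client would still need a small "new releases" fetch to learn about versions published since the cache entry was written, which is exactly the tiny latest-versions list suggested above.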
I built S3stat (https://www.s3stat.com/) to fix this opaqueness that comes with using Cloudfront as a CDN and get you at least back to the level of analytics you'd get if you were hosting files from one of your own servers.
RubyGems guys, if you have logging set up already, I'd be happy to run reports for all your old logs (gratis, naturally) so you can get a better idea of which files (and as another commenter wondered about, which sources) are costing you the most.
And pretty much "all" dedicated server providers these days also have cloud offerings if you need to spin up some instances quickly to handle traffic spikes etc., or for dev/testing purposes.
*edit: and I believe not if you end up using the public IP address instead of the internal IP address.
To be fair, a lot of maintenance value goes into software that is never quantified. Broken software breaks hard, not partially, so maintenance is even more crucial.
Software controls everything from nuclear power stations to missiles to dams to radiation therapy machines (where, again, software killed 3 people: http://en.wikipedia.org/wiki/Therac-25).
Proper software engineering is increasingly important and, I'd posit, likely to become even more important than civil engineering for public safety as time goes on.
http://en.wikipedia.org/wiki/Cluster_(spacecraft) cost $370 million when an overflow caused a rocket to explode.
I'd imagine there's some mission critical software running nuclear plants, aircraft, cars, etc.
If you include other costs, like the office space and equipment used by the employee, it starts to sound pretty reasonable.
There's no spot or even reserved pricing, just a bunch of on-demand instances that were up 24/7 for all 28 days in February.
Seems like a genuine dedicated host, reserved instances, or an architecture that leverages the "elastic" in Elastic Compute Cloud would be worth considering.
(Although, actually, while I verified their total dollars spent is greater than what would be required to get a fundamentally better deal on bandwidth, I didn't take into consideration that once you slash their costs the amount they would be paying might no longer be ;P.)
You can negotiate with AWS to get the same Cloudfront pricing as you would with Akamai. I know because I'm in the process right now.
More importantly, they could be running on 2-3 dedicated servers at OVH or Hetzner, and have Cloudflare in front of them instead of Cloudfront. Or, if they insist on Cloudfront, switch to Price Class 100 (US and EU only). It's cheaper, and latency isn't that much higher vs serving out of all Cloudfront locations.
As long as most of your content is static, and you have a solid CDN, your origin doesn't have to be highly reliable or scalable. It's just an object store to persist data for the CDN.
This is nonsense. They have more edge locations than most. I didn't try all comparators in the list, but out of half of them I tried, none had more than Cloudfront: http://www.cdnplanet.com/compare/cloudfront/maxcdn/
So if Cloudfront has 'not many', who has 'many', and how many is that?
To look at something more reasonable: CDNetworks is realistic competition; they are strong in Asia, and were the people I was comparing the pricing to (so they aren't going to be horribly expensive). According to the comparison website you are using, they have almost four times as many edge locations.
Honestly, though, the reality is that the really great CDNs don't even have data on this website (even for CDNetworks I think this data is not accurate: looks like an approximation): the leaders in this space are Akamai and Limelight, and both just show "Not Available" for the number of edge nodes they have.
Even going a little lower on the CDN pecking list, though: Level3, which according to this website you are using is mostly "competitive" with CloudFront (sometimes actually worse) in the regions CloudFront bothers to cover, is clearly covering entire subcontinents where CloudFront has nothing.
The reality is that CloudFront is still trying to grow out a network: they have poor coverage in Europe (which is pretty key), a few nodes in Japan/Singapore, and then next to no coverage anywhere else. Yet, they insist on pricing their product as if they were a big player (12c/GB is Akamai-level expensive).
(So, do I get to condescendingly say "this is nonsense" now? I mean, seriously: you clearly didn't spend much time using this website and you didn't look into who the leaders are to verify you weren't comparing low-end to low-end... also, I think you are not appreciating that 0->2 is infinitely better ;P.)
I still think you're mischaracterising AWS as being a bit player - they have a decent presence with Cloudfront, it's just that there are a couple that are bigger. Like I originally said, 'more than most'. CDNetworks certainly does pound them in numbers, though.
I'm not on the server team, so I don't know exactly what contributes most to it. But part of me really thinks it could be reduced!
Some games require massive amounts of compute, but the bandwidth to deliver the assets is generally paid by Apple.
I can guarantee you, your company is paying a metric fuck-ton more. It is called Apple's 30% cut.
Your company is paying AWS $200k to pass json messages around for analytics and social aspects of the game. You are paying Apple something like $1 million per week to distribute, market, and collect payments for the game.
I am not saying your company is dumb, or Apple is evil. I am saying your experience and anecdote isn't relevant to Ruby Gems, and offering a different way to think about the games industry vs. the open source software distribution world.
Though you mention delivering the assets. Actually, like a lot of games, we make a big effort to get under the 50MB over-the-air limit on the App Store. The total content for retina iPhone is ~300MB, delivered in parts as you progress through the game. That's kept on S3 and downloaded through CloudFront.
But yes! You're right, it's mostly a hell of a lot of JSON flying around.
We're managing to squeeze our apps into this at the moment, but will likely need a similar solution using S3/CloudFront in the near future.
Haven't looked at it since iOS 7 launch though, do you know if it was iOS 6 too?
If anyone ever ends up doing something like this; ask them upfront!
When I've hit them, I've usually had a response to the "raise my limit" form within an hour or two.
That said, early on I chose Linode because of their generous bandwidth that is included with the boxes. For the price of less than 1TB of AWS bandwidth, I get 8TB, plus a decent box. The bigger boxes have an even bigger proportion.
I'm not posting this to give any suggestions for RubyGems - I know nothing of the complexity of that setup. Mostly just figured I'd share the research I did for finding reasonably priced bandwidth.
The bias towards AWS for this type of application is ridiculous and a big waste of money.
In particular, have you ever run a site that consistently serves over 25 Terabytes of traffic/month, or have you worked with someone who has?
I guarantee you that no company I have worked for in the last 15 years could have ever run this type of infrastructure for $7K/month. It's absolutely amazing.
$60/mo for a dedicated server, $20/mo for CloudFlare. The dedicated server only serves 1 TB of it, the other 24 TB is static assets cached and served directly by CloudFlare.
Here's a screenshot of CloudFlare Analytics for the last 30 days: http://d.pr/i/6Z8S/5GU2Ni8t
So, what this really comes down to (after a good night's sleep) is what type of traffic/transactions you are running on your back-end infrastructure.
If the data is static, then you can probably (these days) cut your costs for 25 Terabytes/month from $8K to $800 (or, in your extraordinary case, $80), simply by being a bit intelligent as to how you make use of VPS/CDN/CloudFlare Transfer allocations.
On the flip side, if much of the data you are transferring out is the result of dynamic back end transactions, queries, and generation, then it's unclear to me that you can (easily) recognize the savings that you might see when generating static content.
I'm interested in knowing whether CloudFlare will start throttling/shutting down people who pay $20 and use 25 TB in the long term, though - that alone, for some organizations, will cost them more than the extra $8K they would pay to AWS (who have zero problem with you using 25TB, 250TB, 2.5PB, etc...).
Funny thing - back when I was using 10 TB/mo, my site was hosted entirely on DreamHost's $9/mo shared hosting. I moved mostly because I was starting to get several hours a month of downtime - presumably, they were gently nudging me off their service.
I've seen plenty of $60-$100 dedicated servers come with unlimited-use 100Mbit connections, which work out to 16ish TB/mo before you start getting to 50% saturation. Of course, those are still subsidized, in that the pricing is possible only because most people who buy them don't max out a 100Mbit connection.
Still, though, S3's 9-12¢/GB bandwidth pricing seems a bit high. Bandwidth at DigitalOcean (presumably unsubsidized) is 2¢/GB, which comes out to a much more manageable $500 for 25 TB.
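Both figures above check out with quick arithmetic (30-day month assumed; per-GB rates as quoted in the thread, not independently verified):

```ruby
# 100 Mbit/s sustained at 50% average use over a 30-day month
bytes_per_sec = 100 * 1_000_000 / 8                        # 12.5 MB/s
tb_at_half    = bytes_per_sec * 30 * 24 * 3600 / 2 / 1e12  # ~16.2 TB

# 25 TB/month at the quoted per-GB rates
gb      = 25 * 1000
s3_cost = gb * 0.12   # ~$3,000 at S3's top 12c/GB tier
do_cost = gb * 0.02   # ~$500 at DigitalOcean's 2c/GB
```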
With dynamic content, CloudFlare has Railgun, which takes advantage of the fact that dynamic content is usually mostly static. Still, though, if you have 25 TB of dynamic content, I presume bandwidth stops becoming the limiting factor in your cost of operation.
True, but let's not compare offerings of the past to now. There is still room for practical efficiency gains.
I.e., the people who bought the servers, racked the servers, went down to the colo at night, set up the virtualization environment, hooked up the routers, configured the routers, the switches, the firewalls, the VLANs -- those people I am including.
I'm not including the DBAs who manage the schema, people who push the code, do the design, etc...
I've currently got active accounts with all three of those VPS providers - I love them, and use them every day - particularly Linode, but also Slicehost/Rackspace, and DigitalOcean. I even have a bare metal server at ServerBeach - which I realize I need to shut down...
At this exact instant I have six terminal windows open across DO/Linode. I host a moderately popular California Food Blog, and have about 15 years experience in various companies that have had hosting responsibilities.
I'm not saying you can't do great things with the VPS providers - I'm just suggesting that the $2-$3k (at most) saved with Digital Ocean would be more than offset by the technology risk and the hassle of having to re-invent a lot of the services that you get automatically from AWS.
That could change sometime in the (near) future - but right now, AWS is an easy (and honestly, all things considered, relatively cheap) solution for this type of application.
What technology risk is there in setting up Varnish and nginx on Digital Ocean? Or better yet some kind of out-of-the-box open source CDN. You would save a lot more than $2-3k.
Note - there is another option: deploy on multiple platforms and be smart with your DNS balancing (http://www.dnsmadeeasy.com/services/global-traffic-director/) when serving content. Particularly now that Digital Ocean is in Singapore/Amsterdam/New York, I can think of some useful things I could do with the $10/month droplet (2 terabytes of transfer). $300/month, in theory, gets me 20 terabytes in Asia, 20 terabytes in Europe, and 20 terabytes in North America. Now, whether DO would shut me down if I actually started using that transfer is another question altogether...
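The $300 figure works out as 10 of those $10/2TB droplets per region across the three regions:

```ruby
# 10 of the $10/month droplets (2 TB transfer each) in each of 3 regions
droplet_usd = 10
droplet_tb  = 2
per_region  = 10
regions     = 3

monthly_cost  = droplet_usd * per_region * regions  # $300/month
tb_per_region = droplet_tb * per_region             # 20 TB per region
```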
Would love to see any recent input.
You could simply serve the content out of nginx, but you wouldn't see the performance benefits of keeping your content closest to the end user.
They could get an even better deal by just going through a dedicated server provider (or even better, colocating).
There's little advantage with choosing DO versus going with a dedicated server provider (and again, colocating). I guess the advantage would be the control panel that they wouldn't use, having a few one-click stacks that they won't use, stuff like that.
If someone can afford a $7,000 AWS bill they can afford to put some money towards hardware and an onApp license if they want "cloudy" stuff. To colocate their hardware it would probably run them anywhere from $400-$800 a month depending on where they go. Their total bill would be decreased by $5500 a month. The upfront investment of the hardware wouldn't be more than $12,000 either. LOE? Probably two weeks with a competent sysadmin.
Yes you can have issues with your hardware and stuff and then you have to take care of that, but if you're good with your DC, they're great to you.
I don't know what datacentres tend to charge for data transfer, but as that's the largest item on the bill, it's the more salient point.
Also, just because it's not on the bill doesn't mean they're not using other AWS services; there are several free ones.
For one datacenter, but CloudFront gets you 40+.
Think of it another way - what would be more valuable: RubyGems hosted on a CDN, or RubyGems on DO plus a couple of grants for talented hackers to work on their gems full-time for a few months (a la GSoC)?
Even if you ARE concerned about latency, have one download server in the US (E.g. DO), one in Europe (e.g. Hetzner) and one in SE Asia (Not sure who's cheap and good-ish there), and you'd still be a 1/4 the cost of AWS bandwidth or less.
A self-set-up Linode CDN with all six locations
would have provided 48TB of pooled bandwidth at a very decent speed and cost around $480. Linode's network is great, much better than DO's. I am not sure if it matches CloudFront, which isn't exactly the fastest CDN anyway.
I have 10 fingers, so that is definitely not "countless" hours of work. And no, maintenance is minimal or nonexistent. You could even put smaller VPSes behind each NodeBalancer for HA, since Linode VPSes (unlike DO's) are deployed on physically different hardware.
While I say it is fair enough to use AWS because the money doesn't matter, I think there are definitely better alternatives at the same price (if you really care about latency) or cheaper options.
Everything needed to build the rubygems.org stack can be found at https://github.com/rubygems/rubygems-aws
If the bill remained relatively consistent they could host Rubygems.org for ~28 months with 200K.
Data Transfer  $3,597
S3               $228
While "bandwidth" costs equate to ~$4,668/month, only $1,071 is CDN (CloudFront), with the balance just raw Data Transfer.
Since lots of folks are commenting, and not everyone realizes the difference it's also a good time to point out the CloudFront vs. Data Transfer distinction.
Using Amazon's terms... Data Transfer means anything directly served/coming from EC2 or S3 (or a few other services which aren't relevant here), but NOT anything for CloudFront (which is, obviously, a separate line item, as shown above).
The bulk of CDN (CloudFront) usage ($735 worth or 69%) is US.
The bulk of raw bandwidth (Data Transfer) usage ($2,931, ~80%) is US East.
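The line items above hang together arithmetically:

```ruby
cloudfront    = 1_071   # CDN line item, $
data_transfer = 3_597   # raw Data Transfer line item, $

total_bandwidth  = cloudfront + data_transfer   # $4,668, as stated
us_cdn_share     = 735.0 / cloudfront           # ~0.69 of CDN spend is US
us_east_dt_share = 2_931.0 / data_transfer      # ~0.81 of transfer is US East
```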
And sometimes the hosting costs simply don't matter. It's easy for us engineers - sitting here on HN at our keyboards - to play around with hypothetical ways to save money. This isn't necessarily a bad thing, but there are numerous things in IT that it doesn't make sense to optimize. Why? Because the ROI on the engineering time, CapEx, and OpEx (and the time, energy, and focus of ANYONE involved or impacted at all) to do the optimization doesn't outweigh the opportunity cost.
Sometimes there are simply better uses of our limited capital and time.
Not everything needs to be optimized. And the argument gets stronger when there are other factors more difficult to factor in: adopting a platform that isn't as widely known or isn't backed by a similar level of maturity (even with its quirks, at least they are well known), etc.
The risks/concerns not only vary between organizations, but often from one period of an organization's growth to the next. The beauty is every organization gets to make their own decision ...and none of them have to give a damn if the HN community agrees or not. :-)
The startup whose backend I co-created racks up an AWS bill that hovers around a half million dollars a month. We make use of all of the ways to save with Amazon: pre-paid reserved instances, negotiated deals, etc. And we're not even that big; imagine what Netflix's AWS bill must be.
We've tried other providers, toyed with co-locating, but at the end of the day the flexibility and cost benefit of IaaS outweighed the lower base price of CPU cycles when you roll it yourself.
Can only guess at why folks like any post, but it's not necessarily how large the bill is. Maybe it's how low it is for a service that's widely relied on, or maybe it's the level of transparency, which turned out to include evanphx above showing up to answer questions about the project.
Compare that to npm asking for $300,000 in donations to keep the thing running. I'm glad RubyGems can run for relatively so little, and be transparent in doing so.
As for the CDN, switching to something like Cloudflare might make more sense rather than relying on Cloudfront. At the least, there's a "US and EU only" option for edge locations which is considerably cheaper than the default option of all edge locations.
It's possible RubyGems.org would be classified under one of the "not really allowed here" terms.
That's just replacing bandwidth costs with build-and-run-your-own-CDN costs.
I saw a talk at Ruby/RailsConf about the work spent building and maintaining rubygems.org. It smelled a bit martyrish. "Look at the thankless work we perform behind the scenes".
Well, if help is required building or operating rubygems.org, please just say so. As a seasoned Ruby developer I'd be more than happy to contribute development time, and as a daily user I'd be willing to commit financially in a small way towards operating costs. Not that that is required - given all the offers of free hosting this post received in response.
If we don't know about a problem, we can't help. Just ask if help is what you want. It's not like the Ruby community doesn't have great communication channels.
3-year heavy EC2 reservations pay for themselves in ~7 months; CloudFront reserved bandwidth is just a 12-month agreement, so that costs nothing up front. You might want to experiment with some different instance types, though, depending on your resource utilization. Personally I really like using the new c3.large instances for my web servers and anything else that needs more CPU than memory, proportionately. If the standard instances suit your needs better, you still might want to move to the m3 class.
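The ~7-month payback is easy to sanity-check with a back-of-envelope sketch. These rates are purely illustrative, not actual AWS pricing:

```ruby
# Illustrative rates only -- not actual AWS pricing
on_demand_hr = 0.28     # $/hr on-demand (hypothetical)
reserved_hr  = 0.08     # $/hr effective rate under a heavy reservation
upfront      = 1_000.0  # $ one-time reservation fee (hypothetical)
hours_per_mo = 730

monthly_saving = (on_demand_hr - reserved_hr) * hours_per_mo  # ~$146/mo
breakeven_mo   = upfront / monthly_saving                     # ~6.8 months
```

With any comparable ratio between on-demand and reserved rates, the upfront fee is recovered well inside the first year of a 3-year term.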
Aside from those two items it looks like you are sending out a considerable amount of stuff from EC2->internet (27 TB transfer out from US-East to internet). I'd recommend looking at whether you could set up a cloudfront distribution with your EC2 servers as its origin.
The website says that hosting is provided by BlueBox?