1. This can be cheaper on AWS. We've been meaning to move to reserved instances, paying a year at a time, for a while and simply haven't done it yet.
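(For anyone curious, pricing out a reservation is a quick check with the AWS CLI. A sketch only; the instance type and one-year term below are placeholders, not our actual fleet:)

    # List one-year reserved instance offerings for an example type;
    # a purchase would then go through purchase-reserved-instances-offering.
    aws ec2 describe-reserved-instances-offerings \
        --instance-type m3.large \
        --min-duration 31536000 --max-duration 31536000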
2. Fastly has already donated CDN usage to us, but we haven't fully utilized it yet as we're (slowly) sorting out some issues between primary gem serving and the bundler APIs.
3. RubyCentral pays the bill and can afford to do so via the proceeds generated from RubyConf and RailsConf.
4. The administration is an all-volunteer effort (myself included). Because of that, paying a premium to use AWS has its advantages: it's a well-traveled platform, which allows more volunteers to help out. In the past, RubyGems was hosted on dedicated hardware within Rackspace. While this was certainly cheaper, it created administrative issues. Granted, those could be solved without using AWS, but that brings us back to wanting as little friction in administration as possible.
Any other questions?
If Rackspace can be of assistance in the future, feel free to reach out (email@example.com). We currently donate hosting to many open source projects, including ones in a similar space, like the Python Package Index.
I assume this was posted because it's an enormous bill :) but obviously if you're happy with it, carry on!
Edit: Found a post calling for a rubygems mirror network. Otherwise, there is plenty of information out there about setting up local mirrors of the repository.
If most of the installs are on servers, have you considered talking to server providers about setting up internal mirrors on their networks? That might save everyone a lot of bandwidth.
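Once a provider ran such a mirror, pointing clients at it would be a one-liner. A sketch, with a made-up mirror URL:

    # Tell bundler to use an in-network mirror for anything it would
    # normally fetch from rubygems.org (the mirror URL is hypothetical).
    bundle config mirror.https://rubygems.org https://gems.internal.example.net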
Of course, people shouldn't really be installing their gems from rubygems.org on servers anyway. Is there any way to prod bundler into defaulting to packaging gems and doing a local install where possible, rather than downloading them on every deploy (the current default)? At present this uses double the bandwidth: people download once on their local machine and once again on their server on each update.
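You can get that behaviour today by vendoring gems, though it isn't the default. A minimal sketch:

    # On the dev machine: copy every resolved .gem file into vendor/cache.
    bundle package
    # Commit vendor/cache, then on the server install from the cache
    # without touching rubygems.org at deploy time.
    bundle install --local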
Fetching the rubygems index with bundler/rubygems still takes a while every time I bundle update. Have you looked at optimising that part of the process further, say by caching older gem results? (At least it doesn't fetch a list of all gems now, but it still fetches a list of all versions of each gem, doesn't it?) The list of versions for an old gem should never change, so you should really only need to fetch a very small list of latest versions. The memory and bandwidth usage there is still quite high.
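One way the server side could support that: serve the version list as an append-only file, so clients fetch only the new tail. A hypothetical sketch; the /versions endpoint and the local cache file are assumptions for illustration, not a real rubygems.org API:

    # Fetch only the bytes appended since the last sync and add them to
    # the local cache (GNU stat shown; use `stat -f%z` on BSD/macOS).
    curl -s -H "Range: bytes=$(stat -c%s versions.cache)-" \
        https://rubygems.org/versions >> versions.cache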
I built S3stat (https://www.s3stat.com/) to fix the opaqueness that comes with using CloudFront as a CDN, and to get you at least back to the level of analytics you'd have if you were hosting files from one of your own servers.
RubyGems guys, if you already have logging set up, I'd be happy to run reports on all your old logs (gratis, naturally) so you can get a better idea of which files (and, as another commenter wondered, which sources) are costing you the most.
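(And if logging isn't on yet, it's one call to enable for an S3 bucket. The bucket names below are made up:)

    # Turn on S3 access logging, writing logs to a separate bucket
    # (the target bucket also needs the log-delivery ACL).
    aws s3api put-bucket-logging --bucket production-gems \
        --bucket-logging-status '{"LoggingEnabled":{"TargetBucket":"production-gems-logs","TargetPrefix":"access/"}}'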
And pretty much "all" dedicated server providers these days also have cloud offerings if you need to spin up some instances quickly to handle traffic spikes etc., or for dev/testing purposes.
*edit: and I believe that's not the case if you end up using the public IP address instead of the internal IP address.