
My Amazon S3 Mistake - DevFactor
http://www.devfactor.net/2014/12/30/2375-amazon-mistake/
======
cheald
There are a few lessons here:

1\. Use IAM roles. AWS keys should only have access to the specific
functionality they need. Best practice is to _never_ use your root credentials
for _anything_ ; always use IAM users and use unique keys for each use case,
so that you can invalidate and replace them easily. Every time documentation
or a tutorial asks you to insert AWS keys, your response should be "let me go
create a role for that" rather than "let me go look those up".

2\. More importantly, if you ever accidentally publish or leak credentials,
don't try to clean it up by deleting those commits. _Invalidate the
credentials immediately_ and re-issue them.

3\. Always, _always_ `git diff HEAD` before committing. Know what you're about
to push up. This isn't just a security concern - the number of small, stupid
things you'll catch that you'd otherwise end up fixing 15 seconds later is
substantial. As a bonus, this incentivizes you to keep your commits small and
atomic.

You might talk to Amazon directly - they've been known to forgive debts in
these kinds of circumstances.
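
For point 2, here's a minimal sketch of what "invalidate immediately" can look
like with the AWS CLI (the user name and key ID below are hypothetical):

    # Create a replacement key first, so deploys keep working:
    aws iam create-access-key --user-name deploy-bot
    # Disable the leaked key the moment the new one is in place:
    aws iam update-access-key --user-name deploy-bot \
        --access-key-id AKIALEAKEDKEYEXAMPLE --status Inactive
    # Once nothing depends on it anymore, delete it for good:
    aws iam delete-access-key --user-name deploy-bot \
        --access-key-id AKIALEAKEDKEYEXAMPLE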

~~~
JoshTriplett
> Always, always `git diff HEAD` before committing.

I prefer to always "git commit -v", which shows me the diff while I edit the
commit message.
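
If you like that workflow, newer versions of git (2.9+) can make the verbose
diff the default for every commit, so you don't have to remember the flag:

    # Always show the diff in the commit-message editor:
    git config --global commit.verbose true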

~~~
dperfect
Call me unsophisticated, but I personally find it extremely helpful to use a
GUI (currently using SourceTree) to interactively review changes before
committing. I use the command line for almost everything else, but this is one
of those cases where a nice GUI really seems to shine.

~~~
fletchowns
SourceTree seems like such a complicated mess. `git gui` and `gitk` are my
preference.

~~~
madeofpalk
Try the GitHub app (which works with non-github git repos as well). Its
feature list is TINY compared to SourceTree, it's pretty, and it has a very
easy-to-use commit interface.

[https://mac.github.com](https://mac.github.com) or
[https://windows.github.com](https://windows.github.com)

~~~
fletchowns
I'm on Linux

~~~
Sammi
No SourceTree on Linux either :~(

------
riquito
> I learned a valuable lesson here though. Don’t trust .gitignores and gems
> like Firago for keeping your data safe. Open source is awesome, but if you
> are dealing with anything that can be scaled up to thousands of dollars per
> hour – at least store it in a private repo if not on your local machine.

You learned the wrong thing :-P The lesson here is "always check what you're
committing"

~~~
fein
As an additional layer of safety, you can also commit to a local repo, or a
private bitbucket repo, and give it all a nice once-over before pushing
everything to GH. At least this keeps your screw-ups out of public scope.

~~~
zo1
Isn't that what you're doing anyway when you use a DVCS? Granted, I use
mercurial. It has two components on your side: your working directory and the
actual repo. Committing moves changes from your working directory and saves
them to a branch on your local repo. Then you get the local repo to push that
change to a remote one.

In essence, "committing" then is the act of pushing your changes to your local
repo. Or you could, you know, just look at the changes on disk if really
required.

~~~
coldpie
Yes, that's how Git works. I'm not sure what fein meant.

------
SwellJoe
Hell, even without making the mistake of publishing keys, I've accidentally
run up quite large bills for Amazon services; backups that failed to remove
old copies were a big one. Instances that were supposed to have been shut down
but for some reason weren't (I don't know if this was my mistake or a bug at
Amazon...probably my mistake).

I've simply stopped using Amazon for anything tinkery, because the costs of
making mistakes can be tremendous. At least when I make mistakes on my own
colocated server, I know it can never cost me more than the $100/month I pay
to host it. And, storage is practically infinite (4TB hard disks), and I can
spin up more VMs than I would ever need for tinkering in 32GB of RAM on our
"spare" server.

And, when I have needed to use a VM with some cloud host...Linode and Digital
Ocean and similar may have dramatically smaller toolsets for managing virtual
resources than Amazon (probably unworkably so for large deployments), but my
mind has a much easier time predicting costs than with Amazon. After being
surprised on more than one occasion with a ~$300 bill from Amazon (for running
nothing but personal pet projects with no economic return), I turned
everything off.

~~~
mhink
Something similar has happened to me, on a smaller scale. I was recently
checking my bank account, and noticed a charge from AWS for something like
$15. After further investigation, it turned out I had been charged this $15
monthly for about 6 months.

Hmm.

Logging into AWS and checking the Cost Explorer showed that I had been charged
for a t1.small EC2 instance I had running, but when I logged into the AWS
console, all I saw was a single "stopped" instance. There's no way I was being
charged for cycles I'm not using, right?

Turns out that the instance I was being charged for was in us-west, and I had
only been looking at us-east. And there is no indication within the actual EC2
panel that you have instances in other regions!

I was a bit perturbed, to say the least...

~~~
fletchowns
Not sure why you didn't click on "Billing & Cost Management" where you can see
a complete breakdown (by resource and region) of exactly what you are being
charged for.

~~~
SwellJoe
I don't know about the previous commenter, and I don't know if the tools have
improved, but when I tried to use the billing and cost management page, I
couldn't figure out what was actually costing so much. It broke it down by
service (EC2, S3, etc.), but I couldn't figure out how that mapped to the bill
(i.e. was it the snapshots of my instances, the storage for the not running
instances, the old backups in S3, etc.). It didn't seem like it was telling me
"exactly" what I was being charged for.

For me, when I saw a $300 bill for _one_ running EC2 instance, a couple of
halted ones, and a few hundred GB of storage for something I thought of as a
"toy" project that I didn't want to invest serious time or money in, I knew I
was done with AWS. A small colocated server could readily provide those
resources for vastly less money (and I work on tools to manage cloud and VM
resources, including a reasonably good API for spinning them up and down and
such, so I don't really miss the Amazon API or UI). This was after I'd already
gotten a shock from an automated backup gone wrong that ran up a huge bill.
So, it took me a couple of times getting burned.

Of course, it's always been my fault for not understanding how Amazon bills
for things, what services cost money (i.e. a down instance still costs money),
how much things cost, and sometimes how the API works (or at least confirming
that it's doing what I think it's doing, in the case of removing old backups).
I'm not blaming Amazon. I'm just saying, I don't trust myself to use Amazon
for anything that I'm not going to spend a lot of time and energy on, because
I'm obviously not capable of using it without making mistakes when I treat it
like a toy. I readily admit I shot myself in the foot; Amazon just provided
the guns.

~~~
bjt
The whole billing/cost section got big improvements last year.

------
mdnormy
PSA

If you're running AWS, I highly recommend that all developers and sysadmins
attend the free AWS training[0][1]. While you're at it, you might as well get
yourself certified[2].

You might have senior expertise in system operations and application
deployment, but AWS sometimes approaches things differently. The essence of
the training is to always implement best practices, not just solve problems.

Also use the opportunity of AWS events to network with their Solution
Architects. Trust me on this one: it's worth more than AWS Enterprise Support.

[0] [http://aws.amazon.com/training/course-descriptions/architect...](http://aws.amazon.com/training/course-descriptions/architect/)

[1] [http://aws.amazon.com/training/course-descriptions/architect...](http://aws.amazon.com/training/course-descriptions/architecting-advanced/)

[2]
[http://aws.amazon.com/certification/](http://aws.amazon.com/certification/)

~~~
pen2l
Erm, is it really free? I just followed your link, which led me to
[https://www.aws.training/home?courseid=3&language=en-us&sour...](https://www.aws.training/home?courseid=3&language=en-us&source=web_en_course-descriptions_architect)
... which led me to
[http://www.globalknowledge.com/training/course.asp?pageid=9&...](http://www.globalknowledge.com/training/course.asp?pageid=9&courseid=24526&country=United+States)
... and there I find out it's $2095 USD.

Hopefully I'm mistaken... because I'd love to take it if it's free!

~~~
mdnormy
I just noticed that the free training sessions seem to be limited to certain
locations sold by "Amazon Web Services". For example, Kuala Lumpur and Bangkok
are free, but other locations have varying fees.

I attended 2 free sessions in Kuala Lumpur and Singapore back in June. I
always assumed it was free for all. Sorry.

~~~
astine
Huh, a round trip to Kuala Lumpur is less than the course in the States.[1]
Best excuse for an overseas flight I've seen lately.

[https://www.hipmunk.com/flights/WAS-to-KUL#!dates=Jan12,Jan1...](https://www.hipmunk.com/flights/WAS-to-KUL#!dates=Jan12,Jan14&pax=1)

------
matteotom
You can use AWS IAM
([https://aws.amazon.com/iam/](https://aws.amazon.com/iam/)) to help prevent
something like this. AFAIK you can create a sub-account that only has access
to specific resources, such as S3, and use the keys from that sub-account.

I haven't used it much, but it looks like you can be very specific in what you
allow, such as only allowing access to a single bucket with S3, or a single
domain with SES.
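
As a rough sketch of how specific you can get (the user, policy, and bucket
names here are made up), an inline policy limiting a user to a single bucket
might look like this with the AWS CLI:

    # Grant the hypothetical user "s3-uploader" access to one bucket only;
    # keys leaked from this user can't touch EC2 or any other service.
    aws iam put-user-policy --user-name s3-uploader \
        --policy-name single-bucket-access \
        --policy-document '{
          "Version": "2012-10-17",
          "Statement": [
            {"Effect": "Allow",
             "Action": ["s3:ListBucket"],
             "Resource": "arn:aws:s3:::my-app-uploads"},
            {"Effect": "Allow",
             "Action": ["s3:GetObject", "s3:PutObject"],
             "Resource": "arn:aws:s3:::my-app-uploads/*"}
          ]
        }'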

~~~
tekromancr
You totally can be very specific about what actions and services an IAM key
has access to. The permissions model takes a bit to get your head around (it
did for me, at least), but once you do, you can have very strict, precise
permissions for your keys.

------
dilap
Yeah, I did that, though in my case it was a bitbucket repo I thought was
private, but somehow ended up public (obviously stupid to ever be checking in
the keys at all).

All I had set up was a micro S3 instance I'd been using for some toy
craigslist scraping & hadn't touched for months.

Then out of the blue I get an "urgent please check your account" email from
amazon. Go check the AWS console, and what do you know -- the maximum number
of maxed-out instances churning away with 100% CPU usage in every region on
earth. The charges were already around $50,000 when I turned everything off.

I wrote a very, very apologetic email to amazon, and they forgave all the
charges, for which I was very grateful.

Definitely a learning experience.

------
hox
And please, for the love of all that is secure, use IAM roles. Even for your
personal things. It's not that hard, and you can stop things like this from
happening even when the credentials leak.

------
smsm42
If criminals can create a bot to scan for AWS keys, I wonder why github
couldn't create a plugin to detect the same thing and warn the committer, or
maybe limit access to that data to the original committer only. It won't be
100%, but I bet the bots aren't 100% either, so if it covers most of the cases
it would still be useful.

Or maybe just have a script in a local git pre-commit hook?
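
Something like that hook is only a few lines of shell. A sketch, keyed to the
AKIA prefix that AWS access key IDs use (save as .git/hooks/pre-commit and
make it executable):

    #!/bin/sh
    # Abort the commit if anything staged looks like an AWS access key ID.
    if git diff --cached | grep -E 'AKIA[0-9A-Z]{16}' >/dev/null; then
        echo "Possible AWS access key in staged changes; commit aborted." >&2
        exit 1
    fi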

~~~
fletchowns
I don't think it's github's responsibility to prevent you from shooting
yourself in the foot.

~~~
hamburglar
Nobody said it was their responsibility. If adding an optional feature that
prevents you from shooting yourself in the foot makes people like github more,
maybe it's worth it to them. "Responsibility" has nothing to do with it.

~~~
balls187
It would have to be an option, because we check various AWS keys into private
repos.

Also how do you differentiate an AWS root key (bad) vs an IAM Key (good)?

~~~
mod
Maybe you should rethink your policy either way.

In my circles, at least, it's standard practice to use environment variables.

But I would think it would clearly be an option.

~~~
balls187
> Maybe you should rethink your policy either way.

Do you have articles discussing the cons of AWS keys in private repos?

We deploy our systems on vanilla EC2 instances, which are configured by using
a server orchestration system (Ansible). So for any env variables to get set,
we'd have to put them in config scripts, which are currently checked into
github.

To make it clear, we only check in our IAM keys that are AWS service specific,
like SES.

------
moe
Lesson 1: Don't publish your passwords.

Lesson 2: When using AWS, use Billing Alarms[1].

It takes about 1 minute to set up and enables e-mail or SMS notifications at
dollar thresholds of your choice.

[1]
[http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/...](http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/create-billing-alarm.html)
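
The console walks you through it, but the same alarm can also be scripted. A
sketch with the AWS CLI, assuming billing metrics are enabled for the account
and the SNS topic for notifications already exists (the account ID and topic
name are placeholders):

    # Alarm when estimated monthly charges exceed $100.
    # Billing metrics are only published in us-east-1.
    aws cloudwatch put-metric-alarm --region us-east-1 \
        --alarm-name billing-over-100-usd \
        --namespace AWS/Billing --metric-name EstimatedCharges \
        --dimensions Name=Currency,Value=USD \
        --statistic Maximum --period 21600 --evaluation-periods 1 \
        --threshold 100 --comparison-operator GreaterThanThreshold \
        --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts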

~~~
FireBeyond
The corollary is that the first thing such people/scripts do when they have
access to your account is disable the billing alarms.

~~~
rifung
Are you sure you can disable billing alarms with just your access/secret key?
I thought you would need to know the user name + password for that

~~~
TheDong
Having root credentials is sufficient, and based on the fact that this guy
said "apparently the s3 api lets you spin up ec2 instances" it looks like he
didn't touch IAM.

Root credentials are deprecated, but can still be used and if this guy used
them then yes, his billing alarms could have been disabled.

There's a difference between an admin iam user (can't do billing stuff) and
the root credentials (just as powerful as username / password).

------
cortesoft
Also, if you ever accidentally leak your keys... CHANGE THEM!

------
bigsassy
The same thing happened to my brother. He's been teaching himself software
development for a while, and learned the same painful lesson about securing
your keys. This was despite the things he did right, like:

1\. Used IAM roles (though clearly he didn't lock down its permissions nearly
enough)

2\. Used two factor authentication.

[http://snaketrials.com/2014/11/11/espionage/](http://snaketrials.com/2014/11/11/espionage/)

[http://snaketrials.com/2014/11/12/espionage-update/](http://snaketrials.com/2014/11/12/espionage-update/)

I should note Amazon forgave the money his account owed, given it was his
first time making this mistake on their systems. Amazon told him the servers
were probably being used to mine bitcoins.

edit - Oh, and on his blog he says it was $2000 over 12 hours. It turned out
to be $5000 over 2 hours!

------
aakilfernandes
This seems to be bad design on Amazon's part as well. There should be some
kind of manual spending threshold in place to make sure this doesn't happen.

~~~
hox
There is, hence the emails and phone calls that were missed.

~~~
ryanfitz
Amazon also places limits on all of its services. I would think one of the
reasons for these default limits is to help with cases of abuse. You can learn
more about them at
[http://docs.aws.amazon.com/general/latest/gr/aws_service_lim...](http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html)

------
wazoox
I must be very old or very paranoid, but I wouldn't for one second think of
developing from scratch using only public infrastructure.

If I want to learn Rails, I'd start by installing it and running it on my
local PC. What's the point of relying on heroku, aws, etc. for just about
everything? What's the point of using github as the main hub for everything? I'm
synchronizing stuff to github explicitly and carefully, my casual, day-to-day
operations are all running on my own hardware, network and services. Even my
most simple websites run first at home, then are sync'ed to the web after I've
checked everything works fine.

------
fat0wl
Some old man Java rambling for you. :) (I am ex-Rails actually)

One of the big things about Java that at first I thought was tedious but now
have come to love is the build process. By creating the "built" project in a
new folder, it's like you're guaranteeing that only stuff you manually
specify to be in the production code actually makes it in.

I know .gitignore is supposed to handle this, but it just seems dangerous to
rely on source-control-related filters when really sensitive security info is
involved.

In node I've used grunt & really like that model. The idea being that you have
the project run a build, then check that into a "build" branch in git & deploy
from there. It's also nice because I can uglify etc. if I'm not worried about
preserving the source.

I guess for distributed coding this is tough because you want your peers to
have access to the raw source + build instructions, but if I were working on a
really security-sensitive project (I do enterprise so I'm starting to
understand the need for extra precautions) I would probably only distribute
code that goes through a (perhaps even thin/transparent) build process & then
figure out a way to integrate changes back into the source branch.

I know it must sound like overkill to the open source world & especially for
Rails projects, where there ordinarily is no build stage, but it seems like
there is some level of danger in distributing a source control repo where you
are relying on you + any collaborators to properly configure .gitignore and
never accidentally check anything sensitive into the repo. I am decent with
git but no expert; I still sometimes accidentally stage things I shouldn't
when initially checking in a .gitignore -- and I have much more faith in
myself than in some of my peers, hah.

~~~
not_kurt_godel
> The idea being that you have the project run a build then check that into a
> "build" branch in git & deploy from there.

This doesn't at all solve the OP's problem; you shouldn't be putting your
build under source control in lieu of your source code.

~~~
fat0wl
Well, the way it works in Grunt is actually that you have your source branch
and then there is a separate build branch -- so the source is still versioned,
but only the artifacts from the "build" branch are presented to the public /
pushed to your server, whatever.

I just appreciate the security builds add. In enterprise coding the paradigm
is very simple -- pretty much keep whatever you want in source, but only built
artifacts are ever deployed to the servers. In a closed-source setting this is
perfect.

I understand what you mean, though - it doesn't really solve the issue of
distributed, open-source development. (I guess no one would really make a
public repo unless they wanted to share, huh?)

------
nadams
I think a lot of the comments here are good - but a thing to keep in mind with
github is that their free repo hosting is public by default. Which means that
your code is indexed and searchable by anyone visiting github, including
anonymous users. They do give you free private repos if you are a college
student or professor.

I am a big advocate of self-hosting - not because services like github suck
(in fact I'm a big fan of github, bitbucket and other services) but because
you have more control over your own code (and you can set up private repos).

Here is a small list of self hosted solutions (I forked indefero into srchub -
many don't like the google code feel but I personally like it for the
simplicity and the fact I can easily fix/modify/add on to it):
[http://softwarerecs.stackexchange.com/questions/3506/self-ho...](http://softwarerecs.stackexchange.com/questions/3506/self-hosted-replacements-for-mercurial)

~~~
fragmede
FYI, bitbucket and possibly other services offer private repos for free (in
order to compete with GitHub).

~~~
nadams
tl;dr: free services are fine - just make sure you have backups of all your
data.

For most individual projects they are probably good enough. However, the
thing to keep in mind is that for free they usually limit features (I think
bitbucket limits the number of developers) or remove features altogether
(google code removing downloads support; github did at one time but
reintroduced it). I'm not saying that I think they should be offering
everything for free and shouldn't take away features - they have to make money
- but in my opinion always have backups and a plan B in case you need to jump
ship.

I was a huge google code fan (the design isn't Web 2.0 with social
integrations - but I prefer functional over design) until they pulled the plug
on downloads support (which for most Linux people isn't a big problem - a make
&& make install and it's compiled AND installed - for Windows it's not that
easy). That forced me to self-host - I do mirror many of my public projects
on github, but in the event github management decides to remove/reduce a
feature I won't be struggling to find a new "home" for my ~100+ projects.

Also - if you look at the issue tracker for google code it's pretty obvious
that Google is no longer supporting or even monitoring it (several spam issues
have been appearing). I don't have a crystal ball - but my best guess is that
Google will pull the plug on Google Code as well. And it would make sense -
most people have already "fled" Google Code for github/bitbucket etc.

With that said - let me put on my tin foil hat and say that even if we ignore
any potential future issues, there could be privacy issues if you upload your
code to a private repo hosted by someone else. I'm personally not
the type to snoop, but that might not stop some lowly intern from getting
curious. Or even some bug on their site that makes private repos public (for a
short time anyways). Did you hear about the time that Dropbox made passwords
optional for four hours [1]?

[1] - [http://techcrunch.com/2011/06/20/dropbox-security-bug-made-p...](http://techcrunch.com/2011/06/20/dropbox-security-bug-made-passwords-optional-for-four-hours/)

------
amluto
There really ought to be a way to _cap_ your bill. Alarms aren't enough when
large bills can be run up overnight.

~~~
donavanm
And what would the action be when that limit is reached? Terminate all your
EC2 instances? Delete your objects from S3/Glacier? Destroy all your
RDS/DynamoDB/SDB tables? Disable your CloudFront CDN assets? Stop returning
DNS answers from Route 53?

This blog genre would then be "Amazon deleted all my stuff and ruined my
business because they didn't think I'd pay $26". As the story notes, they tried
to reach out multiple times using multiple contacts. There are also existing
tools for setting up specific billing alerts if you really want to take some
action at $26.

------
vertis
I documented a similar experience just over a year ago[1].

I ended up helping give a talk about my experience at the Amazon Summit in
Sydney. I hope I made a good cautionary tale to the devs/ops/managers
attending.

[1][https://news.ycombinator.com/item?id=6911908](https://news.ycombinator.com/item?id=6911908)

------
theGimp
In addition to the steps mentioned above, one thing that has kept me safe from
my own heedlessness is to never, ever store credentials in a source code tree.

If your project is reading credentials from a file, rewrite it so it reads
them from environment variables.

Most IDEs make it very easy to do that, and Python's virtual environments can
do that work for you. Yes, it takes more effort, and sometimes it will be a
little convoluted[1].

However, it's well worth the effort as you will have a system that you can put
your faith in rather than having to double check every time in order to make
sure you're not about to inadvertently commit your API keys.

[1] Example of my own: Pycharm does not read variables stored in a virtual
environment's configuration, so I have to set them twice.
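
For the virtualenv approach, one common trick (a sketch; the values are
placeholders) is to append the exports to the environment's activate script,
so the keys exist only inside the activated environment and never in the repo:

    # Appended to venv/bin/activate:
    export AWS_ACCESS_KEY_ID="AKIA..."
    export AWS_SECRET_ACCESS_KEY="..."
    # The app then reads os.environ["AWS_ACCESS_KEY_ID"] instead of a file.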

------
hisyam
> Unfortunately the Rails tutorial is pretty bland so about half way through I
> decided to snoop around to see what my options where. Somehow, almost by
> chance I ended up a subscriber on Lynda. Lynda only has one Rails tutorial,
> but its pretty in depth and is backed by a five hour Ruby tutorial.

I had the situation in reverse. I found the Lynda.com rails tutorials first
(in 2011) and found them lacking compared to Rails Tutorial. I'm not sure what
the situation is now; from the Lynda.com website it seems the author (Kevin
Skoglund) has updated the tutorials for Rails 4.

------
Xorlev
I thought new AWS accounts came by default with a 20 instance limit...

~~~
t0mas88
Yes, but 20x the largest instance option is expensive. The limits at Rackspace
Cloud are a bit more sane: they limit the total "size" instead of the number
of instances. So you can run many small ones without hitting the default
limit, but not as many large (expensive) ones.

~~~
Xorlev
The author just mentioned that there were 140 servers in his account. I do
know how expensive the servers can be.

------
lxe
> Actually hooking into the API was very straightforwards, and didn’t take
> more than an hour to set up.

I have learned that if this is the case, I'm doing something wrong with my AWS
setup.

~~~
hyperliner
Yours is a great observation! With great power comes great responsibility. If
you find that you got great power but don't feel you got great responsibility,
then chances are you are doing it wrong.

------
zatkin
I used some app to upload a 250GB archive to Amazon Glacier. Turns out the app
was having difficulties uploading it all at once, and its queue functionality
sucked. I explained this to Amazon, since they were the ones suggesting that I
use that app to begin with. Turns out I ended up using terabytes of data
transfer just getting everything up to Glacier because the app was faulty.
They tried to bill me something like $1500, but I never paid it and just had
my data removed immediately.

------
plasma
Maybe Github could reject commits by default that contain known magic key
strings, or file combinations that contain passphrases.

~~~
logn
I'd rather Github not become the Gmail of version control.

~~~
jagger27
Github is already the Gmail of version control. Isn't this the reason the
attack is possible? Every file is automatically indexed for search.

~~~
logn
Yes. However, I meant that I don't want Github to start developing features
that involve scanning and applying AI to codebases and their users' behavior,
because I think there's a temptation for them to start marketing that data.

E.g., "Buy our engineer intelligence subscription! Find out if your job
applicant is a ninja or noob!"

I suppose there's an argument that the data is already public and anyone could
apply that analysis, which is true. But the difference is that Github is not
profiting from this; their feature development will continue to focus on
private repo owners as their customers (whose interests are pretty much
aligned with public repo owners).

------
coralreef
Is it not possible to set up some sort of dollar-value or bandwidth maximum
and freeze the account upon reaching that value?

~~~
DevFactor
I wish!

A $100/mo cap would save a lot of these accounts from being hacked. My mistake
was not unique, and if you browse around on Google you will find other authors
who have had the same issues.

------
mylons
Mildly sensational conclusion. His real lesson should be "this is how I
learned how to use github."

~~~
freeall
Agreed. But the discussion here is pretty good and tells us that it's happened
to quite a lot of people. So the headline might be a bit sensational, but it's
a real problem.

~~~
mylons
Definitely. But he's blaming it on learning rails, in a way. This is coming
from someone who did this at one of their jobs: I accidentally committed AWS
keys, and we had dozens of servers launched on our account in minutes. It was
insane.

------
CJefferson
Is there an easy way of putting a strict limit on the amount of money I'm
willing to spend on AWS?

~~~
eastbayjake
I haven't seen anything about limits, but they do let you set alerts when you
pass a user-specified spending threshold.

------
kephra
The expensive mistake was to use Amazon at all. Your own domain name costs
about $1/month, an OVH root server less than $10. Install a minimal Debian and
Linux Containers on it, and expand your own cloud if you need to, e.g. by
adding cheap Hetzner servers.

~~~
vertis
Regardless of the service (AWS, Linode, Rackspace, OVH), if you leak keys or
passwords you have the potential for costs incurred by malicious users.

I have not used OVH, but a quick perusal of the OVH API[1] suggests that you
could invoke plenty of commands that would incur costs.

[1] [https://api.ovh.com/console/#/order](https://api.ovh.com/console/#/order)

~~~
Dylan16807
But your app would not have any keys. You would give it a server and that
would be it. There's no API use in a toy project, and so no way to leak
account authentication.

------
jedberg
Amazon has been really proactive in protecting against these kinds of things.
They seem to be searching the web constantly for API keys, because they'll
send you emails that say "hey we found your key here, you better do something
about that".

------
projectramo
The same thing happened to me. Almost exactly. Rails again. S3 bucket for
images. Following along with 1monthrails this time. Pushed the key, fell
asleep, awoke to Amazon warnings and a $2000+ bill. Also removed.

I wonder how often this happens. Are they mining bitcoins?

~~~
Buge
It would be really stupid to mine bitcoins. I assume they were mining
litecoins or some other more GPU/CPU friendly coin.

~~~
jmnicolas
Why? It's slow, but it costs them nothing...

~~~
Buge
Because they can make money much faster with litecoin.

To illustrate this, EC2 GPU instances have an NVIDIA Kepler GK104 with 1536
cores, so that is like the GTX 680. It gets 120 MHash/s on bitcoin, which
translates to 2 cents per month.

It gets 207 kHash/s on litecoin, which translates to 32 cents per month.

I think on vertcoin you could get 90 cents per month.

~~~
jmnicolas
Wow, the effectiveness of GPUs for mining Bitcoin has dropped dramatically
since the last time I looked at it (admittedly a couple of years ago).

If I remember correctly, you could make a few hundred bucks a month with just
one high-end ATI graphics card.

------
mattkrea
There seems to be some serious confusion here if this dev thinks you can spin
up EC2 instances with the S3 API.

Perhaps one lesson here, aside from keeping keys out of public repositories,
is to learn how an API works (IAM, ARNs, etc.) before using it.

------
iancarroll
I've said this before, but AWS billing support is usually quite sympathetic to
your situation if you've made a mistake. They've dropped $120 in spot instance
charges of mine that went way over what I expected.

------
Illniyar
"Turns out through the S3 API you can actually spin up EC2 instances"

Can you really use the S3 API to spin up EC2 instances? Or is the guy just
conflating that with the fact that the credentials he used for S3 can also be
used for other AWS APIs?

~~~
mdnormy
> Can you really use the S3 API to spin up EC2 instances?

No. My guess would be a credential with allow:* service access.

------
jakejake
I'm not at all surprised to learn of bots cruising github looking for keys. I
think a good lesson is that if you ever accidentally expose your API key,
revoke or delete it immediately and generate a new one.

------
driverdan
I'm surprised no one else mentioned the Heroku mistake of putting config in a
file instead of in environment variables. Settings like API keys should always
be in env vars on Heroku, not in a config file.

~~~
theGimp
If I understand you correctly, Heroku does make that possible. You can use
Heroku Toolbelt or their web dashboard to add environment variables.

Adding environment variables to a file is just a convenience.
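
For example, with the Toolbelt it's a one-liner per app (the variable names
here are just examples):

    # Set config vars; Heroku injects them as environment variables at runtime:
    heroku config:set AWS_ACCESS_KEY_ID=AKIA... AWS_SECRET_ACCESS_KEY=...
    # List what's currently set:
    heroku config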

------
jorgecastillo
Damn! I know one thing for sure after reading this: I am not even trying AWS
until I really, really know what I am doing. For now I'll stick with
VirtualBox and DigitalOcean.

------
ams6110
Had a very similar thing happen to a co-worker... he accidentally pushed an
AWS key to github and ran up over $7,500 in charges before it was noticed.

Fortunately it was forgiven.

------
lexalizer
This happens very often. Like many others have recommended, disable the global
AWS keys and use roles.

------
fragmede
The github help page* literally states "Danger.. If you committed a key,
generate a new one." In a box. In Red.

I'm not sure how much clearer GitHub could make that.

*[https://help.github.com/articles/remove-sensitive-data/](https://help.github.com/articles/remove-sensitive-data/)

~~~
mikeryan
I use github every day.

I've never read nor seen that page.

------
bbcbasic
I wonder how much crypto they mined? Probably $1 worth but still worth it for
them!

------
i386
Always put your API keys in an environment variable or ~/.

------
je42
Mmmh. Using an S3 key/secret to launch EC2 instances? Does anybody have a
pointer to the docs for that?

~~~
je42
Found it.
[http://docs.aws.amazon.com/AWSSecurityCredentials/1.0/AboutA...](http://docs.aws.amazon.com/AWSSecurityCredentials/1.0/AboutAWSCredentials.html)

~~~
je42
Mmh. I think the author mentioned this as well: weren't these just for S3 in
the past?

------
ljk
> _Over the holidays, I opted to try to teach myself Ruby & it’s companion
> Rails._

shouldn't it be "its"?

------
mc_hammer
Why do people use this? For $3000 you can run like six 4GHz machines with 12GB
of RAM out of your house, with like 10Mbps up and 50Mbps down. I seriously
believe the % of Amazon computing customers that actually need it is like 1%.

~~~
shortstuffsushi
Why do people use Amazon? A number of reasons.

Do I want to buy and manage servers myself? Would I pay for an additional
computer to host my little toy Rails app? Will my ISP allow me to route
network traffic to my personal IP with a standard plan? Most, at least that
I've seen, prevent you from hosting.

For a little project something like the author of this article described,
using virtual hosting is significantly easier (and more realistic). If you're
going to run an enterprise operation, bringing that sort of hardware in house
_might_ make sense, but a lot of larger companies are still pushing that
hosting out to places like Amazon as well.

Now, if you're asking why Amazon would allow (new) customers to scale up to
$3000 worth of hosting overnight, maybe that's a separate issue. But how
should Amazon judge that? There are probably users who _want_ to jump straight
up to that sort of scale. And evidently they did detect that the author didn't
seem to fit that use case -- they actually called him about it pretty much
right away.

------
logicallee
Why doesn't Amazon just ask you on sign-up if you're going to be mining coins,
and if you say no, require specific authorization from you (outside of the
usual key) to start doing so? Then, until you authorize it, they can just not
run mining instances on your behalf. Surely it's easy for them to tell when
this is being done?

EDIT: this got downvoted, but I stand by it. Plus it's a question, so you
could reply and answer it. In my thinking it's the same reason there's a daily
ATM withdrawal limit _set by default_. You can lift it, but it's there to
reduce the incentive (the payoff from trying to see your PIN and then stealing
your ATM card). The current policy is like the bank calling you and saying,
"ummm, I hope you know you've already withdrawn $7,000 and seem to be
continuing." Given that bitcoin is (literally) cash, it seems to me saner not
to run mining instances by default, unless you authorize it specifically. Or
can they not tell?

~~~
cthalupa
Bitcoin mining is not a service offered on AWS - they are spinning up EC2
instances, which are virtual machines you have root access to. From there you
can mine bitcoins or do whatever else you want.

Amazon doesn't have access to the data on the instance, or a list of processes
running inside of it, or similar.

~~~
logicallee
I know, but even without root, isn't it trivial for Amazon to unambiguously
tell that this is what the VMs are doing, without looking at anything else on
the instances? How can you run virtual machine instances without knowing what
the CPUs and GPUs are doing? (There is no mathematical "private computing",
i.e. secret computing over untrusted hardware run by other people, that is
used in practice anywhere in the world, where for fancy mathematical reasons
the operator has no idea what the CPU or GPU of the instance they're running
for someone is actually computing. It's not even a cryptographic primitive
people know about, and certainly not something performed by hypervisors.)

The CPU or GPU pattern of bitcoin mining must be completely unambiguous and
trivial for Amazon to detect on EC2 instances. Or am I wrong for some reason?

~~~
freeall
I would think that looking at patterns on the CPU level is almost impossible.
The overhead would be enormous.

For instance, suppose you look for a magic number: instead of just putting two
numbers on the stack and performing an addition, you would now need a
conditional check before every addition.

Maybe I'm wrong, but at least I think it would be virtually impossible.

~~~
logicallee
You're right, but I meant for bitcoin mining specifically. It's incredibly
resource-intensive, so it's not something you have to check for all the time,
just when something suddenly seems to start behaving this way (spinning up
lots of instances, etc.). When there's an obvious huge resource spike, you can
just check to see if it's bitcoin mining.

If they want to be neutral and not do introspection like this without
permission, they can ask the user on sign-up if they want "Protection from
mining processes" where sudden activity spikes will cause them to do
introspection and shut down an instance if it seems to be bitcoin mining.

EDIT: Plus, bitcoin mining is the same operation over, and over, and over, and
over again. You don't have to "catch" it doing a particular operation. A brief
sample taken at any time will show the VM doing the same exact thing (tons of
SHA hashes).

I'm not an expert though so perhaps I'm missing something!

