Using a single AWS account is a serious risk (cloudonaut.io)
322 points by hellomichibye on Aug 7, 2015 | 138 comments



For Tarsnap, I have several completely independent AWS accounts. This is partly because I needed privilege separation before IAM existed; but I keep this setup mainly because it's dead simple and completely avoids the risk of user error: When I'm doing development work, I don't even have access to the Tarsnap production accounts.


Yeah, this is the way to go. When I have to use the AWS console, I use separate Chrome profiles for each of my environments, so I don't even have cookies, saved passwords, etc. that might relate to other accounts. (I also use separate Chrome themes in each environment and use a CSS manipulator such that my prod console is in a bright-freaking-red Chrome window with a red background. Blue for test, green for dev.)

For my own AWS accounts, I use a read-only account in the profile and, on the very rare occasion I want to get clicky rather than use awscli to frob something, I'll log out and log back in as my writable-to-IAM user. I have a third user that only has IAM read/write privileges if I have to deal with that service. (I haven't sold my clients on this approach, though.)


> use a CSS manipulator such that my prod console is in a bright-freaking-red Chrome window with a red background. Blue for test, green for dev

Ha, I do the same thing. I learned about this from a previous job I had where prod/test webapps were functionally similar but had obnoxious coloring in the header. Very effective.


Visual indications are so great, aren't they? I do it with terminals, too; prod servers get obnoxious red-background prompts. (Root terms on all my machines look different from unprivileged ones, too.)
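A minimal sketch for bash, assuming an ANSI-capable terminal:

    # In the prod box's ~/.bashrc: a red-background prompt you can't miss
    PS1='\[\e[41;97m\][PROD]\[\e[0m\] \u@\h:\w\$ '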


Wow, I can't remember the last time I ran a shell directly on a prod machine. The very thought feels so dirty now.


When you inherit your environment and are whipping it into shape, shit happens.


Whether it's literally a shell, or "shell" as in the window from which you ultimately perform shell-like tasks, there's still a place where you perform those tasks, and knowing whether you're in a simulated or a real environment is an important state to track.


How do you know what's really going on on those machines and in their respective environments assuming they are geographically distinct?


Any advice/tutorials on how to set this up? Ideally I'd like just that ssh session to change the text to red for prod boxes but then go back to my normal iTerm theme after exit...


In the .bashrc/.zshrc of the remote machine add this (printf, unlike bash's bare echo, interprets the escapes in both shells):

    printf '\033]11;#440000\007'
On the local machine add this wrapper to reset the terminal color once the session ends:

    ssh() { /usr/bin/ssh "$@"; printf '\033]11;#000000\007'; }


You can muck around with your bashrc, if you use that shell.


Hmm good idea. I haven't had any issue with it but it's trivial to change background colors. Probably not a bad idea. Thanks for the tip!


Any links to how to do this? My google-fu seems to be failing me. I'm assuming you somehow set things up so it automatically detects the page / content (based on the environment name or instance tags??) and changes the background?


Easy way to do this in e.g. Rails:

  <body style="foo bar baz env-#{Rails.env}">
Then use CSS like you would normally on body.env-production, body.env-development, etc.


You meant class, not style. Great tip though:

      <body class="foo bar baz env-#{Rails.env}">


One way to do this is to look at the "awsc-login-display-name-account" div, which contains your account ID. You can then alter the look of the page (perhaps making the top header bright red) depending on which account ID is in use.

Could make a nice Chrome extension...


You can use an extension like Greasemonkey with a simple script that extracts the AWS account ID and applies some CSS based on it.


I log into different Chrome profiles that style the same pages differently. Dumb and easy.


By using IAM roles for EC2 you can also move more of the key management to Amazon. When you attach an IAM role to an EC2 instance Amazon will provide temporary credentials through the local metadata server and automatically rotate them for you. This avoids storing and copying your credentials, and makes it easy to separate your environments.
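For example, any process on the instance can fetch the temporary credentials from the metadata server (the endpoint is fixed; the role name here is hypothetical):

    curl http://169.254.169.254/latest/meta-data/iam/security-credentials/           # lists the attached role name
    curl http://169.254.169.254/latest/meta-data/iam/security-credentials/my-app-role
    # The second call returns JSON with AccessKeyId, SecretAccessKey, Token, Expiration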


IAM roles for EC2 terrify me. They completely break OS privilege separation, since every user has access to the keys.


The servers I have set up this way aren't really multiuser machines, but your point is a good one.


Not necessarily; both Windows and Linux allow firewall rules based on user.


IAM instance profiles are dangerous.

Let's say you have a bug or SQL injection in a web app (or choose any server): the user running the web app and/or database now has access to your AWS account.

Instance profiles are available to every user and process on the server for the life of the server.

Layering multiple least-privilege roles is impossible: you can't assign multiple IAM roles to an instance.

You can't separate what process receives access to those credentials.

The old-school way of embedding credentials in a config file and making it readable only by root and/or a specific user account on the system is currently the best solution. Better, but more challenging at scale, are SELinux, AppArmor, etc.
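A sketch of that old-school approach (hypothetical path and user names):

    # Readable only by root and the one service account that needs it;
    # any other compromised process on the box just gets "permission denied"
    chown root:webapp /etc/myapp/aws-credentials
    chmod 640 /etc/myapp/aws-credentials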


> Layering multiple least-privilege roles is impossible: you can't assign multiple IAM roles to an instance.

You can assign multiple IAM roles to an instance profile, which is what is associated with an instance.

See e.g. [0]; you can add IAM roles to instance profiles without destroying the instance.

[0] https://docs.aws.amazon.com/cli/latest/reference/iam/add-rol...


> You can assign multiple IAM roles to an instance profile, which is what is associated with an instance.

Perhaps there is some contradiction in the IAM docs, but I couldn't find that reference. This seems to indicate that only one role can be assigned to an instance profile:

"Note that only one role can be assigned to an Amazon EC2 at a time, and all applications on the instance share the same role and permissions." (first paragraph, last sentence)

http://docs.aws.amazon.com/IAM/latest/UserGuide/roles-usingr...


Fyi (and don't take this as legal advice) such separation is a really good idea in terms of PII/PCI compliance.


Have you ever thought of separating your dev and production environments into VPCs? That's what we do at my job and it works out very well for us. Though of course, your line of work involves much more data security than mine.

The main takeaway here is to always use IAM accounts when doing stuff with AWS, and make sure each IAM account is only permitted to do the things you want it to do. It might be a pain in the ass to learn how the IAM policy syntax works, but believe me it works out for you in the long run!

(And, of course, use multi-factor auth. But you should be doing that anyway...)


Separate VPCs wouldn't help in terms of AWS keys. I don't just want to keep dev and production servers separate; I want to keep the AWS services which they access (e.g., S3) separate too.


Genius insight that could be applied not just to AWS but to Parse, etc.


Hey gargarplex - on the off chance that you see this, could you send me an email? Wanted to get in touch with you regarding one-to-many supplier registration. I've saved it in the about section of my user page. Apologies in advance for the unrelated comment.


About three weeks ago I started working on a new open source project meant to interact with AWS. Coding fast and dumb, I cut and pasted my personal AWS credentials into my source code, committed it, and pushed it to Github.

The next day I got an email from Amazon, alerting me to the problem. Apparently, they scrape github looking for just that kind of stupidity. I instantly deleted the project, but it was too late.

Amazon ended up waving the nearly $3k in EC2 charges I incurred, thankfully. I'm now a zealous advocate for making sure a person never even HAS AWS credentials. Instead, make a new user without a password for each use case, and manually select the privileges that account has.

If you have a password to AWS, you shouldn't have credentials.
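A sketch of that workflow with awscli (hypothetical user and policy names):

    # One password-less IAM user per use case, with a hand-picked policy
    aws iam create-user --user-name deploy-bot
    aws iam put-user-policy --user-name deploy-bot \
        --policy-name least-priv --policy-document file://least-priv.json
    aws iam create-access-key --user-name deploy-bot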


I posted to the AWS forum and accidentally copy/pasted my secret key. Within 24 hours $11k of charges. I called them, they wiped them. It's amazing how quickly people find and use these things.

What kills me is there is no easy way to stop all the instances for an account, in a region. It took me hours to kill all the instances. They had maxed out the number of instances in every single region. Very, very annoying.


You can automate the stopping via the CLI. While I don't think there is a single command to stop all instances, you should be able to whip up a script that gets all the instance IDs and then calls stop-instances on them.
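A rough, untested sketch of such a script (assumes awscli is configured with sufficient permissions):

    # Stop every running instance in every region
    for region in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
        ids=$(aws ec2 describe-instances --region "$region" \
            --filters Name=instance-state-name,Values=running \
            --query 'Reservations[].Instances[].InstanceId' --output text)
        [ -n "$ids" ] && aws ec2 stop-instances --region "$region" --instance-ids $ids
    done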


Any idea what they were doing with them? Mining bitcoins? Hosting CnC for a botnet?


It's always Bitcoin mining. The attacker spins up a bunch of GPU EC2 instances and mines as long as they can. I don't think the profit ends up being very large (those GPU instances put out a puny amount of hashing power compared to modern mining ASICs)...


It's not that bad. The smart attackers don't do Bitcoin mining, they do altcoin mining, and they pick altcoins that don't have ASICs out. Then they are competing with other GPUs.

I'm not sure what the profit is, but I'd guess it's between 30% and 70% of the bill incurred.


ROI for BTC mining on standard AWS pricing is about -90%. So attackers get 10% free BTC on spend.


So, out of $11k, they make $1k. That's the kind of money they need to make between one and three times a month, minimum, depending on where they live. Doesn't Amazon notice patterns in terms of source, scripts which are uploaded, and scaling profile? Who uses 2 medium instances for a year and then spins up 2,000 in 20 minutes?


Certainly there's a pattern there, but it's not THAT far away from people who intermittently scale stuff for short bursts of huge processing. False positives in those cases for people who really intended to spend $10-50k in a short time might mean a HUGE loss of revenue and/or incurred customer support and service costs.


You must lose a fair amount of water when you're greeted with a surprise bill of $11k.


Yeah, it's a rare moment in life. Especially fun since it happened on a Saturday morning (the notification, that is).


When people find the accounts, what do they use them for? Mining bitcoin?


> Amazon ended up waving the nearly $3k in EC2 charges I incurred, thankfully

For what it's worth, the word is "waived". :-)

I'm glad Amazon dealt with you well!


Github provides a public near-realtime stream of events; I'm sure blackhats are constantly monitoring it for private keys. Even if you undo a commit within seconds, it may be too late.

https://api.github.com/events


If Amazon (and my coworker in a 3 person dev shop) can automate the scanning of common API key patterns as a pre-commit filter, I wonder why Github itself doesn't flag for this sort of thing before publishing.


What should they do? Reject a "git push"?


That's fairly reasonable. At least they can prompt you "Did you really mean to do this?" -- they have enough infrastructure to update your project page with a "Make pull request" button after you push; they can repurpose that for an "Approve next push without asking questions."

Alternatively, if they supported custom pre-receive hooks via e.g. webhooks, someone could publish a script or stand up a web server to check for such things.


Supporting git hooks on Github would be really great. I've always wanted them for rejecting debug statements, or as a rudimentary poor man's CI / linter.


IIRC, a regex that matches a generic AWS access key will also match a git hash.


Not sure what I said that warrants multiple downvotes. Git commits are identified by SHA1 hashes, so they would be caught by the same regex that would catch AWS keys.

According to the AWS security blog, this is the regex you should use for secret keys: `(?<![A-Za-z0-9/+=])[A-Za-z0-9/+=]{40}(?![A-Za-z0-9/+=])`. To match git hashes, you can use this one: `[0-9a-f]{40}`.

See the likeness?
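To see it concretely (grep -P for the lookarounds; the hash below is the SHA-1 of the empty string):

    hash=da39a3ee5e6b4b0d3255bfef95601890afd80709
    # 40 chars of [0-9a-f] is a subset of the secret-key alphabet, so both match:
    echo "$hash" | grep -P '(?<![A-Za-z0-9/+=])[A-Za-z0-9/+=]{40}(?![A-Za-z0-9/+=])'
    echo "$hash" | grep -E '^[0-9a-f]{40}$'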


I did not know that existed and that is pretty awesome!


The real mistake here is using your account owner's API key in your code, when you shouldn't use that key for anything ... In fact, you should delete the account owner's access key!!! The same goes for putting the key of a Power User or Administrator-level IAM user in your code instead of a custom user with least privileges.

Always use least-privileged IAM keys in code if you must, preferably use EC2 roles instead of access keys, and never give your IAM users permission to run instances ... That way, if the key is leaked, the evil hacker can't do much with it.


I agree. I haven't used AWS for a while, but IIRC, for a small side project I was looking into with a couple of friends, I set up low-privilege IAM access for them, while I logged directly into the VPS using a password if I needed to update something.


> I cut and pasted my personal AWS credentials into my source code, committed it, and pushed it to Github.

This paradigm is very puzzling to me. Why do people feel the need to publish every small project to the public? Is it because github makes it so easy to create a new repo? Why don't more people use private repos, like self-hosted ones or bitbucket?

The whole reason why I spawned my own source code hosting service is so that I can work on projects in private without worrying about random people looking at what I'm working on (some of my C++ projects would give even Stroustrup a heart attack...). Note - this isn't saying bitbucket or gitlab or any of the others aren't good (oh, I have my own opinions and comments about them...) - I've just become rather bitter/paranoid/resentful about offers of free hosting.


> Why do people feel the need to publish every small project to the public?

Github acts as a portfolio site for programmers.

The usual refrain when doing hiring is "check a candidate's Github." Thus every potential employee makes sure there is something other than cobwebs on their pages. This is especially explicit in startups around San Francisco. Hell, I'm guilty of it as well.

Likewise, many language communities actively encourage library development. Ruby and Javascript are great examples of this.

Then there are entire tools built around git paradigms. Heroku's push-to-deploy is a perfect example. Many services have easy application hooks into git actions. Github has many built in. Pushing code and then kicking off a build system with automated tests is worth every penny I spend. Other services have this as well. However, it's harder (read: more than 5 minutes of work) to build these hooks on a server you are running yourself. Paying Github for the work is a no-brainer.

I've used Bitbucket for private repos, but usually the user wants their code public. They want to show off. But if I wanted to keep some code private (like my latest app) I'm all for private hosting.


> Github acts as a portfolio site for programmers.

No [1] [2]. It is not your resume or your CV. You should be able to highlight projects or accomplishments on your CV - github gives you no control over the layout of your profile.

> The usual refrain when doing hiring is "check a candidate's Github."

(I'm assuming you mean something that they do rather than something they don't do)

I'm not saying you can't look at it and be like "oh, those are some cool projects he is working on" - but if you're actually using it to say "man, this guy is a loser coder - we can't hire him!", then I think you should just step outside for some fresh air, relax, and listen to the birds for a while.

If you need reasons [3] why you shouldn't [4] - there are plenty [5].

> However, it's harder (read: more than 5 minutes work) to build these hooks on a server you are running yourself.

You should check out Jenkins. Within a couple of mouse clicks I can ask it to automatically build, run tests, archive the binaries, and send them somewhere. And even email me if it fails.

[1] https://blog.jcoglan.com/2013/11/15/why-github-is-not-your-c...

[2] https://tommcfarlin.com/github-is-not-your-cv/

[3] https://github.com/gelstudios/gitfiti

[4] https://github.com/will/githubprofilecheat

[5] http://mikeboers.com/blog/2014/10/26/the-evils-of-gamifying-...


Portfolio != CV


I guess it really depends on your definition.

For example Mahara claims to be an online portfolio creator - but you can also build your CV with it [1].

I just wish someone would come out and say "this is the format your resume/CV should be in" - not just for my own sanity but also for applying to jobs. A lot of companies now have their crappy resume reader that attempts to read your resume into normalized text boxes - and usually fails miserably, forcing you to retype your resume.

[1] http://manual.mahara.org/en/1.8/content/resume.html


I'm surprised that of the quoted comment you focus on the "paradigm" of open sourcing code, and not the "paradigm" of mixing credentials with code.

Trying to protect against leaking creds by not open sourcing is a bit of misdirected effort.


I pointed this out because I've seen this over and over again with the single theme of "I was working on some silly project using AWS and I pushed my credentials to github".

I don't think I've ever seen a professional team push AWS credentials (or any other credentials for that matter) - and if they did it's very rare to the point where I don't remember.

As far as mixing credentials with code - that happens all the time. I'm not saying it's right, but I have much bigger concerns - such as why people keep pushing silly projects to github with their credentials. The first time I saw an article about it, I found out that there are bad people monitoring github for exactly that - and they will use it before you even realize what you did. And not just AWS credentials, mind you - MySQL, SSH, anything they can get their hands on...

I think my comments are getting lost in translation -

I'm not saying I'm against people pushing their silly projects. Push your 1-line GPL code projects all day long - I don't care. That's why github is there. What I find puzzling is not that people want to create new projects - but that they feel like they must create a git repo for EVERYTHING they do and push it up to a public github like it's going to change the face of computer science as we know it or something. I've seen people keep documents and even bash profiles as public repos. I'm not saying it isn't a good idea to use git for those purposes - but I wouldn't want my bash profile to be common knowledge (especially if it contains commands or functions that I use at my day job).


One reason why I like to publish even frivolous things to Github is because it forces a bit of rigor upon whatever I'm publishing. I'm more likely to clean it up and keep it tidy, and I can use the same workflow for it that I use for everything else, like using Github Issues as a TODO list.

But I suppose that might be more of a backsplanation.

More likely is that I do it as a backup strategy, because I have limited private repos, and because I'd rather just keep everything on Github than use multiple services.

It turns out that unless you commit AWS creds, nobody cares about your repos.


> It turns out that unless you commit AWS creds, nobody cares about your repos.

Oh, there are people who care - just not the people you want to care. Imagine a wolf circling its prey.


Why does it matter? I think your mistake is assuming that people believe the "change the face of computer science" bit. And perhaps this person had a particular reason for putting this up (wanted to show someone), and doesn't actually feel the need to push every line of code they write. I feel like very few people are as obnoxious about it as you describe, but even then - again, so what?

You're more guarded about your code, that's perfectly fine, but not everyone feels that way. Code that's shared has the potential to benefit and inspire others (even stuff some might see as trivial or silly), code that's hidden doesn't. I see nothing wrong with erring on the side of openness (except with account credentials of course!).


You can use jgit with S3 for cheap and private git hosting.


Or BitBucket or GitLab.


Or AWS CodeCommit.


> Coding fast and dumb,

And that's why you should use Bitbucket first and foremost. Then, when you've vetted your stuff and you know it's clean, you can publish it publicly on Github.

If you rush pushing code to public Github repos, that's exactly what will happen. If you start with a private repo and then move to a public one, you have more time to think things through.

Now, one could imagine a third-party service warning users of potential issues with pushed files, based on their name/extension/folder name. But privacy first, then open the code to the public.


You could just pay for GitHub to get private repos. If it's something this important to you, it's probably worth a few bucks a month.

This solution is also strange because if you ever committed anything private it will be in the history. So to make this work you either have to rebase over some history, or lose all the history.


> You could just pay for GitHub to get private repos. If it's something this important to you, it's probably worth a few bucks a month.

Github doesn't have a personal tier big enough for my private repos. (And I'd host my own before paying $50/month.)

I do pay Bitbucket, though, because I have a consistent and small-enough group of collaborators that it makes sense.


I have a lot of private repos. One for just about every little project I've done. Some larger projects have more than one. It would be pretty expensive to keep all of this in Github that way. The alternative is to combine a bunch of unrelated stuff into fewer repos (or not keep all of my projects in source control), neither of which is especially appealing.

I'd be happy to pay Github something for my usage and to support development. I believe in paying people for work that I find useful. But the pricing model makes the cost disproportionate with the value for me.


Or you could, you know, just take 20 seconds to set up a gitolite on some random server you have somewhere.


You are advocating for security through obscurity.


No, I'm advocating not throwing every weekend project into a public github repo. This has nothing to do with security through obscurity. Nobody is supposed to have access to a bitbucket private repo but the owner and vetted collaborators.


I created a small tool[1] to help continuously audit public github commits for secrets, like aws keys. It uses the AWS-provided regexes[2] to do so.

[1] - https://github.com/jfalken/github_commit_crawler

[2] - http://blogs.aws.amazon.com/security/blog/tag/key+rotation


I'm as close to a "casual" user of AWS as you can get. I find IAM incredibly difficult to use. Any pointers?


"If someone gets access to your AWS access credentials, you’re in trouble."

I know we're not supposed to post negative comments that don't "add value" to a discussion, but the only thing that comes to mind is "really?". Your setup is as secure as you make it. How you use your API access is up to you. Putting all your eggs in one basket is not insecure. This article doesn't actually bring to light anything important. There is no risk involved so long as you pay attention to what you are doing.

Any set of credentials, if leaked, destroys security. So... don't set yourself up to leak your credentials? I mean, come on, seriously?

*edit: I unfairly used the word "incompetent". Change to a phrase about paying attention to what you are doing.


Pay attention to what you're doing, and never make a mistake. Or have anyone in your team/company make a mistake. While you're at it, please make sure you don't put any bugs in the code also.

We're humans. We're imperfect. Sometimes a safety net isn't a bad thing.


Except for the fact that you can explicitly contain the potential damage by controlling which accounts can do what. It's not like you have to have one god account with no MFA that has the keys to the city.


This is easy advice for a single DevOp who completely controls their own AWS account, but often there is a team of devs and ops with elevated access to the same AWS account, and not all of them understand AWS IAM access control, so they put admin-level keys in their code. This article gives advice for mitigating the risk of access-key leakage.


//Somewhat off topic

Using any kind of AWS account (in a personal capacity) is a serious risk in my eyes.

It's much like how I'm happy to take the risk of buying shares, but don't like leveraged derivatives.

I don't want to deal with something that is not my area of expertise and has very real potential to blow up in my face.

I wish Amazon allowed me to cap spending to, say, 200 USD... that I can plan for much more easily than X thousands.


>I wish Amazon allowed me to cap spending to say 200USD...

Agreed. We use Google App Engine for our recommendation engine (https://recent.io/) and daily budgets are a core feature -- just tap on the "Application settings" tab.

We use AWS for some additional components of our service that aren't part of the core recommendation engine, and I've been surprised by the lack of a maximum-dollars-per-day setting. Or if there is one, I'm not aware of it.


Do you know if the daily budget works for App Engine Managed VMs as well? I want to use them for a personal project, but was unsure if the daily budget would stop overcharging since they are using Compute Engine underneath.


If you look around (or spend a few minutes writing one), you can use a scram switch that goes off of AWS billing alerts.


I actually had a billing alert set, and I did get an alert, but it looked like this:

"You are receiving this email because your estimated charges are greater than the limit you set for the alarm "awsbilling-AWS-Service-Charges-total" in AWS Account XXXXXXXX.

The alarm limit you set was $ 10.00 USD. Your total estimated charges accrued for this billing period are currently $ 1050.95 USD as of Saturday 18 July, 2015 17:34:36 UTC."

So, it came a bit too late to take action.


What if I'm asleep? Or it fails?

Seems so much easier, and safer, for Amazon to just have a hard "don't charge more than this".

Increasingly, I wish this could be fixed at the banking end -- I'd like to be able to get my bank to enforce that I won't give more than £X to some company (obviously the company should be able to find this fact out, and not give me products worth more than £X).


Why would it matter if you're asleep? It's the cloud, if you're not automating things you're doin' it wrong. I wouldn't do that, but you've already put costs ahead of uptime, so burning it down should be fine, yeah?

Amazon's market is largely not people who are worried about those kinds of overages, and I can't really blame them for not worrying about these kinds of edge cases.


Use prepaid cards you load with a fixed sum.


Considered that - you'd still be legally liable for the full amount even if your payment method bombs out.

That said I do use this in a personal capacity - all my finance flows through a "burn" account that I keep cruising at a low-ish fund level. So if the sht hits the fan (fraud, overcharge etc)...so be it...I'll fix it once the smoke clears.


I imagine Amazon would still come after you in court for the rest of the money?


Do you authorize Amazon to charge an unlimited amount? You do not.


>Do you authorize Amazon to charge an unlimited amount? You do not.

You do. That exactly is the problem & why I started this off-topic thread bitching about the lack of limits. You can limit the actual charge on the credit card, but I see nothing capping the legal liability. i.e. they could in theory repossess your home because of an unpaid AWS account. In theory - I'd love to have someone prove me wrong in this thinking. Until someone does, though... this seems way too risky for a hobby/toy.


> They could in theory reposes your home because of an unpaid AWS account.

Technically, no. I doubt Amazon would risk the publicity and attempt to garnish your wages or pursue your assets; you could, of course, declare bankruptcy to protect your home from their judgement if they went so far as to obtain one.


>Technically, no. I doubt Amazon would risk the publicity

No? There is a big difference between "Technically no" and "uh they probably won't" - especially when it comes to legal matters.

Remember, the home repossession was an example - I don't even fall under US law. The house thing was code for "legally they can wipe you off the map".


Yes you do. That's how their billing works and there are several legal clauses in their terms as well as notices in the console. You are responsible for the resources you use.

It's not on them to check your credit limit and only spend that much while you get access to resources first. They will have a legal claim to get paid and will pursue it, especially if it's 5 figures or higher. The collections industry exists just for this purpose.


>AWS billing alerts

I don't want alerts, I want a fixed amount limit. If watching this sht was my day job then sure I'd be OK with billing alerts. I'm busy fighting other fires in my day job though & often won't see mails for days. By that time Amazon might have bankrupted me.

Amazon has some decent engineers & this is not technically difficult. It's very obviously a conscious decision not to provide this very useful feature, as any kind of capping feature will negatively impact revenue... but it's also literally the only reason why I don't have a paid AWS account for personal use.


You're in AWS. Why do you think alerts mean you need to be watching for them? Alerts come from CloudWatch, to SNS, and from that can go wherever you want them to. So do that, with the tools they've already given you. And this lets you kill only what you want to kill--I mean, do you want them to dump all your S3 buckets? Or just, say, down all your EC2 nodes?

It's not about capping features being negative for revenue, it's that the customers who want it don't matter; if it's not something a company dropping $20K/month is going to use, I doubt it's really worth the time of day. You are a Linode or DigitalOcean customer, and that's not a bad thing, but I think that AWS is not the right place for you and I think that they know that.


>from that can go wherever you want them to

You miss my point - its not a technical problem. As I said this isn't my day job.

Sure, a hacker/sysadmin/whatever could re-route the AWS warning to whatever system. For me this would be a hobby/toy though, and I can't have a toy that exposes me to a potential multi-thousand bill out of the blue. That's not a hobby - that's a gamble.

As I said in my initial post - it's like leveraged derivatives. If I were a day trader living that reality, then that kind of uncertainty would be acceptable. I'm not a day trader though, so I don't want that kind of blank-cheque responsibility on something I'm not actively watching.


Complaining that they don't cater to people who don't make them even couch-cushions money and won't use the tools they have to do exactly what they want is really silly.

Not all services are intended for all types of customer. Right now in their business lifecycle, you're the kind of customer AWS should fire.


>won't use the tools they have to do exactly what they want

What I want is to reduce my legal liability from unlimited to something less than unlimited.

Show me how to do that and I'll concede that you're right & I'm wrong.


I believe his point is that you aren't their target customer. It's like walking into a freight distribution center and asking to buy $0.50 worth of chewing gum. Sure, somebody might accommodate you, but complaining about them not doing so doesn't make a lot of sense.

In Amazon's shoes I'm not sure I'd offer a feature like that at all, because someone might set that to be "smart", forget about it as the business grows, and then have a bunch of servers terminated and/or data destroyed right during an excitingly busy period. It would be an extraordinarily bad customer experience. It could be much better for them to do as now and then occasionally credit people who accidentally go over.


> and/or data destroyed right during an excitingly busy period

The idea that service providers destroy data when a cap is reached is just plain weird. Usually your access to the data is simply blocked - then you can pony up some more $$ if you want access to it.

Similarly, it's also weird that people think Amazon doesn't care about 4- and 5-figure accounts. They do. Big accounts bring in money... but so do a lot of small accounts... and there are a hell of a lot more small accounts than big accounts. At my last job our monthly bill was $1-2k, and I'd get regular contacts from the AWS account manager, along with free training days.

I think that the real reason is that AWS is generally not at capacity, and if someone does have a hostile overage event, it's quite short term, and the only money that AWS really loses is from the wattage used by the VMs. There's no extra labour or hardware on their side. Refunding overages is good PR and not all that expensive, as long as it doesn't prevent other paying customers from accessing existing resources.


> The idea that service providers destroy data when a cap is reached is just plain weird.

This fellow is proposing an account that's capped. In AWS-land, with by-the-hour servers, that would presumably mean terminating running servers. Servers that have data in RAM. Servers with volatile local disks that get recycled as soon as they're terminated. Maybe you have an idea of how to shut down all of Amazon's 40-odd services with no risk of data loss, but I'm not seeing one.

If they're not going terminate them, then either they're going to keep charging or not. If they keep charging, then it seems like it's more a billing alert plus a service interruption. If they don't charge, then it's more like what they do now, which is waive charges for mistakes, except with a service interruption.

> Similarly, it's also weird that people think Amazon doesn't care about 4- and 5-figure accounts.

I think they do care about them. I'm just saying they won't go out of their way for the $50/month accounts if it means putting serious customers at risk.


You're mixing and matching your incredulity. On the one hand, AWS has 40 services. On the other hand, everything is terminated like an EC2 instance... Which really isn't the case.

So, to begin with, most of their services can either be powered down (with chickenfeed storage costs that they could cover as part of the deal) or are outright free. Most things that you fork over money for in AWS can be disabled with some form of networking block. Have code in Lambda? Well, it doesn't get destroyed, it just gets blocked. Have RDS databases or Elasticache? Block access, and perhaps power them down after X time (which saves them to s3, and block storage for AWS's internal use is very cheap - retail s3 is 3c/GB/month). S3 itself also gets your access cut. SES and SNS just stop processing their queues. Things like VPCs and IAM are free to begin with. The costs of keeping these things running behind a block is trivial compared to the overage charges they already routinely waive.

And then we come to EC2... and the story still isn't 'must be terminated'. EC2 instances can be powered down and ELB access blocked, leaving the config all in place and the instance's drive saved to s3 (which is where AMIs and volume snapshots/unpowered instances live). Yes, you will lose data in RAM, but you just get the account holder to accept that the machines can be shut down by AWS, just like already happens with spot instances. If the client opts in to capped pricing, then they can take that into account and design their system around the sudden downing of the instances.


I certainly didn't say everything is terminated like EC2, so I don't know where you're getting that from.

And it looks like we agree: any attempt on Amazon's part to create something like this would be complicated and would still not remove the risk of data loss. A service interruption is, of course, a certainty.

So as far as I can see, the feature still doesn't make much sense. It's only really useful to people who aren't doing anything in Amazon that matters.


>I believe his point is that you aren't their target customer.

I hear you & understand your argument. They are offering AWS free tier to me for zero USD though, so I don't think my offer of a paid account capped to 200 USD is unreasonable. It is telling though that they won't give free AWS accounts without CC details...

I'm not trying to screw amazon here...I just want a little bit of VM time that I can use to toy around with from a reputable source without signing over the rights to my first-born.

Maybe I expect too much... dunno.


I think small developers are their target customer, as well as big customers. After all, why would the big customers care about the free tier? Similarly, why open the 'aws pop-up lofts'? https://aws.amazon.com/start-ups/loft/ If you are a big company thinking of using AWS, you probably pay a thousand bucks to send some of your engineers on a course to learn how to use it, or hire a consultant with those skills; you don't schlep down to the pop-up loft for their free 'ask-an-architect' sessions, free coffee and free candy.

It's self-funded startups and folks doing personal projects that those initiatives are aimed at, IMO, and they are the ones who would benefit from this kind of thing. By comparison, Google cloud (or at least appengine) does have billing limits, the daily ones reset each 24 hours, if you hit your billing limit, your instances just don't run until the limits reset.

To be fair though, AWS customer service is always very quick to allow refunds, for small or large amounts. When I first started, I thought I had switched everything off, but something was still running and I got a couple of bills of around $20 over two months. When I complained, they refunded it, and were very nice about it. There are lots of cases online where they have refunded over $10,000 in fees to people who left their keys on github.

Maybe they figure the refunds are just the cost of doing business? However, I would prefer not to have to rely on the possibility of their post-hoc generosity when I do something equally stupid in future :)


They don't really care about small customers. However, they do care about the big customers that started as small customers, and about the engineers who use AWS for their hobby and choose it for their big company just because that's what they're familiar with.

It's why they have a free tier.


FYI, Azure already has monthly spending limits built in and suspends your account once you hit the cap.

When you go over your limit your compute services are all turned off and your data is rendered inaccessible until your account is resumed (which happens at the end of the month or when you raise your spending limit).

(Full Disclosure: I work for the Azure Web Apps team)


>FYI, Azure already has monthly spending limits built in and suspends your account once you hit the cap.

Perfect. Also seems to support python which suits me. Many thanks friend.


So when you hit the limit you want AWS to stop your spending... how?

You're spending $/hr on compute, so terminate your instances. Plus your EBS volumes, so delete your drives. Plus... S3? So delete all your data?


>So when you hit the limit you want AWS to stop your spending... how?

How? Very abruptly. I can stop the actual spend on my side (the credit card side), but I need Amazon to implement a legal mechanism that stops me from being legally liable for a (potentially) unlimited amount. Without that, it's too risky as a personal project... I'd rather do something safer, like BASE jumping.

>You're spending $/hr on compute, so terminate your instances.

If everything goes to plan. hn is full of horror stories about people waking up to huge bills after someone hacked their AWS account to mine bitcoins. This would be a toy/hobby/project for me though & I'm painfully aware of my ignorance in this regard. Chances are I will screw up & have a bitcoin-mining hacker on my account... hence me needing a hard cap.


Yep, doing budgeting at the end of the month and I found a $20 Amazon charge. Turns out it was from a little bit of data I left on their DB service (which took _a lot_ of clicking to find the one I was actually using) after my free year had expired. Certainly not thousands of dollars, but for just experimenting with the system months ago, it was a rude wakeup call. Ideally I would have been able to say "Never spend more than $0.00, kill any services you have to, I'm _only_ using this for evaluation."


For development/personal stuff, yes, that is exactly what I want. I want to be able to put a hard cap on my downside.

Maybe they can make it a separate account type, maybe the cap can be distinguished by resource type, but for mucking around/side projects/etc it's unsettling to use a resource whose cost is potentially arbitrarily high if something goes wrong.


Billing alerts go to SNS, which can deliver to email, SQS, or Lambda.

You can actually automate the response if you want; it doesn't have to be email-read-by-human.
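A sketch of wiring that up with awscli (hypothetical names; billing metrics only exist in us-east-1, and estimated-charges alerting must first be enabled in the billing preferences):

    aws sns create-topic --name billing-scram --region us-east-1
    # Alarm when the month's estimated charges pass $200; a Lambda or script
    # subscribed to the topic can then stop or terminate resources
    aws cloudwatch put-metric-alarm --region us-east-1 \
        --alarm-name monthly-bill-cap \
        --namespace AWS/Billing --metric-name EstimatedCharges \
        --dimensions Name=Currency,Value=USD \
        --statistic Maximum --period 21600 --evaluation-periods 1 \
        --threshold 200 --comparison-operator GreaterThanThreshold \
        --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-scram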


> I don't want alerts, I want a fixed amount limit.

I don't think you want AWS - I think you just want a VPS or a dedicated server. AWS is for people who are going to need to scale at large - and you pay a premium for that. I've priced it out - as an individual you get more value from a VPS provider such as DigitalOcean than you do from AWS.


>I don't think you want AWS - I think you just want a VPS or a dedicated server.

Fair point - and perhaps the core of the issue. Since the start of this discussion a friendly hn soul has pointed out that Azure does actually allow hard-capping expenses though - a definite win for MS in my eyes.


I think people are forgetting how hard this can be... an EC2 server is easy enough to turn off but AWS has dozens of services charged at very fine-grained levels. How exactly are they supposed to stop spend?

The only way would be to completely reset the entire account, otherwise you'd still be accruing charges just for data at rest or incoming requests or anything else passive.

A complete account reset doesn't seem like a feature that's very useful or something they'd want to implement, because it would likely cause more trouble than it would fix.


Agreed. Wish I could cap the monthly amount spent.


Yup - frankly I'm very wary of anything that is an uncertain amount.

Did the same with my cellphone contracts - take expected spend, triple it and make that the limit. Put a number on it (even triple) and I can deal with it...just not liable for "unknown".


Another risk consideration is that anyone getting unauthorized access to your AWS account can delete all your resources and all your backups (snapshots, etc), effectively putting you out of business. [1]

One solution is to backup to a separate backup-only AWS account, with super-serious access controls (MFA and password physically locked away somewhere). Set up a "write-only" link, such that backups can be added, but never removed. This way, in the worst case, your runtime infrastructure can be decimated, but your data backups would be safe.

1 - http://arstechnica.com/security/2014/06/aws-console-breach-l...
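A minimal sketch of that write-only link as a bucket policy in the backup account (hypothetical bucket name and production account ID; only the backup account's root, with its locked-away password, can later lift the Deny). Save it as backup-policy.json and attach it with: aws s3api put-bucket-policy --bucket my-backup-bucket --policy file://backup-policy.json

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
          "Action": "s3:PutObject",
          "Resource": "arn:aws:s3:::my-backup-bucket/*"
        },
        {
          "Effect": "Deny",
          "Principal": "*",
          "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
          "Resource": "arn:aws:s3:::my-backup-bucket/*"
        }
      ]
    }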


I've recently set up Glacier with a Vault policy that prevents deletions.

I really like that layer of protection, but I'm under no illusion as to what a disaster it would still be if the main password was compromised.


I prefer this method. Having multiple AWS dashboards sounds like a nightmare. I would rather use the backup-account approach, with MFA tied to the administrator account of each AWS account.


Multiple AWS dashboards are actually really easy with multiple Chrome profiles (not that I go to them often; there are APIs for a reason), and they tend to encourage much better, much more isolated application design. I've worked in environments with a single account and in environments with multiple accounts, and I can't imagine going backwards.


At Zalando we have an account per team, and have been releasing a bunch of tooling to help do this securely.[1] It's not completely easy (e.g., account creation at Amazon cannot be automated), but the security aspects are really nice, and it lets us give teams more or less full access to just their own account.

[1] https://stups.io


The MFA Condition is a huge win, and I'm surprised Amazon hasn't built a tool to make this easier yet.

However, I question the merit of using two separate AWS accounts. While this separation of responsibility sounds nice in theory, doesn't it introduce additional maintenance burden because you now have two accounts to administer? You can't define or manage the roles in the 2nd account without credentials to do so.


Usually those other accounts have far fewer people with access = a much lower risk of unauthorized access. For instance, I'm a big proponent of a backup acct that the main acct can push to but not delete from. That backup acct can have very limited and tightly controlled access. It's unfortunate RDS does you no favours in helping out with this; you pretty much have to dump your DB and push it off AWS or into another acct's S3 bucket.

IAM is just a PITA, is what this boils down to. Try creating an IAM policy that allows users to push updates to Elastic Beanstalk but not touch any other resources in the account: it's a major, major hassle. AWS has no concept of resource groups, and each service has different ways of restricting access (EC2 can do it on tags; for other services you kinda have to use naming schemas and wildcards in your policies). So you are often left needing users with a little too much access, and/or spending a LOT of time testing and crafting IAM policies.

IAM is a really good idea and powerful in many ways but unfortunately AWS's lack of consistency and UX across individual services really shows through sometimes, and with IAM in particular.


I recently switched to using the MFA service with Google's Authenticator app. I find it more pleasant than the normal send-text-to-device implementation. So far it comes with my recommendation.


I think the comment about using temporary credentials is an interesting one.

Where I work, we have a federated access system that generates temporary credentials so you can access the Console without having to sign in, and it uses our work authentication mechanism to do the initial handshake.

The thing is, this process is useless if you want to use it with the AWS command line client, or any other tools that rely on the use of credentials. It would be nice if AWS adopted a plugin system or something where you could plug in a "credentials provider" of some sort, and the command line client queries that for credentials each time you make a request, instead of having to stash keys in ~/.aws/credentials.
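(Later versions of the AWS CLI did grow something close to this: a credential_process setting in ~/.aws/config that shells out to an arbitrary program for credentials. A minimal sketch, with a hypothetical helper script:)

    [profile federated]
    credential_process = /usr/local/bin/get-federated-creds
    # The helper must print JSON of the form:
    # {"Version": 1, "AccessKeyId": "...", "SecretAccessKey": "...",
    #  "SessionToken": "...", "Expiration": "..."}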


Is there a good primer on AWS and what it provides?


Is there an easier-to-use UI for setting up IAM users? I get hopelessly confused every time I attempt it, and worry about losing access to services I've already set up.


I don't think so. Mine is always a painful experience of trial and error using a combination of things cobbled together from the policy generator, manual editing, copy/pasting from AWS documentation, and using the policy simulator.

And then, after hours of that, I can determine that IAM policies fall way short of what I needed (isolation between different product groups in the org) and that we need to use separate root accounts.


I like the approach that Google cloud takes with projects. Projects are independent and isolated under a single account.


What's cool is that in AWS you can disable developers' ability to even log in with a username and password. You can set it up such that the only access is through a site that grants federated access.


Like the solution via MFA & temp credentials! Simple & elegant.
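A sketch of that dance with awscli (hypothetical account ID and MFA serial):

    # Trade a long-term key plus an MFA code for short-lived credentials
    aws sts get-session-token \
        --serial-number arn:aws:iam::123456789012:mfa/alice \
        --token-code 123456 --duration-seconds 3600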


You can also use SSH agent forwarding to log into your servers through the SSH bastion without dropping your private keys on the bastion itself, securing access even further.
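Two common ways to do that (hypothetical hostnames):

    # Agent forwarding: the key never touches the bastion's disk; the local
    # ssh-agent answers auth challenges over the forwarded socket
    ssh -A user@bastion.example.com    # then, from the bastion: ssh internal-host
    # Or tunnel through the bastion so authentication stays end-to-end,
    # with nothing forwarded at all -- in ~/.ssh/config:
    #   Host internal-*
    #       ProxyCommand ssh -W %h:%p user@bastion.example.com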


I seriously question any business running just on AWS. There are plenty of other services that "look like" AWS and give you complete isolation.


aye aye captain obvious!


[deleted]


> Why is there a tophat over "IAM role"?

Presumably because rather than being a person, the IAM role is a “hat” that a person wears.



