One big "gotcha" for AWS newbies which I cannot tell if this addresses: Does this set or allow the user to set a cost ceiling?
AWS have offered billing alerts since forever. They'll also occasionally refund unexpected charges (as a one-time thing). But they've never offered a hard "suspend my account" ceiling, which a lot of people with limited budgets have asked for.
They claim this is a competitor for Digital Ocean, but with DO what they say they charge is what they actually charge. Looking through the FAQ, I'm already seeing various ways for this to exceed the supposed monthly charges listed on the homepage (and no way to stop that).
Why even offer a service like this if you cannot GUARANTEE that the $5 they say they charge is all you'll ever get charged? How is this different from AWS if a $5 VPS can cost $50, or $500?
That's what Amazon is missing. People want ironclad guarantees about how much this can cost under any and all circumstances. I'd welcome an account suspend instead of bill shock.
Via some accidental clicking in the control panel (trying to get an IP address for the instance, I think?) I ended up getting a bill from them for over $100. Which, to me at the time, was a huge amount of money.
It put me off of AWS forever. I don't ever want something that tells me how much they're going to charge me after I have already given them my credit card information.
edit: they did credit me back when I complained, but that doesn't matter. The risk to me wasn't/isn't worth it.
One of my services had a Google BigQuery "budget" set at $100. One of our test machines went haywire and continuously submitted a bunch of jobs. The "budget" turned out only to be an alarm, and even that they sent us 8 hours late, after $1600 of charges had been racked up. I responded in 20 minutes and shut it down. Google insisted we pay the full bill. After I wrote up a blog post on the situation and had the "publish" button warmed up, they finally relented and refunded us for the amount of time their alarm was delayed. Absolutely ridiculous that's not their policy to begin with...
For a company that supposedly puts the customer first, this is appalling.
There are a number of resource types that, simply by existing, will accrue costs. A lot of them, actually. On AWS that includes things like running EC2 instances, EBS volumes, RDS databases and backups, DynamoDB tables, data in S3 buckets, and more. The question is what should happen to these resources upon hitting a billing ceiling?
Should EC2 instances be terminated (which deletes all data on them), DynamoDB tables deleted, S3 data erased, RDS databases deleted? If that was the behavior, it would be an extremely dangerous feature to enable, and could lead to catastrophically bad customer experiences. This is a nonstarter for any serious user.
Conversely, if you expect those resources to continue to exist and continue operating, then that's basically expecting the cloud provider to pay your bill. The provider will then have to recoup those costs from other customers somehow, and so this option sets poor incentives and isn't fair to others. If you expect your account to remain open the following month, you'd have to settle the bill, and we're back to square one.
AWS gives people tools to tackle this problem, such as billing alerts. These can notify you over SMS, email, or programmatically when you hit an "$X this month" billing threshold, and then you can decide what to do. Since these events can be processed programmatically, it's possible to build a system that will automatically take whatever action you'd like AWS to take, such as shutting things down or deleting resources.
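To make that concrete, here's a rough sketch of what setting up such a billing alarm with boto3 might look like. The topic ARN is a placeholder, and this assumes "Receive Billing Alerts" is enabled on the account (billing metrics only exist in us-east-1); treat it as a starting point, not a turnkey recipe.

```python
# Sketch: a CloudWatch alarm on the account's estimated monthly charges
# that notifies an SNS topic when spend crosses a threshold.
# Note: billing metrics update only a few times a day, so alerts lag.

def billing_alarm_params(threshold_usd, topic_arn):
    """Build the put_metric_alarm arguments for an estimated-charges alarm."""
    return {
        "AlarmName": f"monthly-bill-over-{threshold_usd}-usd",
        "Namespace": "AWS/Billing",
        "MetricName": "EstimatedCharges",
        "Dimensions": [{"Name": "Currency", "Value": "USD"}],
        "Statistic": "Maximum",
        "Period": 21600,  # 6 hours, matching the coarse billing-metric cadence
        "EvaluationPeriods": 1,
        "Threshold": float(threshold_usd),
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],  # SNS topic to notify (placeholder ARN)
    }

# To actually create it (requires AWS credentials):
# import boto3
# cw = boto3.client("cloudwatch", region_name="us-east-1")
# cw.put_metric_alarm(**billing_alarm_params(
#     100, "arn:aws:sns:us-east-1:123456789012:billing-alerts"))
```

From there the SNS topic can fan out to email, SMS, or a Lambda that takes whatever action you've decided on.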
If you think all of this through, it's really hard to come up with an approach to billing limits that's fair and a good experience, so I think it's reasonable for cloud providers to give billing threshold alerts while leaving the choice of what to do in the hands of the customer.
Let's take a simplistic example and say you're paying per gigabyte. You decide you're willing to pay up to $X, and Amazon tells you ahead of time how much your $X will buy you, and you accept.
One type of customer will be using that storage to store priceless customer photos. Even if the customer ends up deleting the photos, it has to be your customer who makes that decision - not you, and not Amazon. You tell Amazon that you'd like an alarm at $X-$Y, but that if you hit $X, keep going, at least until you hit $X+$Z.
Another type of customer will be using it to store a cache copy (for quicker retrieval) of data backed up in a data warehouse somewhere. You tell Amazon that you'd like a policy which automatically deletes all the oldest data, to guarantee to stay under the limit.
Yet another type of customer would rather keep their old data and just return an error code to the user for stuffing too much new data into too little storage, so basically, guarantee to stay under the limit, and guarantee never to delete data.
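The second customer's "delete the oldest data to stay under the limit" policy is simple enough to sketch. Everything here - the item shape, the byte budget - is made up for illustration:

```python
def evict_oldest(items, byte_budget):
    """Drop oldest items until total size fits under the budget.

    items: list of (timestamp, size_bytes) tuples.
    Returns (kept, evicted); the newest data survives.
    """
    kept = sorted(items, key=lambda it: it[0])  # oldest first
    evicted = []
    total = sum(size for _, size in kept)
    while kept and total > byte_budget:
        oldest = kept.pop(0)
        evicted.append(oldest)
        total -= oldest[1]
    return kept, evicted

# Three objects totalling 120 bytes against a 75-byte budget:
# the oldest (timestamp 1, 50 bytes) is evicted, the rest survive.
kept, evicted = evict_oldest([(1, 50), (2, 30), (3, 40)], 75)
```

The third customer's policy is even simpler: the same check, but refuse the write with an error instead of evicting.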
You can't solve billing until you communicate with your customers and ask what they want.
So let's assume for a moment you talked to a large cohort of customers, and found a bunch of "types" including the three you list and many, many more (inevitably, at AWS's scale).
You then need to make some business decisions about which of those "types" are most important to you, and which are far less profitable to spend time addressing.
So of course you solve the big pain points for your customers spending tens or hundreds of thousands of dollars per month before you prioritise the customers worried about going over a budget of tens or hundreds of dollars a month.
What would that solution look like? It'd have ways for customers with hundreds or thousands of services (virtual servers, databases, storage, etc.) to make all their own decisions about alarms, alerts, and cost ceilings - and tools to let them decide how to respond to costs, how to manage their data availability, how to manage capacity, when to shut down services or limit scaling, and what can and cannot be deleted from storage. It would also 100% need to allow for practically unbounded capacity/costs for customers who need that (think AliExpress on their "Singles' Day" event, where they processed $1 billion in sales in 5 minutes). All this would need - for the $100k+/month customers - to be machine-drivable and automatable, with extensive monitoring and reliable alerting mechanisms - and the ability to build as much reliability and availability into the alerting/reporting/monitoring system and the automated provisioning and deprovisioning systems as each customer needs.
And at least to a first approximation - we've just invented 70% of the AWS ecosystem.
You might think Amazon don't cater to people who want hard $5 or $70 per month upper limits on their spending. You're _mostly_ right. There are many other players in that space, and it's _clearly_ not a high priority for Amazon to compete for the pennies a month available in the race-to-the-bottom webhosting that people like GoDaddy sell for $12/year.
The thing to think about is - "who does Amazon consider to be 'their customers'?". I think you'll find for the accounts spending 7 figures a year with AWS - billing _is_ "solved". The rest of us are on the loss-leader path (quite literally for the "free tier" accounts) - because Amazon only need to turn a few tenths or hundredths of a percent of "little accounts" into "their customers" for it all to work out as spectacularly profitably as it is doing right now.
Except that that's what this announcement is.
Which makes me think this may be Amazon's fix to runaway billing - if you don't have the resources to pay for mistakes, stay in the fixed-price-per-month kiddie pool and don't play with the heavy machinery.
I started to add, "or trust yourself not to make them", but that's silly, because mistakes will happen.
Let's assume, based on the evidence at hand, that Amazon is rolling out Amazon Lightsail, and that as such, they're willing to do work (create business plans and write software) to court the $5/month market. In that case, it's a relevant comment for people to write "I can afford $5/month, or even $20, but I can't afford unlimited liability, even with what I know about AWS customer service, so I cannot use this product." It's relevant because it suggests there's anxiety preventing uptake, which can be solved by a combination of writing software and internally committing to eat the loss if the software is imperfect (as others have said, stopping service actually-on-time is harder than it sounds, but the provider can always just eat the loss, invisibly to the customer).
Your (probably-correct) observation that Amazon doesn't really care about the penny-ante user's money (in the short term) is beside the point.
When a customer's ceiling is reached, their mix of services goes into limp mode. Things slow down, degrade, maybe become unavailable, depending on each service's "freeze model". Alarms ring. SMS messages are sent to emergency phone numbers. The customer is given a description of the problem and an opportunity to solve it -- raise the cap or cut services.
So wouldn't this cost Amazon money? Sure, but that's a cost of doing business. And as others in the thread have pointed out, the actual costs to Amazon are surely much lower than the "loss" they're incurring by not unquestioningly billing the customer. Especially since Amazon often refunds large surprise bills anyway.
If this were the official policy -- no dickering required -- there's a definite cohort of risk- and uncertainty-averse customers who would be willing to start using Amazon (or switch back).
That's what stopping instances _is_ already. You don't get charged for stopped instances, which is a defining feature of Amazon's cloud. Very few providers actually offer this. Most just charge away for the compute even if the instances are powered off, Azure being one exception.
This whole "spin up compute and get charged a minimal amount when not in usage, but keep your working environment" model was pioneered by Amazon.
> So wouldn't this cost Amazon money? Sure, but that's a cost of doing business.
Why would Amazon spend a bunch of money, so that they can charge customers _less_ money, in order to keep customers who are cheapskates, and/or won't take the time to learn the platform properly?
But they don't give us the choice. I need to keep an eye out every moment of every day for an alarm, as hundreds or thousands of dollars rack up. That's the ONE THING I DON'T WANT. I'd take anything else (delete my data, lock everything, whatever) over being charged money I can't afford to pay.
I think it would be reasonable to put everything into a no access / deep freeze mode, until I pay up and choose to unfreeze. Would it cost Amazon that much to just keep my data locked for a couple of weeks while I sort out my storage? I'd even be happy for a reserved $100 or so to pay for keeping the storage going.
You know you can make a machine do that for you - right?
In fact all the tools Amazon would use to do this are available to you right now. Cloudwatch, SNS, and Lambda are 98% likely to be all you need - apart from the time to get it set up to do whatever you think is "the right thing".
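For what it's worth, the Lambda side of that wiring can be short. This is a sketch, not a battle-tested implementation: it assumes the billing alarm publishes to an SNS topic this function is subscribed to, that "stop every running EC2 instance" is the right response for you, and that the function's role has ec2:DescribeInstances and ec2:StopInstances permissions.

```python
# Sketch of a Lambda handler that stops all running EC2 instances when a
# CloudWatch billing alarm fires via SNS.
import json

def alarm_state(event):
    """Extract the CloudWatch alarm state from an SNS-wrapped Lambda event."""
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    return message.get("NewStateValue")

def handler(event, context):
    # Only react to the ALARM transition, not OK or INSUFFICIENT_DATA.
    if alarm_state(event) != "ALARM":
        return
    import boto3  # imported lazily; only needed when actually firing
    ec2 = boto3.client("ec2")
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if ids:
        ec2.stop_instances(InstanceIds=ids)
```

Stopping instances leaves EBS, S3, and the rest still accruing, of course - which loops back to the "what should happen to data at rest" question above.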
This seems like the kind of thing you really want to get right, and it will be (I imagine) hard to get right. If it was easy, I would expect some company to offer it (along with, of course, a guarantee that if they mess it up, they will pay my bill).
Nobody rings up Caterpillar and complains about the costs of leasing/running/maintaining a D9 'dozer if they're doing jobs that only need a shovel and a wheelbarrow.
Tools for the job. AWS might not be the tool you need. Or might not be the tool you need _yet_.
Even heavy equipment rentals can result in large unexpected bills if you don't pay attention to what you're doing.
I need "reddit / DDos insurance"
There's nothing "unexpected" or "unagreed beforehand" about Amazon's pricing or costs either. You order a medium EC2 instance and we all know exactly what the bill per hour will be.
There's nothing unexpected or un-agreed-beforehand about the ordering/provisioning process. You ask AWS to start one, they'll start one. You tell them to stop it, they'll stop it. You get charged the known, agreed-upon rate for the hours you run it. You ask for 10, you get 10. There are even checks in place - the first time you ask for 50, you hit a limit which you need to speak to them to get raised before you can run up a larger-than-previously-seen bill.
Same with your earthmoving gear. You ring up for prices and they'll say "$200/day for a bobcat, $2500/day for a D9 - includes free delivery in The Bay Area!"
If you need one bobcat for one day at 1 Infinite Loop, Cupertino - but click their web order form and say you want 10 D9s for one day at 1 Infinite Loop, Cupertino (and happily click through all the never-read web interface confirmations) - you should 100% expect to get a bill for $25k, as well as to deal with clearing up after parking 10 'dozers in Apple's parking lot.
This is not "unexpected". From the vendor's perspective $25k is not "massive". You knew and agreed to the prices and had every opportunity to calculate what your bill was going to be.
If you were only expecting a $200 bill - that's kinda on you. The earthmoving guy has heaps of other customers who spend many times that every single week - and they all started out as some guy who ordered a $200 bobcat or $25k's worth of D9s as a one-off. You are just another sale and another prospect in the top of the MRR funnel for him.
(Note: See holidayhole.com for a contemporary example of an unbounded earthmoving bill! ;-) )
The problem is someone putting up your hobby website on reddit when it's 2 in the morning your time, and you wake up the next day with a $10,000 bill.
No one running a real business on AWS wants a hard ceiling instead of billing alerts and service by service throttling. Which Amazon has.
So, this is just the nuclear option for people's pet projects. It's not a bad thing to have but I wouldn't expect it to operate any differently than what would happen if you broke the TOS and they suspended your account.
That's absurd. Of course there are businesses that want hard ceilings. Perhaps not on their production website, but on clusters handed over to engineers and whatnot for projects, experimentation, etc.? I've seen these things lie around for months before they were noticed.
 Maybe you don't consider startups 'real' enough, but I can totally imagine early stage startups wanting limits on their prod website, too. You can't save CPU cycles for later consumption.
Are you sure? I'd imagine many startups would rather take a few hours of downtime than be billed thousands erroneously. The latter could easily mean the end of the company, but the former, when you're just starting out, is far from the end of the world.
I know startups that I could bankrupt with a few lines of code and a ~$60 server somewhere long before they'd be able to react to a billing alert if it wasn't for AWS being reasonably good about forgiving unexpected costs.
I'm not so sure no one running a "real business" would like a harder ceiling to avoid being at the mercy of how charitable AWS feels in those kinds of situations, or when a developer messes up a loop condition, or similar.
Perhaps not a 100% "stop everything costing money" option that'd involve deleting everything, but yes, some risks are existential enough that you want someone to figuratively pull the power plug out of your server on a second's notice, if you have the option.
If you can't afford downtime you probably can afford to wait for the alert and choose your own mitigation strategy. A system that can't tolerate downtime probably has an on-call rotation and these triggers ought to be reasonably fast.
If you can't react or can't afford to react, you probably can afford some downtime / data loss.
So the system doesn't need to have granular user defined controls. Just two modes. That was my point.
I think I triggered people with the phrase "real business" and I apologize for that.
Only a tiny fraction of businesses can't afford downtime. A lot of businesses claim they can't afford downtime, yet don't insure against it, and don't invest enough in high availability to be able to reasonably claim they've put in a decent effort to avoid it.
In most cases I've seen of businesses that claim they "can't afford downtime", they quickly balk if you present them with estimates of what it'd cost to even bring them to four or five nines of availability.
> A system that can't tolerate downtime probably has an on-call rotation and these triggers ought to be reasonably fast.
A lot of such systems can still run up large enough costs quickly enough that it's a major problem.
> If you can't react or can't afford to react, you probably can afford some downtime / data loss.
I'd say it is the opposite: Those who can afford to react are generally those with deep enough pockets to be able to weather an unexpected large bill best. Those who can't afford to react are often those in the worst position to handle both the unexpected bill and the downtime / data loss. But of the two, the potential magnitude of the loss caused by downtime is often far better bounded than the potential loss from a crazily high bill.
You know exactly how much a paused EC2 instance charges you. The ceiling implementation could say, if the total amount charged so far this month, plus the cost of pausing the instance for the rest of the month, exceeds the ceiling, pause it now. So there's no data loss; the worst case is the customer's service is offline for the remainder of the month (or until they approve adding more money). At some point less than this number, start sending angry alerts. But you still have a hard cap that doesn't lose data.
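That pause rule can be stated precisely. A toy version with made-up rates (it assumes you know the running and stopped hourly costs, which AWS does publish per instance type):

```python
def should_pause(spent, hours_left, running_rate, stopped_rate, ceiling):
    """Decide whether this is the last safe moment to pause.

    spent: month-to-date charges in dollars.
    hours_left: hours remaining in the billing month.
    running_rate / stopped_rate: $/hour while running vs. paused.
    ceiling: the customer's hard monthly cap.
    """
    # Guaranteed minimum cost if we pause right now and stay paused.
    floor_cost = spent + hours_left * stopped_rate
    # Guaranteed minimum cost if we run one more hour, then pause.
    next_floor = spent + running_rate + max(hours_left - 1, 0) * stopped_rate
    # Pause exactly when one more running hour would make even the
    # paused-for-the-rest-of-the-month outcome blow the ceiling.
    return next_floor > ceiling >= floor_cost
```

For example, with $90 spent, 100 hours left, $0.50/hr running and $0.01/hr paused, a $91.20 ceiling triggers a pause now, while a $92 ceiling leaves headroom to keep running. The "start sending angry alerts" threshold is the same formula with some margin added.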
It's not what a serious production user wants, but it's exactly what someone experimenting with AWS wants, either a running service that's looking at a cloud migration, or a new project/startup that hasn't launched yet.
Granted, for a big company, that amount may be so big it's unrealistic to ever hit it.
Most companies will hold onto your data for a time, then delete it afterwards.
This doesn't smell like technical concerns to me. It smells like sneaky Amazon-wants-to-make-more-money concerns.
(<snarky>What's a gallon of milk on the shelf really cost Walmart? And how much of it is opportunity cost? If I usually buy 2 gallons a week - why can't I keep taking home a gallon every few days for a month or so after I stop paying, then cut me off afterwards? Sounds like a sneaky Walmart-wants-to-make-more-money concern.</snarky>)
If only Walmart would have a process in place to notice that I was ordering a spectacular and unusual amount of milk and save us all the trouble.
So my local Walmart has a Netflix guy who gets 1000 trailers of milk twice a day, and the Dropbox and Yelp guys get a few hundred trailers a week each - and I know these guys from when I see them at the other Walmart in the next town over buying the same sort of amounts there as well. There are people like the Obama campaign who we'd never seen before who fairly quickly ramped up from a gallon a day to a pallet a day, then jumped straight to 50 trailerloads a week for six months, then stopped buying milk completely one day.
What's considered "normal", "unusual", or "spectacular" - and to whom?
Plenty of companies operate like that, and e.g. require purchase order ids and accompanying maximum spends issued for any expense over X, where X can be very low. I've worked for companies where it was 0 - every expense, no matter how low, needed prior approval from the CEO or finance director. Not just tiny companies either - one of the strictest such policies I've dealt with was with a company of more than a hundred employees.
Amazon AWS's "important customers" are not "fallible human beings" who plan to keep their monthly spend under $100. They'd perfectly happily inconvenience thousands of those users in favour of the customers who _do_ need solar-system scalability. (And, to their credit, there's an abundance of stories around of people on typically 2-digit monthly spends who screw up and get a 4-digit bill shock - which Amazon reverse when called up and pleaded with.)
So they built their thing as "default unlimited". Because of course you would in their position - follow the money. When Netflix wants 10,000 more servers, they want it to "just work", not to have to call support or uncheck some "cost safety" checkbox.
If you need "default cheap", AWS isn't the right tool for you. You can 100% build "default cheap" platforms on AWS if you've got the time/desire - well, down to the "I can ensure I don't go over ~$100/month" level. It's not easy to configure AWS to keep costs down in the $5/month class, because the monitoring and response system itself needs about twice that to keep running reliably.
I sometimes don't think people (especially people who "grew up" in their dev career with "the cloud") understand just what an amazing tool AWS is - and the fact that they make it available to people like me for hobby projects or half-arsed prototype ideas still amazes me. I remember flying halfway round the world with a stack of several-hundred-meg hard drives in my carry-on - catching a cab from the airport to PAIX so I could open up the servers we owned and add in the drives with photos of 60,000 hotels and a hardened and tested OS upgrade. Buying those 4 servers and the identical local setup for dev/testing, getting them installed at PAIX, and flying from Sydney to California to upgrade them was probably $30+ thousand bucks and 3 months of calendar time. Now I can do all that and more with one Ansible script from my laptop - or by pointing and clicking through their web interface.
AWS is an _amazing_ tool - talk to some grey-beards about it some time if you don't remember how it used to get done. But the old saying holds: "With great power comes great responsibility." If you don't want to accept the responsibility, use a tool with less power. Don't for a minute think Amazon are going to put an "Ensure I don't spend as much money with AWS as I might otherwise" option in there if there's _any_ chance of it meaning a deep-pocketed customer _ever_ gets a false-positive denial from it. (Which, now I think about it, makes this new Lightsail thing make so much more sense...)
Also, how are our analogies alike? Milk is a consumable; data is information. Completely different usage patterns.
Finally, every internet service provider I've ever used that held data for some reason granted me a grace period, even if it was never officially stated. Sometimes you just have to ask nicely.
On the other hand, based on near-universal industry practice, there doesn't seem to be a huge demand for this. I suspect it may be better for everyone concerned to have heavy-duty users control their costs in various ways and for Amazon to refund money when things go haywire without bringing someone's service down.
I've seen engineering teams hand out accounts to support teams for testing, and since the resources are not under the purview of the dev team things go unnoticed until someone gets the bill. Arguably there are better ways to handle these requirements, but it'd be nice if you could force people down the path of setting billing alerts because these individuals don't always realize that they are spending money.
So maybe a couple of EC2 instances go down, but you pay for and keep S3, Dynamo, etc. At least enough to salvage or implement a contingency. You'd still owe Amazon the money.
It's tempting to wonder why Amazon would incur that risk, but it is a risk already inherent to their post-pay model, and it serves as good faith mitigation to the runaway cost risk that is currently borne by the customer.
Not perfect, but maybe a compromise.
Not saying Jevons' Paradox wouldn't kick in, but the friction of convincing businesses to build tools that let their customers spend _less_ money is high.
This is one of the fundamental things that make any sort of market work. If it's not safe to participate, people won't.
One of the most amazing feats Amazon has pulled off is convincing people that AWS is cheap. They're cheap in the way Apple is: only if you need a feature set (or name recognition..) that excludes the vast majority of the competitors from consideration. If/when you truly need that, then they're the right choice. There are plenty of valid reasons to pick AWS.
But they're very rarely the cheap choice.
Yes, but that's a false comparison. It's cheaper to rent dedicated servers at any of several dozen large hosting providers than it is to use EC2 or S3, for example. For most people it's cheaper to rent racks and lease servers too (but depending on your location, renting dedicated servers somewhere else might be cheaper - e.g. racks in London are too expensive to compete with renting servers from Hetzner most of the time, for example).
It's extremely rare, and generally requires very specific needs, for AWS to come out cheap enough to even be within batting range of dedicated solutions when I price out systems.
When clients pick AWS, it's so far never been because it's cheap, but because they value the reputation or the feature set - and that's a perfectly fine reason to pick AWS.
The point isn't that people shouldn't use AWS, but that if people think AWS is cheap, in my experience it usually means they haven't costed out the alternatives.
It's an amazing testament to the brand building and marketing department of Amazon more than anything else.
E.g. my object storage costs are 1/3 of AWS. My bandwidth costs are 1/50th or so of AWS prices.
There are valid reasons to use AWS depending on what exactly you do, but it's extremely rare for price to be one of them.
The real economic term for this is elastic demand (specifically, relatively elastic demand). For example, microprocessor cost reductions made new applications possible, and demand increased so much that the total amount spent on microprocessors went up for decades. An example of inelastic demand is radial tires. They last four times as long as bias ply tires. But since this didn't cause people to drive four times further, the tire industry collapsed on the introduction of radial tires.
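To make the radial-tire arithmetic explicit (the prices here are invented for illustration): revenue per driver collapses unless demand grows to offset the 4x lifespan.

```python
def industry_revenue(price, tires_needed_per_year):
    """Annual tire spend for one driver: price times replacement rate."""
    return price * tires_needed_per_year

# Bias ply: say a driver wears out 4 tires a year at $50 each.
bias = industry_revenue(50, 4)    # $200/year per driver

# Radials last 4x as long, so the same driving needs 1 tire a year.
# Even at a premium price, per-driver revenue falls, because demand is
# inelastic: nobody drives 4x further just because tires got better.
radial = industry_revenue(80, 1)  # $80/year per driver
```

The microprocessor case is the opposite: a 4x cost reduction induced far more than 4x the unit demand, so total spend rose.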
Does anyone know an example of an actual paradox? I've never found one, and I'm curious if they really exist.
Jevons's Paradox is about demand increasing for a resource when it becomes more efficient to use, e.g., someone invents an engine which can go twice as far with the same amount of fuel but instead of halving the demand for fuel the demand actually increases.
If I recall, elasticity of demand is about how quantity demanded responds to price. A very inelastic demand means people consume at the same rate no matter what the price is. It doesn't have to do with the efficiency at which the resource is consumed, like stated above. It's a subtle difference, but I think they're actually quite distinct concepts.
Actual paradoxes are common. Just consider the classic: "This sentence is false".
As for your example, most sentences are neither true nor false. Nothing interesting has a probability of 0.000 or 1.000.
"This sentence is false" is clever use of language, and may be interesting to sophomore philosophy students while smoking weed, but it's not useful and there's nothing paradoxical about it.
> most sentences are neither true nor false. Nothing interesting has a probability of 0.000 or 1.000.
I'll start by observing that surely you're talking about propositions, not sentences, nor utterances. Or at least you ought to be.
But more significantly, I'll note that most propositions are either true or false (under a given interpretive framework), but that as epistemologically-unprivileged observers, we must assign empirical propositions probabilities that are higher than 0 and lower than 1. Propositions like "I am a fish" or "You hate meat" or "If Rosa hates meat then Alexis is a fish" are either true or false, under any given set of meanings for the constituent words (objects, predicates, etc). I'm curious what probability you think applies to propositions like "2 + 2 = 4" and "All triangles have 3 sides" and "All triangles have less than 11 sides". I think there are very many interesting propositions that differ from these only in degree of complexity (e.g. propositions about whether or not certain code, run on certain hardware, under certain enumerable assumptions about the runtime, will do certain things).
Based on your very strange claim that all interesting sentences have non-zero, non-unity probability, perhaps you're saying that you find theorems uninteresting, and moreover are only interested in statements of empirical belief, such as "I put the odds of the sun failing to rise tomorrow lower than one in a billion." In that case, I cannot imagine what interesting statement would qualify as a paradox, except perhaps insofar as some empirical statements of belief are "beyond strange".
"This sentence is false" is a paradox under pretty much everyone's notion of a paradox.
Those are great examples, thanks. All true, and there's nothing interesting about them.
If you need to dig this hard to find something interesting with a probability of 1, that's pretty good evidence that the vast majority of interesting statements are not of the true/false variety.
Although I don't find it interesting, I am open-minded enough to ... embrace.. the .. uh.. diversity of the world, that allows some people, to find that interesting.
The language that contains all Turing Machines that halt on all inputs is not decidable.
e^(iy) = cos(y) + i * sin(y)
Are those uninteresting trivialities to you?
And these things are exactly the sort of thing that "differ from [trivialities about triangles] only in degree of complexity".
Note that "all triangles have 3 sides" is probably an axiom, but "all triangles have less than 11 sides" is a trivial theorem.
My hypothesis is that they don't really have it nailed down but given big margins they have they can afford to let you use more resources than you pay for in the end.
There are a few Azure services to which the spending limit does not apply, but as long as you know what they are then you can choose to use them on your own volition.
One time we had literally 1 million CloudWatch metrics get created: we were monitoring MongoDB databases, a CPAN test was creating test DBs and not deleting them, and we weren't ignoring DBs with names like test_* when creating the metrics.
Another time, an outside developer committed a root credential for a (basically) unused Amazon account to a public GitHub repo.
Both times they refunded the costs. Not sure if that was because we were paying tens of thousands a month to get this service, though!
> Important: Spending limits are not supported in the App Engine flexible environment
> You may still be charged for usage of other Google Cloud Platform resources beyond the spending limit.
Can it incur charges even if you've set up the server to only be a free tier?
Yes, you can incur charges if you exceed what's covered by the free tier. Not all AWS services even have a free tier, and those that do are severely limited (1 micro instance, 5GB of S3 storage, etc). You're not off in some sandboxed environment where they just shut you down if you go over the limits. It's more like a monthly credit of $X for the first 12 months of your account. To cover my ass, I set a really low billing alert threshold. Like "email me if my monthly bill ever projects to exceed $1".
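The "projects to exceed" part is just linear extrapolation over the month. A sketch (the $1 threshold mirrors the setup described above; the rest of the numbers are made up):

```python
def projected_monthly_bill(spent_so_far, day_of_month, days_in_month):
    """Linearly extrapolate month-to-date spend to a full-month estimate."""
    if day_of_month <= 0:
        raise ValueError("day_of_month must be >= 1")
    return spent_so_far * days_in_month / day_of_month

def should_alert(spent_so_far, day_of_month, days_in_month=30, threshold=1.0):
    """Fire when the projected month-end bill exceeds the threshold."""
    return projected_monthly_bill(spent_so_far, day_of_month, days_in_month) > threshold

# $0.40 spent by day 10 projects to ~$1.20 for the month, so a $1
# threshold fires; $0.10 by day 10 projects to ~$0.30 and stays quiet.
```

A threshold that low is effectively a tripwire: any non-free-tier usage at all gets flagged within a billing cycle.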
It is worth trying, if only to gain knowledge of AWS. But for hosting, I'd say DigitalOcean.
I had a personal $400 learning experience with Amazon. They did refund it. My last company had a low-5-figure surprise a few years ago. Some of that could be considered their fault (alerts were sent to someone on vacation), but again, the refusal to allow a "hit a limit, pull the plug" option is what causes this.
Personally this is the main reason why I have never considered using AWS for my small projects, but maybe this is an intentional choice by Amazon, to keep away "hobbyists" and only go after companies where an extra $1k in AWS bills this month is just a blip on the radar...
They have billing alerts ('beta') and used to offer a prepaid account type that they have discontinued for new customers (some may still have grandfathered accounts).
Closest thing now is the MSDN credit. It doesn't require a credit card and the account auto-suspends when you hit it. Problem with the MSDN credit is that it is for non-production only (and they reserve the right to kill anything they consider "production").
They should really offer prepaid again or bill caps. But Microsoft is too busy copying AWS to consider that they can do better than AWS.
It's not unlimited liability, most of their services have limits imposed. If you've scaled any service to thousands of machines you'll quickly find out that they stop you at 20-30 machines or so.
Then you have to contact support to get the limit increased.
Sure, you can still rack up an unpleasant bill. But there are limits :)
And I've done work for clients that have requested really big increases because of both realistic and unrealistic expectations of handling traffic peaks. E.g. one client asked for an increase to 100 instances of 2-3 different types in a few regions to be prepared to handle a couple of days of high traffic. If said event had happened, they scaled it all up, and somehow didn't take them down again, it'd only take a few days of charges for them to be insolvent at their then-current funding level.
So you're right, there are limits, but limits or not doesn't matter if it's high enough that it can make you go out of business.
Which makes me wonder if anyone has ever gone out of business because AWS was unwilling to forgive a "surprise" bill. I'd be inclined to assume that they're willing to stretch quite far to avoid that, given that they seem to be very good about it. But I'd also not want to stake my business on hoping Amazon will be charitable about something like that.
I agree with your larger point, but you're going to be surprised by a $500 bill, not a $500,000 bill.
Did you set a billing alert? Google BigQuery has proactive "cost controls" that won't let you go overboard, whereas billing alerts are just that - alerts.
They offer billing alerts, have a budget tracker thingy, but have no actual automated caps. Closest thing you can do is write one yourself using the AWS APIs.
Not a bug: this has been Amazon's account philosophy across all its systems from very early on. Some of the initial designers at Amazon knew families where multiple people shared one e-mail address but wanted separate accounts for shopping.
Multiple accounts per e-mail address was a conscious design decision for all Amazon systems.
There was no way at the time for me to a) see that I had a second account associated with my email address, b) reset the password for the second account without going through support, or c) merge the two accounts into one, even with support's help.
It would be great if when entering your CC information, they let you set a default monthly cap for all your projects, to be overridden at the project level if you suddenly need to spend more.
Tested out the free tier of Amazon, but didn't realize that spinning an instance down and back up within the same hour would ding me for two billable hours.
Even now, when I use it for testing and I'm being fairly careful, I'll get a $3 bill at the end of the month. I was trying to set up alerts, but their alerts and dashboard, while I'm sure super capable, are a bit overwhelming for a new user.
It's why all of my projects sit on DO and I only really use Route 53 from AWS.
> What do Lightsail static IPs cost?
> They're free in Lightsail, as long as you are using them! You don't pay for a static IP if it is attached to an instance. Public IPs are a scarce resource and Lightsail is committed to helping to use them efficiently, so we charge a small $0.005/hour fee for static IPs not attached to an instance for more than 1 hour.
That's $3.60/month... seems similar to mail-in rebates—many people forget, and accidentally give Amazon some (mostly) free money.
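For the record, the arithmetic behind that figure:

```python
# Lightsail's quoted fee for a static IP left unattached (from the FAQ above)
HOURLY_FEE = 0.005  # USD per hour

def monthly_unattached_ip_cost(hours: int = 24 * 30) -> float:
    """Cost of leaving a static IP unattached for a 30-day month."""
    return HOURLY_FEE * hours

print(round(monthly_unattached_ip_cost(), 2))  # the $3.60/month above
```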
Also, from later in this thread:
> FWIW, bandwidth overages at Linode and DO are $0.02 per GB, LightSail is $0.09.
It's these seemingly-tiny (but not-so-tiny when I'm running 60-70 VPSes) costs that kill when you get your first bill after a large traffic event.
They don't for traffic. So you do run a small risk of something happening.
However, Linode at least pools your VPSs so if you have 100 of them and 20 of them "go over" the cap you still are often okay because of the other 80 that didn't "go over".
The truth is none of these providers provide truly hard caps. The difference is with Amazon/Google/Azure/etc you can realistically get hit with a 4 figure bill if something goes seriously wrong.
With DO/Linode/Vultr I've never seen an accidental mistake cause that sort of thing, or even an active DoS/DDoS attack cost you more than $100 in overages before they started null-routing you.
And this is exactly why we're running out of publicly available IPv4 addresses.
Will stick with Linode.
The only bad experience I had was with SES - we got blocked by high bounce rate, sending to a test email that did not exist (specifically because it was a test email). It took two days for the special unblock team to unblock us, even though the general support guy I was talking to had responded a couple of times in that wait period.
The right thing to do is to just discount the product and just re-use IPs unless otherwise reserved. Mail in rebates can be ignored or "lost in the mail" and seems to happen often enough for me to have lost trust in them. I have little control over what the vendor does, so I would rather avoid vendors who think screwing with me is ok.
I don't buy products with mail-in rebates, and now I won't buy into Lightsail (presuming this thread is accurate and Amazon doesn't fix it).
> due to the shortage of IPv4 addresses available, we charge $0.006 per hour for addresses that have been reserved but not assigned to a Droplet. In order to keep things simple, you will not be charged unless you accrue $1 or more.
The Zeno's paradox in action - once you reach half your limit, the speed is cut in half. "Zeno" throttling if you will. :)
It feels like the larger thing they're trying to solve, that I expect actually stops the majority of people who don't choose AWS, is the complexity around setting up VPCs/SecurityGroups/Subnets/etc.
Most providers in the VPS space already charge overages for bandwidth, and most of them don't support suspending the account vs just billing you.
I am currently using Linode, but would move to AWS if they offered a cap. Two years ago I signed up for the AWS free tier and forgot about it (didn't use it at all). It ended up costing ~$60 before I found out, and since then I've avoided it.
Setting cost caps on more complex applications that use a lot of different AWS services would get complex in a hurry and could easily have unintended effects.
As someone else wrote, I view this as primarily a simple VPS for people who are already using AWS for other things. I suspect that AWS isn't really interested in being a VPS-only provider for the most price-sensitive customers.
So in theory (I haven't checked what the API gives you control over - so this may be worthless), you could monitor your instance (bandwidth, time up, disk usage, etc), and if things get out of hand, or approach your limit (whatever it is), you could use the API to say shut down or delete the instance, or throttle the bandwidth (maybe via firewall rules or something?).
Again - this would assume the API allows you to do this (and ideally from within the instance itself - which shouldn't be an issue, I wouldn't think). And again, it shouldn't take this much work (you're right, it should just be a simple control panel setting).
But maybe it's an option for those who have the skills to implement it?
I just took a quick look at the API docs - and while it doesn't look like you can mess with the firewall rules settings, everything else should be possible (get metric data, start/stop/reboot/delete instance, etc).
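A sketch of the budget check such a watchdog would need. To be clear, this is only the decision logic; a real version would feed it from the Lightsail API (e.g. boto3's get_instance_metric_data for the NetworkOut metric) and act with stop_instance, and the $0.09/GB rate is the overage figure quoted elsewhere in this thread:

```python
# Budget check for a self-monitoring Lightsail instance.
OVERAGE_PER_GB = 0.09  # USD, Lightsail's quoted transfer overage rate

def should_stop(transfer_gb_used: float, free_allowance_gb: float,
                max_overage_usd: float) -> bool:
    """True once accrued overage charges reach the chosen budget."""
    overage_gb = max(0.0, transfer_gb_used - free_allowance_gb)
    return overage_gb * OVERAGE_PER_GB >= max_overage_usd

print(should_stop(900, 1000, 5.0))   # still inside the free 1TB
print(should_stop(1100, 1000, 5.0))  # 100GB over is ~$9, past a $5 budget
```

Running this from cron inside the instance itself would work, as the comment suggests, though you'd want to trigger it well before the limit since metrics lag.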
Exactly! We have used AWS and DO a lot this past year. DO is great for smaller sites/api's and super easy to use. Their support is also outstanding.
AWS has tons of tools but many come at a cost. We are in the process of moving a couple of sites off of AWS and onto a LiquidWeb dedicated box. We will be paying much less and the LW dedicated server is more than enough for what we need.
AWS is great for spinning up and scaling instances quickly and comes with a ton of other tools. At the end of the day however it is not always the most cost effective or even best offering for most sites/apps.
Well yeah, that's how they pay for those tools -- they charge for them.
>For every Lightsail plan you use, we charge you the fixed hourly price, up to the maximum monthly plan cost.
Wording implies the monthly pricing is a 'maximum' price.
> Data transfer overages above the free allowance are charged at $0.09/GB.
On the $5 instance the second TB (at $90) is 18 times as expensive as the instance itself with the first TB included.
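Spelling out that ratio:

```python
INSTANCE_PRICE = 5.00   # the $5/month Lightsail plan, first 1TB included
OVERAGE_PER_GB = 0.09   # quoted overage rate

second_tb = round(1000 * OVERAGE_PER_GB, 2)  # cost of the next 1000GB
print(second_tb)                   # $90 for the second terabyte
print(second_tb / INSTANCE_PRICE)  # 18x the instance price
```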
Hetzner charges 1.36€ per Terabyte of traffic, and with most servers, gives you 10-20TB included.
I’ve heard people talk about the ridiculous traffic costs of AWS, but this is an entirely new dimension of expensive.
That amount of traffic is more than 2000 EUR per month at AWS. Of course this is comparing entirely different things, but still, if you have significant traffic and can't avoid it with a CDN or something like that, AWS (as well as Google and Microsoft clouds) get seriously expensive.
Edit: just did some research; in many cases this isn't true.
Plus there's nothing stopping someone breaking into your account and upgrading it in all kinds of evil ways (which has been a huge hassle with AWS tokens being stolen from e.g. Github).
The big question here is what to do with stateful data. Would you accept an immediate deletion of all of your S3 data? RDS instances and snapshots?
They could easily cut off public access to those resources while charging you storage fees. Obviously with any kind of ceiling there are certain details that need to be ironed out (i.e. most people wouldn't want configuration information or data to be lost, but they likely would want VPS to be taken offline and other services to be suspended).
Ultimately for most startups, small businesses, and individual developers being able to say "My AWS bill cannot exceed $1000, period" is a powerful tool. Right now if a billing alert fires at 1am, you may not see it until 9am and by then you're already in huge trouble.
Lambda isn't available in all zones, and not everyone has the time/ability/knowledge to set up such a thing. I'm sure I could do it, but only if I were to spend a few hours researching it, and probably a few nights working on such a solution. I'd also have to trust that I didn't mess it up -- I'd hate to have a bunch of traffic and NOT properly prevent the traffic.
This is the kind of thing that Amazon surely could provide easily if they wanted to.
It's like if your phone company didn't give you an option to limit your spending (prepaid), but said that you could use their arcane API to tell them each month to start/stop service. That's great, but not really very nice to customers.
The monthly budget cap should be allocated to existing storage first. This covers the existing data for the next month. If there is any free limit left, it could be used for new data writes + 1 month of storage, and/or running services. Once the limit is reached, then writes are blocked and services stopped.
The only situation where you would need to delete data is if you want to set a new monthly budget that is lower than your existing monthly storage-only bill - but the UI could just disallow this.
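That allocation scheme is simple enough to sketch (names hypothetical):

```python
def allocate_budget(budget: float, storage_cost: float) -> dict:
    """Allocate a monthly cap: existing storage is funded first, and the
    remainder is available for new writes and running services."""
    if storage_cost > budget:
        # The UI would disallow setting a cap below the existing storage bill.
        raise ValueError("cap below existing monthly storage cost")
    return {"storage": storage_cost, "available": budget - storage_cost}

print(allocate_budget(1000.0, 150.0))  # {'storage': 150.0, 'available': 850.0}
```

Once `available` hits zero, writes are blocked and services stopped; nothing already stored ever needs to be deleted.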
- Setup a HTTPS endpoint on the server that listens for an SNS notification and performs an action (e.g. backup ephemeral data to S3 and shutdown). I wrote the service in Go and the action is just a shell script but choose your favorite language.
- Setup an SNS subscription pointing to the service endpoint.
- Setup an SNS topic for the message.
- Set up an SNS notification in AWS billing. I use "When actual costs are equal to 100% of budgeted amount".
The problem is that it's necessary to lock down the endpoint listener as it will usually need root access in order to shutdown the machine. This can be done by using authentication on the endpoint, setting up a locked down user to run the service under and granting that user the ability to run /sbin/shutdown in the sudoers file.
There are probably nicer ways to do it, but this does work to limit my spend on each instance.
You can also add AWS API calls to delete any other costly related resources (static IPs, load balancers etc.)
I've been thinking about writing a more modular and robust app that handles multiple instances etc but most of my servers are now in GCE so I don't really have the need.
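A minimal sketch of the endpoint's dispatch logic in Python (the original service was in Go). This only classifies the incoming SNS message; a real service must also verify the SNS message signature before trusting it, and would then run the backup-and-shutdown script:

```python
import json

def action_for_sns(body: str) -> str:
    """Map an SNS HTTP POST body to an action for the watchdog service."""
    msg = json.loads(body)
    if msg.get("Type") == "SubscriptionConfirmation":
        return "confirm"   # fetch msg["SubscribeURL"] to confirm the subscription
    if msg.get("Type") == "Notification":
        return "shutdown"  # e.g. back up ephemeral data to S3, then shut down
    return "ignore"

print(action_for_sns(json.dumps({"Type": "Notification", "Message": "budget hit"})))
```

On the instance side, letting the locked-down service user actually power off the machine takes a sudoers entry along the lines of `svcuser ALL=(root) NOPASSWD: /sbin/shutdown` (username hypothetical), as described above.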
Some types of data transfer in excess of the data transfer included in your plan are subject to overage charges. Please see the FAQ for details.
The only overage charge I see is for data transfer. This isn't ideal, I'll grant you, but it's not the same as "various".
We've seen AWS accounts get broken into with stolen tokens, additional VPS's started, VPS upgraded, bandwidth consumed, etc. And while Amazon has been good with refunding the FIRST time, nobody wants to wake up to a 10K bill because your gitignore had a typo.
A ceiling or cap may even stop plan upgrades without an email confirmation. That would be hugely welcome, particularly in a world where bad guys are actively seeking out VPS to break into.
Wait what? That's getting ridiculous. Seven years ago they would have been on time, perhaps even considered early by some, but three years ago when looking for hosting providers I already laughed at the ones without v6 and moved on without a second thought. They weren't even the cheap ones.
Currently enjoying a €3/mo VPS at Pcextreme.nl with the same specs as the $5 Lightsail VPS. But with IPv6 of course.
Perhaps I've been spoiled with dual stack at home since 2009 from xs4all. Other Dutch ISPs promised it in (iirc) 2013 and every year since, but there has yet to be a second big one to offer it and other countries like Belgium surpassed us by now. Even Germany's Telekom is getting there.
The storage is cheap as balls but the transfer can fuck you.
No one with frequent, high-volume retrieval needs would be advised to use Glacier.
That's probably where most of their profit comes from. That option was most likely squashed from the highest authority.
This applies to any other provider, too. What you owe the provider is what you contracted to pay the provider (eg. by consuming services, or clicking the "upgrade" button in a web interface). It is independent of them actually taking the money.
limited liability, can't really beat it
amazon could completely negate this risk by requiring pre-payment for small/unknown operators, which is something a lot of people (myself included) desperately want them to provide.
I'm sure they've done their sums here, and have figured out the increased revenue from customers not being able to set a budget is more than their potential losses from deadbeats
the variable costs are basically zero, after all ( bandwidth and CPU time are worthless if not utilised)
Then I got the next month's invoice and it wasn't using my pre-paid services; instead it was billing full CPU usage, with no reserved instances applied.
After emailing support several times, they say it's my own fault for not using the correct instance type, even though it's identical to the one I pre-paid for. It may well be my error, but it was caused by them, since I never asked for my servers to be moved. It's been an expensive and time-wasting experience -- will never use them again.
Am currently evaluating GKE (even more expensive) and DigitalOcean.
says Someone1234. Should we believe it?
Of course all things are not equal (i.e. CPUs, SSDs, bandwidth, etc).
| Provider  | RAM   | Cores | Storage  | Transfer |
| --------- | ----- | ----- | -------- | -------- |
| LightSail | 512MB | 1     | 20GB SSD | 1TB      |
| DO        | 512MB | 1     | 20GB SSD | 1TB      |
| VULTR     | 768MB | 1     | 15GB SSD | 1TB      |

| Provider  | RAM | Cores | Storage  | Transfer |
| --------- | --- | ----- | -------- | -------- |
| LightSail | 1GB | 1     | 30GB SSD | 2TB      |
| DO        | 1GB | 1     | 30GB SSD | 2TB      |
| VULTR     | 1GB | 1     | 20GB SSD | 2TB      |
| Linode    | 2GB | 1     | 24GB SSD | 2TB      |

| Provider  | RAM | Cores | Storage  | Transfer |
| --------- | --- | ----- | -------- | -------- |
| LightSail | 2GB | 1     | 40GB SSD | 3TB      |
| DO        | 2GB | 2     | 40GB SSD | 3TB      |
| VULTR     | 2GB | 2     | 45GB SSD | 3TB      |
| Linode    | 4GB | 2     | 48GB SSD | 3TB      |

| Provider  | RAM | Cores | Storage  | Transfer |
| --------- | --- | ----- | -------- | -------- |
| LightSail | 4GB | 2     | 60GB SSD | 4TB      |
| DO        | 4GB | 2     | 60GB SSD | 4TB      |
| VULTR     | 4GB | 4     | 45GB SSD | 4TB      |
| Linode    | 8GB | 4     | 96GB SSD | 4TB      |

| Provider  | RAM  | Cores | Storage   | Transfer |
| --------- | ---- | ----- | --------- | -------- |
| LightSail | 8GB  | 2     | 80GB SSD  | 5TB      |
| DO        | 8GB  | 4     | 80GB SSD  | 5TB      |
| VULTR     | 8GB  | 6     | 150GB SSD | 5TB      |
| Linode    | 12GB | 6     | 192GB SSD | 8TB      |
Dear Vultr Customer,
Including pending charges, your account is carrying a $5.94 balance.
In order to cover your current balance and your estimated monthly costs, our billing system will automatically deposit $275.00 from your payment method on file in 24 hours.
Usually, when I receive an email like this, the amount is equal to the monthly bill of the instances I have active at the moment.
Maybe a bug in their billing software or something...
You are not being "charged" per se. The amount is transferred to your account and is there as credit until you spend it.
Sorry for nitpicking, but it is an important point.
You are being _charged_
- This is a charge against your card
- They are not a bank, so the money in your "account" with them is just an unsecured, general liability to you. If they go bust, they owe you money but you will never see it.
- If you want to withdraw that money from your "account" and they refuse, then your options are pretty limited.
Once they take it from your payment method, it becomes their money, not yours. That's a charge.
I've never been charged on vultr, as I use bitcoin to always pay my bills (which is push only). I think they have stopped accepting that for new customers due to abuse though, which is quite a shame (but understandable).
That said, it is an odd message to send out, and I had my concerns when I first received a similar email. Maybe it's something they should look into altering.
EDIT: Anyone care to explain the reasoning behind a downvote?
Your post reminds me of the burden of proof fallacy, in which you create work for other people by asking questions you could easily answer yourself.
If you or anybody posts the comparison you requested, it will surely be upvoted.
NodeServ: $1.25/month = 50GB HDD, 512MB RAM, 1 core, 1TB bandwidth, location in Jacksonville.
Host.us: $6/month = 150GB HDD, 6GB RAM, 4 cores, 5.12TB bandwidth, location in Dallas.
Both deals found on LowEndBox.
That's not to say getting a 8GB openVZ vps for $4 a month isn't an amazing deal, but just that there are caveats.
For the most part it's reasonable, but there's a freaking litany of reasonable things you're not allowed to run, including IRC, audio/video streaming, game servers, and so on.
Why on earth do you sell me X block of resources for Y$/month if you're going to tell me what I can and can't do with them? Surely unreasonable use would be covered by resource limits already in place?
* they oversell, so they assume that only a small fraction of web servers will consume 100% of resources, while almost every torrent client will consume 100% of the bandwidth. There is nothing wrong with overselling hosting within a reasonable margin, but most of the people here want to run more than a LAMP stack on the server.
* they get too much admin overhead replying to Tor "abuse" letters etc., so they just decided to deal with it in the simplest way possible.
I guess both factors contribute equally.
I doubt CPU or RAM allocation is the issue here given AWS already have a good CPU time credit system to manage it.
I have a bunch of sites running on it without stepping on each other, and I doubt that would be the case on AWS / Google / DO.
If I need an SSD, there are options too, though DigitalOcean is indeed one of the best if you need a cheap US-based server. If the location doesn't matter, EU, Russia, Ukraine have some great deals.
Example: $4.60/month = 40GB SSD, 1GB RAM, 3 cores, unlimited traffic @200Mbps.
They're also typically leased for at least a full month and can't be spun up/down on demand like you can with these services.
Plus they focus on large (>16GB) dedicated servers.
1. Mission-critical/Production ready reliability and communication (all maintenance and issues)
2. No unexpected termination of instances / Reasonable warning & mediation
3. Not overprovisioned / little concern of noisy neighbors
4. Tier 3/4 redundancies
5. Strong American coverage (each DC with Tier 3/4 level services)
6. No setup fee on new instances
7. 1-minute provisioning (simple creation of instances / no ticket needed for deleting resource)
8. Programmatic IaaS management including provisioning, DNS, and images
9. Quality resources - mostly Xeons not ARMs, local SSD not Ceph
10. Huge backing - they're not closing tomorrow & I wanted a #10
While OVH, Hetzner, Leaseweb seem like nice services, particularly for needs in Europe, I can't build an American-centric service on those, set it and forget it nearly as easily or worry-free as with DO/Linode/Vultr/Lightsail.
Price breakdown vs DigitalOcean, Vultr, Linode, OVH, and Online.net / Scaleway:
| Provider | RAM | Cores | Storage | Transfer |
| ---------------------- | ----- | ----- | ---------- | -------- |
| LightSail | 512MB | 1 | 20GB SSD | 1TB |
| DO | 512MB | 1 | 20GB SSD | 1TB |
| VULTR | 768MB | 1 | 15GB SSD | 1TB |
| Hetzner (virtual) | 1GB | 1 | 25GB SSD | 2TB |
| OVH | 2GB | 1 | 10GB SSD | ∞TB |
| Scaleway (virtual) | 2GB | 2 | 50GB SSD | ∞TB |
| Provider | RAM | Cores | Storage | Transfer |
| ---------------------- | ----- | ----- | ---------- | -------- |
| LightSail | 1GB | 1 | 30GB SSD | 2TB |
| DO | 1GB | 1 | 30GB SSD | 2TB |
| VULTR | 1GB | 1 | 20GB SSD | 2TB |
| Linode | 2GB | 1 | 24GB SSD | 2TB |
| Hetzner (virtual) | 2GB | 2 | 50GB SSD | 5TB |
| OVH | 4GB | 1 | 20GB SSD | ∞TB |
| Scaleway (virtual) | 8GB | 8 | 200GB SSD | ∞TB |
| Online.net (dedicated) | 4GB | 2 | 120GB SSD | ∞TB |
| Provider | RAM | Cores | Storage | Transfer |
| ---------------------- | ----- | ----- | ---------- | -------- |
| LightSail | 2GB | 1 | 40GB SSD | 3TB |
| DO | 2GB | 2 | 40GB SSD | 3TB |
| VULTR | 2GB | 2 | 45GB SSD | 3TB |
| Linode | 4GB | 2 | 48GB SSD | 3TB |
| Hetzner (virtual) | 4GB | 2 | 100GB SSD | 8TB |
| OVH | 8GB | 2 | 40GB SSD | ∞TB |
| Scaleway (dedicated) | 16GB | 8 | 50GB SSD | ∞TB |
| Online.net (dedicated) | 16GB | 8 | 250GB SSD | ∞TB |
| Provider | RAM | Cores | Storage | Transfer |
| ---------------------- | ----- | ----- | ---------- | -------- |
| LightSail | 4GB | 2 | 60GB SSD | 4TB |
| DO | 4GB | 2 | 60GB SSD | 4TB |
| VULTR | 4GB | 4 | 45GB SSD | 4TB |
| Linode | 8GB | 4 | 96GB SSD | 4TB |
| Hetzner (virtual) | 16GB | 4 | 400GB SSD | 20TB |
| OVH | 8GB | 2 | 40GB SSD | ∞TB |
| Scaleway (dedicated) | 32GB | 8 | 50GB SSD | ∞TB |
| Online.net (dedicated) | 32GB | 8 | 750GB SSD | ∞TB |
| Provider | RAM | Cores | Storage | Transfer |
| ---------------------- | ----- | ----- | ---------- | -------- |
| LightSail | 8GB | 2 | 80GB SSD | 5TB |
| DO | 8GB | 4 | 80GB SSD | 5TB |
| VULTR | 8GB | 6 | 150GB SSD | 5TB |
| Linode | 12GB | 6 | 192GB SSD | 8TB |
| Hetzner (virtual) | 32GB | 8 | 600GB SSD | 30TB |
| Hetzner (dedicated) | 64GB | 8 | 1024GB SSD | 30TB |
| OVH | 8GB | 2 | 40GB SSD | ∞TB |
| Scaleway (dedicated) | 32GB | 8 | 50GB SSD | ∞TB |
| Online.net (dedicated) | 64GB | 8 | 1500GB SSD | ∞TB |
There is also no setup cost for these dedicated servers.
But Amazon will lower your CPU quota as well, if you use it for too long, so it’s not like the numbers of Amazon themselves even really mean anything.
The same story with storage performance – Amazon’s is horrible, but it’s network-attached.
It's a small business run by a few people (though it's been around for 10 years, so a pretty stable one), which has the pros and cons that go along with that. The tech staff is good, techies who know what they're doing and generally assume that you do also. So if you send a request or problem report, you aren't going to get a form reply that asks if you tried turning it off and back on again. But it's just a handful of people, so if there's a major issue, fixing things is pretty manual and slower than at places that have armies of 24/7 devops staff.
One specific thing I really like about it: it gives you SSH access to a proper text console, in case you want to install a custom OS, recover a broken install, etc. Most VPS providers give you console access, but most do it via VNC in the browser, which is not my favorite way to do sysadmin work.
Do they let you do that? They don't say so on the purchase page.
Also, how is uptime relative to vultr?
I haven't used vultr so can't comment on that.
You can install a custom OS. But it can be difficult to use an installer we don't provide right now because we only allow serial console access, not VNC. This means most installers won't work out of the box. Worst case you can dd an image to the disk using ssh from the rescue image.
FYI we don't do overage charges right now. For network, if we can't throttle your traffic down then we will shut your service off.
Our blog is a little misleading these days in that for downtime for individual servers, we started emailing customers directly rather than posting to the blog. This is because we want to make sure customers see the downtime notice. We also got confused responses sometimes to the blog wondering whether a given service was affected or not and if we email directly there is no such confusion.
I think our worst case downtime barring about 5 services this year has been the following:
* 0.75 hour network outage, unplanned - 2016-03-16 (gave proportional credit)
* ~2.5 hours unplanned downtime due to hardware failure requiring new components - 2016-04-03 (gave 15% month credit)
* 2.6 hours downtime from start of maintenance window, planned due to security upgrade - 2016-07-23 (gave proportional credit)
* 2 hours or less downtime, planned due to security upgrade - sometime around 2016-09-01 (gave proportional credit)
* 1.5 hour network outage, unplanned - 2016-09-09 (gave proportional credit)
* 1.3 hour network outage, unplanned - 2016-11-06 (gave proportional credit)
* 2.04 hours downtime from start of maintenance window, planned due to security upgrade - 2016-11-18 (gave proportional credit)
This is a total of up to 12.69 hours downtime over the year so far, assuming downtime started at the beginning of maintenance windows (it usually started after). Of that, 6.05 hours, or less than half, was unplanned.
So far this year there have been about 336 days, or 8064 hours. Losing 12.69 of those 8064 hours works out to 99.84% uptime overall, which is significantly lower than we would like. For some servers the uptime has so far been significantly better, in that there were no hardware failures, one of the security upgrades was unnecessary, and the turnaround time for the remainder of the security upgrades was much faster than for this particular server.
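For anyone checking, the figures quoted reconcile:

```python
planned = [2.6, 2.0, 2.04]        # hours, from the list above
unplanned = [0.75, 2.5, 1.5, 1.3] # hours
total = sum(planned) + sum(unplanned)
hours_so_far = 336 * 24           # ~336 days elapsed in the year

print(round(total, 2))                             # total downtime, hours
print(round(sum(unplanned), 2))                    # unplanned share
print(round((1 - total / hours_so_far) * 100, 2))  # uptime percentage
```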
For this particular server, the largest downtime contributors were security upgrades and network outages in that order. For network downtime, we got around to setting up our second upstream but there's a number of single points of failure we should take care of in 2017. There is also some additional scripting we should probably do that would cut down on the network downtime a lot, such as automatically taking down BGP if connectivity beyond the first hop is lost.
For the security update downtime, I think our most realistic bet right now is to get ourselves on the latest version of Xen once it comes out. That will hopefully have a stable implementation (not a technology preview) for live patching.
I downvoted you because you talked about your downvotes.
Don't interrupt the discussion to meta-discuss the scoring system.
Write your post and live with the results.
Really, it just seems like AWS is fighting DO on this one, to get a share of their profits. My impression is they're looking for DO & AWS customers to stay on an Amazon-only stack. The comparison made by the commenter above actually makes me consider Vultr and Linode :)
What if I need a lot of CPU power, but not much bandwidth? What if I want lots of RAM, but don't need much disk space? What if I'd rather have an HDD with more storage than a faster SSD? There's nobody offering a "configure your own VPS specs" plan.
I have no idea why people think Amazon pricing is worth it.
I'd be grateful if you used my link https://vpsdime.com/aff.php?aff=1272