Hacker News
Amazon LightSail: Simple Virtual Private Servers on AWS (amazonlightsail.com)
1142 points by polmolea on Nov 30, 2016 | 614 comments

Just to be clear: This service is offered by Amazon/AWS themselves, it isn't a third party. That's a question I had when I first clicked, which is why I am answering it here.

One big "gotcha" for AWS newbies, and I can't tell whether this addresses it: does Lightsail set, or allow the user to set, a cost ceiling?

AWS have offered billing alerts forever. They'll also occasionally refund unexpected charges (as a one-time thing). But they've never offered the hard "suspend my account" ceiling that a lot of people with limited budgets have asked for.

They claim this is a competitor for DigitalOcean, but with DO, what they say they charge is what they actually charge. Looking through the FAQ, I'm already seeing various ways for this to exceed the supposed monthly charges listed on the homepage (and no way to stop that).

Why even offer a service like this if you cannot GUARANTEE that the $5 they say they charge is all you'll ever get charged? How is this different from AWS if a $5 VPS can cost $50, or $500?

That's what Amazon is missing. People want ironclad guarantees about how much this can cost under any and all circumstances. I'd welcome an account suspend instead of bill shock.

This is exactly it. 5-6 years ago (I think), I signed up for an aws account under the "free" or educational or something tier. AWS was newish at the time, and I wanted to learn about it.

Via some accidental clicking in the control panel (trying to get an IP address for the instance, I think?) I ended up getting a bill from them for over $100. Which, to me at the time, was a huge amount of money.

It put me off of AWS forever. I don't ever want something that tells me how much they're going to charge me after I have already given them my credit card information.

edit: they did credit me back when I complained, but that doesn't matter. The risk to me wasn't/isn't worth it.

100%. I can't stand it. It's unlimited liability for anyone that uses their service with no way to limit it. If you were able to set hard caps, you could have set yours at like $5 or even $0 (free tier) and never run into that.

One of my services had a Google BigQuery "budget" set at $100. One of our test machines went haywire and continuously submitted a bunch of jobs. The "budget" turned out only to be an alarm, and even that they sent us 8 hours late, after $1600 of charges had been racked up. I responded in 20 minutes and shut it down. Google insisted we pay the full bill. After I wrote up a blog post on the situation and had the "publish" button warmed up, they finally relented and refunded us for the amount of time their alarm was delayed. Absolutely ridiculous that's not their policy to begin with...

Me too. I want protection against my own stupidity, as well as sheer ignorance of the charges. This put me off AWS for years, and I was deeply shocked there was no one-click 'suspend at $X'.

For a company that supposedly puts the customer first, this is appalling.

It's difficult to come up with a good model for how a billing ceiling would work in software as a service. A good start would be to fully specify what behavior you desire when an account hits its billing limit. Are you expecting everything to keep working like normal while the cloud provider pays the bill for those resources, or are you expecting the provider to fully shut everything down in a way that prevents the accrual of further costs, or something in between?

There are a number of resource types that, simply by existing, will accrue costs. A lot of them, actually. On AWS that includes things like running EC2 instances, EBS volumes, RDS databases and backups, DynamoDB tables, data in S3 buckets, and more. The question is what should happen to these resources upon hitting a billing ceiling?

Should EC2 instances be terminated (which deletes all data on them), DynamoDB tables deleted, S3 data erased, RDS databases deleted? If that was the behavior, it would be an extremely dangerous feature to enable, and could lead to catastrophically bad customer experiences. This is a nonstarter for any serious user.

Conversely, if you expect those resources to continue to exist and continue operating, then that's basically expecting the cloud provider to pay your bill. The provider will then have to recoup those costs from other customers somehow, and so this option sets poor incentives and isn't fair to others. If you expect your account to remain open the following month, you'd have to settle the bill, and we're back to square one.

AWS gives people tools to tackle this problem, such as billing alerts. These can notify you over SMS, email, or programmatically when you hit an "$X this month" billing threshold, and then you can decide what to do. Since these events can be processed programmatically, it's possible to build a system that will automatically take whatever action you'd like AWS to take, such as shutting things down or deleting resources.

If you think all of this through, it's really hard to come up with an approach to billing limits that's fair and a good experience, so I think it's reasonable for cloud providers to give billing threshold alerts while leaving the choice of what to do in the hands of the customer.
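For what it's worth, the "process the alert programmatically" approach described above can be wired up as a CloudWatch billing alarm publishing to an SNS topic, with a Lambda function subscribed to it. A minimal sketch in Python, assuming a hypothetical `auto-stop=true` tag marks the instances you're willing to have stopped (the boto3 calls are only reached when the alarm actually fires):

```python
import json

def parse_alarm(event):
    """Pull the alarm name and new state out of an SNS-delivered
    CloudWatch alarm notification (the SNS message body is JSON text)."""
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    return message["AlarmName"], message["NewStateValue"]

def handler(event, context):
    """Lambda entry point: stop running instances carrying the
    hypothetical tag auto-stop=true when the billing alarm fires."""
    name, state = parse_alarm(event)
    if state != "ALARM":
        return f"{name}: state {state}, nothing to do"
    import boto3  # imported here so the decision logic above stays testable offline
    ec2 = boto3.client("ec2")
    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag:auto-stop", "Values": ["true"]},
                 {"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if ids:
        # Stop, not terminate: EBS-backed data survives; only storage keeps billing.
        ec2.stop_instances(InstanceIds=ids)
    return f"{name}: stopped {len(ids)} instance(s)"
```

Stopping (rather than terminating) keeps EBS volumes and their data; the per-hour instance charge ends while the storage charge continues.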

The correct answer, as always, is to ask the customer.

Let's take a simplistic example and say you're paying per gigabyte. You decide you're willing to pay up to $X, and Amazon tells you ahead of time how much your $X will buy you, and you accept.

One type of customer will be using that storage to store priceless customer photos. Even if the customer ends up deleting the photos, it has to be your customer who makes that decision - not you, and not Amazon. You tell Amazon that you'd like an alarm at $X-$Y, but that if you hit $X, keep going, at least until you hit $X+$Z.

Another type of customer will be using it to store a cache copy (for quicker retrieval) of data backed up in a data warehouse somewhere. You tell Amazon that you'd like a policy which automatically deletes all the oldest data, to guarantee to stay under the limit.

Yet another type of customer would rather keep their old data and just return an error code to the user for stuffing too much new data into too little storage, so basically, guarantee to stay under the limit, and guarantee never to delete data.

You can't solve billing until you communicate with your customers and ask what they want.

And the "correct answer" sometimes leads you to realising "Hang on, I just asked the wrong question to the wrong people".

So let's for a moment assume you talked to a large cohort of customers, and found a bunch of "types" including the three you list and many, many more (inevitably, at AWS's scale).

You then need to make some business decisions about which of those "types" are most important to you, and which are way less profitable to spend time addressing.

So of course you solve the big pain points for your customers spending tens or hundreds of thousands of dollars per month before you prioritise the customers worried about going over a budget of tens or hundreds of dollars a month.

What would that solution look like? It'd have ways for customers with hundreds or thousands of services (virtual servers, databases, storage, etc.) to make all their own decisions about alarms, alerts, and cost ceilings - and tools to let them decide how to respond to costs, how to manage their data availability, how to manage capacity, when to shut down services or limit scaling, and what can and cannot be deleted from storage. It would also 100% need to allow for practically unbounded capacity/costs for customers who need that (think AliExpress on their "Singles' Day" event, where they processed $1 billion in sales in 5 minutes). All this would need - for the $100k+/month customers - to be machine-drivable and automatable, with extensive monitoring and reliable alerting mechanisms - and the ability to build as much reliability and availability into the alerting/reporting/monitoring system and the automated provisioning and deprovisioning systems as each customer needs.

And at least to a first approximation - we've just invented 70% of the AWS ecosystem.

You might think Amazon don't cater to people who want hard $5 or $70 per month upper limits on their spending. You're _mostly_ right. There are many other people playing in that space, and it's _clearly_ not a high priority for Amazon to compete for the pennies a month available in the race-to-the-bottom webhosting that people like GoDaddy sell for $12/year.

The thing to think about is - "who does Amazon consider to be 'their customers'?". I think you'll find for the accounts spending 7 figures a year with AWS - billing _is_ "solved". The rest of us are on the loss-leader path (quite literally for the "free tier" accounts) - because Amazon only need to turn a few tenths or hundredths of a percent of "little accounts" into "their customers" for it all to work out as spectacularly profitably as it is doing right now.

"and it's _clearly_ not a high priority for Amazon to compete for the pennies a month available in the race-to-the-bottom webhosting that people like GoDaddy sell for $12/year."

Except that that's what this announcement is.

Which makes me think this may be Amazon's fix to runaway billing - if you don't have the resources to pay for mistakes[1], stay in the per-month kiddie pool and don't play with the heavy machinery.

[1] I started to add, "or trust yourself not to make them", but that's silly, because mistakes will happen.

I'd guess it's more to scoop up mindshare and make getting started easier, which almost assuredly leads to future upsells. That developer who starts prototyping a project on AWS instead of DigitalOcean now might make them $$$$ they otherwise wouldn't have down the line when that person needs to scale and doesn't want the huge pain of switching providers.

I don't disagree with your details, but you're arguing in a circle (here and in another similar comment).

Let's assume, based on the evidence at hand, that Amazon is rolling out Amazon Lightsail, and that as such, they're willing to do work (create business plans and write software) to court the $5/month market. In that case, it's a relevant comment for people to write "I can afford $5/month, or even $20, but I can't afford unlimited liability, even with what I know about AWS customer service, so I cannot use this product." It's relevant because it suggests that there's anxiety that is preventing uptake, which can be solved by a combination of writing software and internally committing to eat the loss if the software is imperfect (as others have said, stopping service actually-on-time is harder than it sounds, but the provider can always just eat the loss, invisibly to the customer).

Your (probably-correct) observation that Amazon doesn't really care about the penny-ante user's money (in the short term) is beside the point.

It doesn't have to be an actual functional ceiling -- just a customer-facing cost ceiling. Things don't have to really "freeze". Each service could have some defined "suspend" mode that attempts to minimize Amazon's cost non-destructively. A "limp home" mode. And yes, it's possible that this mode for some kinds of services would be no different than the service's normal operating mode.

When a customer's ceiling is reached, their mix of services goes into limp mode. Things slow down, degrade, maybe become unavailable, depending on each service's "freeze model". Alarms ring. SMS messages are sent to emergency phone numbers. The customer is given a description of the problem and an opportunity to solve it -- raise the cap or cut services.

So wouldn't this cost Amazon money? Sure, but that's a cost of doing business. And as others in the thread have pointed out, the actual costs to Amazon are surely much lower than the "loss" they're incurring by not unquestioningly billing the customer. Especially since Amazon often refunds large surprise bills anyway.

If this were the official policy -- no dickering required -- there's a definite cohort of risk- and uncertainty-averse customers who would be willing to start using Amazon (or switch back).

> Each service could have some defined "suspend" mode that attempts to minimize Amazon's cost non-destructively.

That's what stopping instances _is_ already. You don't get charged for stopped instances which is a defining feature of Amazon's cloud. Very few providers actually offer this. Most just charge away for the compute even if the instances are powered off, Azure being one exception.

This whole "spin up compute and get charged a minimal amount when not in usage, but keep your working environment" model was pioneered by Amazon.

> So wouldn't this cost Amazon money? Sure, but that's a cost of doing business.

Why would Amazon spend a bunch of money, so that they can charge customers _less_ money, in order to keep customers who are cheapskates, and/or won't take the time to learn the platform properly?

Because they can get more customers that way, and having a hundred cheapskates might be more profitable than having ten non-cheapskates.

Raise the price by the actual cost of keeping the resources suspended for a week multiplied by the estimated probability of it happening. If that week passes with no additional payment then delete everything. The additional cost doesn't have to be applied to unlimited liability accounts. What's so difficult about that? There's not much worse customer experience than massive unexpected debt. Outages and data loss are minor problems compared to potential starvation and homelessness.

Oh give me a break man. Starvation and homelessness. Deleting customers data is something you don't do. If they can't pay you can write off the bill. But people have committed suicides because of data loss. The parent post nailed it.

People have committed suicide over debts too. I'm not suggesting Amazon gets rid of unlimited liability accounts, only that they give customers the choice.

If you're going to commit suicide if you lose your data, perhaps you shouldn't rely on the graciousness of a third party to save your data for free.

I think financial ruin was the reason not data loss.

> But people have committed suicides because of data loss.

Citation Required

"it's reasonable for cloud providers to give billing threshold alerts while leaving the choice of what to do in the hands of the customer."

But they don't give us the choice. I need to keep an eye out every moment of every day for an alarm, as hundreds or thousands of dollars rack up. That's the ONE THING I DON'T WANT. I'd take anything else (delete my data, lock everything, whatever) over charging me money I can't afford to pay.

I think it would be reasonable to put everything into a no access / deep freeze mode, until I pay up and choose to unfreeze. Would it cost Amazon that much to just keep my data locked for a couple of weeks while I sort out my storage? I'd even be happy for a reserved $100 or so to pay for keeping the storage going.

"I need to keep an eye every moment of every day for an alarm"

You know you can make a machine do that for you - right?

In fact all the tools Amazon would use to do this are available to you right now. Cloudwatch, SNS, and Lambda are 98% likely to be all you need - apart from the time to get it set up to do whatever you think is "the right thing".
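To make that concrete: the billing-alarm half can be set up against the `EstimatedCharges` metric AWS publishes (in us-east-1 only, and only after billing alerts are enabled in the account preferences). A sketch of the parameters; the actual `put_metric_alarm` call is left as a comment since it needs credentials, and the topic ARN shown is a made-up placeholder:

```python
def billing_alarm_params(threshold_usd, sns_topic_arn, name="monthly-spend-cap"):
    """Keyword arguments for CloudWatch put_metric_alarm on the account's
    estimated-charges metric. Fires once the running monthly estimate
    crosses the threshold, publishing to the given SNS topic."""
    return {
        "AlarmName": name,
        "Namespace": "AWS/Billing",
        "MetricName": "EstimatedCharges",
        "Dimensions": [{"Name": "Currency", "Value": "USD"}],
        "Statistic": "Maximum",
        "Period": 21600,           # the metric only updates a few times a day
        "EvaluationPeriods": 1,
        "Threshold": float(threshold_usd),
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],  # SNS fans out to email/SMS/Lambda
    }

# With credentials configured, creating the alarm would look like:
#   import boto3
#   boto3.client("cloudwatch", region_name="us-east-1").put_metric_alarm(
#       **billing_alarm_params(100, "arn:aws:sns:us-east-1:123456789012:billing"))
```

The SNS topic is what connects this to whatever automated response you build (a Lambda that stops instances, a pager, or just email).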

Well, except if something's gone wrong and my bills are suddenly shooting up, that's exactly the kind of time when some piece of software might misbehave, and fail to freeze everything. And it's not really very easy to test either.

This seems like the kind of thing you really want to get right, and it will be (I imagine) hard to get right. If it was easy, I would expect some company to offer it (along with, of course, a guarantee that if they mess it up, they will pay my bill).

Sure - and if you need that, buy that. WHM/cPanel and Plesk both let you have 100% guaranteed monthly costs with vendor-configurable responses to over-use of resources. You can get that for $5/month or less - just not from Amazon, because that's not what they sell.

Nobody rings up Caterpillar and complains about the costs of leasing/running/maintaining a D9 'dozer if they're doing jobs that only need a shovel and a wheelbarrow.

Tools for the job. AWS might not be the tool you need. Or might not be the tool you need _yet_.

I've been involved in renting heavy equipment, and it doesn't work like Amazon. No one gets unexpected massive bills, you agree before what the bill will be. I don't see the comparison you are trying to make.

If you leave it parked in a pit overnight that fills with water, you may find yourself on the hook for a big bill if your insurance finds you negligent. Likewise, if you neglect to perform required maintenance, you could find yourself on the hook for an expensive engine overhaul.

Even heavy equipment rentals can result in large unexpected bills if you don't pay attention to what you're doing.

Actually a good comparison -- if reddit users came around and smashed up the equipment, I would be OK as I would have insurance.

I need "reddit / DDoS insurance"

Sure - maybe I used a poor example. Apologies.


There's nothing "unexpected" or "unagreed beforehand" about Amazon's pricing or costs either. You order a medium EC2 instance and we all know exactly what the bill per hour will be.

There's nothing unexpected or unagreed-beforehand about the ordering/provisioning process. You ask AWS to start one, they'll start one. You tell them to stop it, they'll stop it. You get charged the known, agreed-upon rate for the hours you run it. You ask for 10, you get 10. There are even checks in place - the first time you ask for 50, you hit a limit which you need to speak to them to get raised before you can run up a larger-than-previously-seen bill.

Same with your earthmoving gear. You ring up for prices and they'll say "$200/day for a bobcat, $2500/day for a D9 - includes free delivery in The Bay Area!"

If you need one bobcat for one day at 1 Infinite Loop, Cupertino - and click their web order form and say you want 10 D9s for one day at 1 Infinite Loop, Cupertino (and happily click through all the never-read web interface confirmations) - you should 100% expect to get a bill for $25k, as well as dealing with clearing up after parking 10 'dozers in Apple's parking lot.

This is not "unexpected". From the vendor's perspective $25k is not "massive". You knew and agreed to the prices and had every opportunity to calculate what your bill was going to be.

If you were only expecting a $200 bill - that's kinda on you. The earthmoving guy has heaps of other customers who spend many times that every single week - and they all started out as some guy who ordered a $200 bobcat or $25k worth of D9s as a one-off. You are just another sale and another prospect in the top of the MRR funnel for him.

(Note: See holidayhole.com for a contemporary example of an unbounded earthmoving bill! ;-) )

The problem isn't starting up 250 servers.

The problem is someone putting up your hobby website on reddit when it's 2 in the morning your time, and you wake up the next day with a $10,000 bill.

It seems like a hard technical problem to shut down gracefully. But it's an easy product problem. Just suspend the account. AWS must do this already for some cases.

No one running a real business on AWS wants a hard ceiling instead of billing alerts and service by service throttling. Which Amazon has.

So, this is just the nuclear option for people's pet projects. It's not a bad thing to have but I wouldn't expect it to operate any differently than what would happen if you broke the TOS and they suspended your account.

> No one running a real business on AWS wants a hard ceiling instead of billing alerts and service by service throttling

That's absurd. Of course there are businesses that want hard ceilings. Perhaps not on their production website[1], but on clusters handed over to engineers and whatnot for projects, experimentation, etc.? I've seen these things lay around for months before they were noticed.

[1] Maybe you don't consider startups 'real' enough, but I can totally imagine early stage startups wanting limits on their prod website, too. You can't save CPU cycles for later consumption.

> No one running a real business on AWS wants a hard ceiling instead of billing alerts

Are you sure? I'd imagine many startups would rather take a few hours of downtime over being billed thousands erroneously. The latter could easily mean the end of the company, but the former, when you're just starting out, is not the end of the world by far.

My CFO and I run a real business and we'd like this. Especially being able to constrain it by sub/child accounts and/or departments/tags.

> No one running a real business on AWS wants a hard ceiling instead of billing alerts and service by service throttling. Which Amazon has.

I know startups that I could bankrupt with a few lines of code and a ~$60 server somewhere long before they'd be able to react to a billing alert if it wasn't for AWS being reasonably good about forgiving unexpected costs.

I'm not so sure no one running a "real business" would like a harder ceiling to avoid being at the mercy of how charitable AWS feels in those kinds of situations, or when a developer messes up a loop condition, or similar.

Perhaps not a 100% "stop everything costing money" option that'd involve deleting everything, but yes, some risks are existential enough that you want someone to figuratively pull the power plug out of your server on a second's notice if you have the option.

I meant a business that makes significant revenue and has enough users that downtime or data loss would be unacceptable.

If you can't afford downtime you probably can afford to wait for the alert and choose your own mitigation strategy. A system that can't tolerate downtime probably has an on-call rotation and these triggers ought to be reasonably fast.

If you can't react or can't afford to react, you probably can afford some downtime / data loss.

So the system doesn't need to have granular user defined controls. Just two modes. That was my point.

I think I triggered people with the phrase "real business" and I apologize for that.

> If you can't afford downtime

Only a tiny fraction of businesses can't afford downtime. A lot of businesses claim they can't afford downtime, yet don't insure against it, and don't invest enough in high availability to be able to reasonably claim they've put in a decent effort to avoid it.

In most cases I've seen of businesses that claim they "can't afford downtime", they quickly balk if you present them with estimates of what it'd cost to even bring them to four or five nines of availability.

> A system that can't tolerate downtime probably has an on-call rotation and these triggers ought to be reasonably fast.

A lot of such systems can still run up large enough costs quickly enough that it's a major problem.

> If you can't react or can't afford to react, you probably can afford some downtime / data loss.

I'd say it is the opposite: Those who can afford to react are generally those with deep enough pockets to be able to weather an unexpected large bill best. Those who can't afford to react are often those in the worst position to handle both the unexpected bill and the downtime / data loss. But of the two, the potential magnitude of the loss caused by downtime is often far better bounded than the potential loss from a crazily high bill.

Why don't those startups use something like cloudflare? It would stink to be at the mercy of the good graces of ddos purveyors to not attack.

Consider APIs, etc. A lot of businesses have needs where putting CloudFlare in between would be just as likely to block legitimate use. I love CloudFlare, and use it a lot, but it's not a panacea.

Plenty of teams will want this for dev/test.

> Should EC2 instances be terminated (which deletes all data on them)

You know exactly how much a paused EC2 instance charges you. The ceiling implementation could say, if the total amount charged so far this month, plus the cost of pausing the instance for the rest of the month, exceeds the ceiling, pause it now. So there's no data loss; the worst case is the customer's service is offline for the remainder of the month (or until they approve adding more money). At some point less than this number, start sending angry alerts. But you still have a hard cap that doesn't lose data.
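The arithmetic described above is simple enough to sketch. A hedged example, using illustrative rates rather than real AWS prices: pause when running for the rest of the month would break the ceiling, but the stopped-state cost (EBS storage and the like) still fits under it:

```python
def should_pause_now(spent_usd, hourly_rate_usd, stopped_rate_usd,
                     hours_left_in_month, ceiling_usd):
    """Decide whether to pause an instance right now.

    Pause only when continuing to run would pierce the billing ceiling
    AND pausing keeps the month's total under it (so there's no data loss,
    just downtime). All rates are illustrative inputs, not real AWS prices.
    """
    cost_if_running = spent_usd + hourly_rate_usd * hours_left_in_month
    cost_if_stopped = spent_usd + stopped_rate_usd * hours_left_in_month
    return cost_if_running > ceiling_usd >= cost_if_stopped
```

E.g. with $4 already spent, a $0.05/hour running rate, a $0.001/hour stopped rate, 100 hours left in the month, and an $8 ceiling, this returns True: running to month's end would cost $9, while stopped storage brings the total to only $4.10.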

It's not what a serious production user wants, but it's exactly what someone experimenting with AWS wants, either a running service that's looking at a cloud migration, or a new project/startup that hasn't launched yet.

Even a serious production user would generally have some threshold above which continuing and hoping AWS forgives the bill puts the company at greater risk than suspending service.

Granted, for a big company, that amount may be so big it's unrealistic to ever hit it.

How much does data at rest really cost Amazon? And how much of that cost is simply opportunity costs?

Most companies will hold onto your data for a time, then delete it afterwards.

This doesn't smell like technical concerns to me. It smells like sneaky Amazon-wants-to-make-more-money concerns.

From the other perspective it sounds to me like a sneaky "I want Amazon to bear the costs of me failing to pay for the resources I've agreed to costs for and consumed, and give me a bunch of 'grace time' to change my mind later" concern.

(<snarky> What's a gallon of milk on the shelf really cost Walmart? And how much of it is opportunity cost? If I usually buy 2 gallons a week - why can't I keep taking home a gallon every few days for a month or so after I stop paying, then cut me off afterwards? Sounds like a sneaky Walmart-wants-to-make-more-money concern.)

In the course of using Walmart in a fairly normal way to buy a gallon or two, I make a teeny mistake and take home 15 trailers of milk and they charge me full price for it.

If only Walmart would have a process in place to notice that I was ordering a spectacular and unusual amount of milk and save us all the trouble.

(Not entirely sure if you're agreeing or disagreeing with me here... ;-) )

So my local Walmart has a Netflix guy who gets 1000 trailers of milk twice a day, and the Dropbox and Yelp guys get a few hundred trailers a week each - and I know these guys from when I see them at the other Walmart in the next town over buying the same sort of amounts there as well. There's people like the Obama campaign who we'd never seen before who fairly quickly ramped up from a gallon a day to a pallet a day, then jumped straight to 50 trailerloads a week for six months, then stopped buying milk completely one day.

What's considered "normal", "unusual", or "spectacular" - and to whom?

What's normal isn't the problem. The problem is if said Walmart doesn't have a way for a new customer to call up and say "our staff is only ever authorized to order 1 trailer a day; if they ask for more, don't fulfil until you have written confirmation that we authorise the amount, or we won't pay".

Plenty of companies operate like that, and e.g. require purchase order ids and accompanying maximum spends issued for any expense over X, where X can be very low. I've worked for companies where it was 0 - every expense, no matter how low, needed prior approval from the CEO or finance director. Not just tiny companies either - one of the strictest such policies I've dealt with was with a company of more than a hundred employees.

Then I click the "I plan to scale this to solar-system size" button when deploying, instead of the default "I'm a fallible human being and prefer not to burn piles of money" setting.

Alternatively, if you don't want the ability and risks associated with being able to scale to solar system size - use a different vendor who isn't focussed on providing that.

Amazon AWS's "important customers" are not "fallible human beings" who plan to keep their monthly spend under $100. They'd perfectly happily inconvenience thousands of those users in favour of their customers who _do_ need solar system scalability. (And, to their credit, there's an abundance of stories around of people on typically 2-digit monthly spends who screw up and get a 4-digit bill shock - which Amazon reverse when called up and pleaded with.)

So they built their thing as "default unlimited". Because of course you would in their position - follow the money. When Netflix wants 10,000 more servers - they want it to "just work", not have them need to call support or uncheck some "cost safety" checkbox.

If you need "default cheap", AWS isn't the right tool for you. You can 100% build "default cheap" platforms on AWS if you've got the time/desire - well, down to the "I can ensure I don't go over ~$100/month" level. It's not real easy to configure AWS to keep costs down in the $5/month class - the monitoring and response system needs about twice that to keep running reliably.

I sometimes don't think people (especially people who "grew up" in their dev career with "the cloud") understand just what an amazing tool AWS is - and the fact that they make it available to people like me for hobby projects or half-arsed prototype ideas still amazes me. I remember flying halfway round the world with a stack of several-hundred-meg hard drives in my carry on - catching a cab from the airport to PAIX so I could open up the servers we owned, and add in the drives with photos of 60,000 hotels and a hardened and tested OS upgrade. Buying those 4 servers and the identical local setup for dev/testing, getting them installed at PAIX, and flying from Sydney to California to upgrade them was probably $30+ thousand bucks and 3 months calendar time. Now I can do all that and more with one Ansible script from my laptop - or by pointing and clicking their web interface.

AWS is an _amazing_ tool - talk to some grey-beards about it some time if you don't remember how it used to get done. But the old saying holds: "With great power comes great responsibility." If you don't want to accept the responsibility, use a tool with less power. Don't for a minute think Amazon are going to put an "Ensure I don't spend as much money with AWS as I might otherwise" option in there - if there's _any_ chance of it meaning a deep-pocketed customer _ever_ gets a false-positive denial from it. (Which, now I think about it, makes this new Lightsail thing make so much more sense...)

I completely understand that AWS is designed for scaling. But I've seen CS professors and students get burned and turned off AWS when Amazon "screwed them" for a few hundred bucks when they were planning on spending $50. This is not good customer development. The next massive AWS user is a grad student right now.

On another note it makes me sad that people are so willing to justify "costs" without showing or explaining exactly what they are. Everyone needs a costs audit, yesterday.

Also, how are our analogies alike? Milk is a consumable; data is information. Completely different usage pattern.

Finally, every internet service provider I've ever used that held data for some reason granted me a grace period, even if it was never officially stated. Sometimes you just have to ask nicely.

I do suspect that there is a set of circuit-breaker actions that could mitigate runaway bills mostly non-destructively. Stop writes to stateful storage. Stop all inbound/outbound data transfers. Terminate EC2 instances (which, as you say, would delete data, but that would generally be ephemeral data anyway). Halt tasks like EMR.

On the other hand, based on near-universal industry practice, there doesn't seem to be a huge demand for this. I suspect it may be better for everyone concerned to have heavy-duty users control their costs in various ways and for Amazon to refund money when things go haywire without bringing someone's service down.

Do you mean shutting down EC2 instances vs. terminating them? I believe a stopped EC2 instance only accrues costs for things like EBS.

If you are running a large team, and handing out resources to people you may not directly manage, it'd be nice to be able to enforce billing alerts on certain individuals. Is there a way to do that?

I've seen engineering teams hand out accounts to support teams for testing, and since the resources are not under the purview of the dev team things go unnoticed until someone gets the bill. Arguably there are better ways to handle these requirements, but it'd be nice if you could force people down the path of setting billing alerts because these individuals don't always realize that they are spending money.

I wonder if itemizing payments per service would help? You'd then only incur suspension for services you couldn't afford. Maybe this in combination with some form of prepays?

So maybe a couple of EC2 instances go down, but you pay for and keep S3, Dynamo, etc. At least enough to salvage or implement a contingency. You'd still owe Amazon the money.

It's tempting to wonder why Amazon would incur that risk, but it is a risk already inherent to their post-pay model, and it serves as good faith mitigation to the runaway cost risk that is currently borne by the customer.

Not perfect, but maybe a compromise.

They already have to make all these decisions for accounts suspended for non-payment (expired CC, etc). This would just add the option for customers that would prefer to be locked out rather than have unexpected charges.

Sounds like a good business idea. Pay $10-50 a month for us to automatically disable your services which go over your stated budget. Free tier at 1 service.

Until you get successful and Amazon copy your implementation.

It'd definitely be a gamble, but imagine pitching this idea at Amazon HQ. "We're going to roll out a fantastic new project to allow our customers to spend fewer dollars on our platform!"

Not saying Jevons paradox wouldn't kick in, but the friction of convincing businesses to work on tools to allow their customers to spend _less_ money is high.

That's not the pitch. The pitch is that you're making people feel safer about spending money on your platform.

This is one of the fundamental things that make any sort of market work. If it's not safe to participate, people won't.

That's what the existing resource limits are for. AWS wants the customers for whom $1000 is a rounding error first and foremost. The DO-style $5-$500 / month customer is gravy on top and probably a future upsell.

That's really not true at Amazon. It's deep in their core values to cut customer expenses whenever possible, regardless of competitive pressures. I can't explain why they don't offer this feature, but I doubt it's because anyone is scared of a feature that could potentially help customers.

If that was true for AWS, their prices would be far lower pretty much across the board.

One of the most amazing feats Amazon has pulled off is to convince people that AWS is cheap. They're cheap in the way that Apple are: only if you need a feature-set (or name recognition..) that excludes the vast majority of the competitors from consideration. If/when you truly need that, then they're the right choice. There are plenty of valid reasons to pick AWS.

But they're very rarely the cheap choice.

It is cheap compared to setting up and running your own datacenter, S3 replacement, etc. You have to keep in mind that that's where the story started, and continues to be: a lot of folks (the ones with $$$) see it as "no cloud vs. cloud", not "DigitalOcean vs. Amazon".

(EDIT: To be clear I agree with you that that's the reason people often think that AWS is cheap)

Yes, but that's a false comparison. It's cheaper to rent dedicated servers at any of several dozen large hosting providers than it is to use EC2 or S3, for example. For most people it's cheaper to rent racks and lease servers too (though depending on your location, renting dedicated servers somewhere else might be cheaper - e.g. racks in London are too expensive to compete with renting servers from Hetzner most of the time).

It's extremely rare, and generally requires very specific needs, for AWS to come out cheap enough to even be within batting range of dedicated solutions when I price out systems.

When clients pick AWS, it's so far never been because it's been cheap, but because they sometimes value the reputation or value the feature set, and that's a perfectly fine reason to pick AWS.

The point isn't that people shouldn't use AWS, but that if people think AWS is cheap, in my experience it usually means they haven't costed out the alternatives.

It's an amazing testament to the brand building and marketing department of Amazon more than anything else.

AWS cuts their prices all the time. Every couple months an email comes out announcing price cuts for S3 or various EC2 instances.

And yet they're still far above most of the alternatives.

E.g. my object storage costs are 1/3 of AWS. My bandwidth costs are 1/50th or so of AWS prices.

There are valid reasons to use AWS depending on what exactly you do, but it's extremely rare for price to be one of them.

I'm always fascinated when someone mentions a paradox, so I looked up "Jevons paradox".

The real economic term for this is elastic demand (specifically, relatively elastic demand). For example, microprocessor cost reductions made new applications possible; demand increased so much that the total amount spent on microprocessors went up for decades. An example of inelastic demand is radial tires: they last four times as long as bias-ply tires, but since this didn't cause people to drive four times further, the tire industry collapsed on the introduction of radials.

Does anyone know an example of an actual paradox? I've never found one, and I'm curious if they really exist.

Are you sure?

Jevons's Paradox is about demand increasing for a resource when it becomes more efficient to use, e.g., someone invents an engine which can go twice as far with the same amount of fuel but instead of halving the demand for fuel the demand actually increases.

If I recall, elasticity of demand has to do with the relationship to price. A very inelastic demand will cause people to consume at the same rate no matter what the _price_ is. It doesn't have to do with the efficiency at which the resource is consumed, as stated above. It's a subtle difference, but I think they're actually quite distinct concepts.

Actual paradoxes are common. Just consider the classic: "This sentence is false".

Yes I'm sure. When coal becomes cheaper as a fuel (efficiency being one path), if that opens up new applications or use by a broader set of customers, it's no surprise at all that total revenue could go up.

As for your example, most sentences are neither true nor false. Nothing interesting has a probability of 0.000 or 1.000.

"This sentence is false" is clever use of language, may be interesting to sophomore philosophy students while smoking weed, but its not useful and there's nothing paradoxical about it.

Regarding your question about a paradox, what would qualify to you as "an actual paradox"? What is the definition against which you try a contender? Feel free to look it up in a dictionary, but that probably won't help you generate a definition that makes "This statement is false" non-paradoxical. Note also that the etymology of paradox is "beyond strange", so historically the bar for qualifying is simply to be an idea or combination of ideas that is remarkably strange or surprising.

> most sentences are neither true nor false. Nothing interesting has a probability of 0.000 or 1.000.

I'll start by observing that surely you're talking about propositions, not sentences, nor utterances. Or at least you ought to be.

But more significantly, I'll note that most propositions are either true or false (under a given interpretive framework), but that as epistemologically-unprivileged observers, we must assign empirical propositions probabilities that are higher than 0 and lower than 1. Propositions like "I am a fish" or "You hate meat" or "If Rosa hates meat then Alexis is a fish" are either true or false, under any given set of meanings for the constituent words (objects, predicates, etc). I'm curious what probability you think applies to propositions like "2 + 2 = 4" and "All triangles have 3 sides" and "All triangles have less than 11 sides". I think there are very many interesting propositions that differ from these only in degree of complexity (e.g. propositions about whether or not certain code, run on certain hardware, under certain enumerable assumptions about the runtime, will do certain things).

Based on your very strange claim that all interesting sentences have non-zero non-unity probability, perhaps you're saying that you find theorems uninteresting, and moreover are only interested in statements of empirical belief, such as "I put the odds of the sun failing to rise tomorrow lower than one in a billion." In that case, I cannot imagine what statement interest would qualify as a paradox, except perhaps insofar as some empirical statements of belief are "beyond strange".

"This sentence is false" is a paradox under pretty much everyone's notion of a paradox.

> I'm curious what probability you think applies to propositions like "2 + 2 = 4" and "All triangles have 3 sides" and "All triangles have less than 11 sides".

Those are great examples, thanks. All true, and there's nothing interesting about them.

"All triangles have 3 sides" might be an uninteresting triviality, but "The sum of the squares of the lengths of the catheti is equal to the square of the length of the hypotenuse in a right-angled triangle" is neither trivial nor uninteresting and yet it has a probability of 1.

I love this example the most. It's the exception that proves the rule.

If you need to dig this hard to find something interesting with a probability of 1, that's pretty good evidence that the vast majority of interesting statements are not of the true/false variety.

Although I don't find it interesting, I am open-minded enough to ... embrace.. the .. uh.. diversity of the world, that allows some people, to find that interesting.

Pythagorean Theorem is "digging hard" and "not interesting"? Mind explaining?

The language that contains all Turing Machines that halt on all inputs is not decidable.


e^(iy) = cos(y) + i * sin(y)

Are those uninteresting trivialities to you?

Yes, exactly. And "this code has a mathematical error in it" is often interesting, often non-trivial, and often has probability 1 (and often probability 0).

And these things are exactly the sort of thing that "differ from [trivialities about triangles] only in degree of complexity".

Note that "all triangles have 3 sides" is probably an axiom, but "all triangles have less than 11 sides" is a trivial theorem.

An actual paradox is an apparent paradox that you don't know how to resolve.

"This statement is not true."

"This statement is true, not."

AWS actually has a "pillar" of architecting called "Cost Optimization". I found it interesting that at their GameDay at the AWS re:Invent conference they effectively punish solutions that cost too much and are inefficient.

Which is the point, right?

well, hell then you get what everyone wanted in the first place anyway :p

That would be Azure. They have hard limits and they shutdown when you get over the limit I believe

I keep wondering how accurate that is. On one hand, implementing it precisely must surely be difficult - if it weren't, everyone would have it. On the other hand, Azure clearly has issues calculating bills in real time. Heck, once I kicked off a bunch of large VMs for a day. After shutting them down I checked out the billing page - the cost estimates kept increasing every hour!

My hypothesis is that they don't really have it nailed down, but given the big margins they have, they can afford to let you use more resources than you pay for in the end.

I can only speak for my own experience, but it has saved my wallet a couple of times in the past few years. The spending limit is advertised fairly explicitly as just that: a customizable limit on your monthly spending [1]. If it doesn't work as intended and you end up with a bill larger than the limit you set up, you'd have a rock-solid case for disputing the extra charge.

[1] https://azure.microsoft.com/en-us/pricing/spending-limits/

I've been using a $150/mo spending limit on Azure for maybe two years now, and I can go on record stating that it is extremely accurate. They're really good about showing me a breakdown of exactly where every cent I'm spending is going over time, and the second it hits $150, Azure automatically shuts down all related services and stops charging me.

There are a few Azure services to which the spending limit does not apply, but as long as you know what they are, you can choose to use them of your own volition.

Azure tried to charge me 3200 euro/month for inbound traffic

If you think that solving this problem in a consumer-friendly way is beyond the wit of Amazon then I think you are wrong.

Completely. I will say amazon is usually very good about refunding money if you made a mistake or got hacked so in that sense their customer service is pretty good.

One time we had literally 1 million cloudwatch metrics get created because we were monitoring mongodb databases and a CPAN test was creating test DBs and not deleting them and we were not ignoring dbs created with names like test_* when creating the metrics.

Another time an outside developer committed a root credential on a public repo in github to a (basically) unused amazon account.

Both times they refunded the costs. Not sure if that was because we were paying tens of thousands a month to get this service, though!

That is charity. Why not just offer a <SHUTDOWN LIMIT EXCEEDED> checkbox, which would solve the issue entirely?

What thing would they shut down first?

Does this not work as a hard limit for App Engine? [0] It also says here [1] that 'Spending limits are set for paid apps and cannot be exceeded.'

[0] https://cloud.google.com/appengine/pricing#spending_limit

[1] https://cloud.google.com/appengine/docs/quotas

I'm not sure because we don't use AppEngine, but these seem like two important caveats:

> Important: Spending limits are not supported in the App Engine flexible environment

> You may still be charged for usage of other Google Cloud Platform resources beyond the spending limit.

Is the free tier even worth trying out? I went to look into it previously, but stopped before entering my CC info.

Can it incur charges even if you've set up the server to only be a free tier?

Depends what you want to get out of it. I used it for a year to host a crappy blog that got trivial amounts of traffic. It's perfectly fine for its primary use case of getting your feet wet in the AWS ecosystem, learning what all the different features are for, and how to manage them. If you want to do any "real work" you'll blow past what the free tier offers, but that's ok with me.

Yes, you can incur charges if you exceed what's covered by the free tier. Not all AWS services even have a free tier, and those that do are severely limited (1 micro instance, 5GB of S3 storage, etc). You're not off in some sandboxed environment where they just shut you down if you go over the limits. It's more like a monthly credit of $X for the first 12 months of your account. To cover my ass, I set a really low billing alert threshold. Like "email me if my monthly bill ever projects to exceed $1".
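That "projects to exceed" check is just linear extrapolation of month-to-date spend; a minimal sketch (the 30-day month is an assumption for illustration):

```python
def projected_monthly_bill(spend_to_date, day_of_month, days_in_month=30):
    """Linearly extrapolate month-to-date spend to a full month."""
    if day_of_month < 1:
        raise ValueError("day_of_month starts at 1")
    return spend_to_date / day_of_month * days_in_month

# $0.50 spent by day 15 projects to $1.00 for the month,
# which would trip a "$1 projected" alert threshold
print(round(projected_monthly_bill(0.50, 15), 2))
```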

Yes, when I was still in high school, I had set up a Munin graphing node on AWS. Well, what I didn't realize at the time was that Munin likes to use a lot of disk IO every time it writes out the graphs. AWS charges for I/O on their SAN (not on local disks, but the free tier doesn't come with local disks), and so I ended up with a $150 bill and only use them now for Route53 (DNS hosting, it is fantastic for that), and S3/Glacier for archival storage.

It is worth trying if just to gain knowledge on AWS. But for hosting, I'd say DigitalOcean

The newer EBS types don't charge per I/O operation. (Provisioned IOPS types charge for the speed you theoretically can do, but don't charge for the ops you actually do.)

https://aws.amazon.com/ebs/pricing/
https://aws.amazon.com/ebs/previous-generation/

Yes, I think Elastic IPs can incur charges in some way.


I had a personal $400 learning experience with Amazon. They did refund it. My last company had a low-5-figure surprise a few years ago. Some of that could be considered their fault (alerts were sent to someone on vacation), but again, the refusal to allow the option of a "hit a limit, pull the plug" option is what causes this.

Of course, there's also the opposite scenario "So here we were, having the best sales day in our history, and suddenly Amazon pulled the plug on our servers because we went over our authorized limit! We lost 5-figures of sales that day. Sure, they sent us warnings, but the alerts went to someone on vacation... but why wouldn't they let us exceed the limit for a bit before they pull the plug"

nobody says that the hard limit should be there for everybody, but I SHOULD be able to hit a checkbox that says "no matter what happens, runaway CPU, somebody DDOSing me with traffic, runaway disk, I do not want to be responsible for more than $x/month"

Personally this is the main reason why I have never considered using AWS for my small projects, but maybe this is an intentional choice by Amazon, to keep away "hobbyists" and only go after companies where an extra $1k in AWS bills this month is just a blip on the radar...

I'm pretty sure Azure had that the last time I used it.


They have billing alerts ('beta') and used to offer a prepaid account type that they have discontinued for new customers (some may still have grandfathered accounts).

Closest thing now is the MSDN credit. It doesn't require a credit card and the account auto-suspends when you hit it. Problem with the MSDN credit is that it is for non-production only (and they reserve the right to kill anything they consider "production").

They should really offer prepaid again or bill caps. But Microsoft is too busy copying AWS to consider that they can do better than AWS.

Azure has spending limits for the subscription which is what is being discussed.

> I can't stand it. It's unlimited liability for anyone that uses their service with no way to limit it.

It's not unlimited liability; most of their services have limits imposed. If you've scaled any service to thousands of machines, you'll quickly find out that they stop you at 20-30 machines or so. Then you have to contact support to get the limit increased.

Sure, you can still rack up an unpleasant bill. But there are limits :)

But even the default limits are high enough that there are plenty of companies that could at least in theory bankrupt themselves with it. Especially because there is no hard total cap, and so many services have high enough limits that you can get really nasty shocks if any one of them is maxed out. Even more so if you e.g. make use of different instance types (separate limits) in different regions (separate limits) and a wide range of services (separate limits).

And I've done work for clients that have requested really big increases because of both realistic and unrealistic expectations of handling traffic peaks. E.g. one client asked for an increase to 100 instances of 2-3 different types in a few regions to be prepared to handle a couple of days of high traffic. If said event had happened, they scaled it all up, and somehow didn't take them down again, it'd only take a few days of charges for them to be insolvent at their then-current funding level.

So you're right, there are limits, but limits or not doesn't matter if it's high enough that it can make you go out of business.

Which makes me wonder if anyone has ever gone out of business because AWS was unwilling to forgive a "surprise" bill. I'd be inclined to assume that they're willing to stretch quite far to avoid that, given that they seem to be very good about it. But I'd also not want to stake my business on hoping Amazon will be charitable about something like that.

It's not actually unlimited liability. Every AWS service has a set of default limits, and you must request AWS raise those limits before you can provision additional resources.

I agree with your larger point, but you're going to be surprised by a $500 bill, not a $500,000 bill.

Hey there, I work on Google Cloud.

Did you set a billing alert? Google BigQuery has proactive "cost controls" that won't let you go overboard, whereas billing alerts are just that - alerts.

At least last I checked, Azure offers this.

Amazon lets you set a cost limit in your account. Look it up.

I did. They don't.

They offer billing alerts, have a budget tracker thingy, but have no actual automated caps. Closest thing you can do is write one yourself using the AWS APIs.

Yeah, gotta monitor daily

That is only for automated reporting and alerts. It does not allow you to actually limit your expenses.

IIRC, it can fire off to an SNS topic; you can burn down your stuff to your heart's content.
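For anyone tempted to wire that up, here's a hedged sketch of the selection logic such an SNS-triggered handler might use. The dict shape mirrors EC2's DescribeInstances response; in a real handler it would come from boto3's ec2.describe_instances(), and the resulting IDs would go to ec2.stop_instances() (stopping, not terminating, so EBS-backed data survives):

```python
def instances_to_stop(describe_response):
    """Pick the running instance IDs out of a DescribeInstances-shaped
    response. Everything here is a local sketch; no AWS calls are made."""
    return [
        inst["InstanceId"]
        for res in describe_response.get("Reservations", [])
        for inst in res.get("Instances", [])
        if inst.get("State", {}).get("Name") == "running"
    ]

# Fake response in the shape EC2 returns
fake = {"Reservations": [
    {"Instances": [
        {"InstanceId": "i-aaa", "State": {"Name": "running"}},
        {"InstanceId": "i-bbb", "State": {"Name": "stopped"}},
    ]},
    {"Instances": [
        {"InstanceId": "i-ccc", "State": {"Name": "running"}},
    ]},
]}
print(instances_to_stop(fake))  # ['i-aaa', 'i-ccc']
```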

Billing alerts that are simply a notification you exceeded $X is not any sort of limit, particularly when they may take hours to arrive.

There was this awesome bug with AWS a while back too, where you could accidentally sign up for an account twice with the same email address, but with no way of knowing you had two accounts without paying really close attention to your account ID. If you went to log in with your email and one password, you would sign into one account, but with the same email and a different password, you would sign into the second account (who knows what would happen if both accounts had the same password).

Anyways, a couple years ago I signed up for an AWS account to mess around with the free tier (doing a Pluralsight course, I think), and after a bunch of messing around didn't touch it again for a while. A year later I was going to use AWS for something and forgot I had already set up an account (or thought it was on a different email or whatever), and managed to sign up for a second account with the same email address (which then became the primary account).

A few months on, I start getting billing emails for a few dollars a month but could not figure out why (and the invoice wouldn't show up in my AWS console for that email address). After digging I realized I somehow had two AWS accounts on the same email and the bill was on the other one, but I couldn't log into it as I didn't remember the password, and doing the password reset would just send me a reset for the second account. It took a tonne of back-and-forth emails with Amazon support to get it fixed and gain access to both accounts; the charges ended up being for a few VMs I had created (but stopped) after my free tier ran out, so it was billing me a few dollars a month for storage. I haven't really touched AWS since, because the billing can be so obtuse if you aren't paying very very close attention.

> There was this awesome bug with AWS a while back too where you could accidentally sign up an account twice

Not a bug; this has been Amazon's philosophy with accounts on all systems from very early on. Some of the initial designers at Amazon knew families where multiple people shared one e-mail address but wanted separate accounts for shopping.

Multiple accounts per e-mail address was a conscious design decision for all Amazon systems.

A poorly implemented design decision then (which they turned off on AWS back in 2012 due to exactly this happening to many people). https://forums.aws.amazon.com/thread.jspa?threadID=101218

There was no way at the time for me to a) see that I had a second account associated with my email address, b) reset the password for the second account without going through support, or c) merge the two accounts into one even with support's help.

Indeed. Multiple accounts per email is the most legacy of legacy features, and it's actually pretty easy to not realize it exists even if you work for Amazon. I'm not surprised internal systems don't handle it well. If I recall right, there was a push to get customers to move off it, using site messaging etc, because it was such a pain to maintain.

I have that problem at my current job. The average user is probably 60. Our policy is to tell people that email accounts are free. Our primary concern is that people would recover the password to someone else's account and access data that we only let the other person access.

Similar experience here. In grad school I forgot to shut down an 8 core instance I was doing data analysis on and it ended up costing $400 before I noticed it.

It would be great if when entering your CC information, they let you set a default monthly cap for all your projects, to be overridden at the project level if you suddenly need to spend more.

I think it comes down to them seeing themselves as a utility. You wouldn't want to be prompted for payment confirmation every time you plug something into an outlet, but this does mean that you'll have a huge unexpected charge if you accidentally leave on the air conditioner when you go on vacation.

I think part of the reason that utilities can get away with this is the fact that the maximum bill you are likely to run up is generally 2x-3x your normal bill. This doesn't work well for Amazon because your actual bill can be orders of magnitude larger than what you expect.

And every now and then a utility bill makes the news because someone's water line sprung a leak and they used $5000 worth of water in a month.

What's the quote? Something like, "At some point a quantitative difference becomes a qualitative difference."

I know two people near my parents who've ended up with $30,000+ water bills for a single month.

I'm sure they see it that way. The problem here is that they aren't a utility, unless the utility was also selling air conditioners and microwaves, and those air conditioners and microwaves had a button that said "Charge me 10x my normal bill this month" on them.

Offering the option to cap expenditures doesn't affect anyone that chooses to not use that feature.

Similar thing here (except I was trying to be more paranoid).

Tested out the free tier of Amazon, but didn't realize spinning down and spinning up would ding me if they were within an hour.

Even now, when I use it for testing and I'm being fairly careful, I'll get a $3 bill at the end of the month. I was trying to set up alerts, but their alerts and dashboard, while I'm sure super capable, are a bit overwhelming as a new user.

$3! :)

I got my first bit of AWS credit in a cloud class and something like this was common. From what I've heard they will null out the bill if they can see you didn't use it.

This happened at one of the first startups I worked at also. They were burning through a ridiculous amount of capital on silly AWS charges here and there. I think they were spending somewhere in the range of 5-6x what they really should have been, just because they were "testing" features and forgetting about them.

It's why all of my projects sit on DO and I only really use Route 53 from AWS.

That's exactly the reason I wouldn't sign-up for something "free" when I was required to give away my credit card details.

The same thing happened to me when I first signed up for AWS. I contacted support and they just credited my account, then gave me a promo on top of it.

Also, if you have a static IP attached to the VPS and you first Stop and then Destroy your instance, you will need to make sure you "free" the IP as well to avoid the _small_ $0.005/hr charge.

From FAQ:

> What do Lightsail static IPs cost?

> They're free in Lightsail, as long as you are using them! You don't pay for a static IP if it is attached to an instance. Public IPs are a scarce resource and Lightsail is committed to helping to use them efficiently, so we charge a small $0.005/hour fee for static IPs not attached to an instance for more than 1 hour.

> $0.005/hour

That's $3.60/month... seems similar to mail-in rebates—many people forget, and accidentally give Amazon some (mostly) free money.

Also, from later in this thread:

> FWIW, bandwidth overages at Linode and DO are $0.02 per GB, LightSail is $0.09.

It's these seemingly-tiny (but not-so-tiny when I'm running 60-70 VPSes) costs that kill when you get your first bill after a large traffic event.
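For scale, here's the arithmetic behind those rates, just multiplying out the numbers quoted above (assuming a 30-day month):

```python
HOURS_PER_MONTH = 24 * 30  # using a 30-day month

# Idle static IP at $0.005/hour
idle_ip_monthly = 0.005 * HOURS_PER_MONTH
print(f"idle static IP: ${idle_ip_monthly:.2f}/month")  # $3.60/month

# 1 TB (1000 GB) of bandwidth overage at the quoted per-GB rates
print(f"LightSail overage: ${1000 * 0.09:.2f}")   # $90.00
print(f"Linode/DO overage: ${1000 * 0.02:.2f}")   # $20.00
```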

Would anyone please tell me which of Linode, DigitalOcean and Vultr have cost ceilings? I looked at their pricing pages but couldn't figure it out. They all claim they have monthly billing caps for the hourly rates, but meanwhile both DO and Vultr have per-GB charges if the transfer quota is exceeded, and Linode is silent on this on its pricing and FAQ pages. Can the data transfer charges be capped too? If so, what happens when the quota is reached?

> Would anyone please tell me which of Linode, DigitalOcean and Vultr have cost ceilings? I looked at their pricing pages but couldn't figure out. They all claim that they have monthly billing caps for the hourly rates, but meanwhile, both DO and Vultr have per-GB charges if the transfer quota are exceeded, and Linode is silent on this on its pricing or FAQ pages. Can the data transfer charges be capped too? If so, what happens when the quota is reached?

They don't for traffic. So you do run a small risk of something happening.

However, Linode at least pools your VPSs so if you have 100 of them and 20 of them "go over" the cap you still are often okay because of the other 80 that didn't "go over".

The truth is none of these providers provide truly hard caps. The difference is with Amazon/Google/Azure/etc you can realistically get hit with a 4 figure bill if something goes seriously wrong.

With DO/Linode/Vultr, I've never seen accidental "mistakes" cause that sort of thing, nor even an active DoS/DDoS attack that would cost you more than $100 in overages before they started null routing you.

It's not exactly free money because Amazon will surely reserve that IP for your use at any time. By sitting on Static IPs you are using AWS resources. Yes, operationally it costs them nothing, but there is an opportunity cost.

> but there is an opportunity cost

Is there?

Yes. They could sell the IP address to someone else.

But... they're selling it to you.

It's more valuable to be in use than to sell it to you. They are very limited on ipv4 space, so the charge is really a penalty for keeping that resource from another customer.

IPv4 allocation limits are still mostly a scare tactic to get people onto v6. I know dozens of people from my webhosting days with /12 and /16 allotments doing nothing that they pay peanuts for. This isn't a unique scenario.

That's not relevant, if I have 10 cars and you need a car ... then all that matters to you is that you need a car, not that I have 10 cars sitting there doing nothing. People sitting on IPv4 addresses don't care but new entrants cannot get new IPv4 addresses since they're all allocated.

> This isn't a unique scenario.

And this is exactly why we're running out of publicly available IPv4 addresses.

I think the point is that there's a reason they charge you for it instead of letting you hold onto it for free.

They aren’t, unless you have an instance attached to it.


You're capped at 20 instances as well, by the looks of it. Plus you'll get AWS's 'dog shit' support included, which is hopeless.

Will stick with Linode.

They put the caps on to help with the very problem you are all bitching about: provisioning a ton of resources and getting a big bill. It's very easy to raise the cap.

I wasn't complaining about that. I've had 2-3 day turnarounds on "everything is broken" events on normal AWS VPC. Always factor their support offering in as well.

What level of support did you have? I've been on developer or first-level business support, and generally get someone knowledgeable; only the timing changes.

The only bad experience I had was with SES - we got blocked by high bounce rate, sending to a test email that did not exist (specifically because it was a test email). It took two days for the special unblock team to unblock us, even though the general support guy I was talking to had responded a couple of times in that wait period.

Zero to start with, now business. Business is "ok" - sometimes takes a couple of attempts to get someone who knows what they are talking about.

I suspect that the VPS cap is more about discouraging large AWS users from spinning up a zillion to reduce egress costs.

The main purpose of AWS's cap is to prevent abuse.

Like mail-in rebates, it creates a moral hazard on the vendor's part.

The right thing to do is to just discount the product and re-use IPs unless otherwise reserved. Mail-in rebates can be ignored or "lost in the mail", and that seems to happen often enough for me to have lost trust in them. I have little control over what the vendor does, so I would rather avoid vendors who think screwing with me is OK.

I don't buy products with mail-in rebates, and now I won't buy into Lightsail (presuming this thread is accurate and Amazon doesn't fix it).

DigitalOcean also does this:

> due to the shortage of IPv4 addresses available, we charge $0.006 per hour for addresses that have been reserved but not assigned to a Droplet. In order to keep things simple, you will not be charged unless you accrue $1 or more.

It would be nice if they offered an option of adaptive bandwidth throttling, so that once you're past 75% of your allotment it starts slowing you down such that you never reach your entire quota, and never get charged for overages.

Zeno's paradox in action - once you reach half your limit, the speed is cut in half. "Zeno" throttling, if you will. :)

There must be something you can run on the server that will do this for you. I agree that Amazon should offer it in front of the server, but there are options.
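As a rough sketch of that server-side option, here's the rate calculation for the "Zeno" throttling described above (the function name and parameters are invented for illustration; on a real box you'd feed the computed rate into a traffic shaper such as `tc`):

```python
import math

def zeno_rate_mbps(base_rate_mbps, quota_gb, used_gb):
    """Halve the allowed rate each time half of the *remaining* quota
    is consumed, so usage approaches the quota but never exceeds it."""
    if used_gb <= 0:
        return base_rate_mbps
    if used_gb >= quota_gb:
        return 0.0
    remaining_fraction = 1.0 - used_gb / quota_gb
    # Number of times the remaining quota has been halved so far.
    halvings = math.floor(-math.log2(remaining_fraction))
    return base_rate_mbps / (2 ** halvings)

# Below 50% of quota: full speed; at 50%: half speed; at 75%: a quarter.
```

A cron job could recompute this every few minutes from the interface byte counters and reapply the shaping rule.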

Or imagine if they just bundled a flat rate in the service? You're effectively specifying a burst policy.

I don't think that actually holds up. There are certainly some people that want "ironclad guarantees", but the reality of cost here is pretty close: you pay $5/mo max for the server. The things that can cost more are either overages on bandwidth/dns (and here, 1TB of bandwidth on the low end is included, eliminating a lot of the flux from EC2), or things you choose to initiate, like snapshots.

It feels like the larger thing they're trying to solve, that I expect actually stops the majority of people who don't choose AWS, is the complexity around setting up VPCs/SecurityGroups/Subnets/etc.

Most providers in the VPS space already charge overages for bandwidth, and most of them don't support suspending the account vs just billing you.

I personally don't use AWS for personal projects because of the lack of a cap. I would rather see my system suspended rather than continue to pay beyond my budget.

This. I got excited about this until I came to HN and started reading the comments. I _have_ to have a cap. I don't want to do something stupid that puts me on the hook for $$$.

So who are you using then? I don't see anyone that has a cap. Everyone charges you if you go over your allotted bandwidth.

Not sure if this is still the case: DigitalOcean had a pricing structure for bandwidth, but wasn't actually charging for used bandwidth because they had no way to show how much had been used in a billing cycle. Best to ask them whether this is still true.

I agree.

I am currently using Linode, but would move to AWS if they offered a cap. Two years ago I signed up for the AWS free tier and forgot about it (didn't use it at all). It ended up costing ~$60 before I found out, and since then I've avoided it.

Linode has bandwidth overages, just like LightSail, so I don't see how this compares. You pay more money if your Linode surpasses the quota.

Exactly. I have alerting setup on DO that notifies if I need to address scaling issues. Let me make the decision for smaller projects. 100%.

AWS's pricing and usage-related billing is orders of magnitude better than DO's.


Exactly. There's certainly a market for VPSs and other cloud services that have a hard cost cap. AWS has apparently decided that they're fine with not competing for that slice of the business.

Setting cost caps on more complex applications that use a lot of different AWS services would get complex in a hurry and could easily have unintended effects.

As someone else wrote, I view this as primarily a simple VPS for people who are already using AWS for other things. I suspect that AWS isn't really interested in being a VPS-only provider for the most price-sensitive customers.

Exactly, the lack of cost ceiling is the #1 reason I can't pick AWS.

The cost ceiling is the initial resource limits. Maybe you can ask them to lower the limits?

Not for bandwidth, though. That's where AWS gets you. All the complaining about server costs applies to the AWS of yesteryear - server prices are pretty similar to competitors' now. But traffic doesn't have any caps that I'm aware of.

It's like they saw all these consumers being screwed by ISP and cell provider data caps and astronomical accidental overage fees and went "how do we get in on that action?!"

Hehe, sounds pretty much like what they've been doing. Really wish they'd bring some of that customer-centricity they talk about to this aspect.

They frequently refund mistaken overages.

The AWS cost structure is one of the reasons we switched to a different provider. I feel Amazon has taken metered pricing too far with AWS, especially when the prices aren't really that low. Everything seems to cost extra with AWS, and it's quite hard to estimate beforehand how much a thing will actually cost in practice.

It's so confusing that I switched to DO. Not that DO is cheaper, but its pricing is way simpler.

Cloudwatch alarms can stop running EC2 instances. We do this to prevent accidentally leaving expensive instances running. If that helps a little.

AFAIK the data is delayed, and the damage might already be done by the time the alarm fires.

For just running instances, that's not true. You set up a metric that checks if the instance is alive every minute, ten minutes, whatever, and set its alarm condition to trigger once the metric is true for six samples, 600 samples, whatever you need. The alarm action: stop instance.
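For reference, here's roughly what such an alarm looks like via boto3's CloudWatch client. This sketch only builds the `put_metric_alarm` parameters (the instance ID, threshold, and window are illustrative assumptions); `arn:aws:automate:<region>:ec2:stop` is CloudWatch's built-in EC2 stop action, so no extra automation is needed:

```python
def idle_stop_alarm_params(instance_id, region, cpu_threshold_pct=2.0, idle_minutes=60):
    """Build CloudWatch put_metric_alarm kwargs that stop an EC2 instance
    once its average CPU stays below the threshold for the whole window."""
    period = 300  # seconds per datapoint
    return {
        "AlarmName": "auto-stop-" + instance_id,
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": period,
        "EvaluationPeriods": idle_minutes * 60 // period,
        "Threshold": cpu_threshold_pct,
        "ComparisonOperator": "LessThanThreshold",
        # Built-in CloudWatch action: stops the instance, no Lambda required.
        "AlarmActions": ["arn:aws:automate:" + region + ":ec2:stop"],
    }

# To actually create it (requires credentials):
# import boto3
# boto3.client("cloudwatch", region_name="us-east-1").put_metric_alarm(
#     **idle_stop_alarm_params("i-0123456789abcdef0", "us-east-1"))
```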

While it wouldn't be as easy as what you want, and this certainly wouldn't be the best solution, they say they offer an API.

So in theory (I haven't checked what the API gives you control over - so this may be worthless), you could monitor your instance (bandwidth, time up, disk usage, etc), and if things get out of hand, or approach your limit (whatever it is), you could use the API to say shut down or delete the instance, or throttle the bandwidth (maybe via firewall rules or something?).

Again - this would assume the API allows you to do this (and ideally from within the instance itself - which shouldn't be an issue, I wouldn't think). And again, it shouldn't take this much work (you're right, it should just be a simple control panel setting).

But maybe it's an option for those who have the skills to implement it?


I just took a quick look at the API docs - and while it doesn't look like you can mess with the firewall rules settings, everything else should be possible (get metric data, start/stop/reboot/delete instance, etc).
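A hedged sketch of that monitor-and-stop loop: the decision function below is self-contained, and the commented-out calls use the Lightsail API operation names (`GetInstanceMetricData`, `StopInstance`) the docs mention; the instance name, quota, and safety margin are assumptions for illustration:

```python
def should_shut_down(network_out_datapoints, quota_bytes, safety_margin=0.9):
    """Given NetworkOut datapoints (each with a 'sum' in bytes, as returned
    by Lightsail's GetInstanceMetricData), decide whether we are close
    enough to the transfer quota to stop the instance."""
    used = sum(p.get("sum", 0.0) for p in network_out_datapoints)
    return used >= quota_bytes * safety_margin

# Sketch of the surrounding loop (untested here; needs credentials and
# the remaining time-range arguments):
# import boto3
# ls = boto3.client("lightsail")
# data = ls.get_instance_metric_data(instanceName="my-vps",
#                                    metricName="NetworkOut", ...)
# if should_shut_down(data["metricData"], quota_bytes=10**12):  # ~1TB plan
#     ls.stop_instance(instanceName="my-vps")
```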

"Why even offer a service like this if you cannot GUARANTEE that the $5 they say they charge is all you'll ever get charged?"

Exactly! We have used AWS and DO a lot this past year. DO is great for smaller sites/APIs and super easy to use. Their support is also outstanding.

AWS has tons of tools but many come at a cost. We are in the process of moving a couple of sites off of AWS and onto a LiquidWeb dedicated box. We will be paying much less and the LW dedicated server is more than enough for what we need.

AWS is great for spinning up and scaling instances quickly and comes with a ton of other tools. At the end of the day however it is not always the most cost effective or even best offering for most sites/apps.

AWS has tons of tools but many come at a cost.

Well yeah, that's how they pay for those tools -- they charge for them.

Nothing wrong with charging. There are more affordable alternatives though.

Isn't $80 for a 2-core processor a bit high especially if they are competing with Digital Ocean?

And DO is already immensely expensive compared to VPS at OVH, or actual dedicated hardware at Hetzner, OVH, Online.net, Scaleway, Kimsufi, or however they are all called.

I'm in Australia so I either pay a premium (100x) for an Australian VPS, or deal with 150 (US) to 400ms (EU) latency. I'm "locked out" of Hetzner et al due to that - and yes, I know there's nothing you can do about the speed of light. I just wish Australian/New Zealand/Singaporean/Hong Kong offerings were more competitive (all of those are <200ms)

DigitalOcean has servers in Singapore for the same price as all their other ones. The latency from Perth is about 40ms, from memory.

167ms from Canberra, on the NBN. Terrible routing though - I wonder what it would have been pre-TPG takeover, though.

Have you checked out vultr? I host my VPS there with no issues and extremely good ping.

Other comments regarding Vultr's billing in this thread have put me off it completely.

They say on the FAQ:

>For every Lightsail plan you use, we charge you the fixed hourly price, up to the maximum monthly plan cost.

Wording implies the monthly pricing is a 'maximum' price.

Traffic is the big item that can add additional charges:

> Data transfer overages above the free allowance are charged at $0.09/GB.

On the $5 instance the second TB (at $90) is 18 times as expensive as the instance itself with the first TB included.

Wow. That traffic is almost 50 times more expensive than at hosters of dedicated servers, or when you buy it directly from Tier 1 or Tier 2 networks.

Hetzner charges 1.36€ per Terabyte of traffic, and with most servers, gives you 10-20TB included.

I’ve heard people talk about the ridiculous traffic costs of AWS, but this is an entirely new dimension of expensive.

You get 30 TB traffic inclusive with the smallest Hetzner server that has ECC (4 core Xeon with 64GB RAM). And that is for ~70 EUR per month + setup fee.

That amount of traffic is more than 2000 EUR per month at AWS. Of course this is comparing entirely different things, but still, if you have significant traffic and can't avoid it with a CDN or something like that, AWS (as well as Google and Microsoft clouds) get seriously expensive.

I get that, but that's after the first TB. For a $5 VPS I'd expect most customers wouldn't go over that.

I'd agree. But it would be nice to have an option in the account settings so you literally cannot accidentally spend $90 on a $5 account.

It's not expected... until your site gets unexpectedly featured on HN, reddit or any news site.

I'm pretty sure AWS only charges on traffic out.

Edit: just did some research, there are many cases this isn't true

That's just referring to the VPS itself. If you keep reading the FAQ you'll see that there are ways to exceed that amount.

Plus there's nothing stopping someone breaking into your account and upgrading it in all kinds of evil ways (which has been a huge hassle with AWS tokens being stolen from e.g. Github).

Really? What are these evil ways?

There were lots of stories of hackers spinning up the largest ec2 instances available to mine bitcoins on anyone's account they could get access to

Not OP, but I've heard of people getting hijacked and finding a bunch of VPSs spun up sending out spam and other malware related stuff. Maybe that's what they mean.

It's not; there are additional possible outbound bandwidth overage costs, and they are not insignificant.

>I'd welcome an account suspend instead of bill shock.

The big question here is what to do with stateful data. Would you accept an immediate deletion of all of your S3 data? RDS instances and snapshots?

I said suspend, not removal.

They could easily cut off public access to those resources while charging you storage fees. Obviously with any kind of ceiling there are certain details that need to be ironed out (i.e. most people wouldn't want configuration information or data to be lost, but they likely would want VPS to be taken offline and other services to be suspended).

Ultimately for most startups, small businesses, and individual developers being able to say "My AWS bill cannot exceed $1000, period" is a powerful tool. Right now if a billing alert fires at 1am, you may not see it until 9am and by then you're already in huge trouble.

SNS topics and Lambda jobs. Just use the API to add a firewall rule to block traffic when a billing SNS alert fires. Should be pretty simple to get working.
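A minimal sketch of such a Lambda handler, assuming the billing alarm publishes to an SNS topic the function subscribes to. Since elsewhere in the thread it's noted the Lightsail API may not expose firewall rules, this sketch stops the listed instances instead; the instance names and the commented-out API call are assumptions:

```python
import json

INSTANCES = ["my-vps"]  # hypothetical Lightsail instance names to protect

def handler(event, context=None):
    """Lambda entry point for a billing-alarm SNS message: when the
    CloudWatch billing alarm enters ALARM state, stop each instance."""
    stopped = []
    for record in event.get("Records", []):
        message = json.loads(record["Sns"]["Message"])
        if message.get("NewStateValue") == "ALARM":
            for name in INSTANCES:
                # boto3.client("lightsail").stop_instance(instanceName=name)
                stopped.append(name)
    return stopped
```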

That's still a very striking omission of a feature.

Lambda isn't available in all zones, and not everyone has the time/ability/knowledge to set up such a thing. I'm sure I could do it, but only if I were to spend a few hours researching it, and probably a few nights working on such a solution. I'd also have to trust that I didn't mess it up -- I'd hate to have a bunch of traffic and NOT properly prevent the traffic.

This is the kind of thing that Amazon surely could provide easily if they wanted to.

It's like if your phone company didn't give you an option to limit your spending (prepaid), but said that you could use their arcane API to tell them each month to start/stop service. That's great, but not really very nice to customers.

There is no need to delete data.

The monthly budget cap should be allocated to existing storage first. This covers the existing data for the next month. If there is any free limit left, it could be used for new data writes + 1 month of storage, and/or running services. Once the limit is reached, then writes are blocked and services stopped.

The only situation where you would need to delete data is if you want to set a new monthly budget that is lower than your existing monthly storage-only bill - but the UI could just disallow this.
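In pseudo-numbers, the allocation described above might look like this (function and field names are invented for illustration):

```python
def monthly_budget_split(budget, existing_storage_cost):
    """Reserve next month's storage for existing data first; whatever is
    left funds new writes and running services. Negative headroom means
    the cap can't even cover current storage, which the UI should forbid."""
    headroom = budget - existing_storage_cost
    if headroom < 0:
        raise ValueError("cap below existing storage bill; disallow in UI")
    return {"storage_reserved": existing_storage_cost, "free_for_usage": headroom}
```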

Per the AWS TOS, once Amazon suspends your account for non-payment your data's kept for at least 30 days. No reason they couldn't do the same here.

There is a way to hack this up, which is probably a bit complex for a single instance, but works:

- Setup a HTTPS endpoint on the server that listens for an SNS notification and performs an action (e.g. backup ephemeral data to S3 and shutdown). I wrote the service in Go and the action is just a shell script but choose your favorite language.

- Setup an SNS subscription pointing to the service endpoint.

- Setup an SNS topic for the message.

- Set up an SNS notification in AWS billing. I use "When actual costs are equal to 100% of budgeted amount".

The problem is that it's necessary to lock down the endpoint listener as it will usually need root access in order to shutdown the machine. This can be done by using authentication on the endpoint, setting up a locked down user to run the service under and granting that user the ability to run /sbin/shutdown in the sudoers file.

There are probably nicer ways to do it, but this does work to limit my spend on each instance.

You can also add AWS API calls to delete any other costly related resources (static IPs, load balancers etc.)

I've been thinking about writing a more modular and robust app that handles multiple instances etc but most of my servers are now in GCE so I don't really have the need.
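The Go service aside, the endpoint's decision logic is small. Here's a hedged Python sketch of its core (the header name, shared-secret scheme, and shutdown hook are assumptions; a production deployment should also verify the SNS message signature):

```python
import hmac
import json

def handle_sns_post(headers, body, expected_token):
    """Return the action for an incoming SNS POST: 'shutdown' for an
    authenticated budget notification, 'confirm' for a subscription
    handshake, None for anything else."""
    supplied = headers.get("X-Auth-Token", "")
    if not hmac.compare_digest(supplied, expected_token):
        return None  # reject unauthenticated callers outright
    msg = json.loads(body)
    if msg.get("Type") == "SubscriptionConfirmation":
        return "confirm"  # fetch msg["SubscribeURL"] to complete the handshake
    if msg.get("Type") == "Notification":
        return "shutdown"  # e.g. back up ephemeral data, then sudo /sbin/shutdown
    return None
```

The locked-down user described above would run whatever acts on the `"shutdown"` result, with only `/sbin/shutdown` granted in sudoers.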

These are going to be insanely expensive. Each FPGA chip costs $50k retail, and unlikely to be cheaper than $10k in volume, so I can't imagine the per hour cost being particularly cheap.


Does this set or allow the user to set a cost ceiling? Probably not. Just based on transfer (network) costs, Amazon Lightsail includes this caveat:

Some types of data transfer in excess of data transfer included in your plan is subject to overage charges. Please see the FAQ for details.

Thanks, that's exactly what I was wondering. This makes LightSail a non-starter for little guys like me; I'll stick with Vultr for similar service that charges the actual price listed with no gotchas.

What are these "various" ways you're talking about?

The only overage charge I see is for data transfer. This isn't ideal, I'll grant you, but it's not the same as "various".

DNS, IP Address Reservation, Plan Upgrades, Bandwidth, and so on.

We've seen AWS accounts get broken into with stolen tokens, additional VPS's started, VPS upgraded, bandwidth consumed, etc. And while Amazon has been good with refunding the FIRST time, nobody wants to wake up to a 10K bill because your gitignore had a typo.

A ceiling or cap may even stop plan upgrades without an email confirmation. That would be hugely welcome, particularly in a world where bad guys are actively seeking out VPS to break into.

Genuinely curious to hear if that "gitignore had a typo" 10k bill has a story behind it!

Linode provides 2GB for $10; AWS provides 1GB for the same price. Also, AWS does not support IPv6, which means you cannot launch your apps in the App Store if you use AWS servers (Apple now requires apps to work on IPv6-only networks).

> AWS does not support ipv6

Wait what? That's getting ridiculous. Seven years ago they would have been on time, perhaps even considered early by some, but three years ago when looking for hosting providers I already laughed at the ones without v6 and moved on without a second thought. They weren't even the cheap ones.

Currently enjoying a €3/mo VPS at Pcextreme.nl with the same specs as the $5 Lightsail VPS. But with IPv6 of course.

Perhaps I've been spoiled with dual stack at home since 2009 from xs4all. Other Dutch ISPs promised it in (iirc) 2013 and every year since, but there has yet to be a second big one to offer it and other countries like Belgium surpassed us by now. Even Germany's Telekom is getting there.

I use budgets in AWS to send me a text when I reach a certain limit. You could potentially use an SNS topic to take action and shut down resources, I suppose. If you are always traveling or can't get to a computer, this could be an issue for you. But I can't imagine there are many people seriously using AWS who could suddenly run up a huge bill yet can't respond to a text, or have a friend, coworker, employee, or automated task do so.

On the landing page they say there will be overages for excess data transfer. In terms of how it is different, I think a lot of people here have probably been using AWS for 5+ years and the jargon etc. is familiar, and we can calculate the price and understand what it is, but for people who are just getting started this will be a product that is directly comparable to i.e. Digital Ocean and can serve as a gateway drug to AWS.

That's how they get people on Glacier storage.

The storage is cheap as balls but the transfer can fuck you.

The whole point is to service customers with massive storage needs who seldom need to retrieve the data. For example, many companies are compelled by regulation to archive records for decades, but only need to access data in response to lawsuits/etc.

No one with frequent, high-volume retrieval needs would be advised to use Glacier.

They changed the way Glacier retrieval costs work a little over a week ago to help address this.


>> But they've never offered a hard "suspend my account" ceiling that a lot of people with limited budgets have asked for.

That's probably where most of their profit comes from. That option was most likely squashed from the highest authority.

Can you use one of those top-up VISA cards that you load money onto? Or do you need to be tied to a bank account?

Amazon usually bills in arrears (no idea about Lightsail). Even in the general case, it doesn't matter: what you owe Amazon is independent of whether your card allows the charge or not. You can run out of top-up VISA card credit but still end up owing Amazon money.

This applies to any other provider, too. What you owe the provider is what you contracted to pay the provider (eg. by consuming services, or clicking the "upgrade" button in a web interface). It is independent of them actually taking the money.

this is why people set up companies, which in the UK costs 13 pounds/year to keep operational

limited liability, can't really beat it

Intentionally doing that in order to avoid paying sounds a lot like fraud, something your limited liability won't protect you from.

shifting risk to the creditors is the entire point of limited liability; the risk of deadbeats abusing it is always there, and the ability to pay is something almost every business will take into account when arranging a line of credit

amazon could completely negate this risk by requiring pre-payment for small/unknown operators, which is something a lot of people (myself included) desperately want them to provide.

I'm sure they've done their sums here, and have figured out the increased revenue from customers not being able to set a budget is more than their potential losses from deadbeats

the variable costs are basically zero, after all (bandwidth and CPU time are worthless if not utilised)

Remarkably spot on! Everything.

That's exactly what we do: www.onekloud.com happy to chat (eric@onekloud.com)

I pre-paid for 3 years of service, but 1 year in they sent notification that the server farm would be moved and I had 30 days to relocate my servers. I was pretty busy at the time, so it was an inconvenience to move everything, but I did.

Then I got the next month's invoice, and it wasn't using my pre-paid services but instead billing for full CPU usage - no reserved instances.

After emailing support several times, they said it was my own fault for not using the correct instance type, even though it's identical to the one I pre-paid for. It may well be my error, but it was caused by them, since I never asked for my servers to be moved. It's been an expensive and time-wasting experience -- I will never use them again.

Am currently evaluating GKE (even more expensive) and DigitalOcean.

"Just to be clear: This service is offered by Amazon/AWS themselves, it isn't a third party. "

says Someone1234. Should we believe it?

It's an official AWS announcement, so yea...


Price breakdown vs DigitalOcean, VULTR and Linode.

Of course, all things are not equal (e.g. CPU speed, SSDs, bandwidth, etc.).

  Provider: RAM, CPU Cores, Storage, Transfer

  $5/mo:
  LightSail: 512MB, 1, 20GB SSD, 1TB
  DO:        512MB, 1, 20GB SSD, 1TB
  VULTR:     768MB, 1, 15GB SSD, 1TB

  $10/mo:
  LightSail: 1GB, 1, 30GB SSD, 2TB
  DO:        1GB, 1, 30GB SSD, 2TB
  VULTR:     1GB, 1, 20GB SSD, 2TB
  Linode:    2GB, 1, 24GB SSD, 2TB

  $20/mo:
  LightSail: 2GB, 1,  40GB SSD, 3TB
  DO:        2GB, 2,  40GB SSD, 3TB
  VULTR:     2GB, 2,  45GB SSD, 3TB
  Linode:    4GB, 2,  48GB SSD, 3TB

  $40/mo:
  LightSail: 4GB, 2,  60GB SSD, 4TB
  DO:        4GB, 2,  60GB SSD, 4TB
  VULTR:     4GB, 4,  45GB SSD, 4TB
  Linode:    8GB, 4,  96GB SSD, 4TB

  $80/mo:
  LightSail: 8GB, 2,  80GB SSD, 5TB
  DO:        8GB, 4,  80GB SSD, 5TB
  VULTR:     8GB, 6, 150GB SSD, 5TB
  Linode:   12GB, 6, 192GB SSD, 8TB

In an easier-to-read gist: https://gist.github.com/637693650bc8bb9baadf6293a99e1813

I closed my VULTR account after getting this email from them

----- Dear Vultr Customer,

Including pending charges, your account is carrying a $5.94 balance.

In order to cover your current balance and your estimated monthly costs, our billing system will automatically deposit $275.00 from your payment method on file in 24 hours.

That's ridiculous, regardless of your username.

How many instances did you have at the time you received the email?

Usually, when I receive an email like this, the amount is equal to the monthly bill of the instances I have active at the moment.

I had one $5 instance, which is what made this seem ridiculous

Yep, it is insane if you only had 1 $5 instance.

Maybe a bug in their billing software or something...

Yes, but it's still stupid. Customers of cloud hosting companies with hourly billing are used to spinning instances up and down all the time. Imagine your app has a busy day and you need 500% more resources than usual; with Vultr's system you will automatically be charged 500% of your regular usage because of one day's spike (obviously only if you are near the $0 balance mark).

> you will be automatically charged 500% of your regular usage

You are not being "charged" per se. The amount is transferred to your account and is there as credit until you spend it.

Sorry for nitpicking, but it is an important point.

I'm going to nitpick in reverse.

You are being _charged_

- This is a charge against your card

- They are not a bank, so the money in your "account" with them is just an unsecured, general liability of theirs to you. If they go bust, they owe you money, but you will never see it.

- If you want to withdraw that money from your "account" and they refuse, then your options are pretty limited.

Once they take it from your payment method, it becomes their money, not yours. That's a charge.

I am not disagreeing with what you say, but "charged" as it was used in the GP comment might have led someone to believe that VULTR charges for usage per month.

To be honest, I have no idea what you are talking about. Your card is being charged, and that's all I said. Sure, you have credits to spend, but the cash you have no immediate need for now sits in your Vultr account because of their billing practices.

Wow. Have you contacted them about it? That's crazy, I'd have to assume it's a bug.

I've never been charged on vultr, as I use bitcoin to always pay my bills (which is push only). I think they have stopped accepting that for new customers due to abuse though, which is quite a shame (but understandable).

They do send out emails like this; however, they seem fairly flexible. I have several Vultr instances running, and they're quite happy for me to fund the account as and when required, and have never automatically charged me.

That said, it is an odd message to send out, and I had my concerns when I first received a similar email. Maybe it's something they should look into altering.

Wow, I've never had that happen; all the emails I've gotten from them have said they'll automatically deposit $10. Never had anything over $10.

I've been using packet.net for a small side project, they do bare metal / single tenant servers but the provisioning is very similar to linode/digital ocean. Their ~$35/mo ($0.05/hr) option is on par with digital ocean's $80/mo offering.

Plus 5 cents per GB of outgoing transfer.

Curious: why don't you compare with known "non-cloud" hosting companies too, that provide VPS services, like OVH, Hetzner, Leaseweb, etc.?

EDIT: Anyone care to explain the reasons behind a downvote?

Does anybody care about the answer to your question? In all likelihood, the answer is: because that information is even more work to collect.

Your post reminds me of the burden of proof fallacy, in which you create work for other people by asking questions you could easily answer yourself.

If you or anybody posts the comparison you requested, it will surely be upvoted.

Thanks for posting a note in this thread - I would have missed it otherwise!

I don't believe he directly requested a comparison. What he asked was why the commenter above him chose not to. You seem to acknowledge that in your second sentence, but then go on to state that his comment is reminiscent of asking questions one could answer themselves - asking a commenter why they chose (not) to do something doesn't strike me as answerable by a second or third party.

There are also many less known VPS providers with truly great deals. I've been using these two:

NodeServ: $1.25/month = 50GB HDD, 512MB RAM, 1 core, 1TB bandwidth, location in Jacksonville.

Host.us: $6/month = 150GB HDD, 6GB RAM, 4 cores, 5.12TB bandwidth, location in Dallas.

Both deals found on LowEndBox.

Your lesser-known VPS providers probably have lower-tier bandwidth, more crowded hypervisors, old virtualization tech, or old hardware.

That's not to say getting a 8GB openVZ vps for $4 a month isn't an amazing deal, but just that there are caveats.

I just had a look through their ToS for the catch.

For the most part it's reasonable, but there's a freaking litany of reasonable things you're not allowed to run, including IRC, audio/video streaming, game servers, and so on.

Why on earth do you sell me X block of resources for Y$/month if you're going to tell me what I can and can't do with them? Surely unreasonable use would be covered by resource limits already in place?

There are mainly 2 reasons:

* they oversell, so they assume that only a small fraction of web servers will consume 100% of resources, while almost every torrent client will consume 100% of the bandwidth. There is nothing wrong with overselling hosting within a reasonable margin, but most of the people here want to run more than a LAMP stack on the server.

* they get too much admin overhead replying to Tor "abuse" letters etc., so they just decided to deal with it in the simplest way possible.

I guess both factors contribute equally.

Most VPS providers already limit the options to run Tor nodes or SMTP servers in one way or another. However forbidding things like IRC and audio streaming is quite unusual and I wonder how oversubscribed their bandwidth must be on these hosts.

I doubt CPU or RAM allocation is the issue here given AWS already have a good CPU time credit system to manage it.

In the old days (90s-2000s), allowing IRC bots opened yourself to being a DoS target and general receiver of harassment complaints from perceived social abuses that happened in the chat rooms. I assume things haven't changed that much.

All major IRC networks hide your IP address when you connect now (mode +x), but I bet the perception still exists. Most of these ToS'es are thoughtless copypasta from other services.. every now and then there's a Show HN from a new hosting company that has absurd nonsense in their terms, and the creator gets suitably chastised for it.

I don't really care. My $6/month deal beats any $5/month deal from the major players, and by a huge margin. I recently tested the internet speed on it, and I got 850 real Mbps out of the promised 1Gbps channel, which is good enough for me. I can give Memcached 1GB of RAM and not worry about killing everything else.

I have a bunch of sites running on it without stepping on each other, and I doubt that would be the case on AWS / Google / DO.

You're telling me that your 150GB HDD beats a 20GB SSD? Even though they're both going to sit at 10GB free for years.

All the critical things should sit in RAM anyway. The SSD will beat the HDD if you read/write to disk heavily. But not if you need space.

If I need an SSD, there are options too, though DigitalOcean is indeed one of the best if you need a cheap US-based server. If the location doesn't matter, EU, Russia, Ukraine have some great deals.

Example: $4.6/month = 40GB SSD, 1GB RAM, 3 cores, unlimited traffic @200Mbps.



He is free to use his server how he sees fit. And are you really sure that Amazon is going after a very different market with their $5/mo offer than a low-cost VPS provider?

I think comparing a (NVMe?) SSD solution to an HDD one shows the OP's ignorance of the differing market segments each is going after. They aren't comparable solutions.

WTF? I never did anything with wordpress plugins in my life. And even if I did, so what? You have some elitism / insecurity issues.

LowEndBox and LowEndTalk are awesome; I've been following them for years. Cyber Monday got me a KVM with 8GB RAM for $10.

Possibly because (except for Leaseweb), they're all European and of limited usefulness as a result. Linode, Vultr, DO and AWS all have numerous regions around the world.

They're also typically leased for at least a full month and can't be spun up/down on demand like you can with these services.

Plus they focus on large (>16GB) dedicated servers.

Leaseweb is in Amsterdam.

It's also in the US, Germany, Hong Kong and Singapore: https://www.leaseweb.com/platform/data-centers

Oh wow. They must have expanded over the years then.

That's simple. When DO came on the scene it was the only one of its kind. Now there are 4 almost identical services: DO, Linode, Vultr, and Lightsail. On the surface those seem the same because of the near linear pricing and similar allocation of resources. The ones you listed aren't even close. Each of those may have some but they don't have all of what makes the DO model so useful to some of us:

1. Mission-critical/production-ready reliability and communication (for all maintenance and issues)

2. No unexpected termination of instances / Reasonable warning & mediation

3. Not overprovisioned / little concern of noisy neighbors

4. Tier 3/4 redundancies

5. Strong American coverage (each DC with Tier 3/4 level services)

6. No setup fee on new instances

7. 1-minute provisioning (simple creation of instances / no ticket needed for deleting resource)

8. Programmatic IaaS management including provisioning, DNS, and images

9. Quality resources - mostly Xeons not ARMs, local SSD not Ceph

10. Huge backing - they're not closing tomorrow & I wanted a #10

While OVH, Hetzner, Leaseweb seem like nice services, particularly for needs in Europe, I can't build an American-centric service on those, set it and forget it nearly as easily or worry-free as with DO/Linode/Vultr/Lightsail.
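For what it's worth, point 8 is real across all four: each exposes a simple HTTP API. A sketch of what droplet creation looks like against DigitalOcean's v2 API (the token and droplet parameters are placeholders; the request is built but deliberately not sent):

```python
import json
import urllib.request

API_TOKEN = "YOUR_DO_TOKEN"  # placeholder, not a real token

# Droplet spec matching the $5 tier discussed above.
payload = {
    "name": "example-droplet",
    "region": "nyc3",
    "size": "512mb",
    "image": "ubuntu-16-04-x64",
}

req = urllib.request.Request(
    "https://api.digitalocean.com/v2/droplets",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer " + API_TOKEN,
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would actually create the droplet;
# left out so this stays a dry-run sketch.
print(req.get_method(), req.full_url)
```

Lightsail gets the same treatment through the regular AWS SDKs/CLI, which is exactly the "manage everything in one place" appeal mentioned downthread.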

I expanded it with those companies:

Price breakdown vs DigitalOcean, Vultr, Linode, OVH, and Online.net / Scaleway:


$5/month tier:

    | Provider               | RAM   | Cores | Storage    | Transfer |
    | ---------------------- | ----- | ----- | ---------- | -------- |
    | LightSail              | 512MB |     1 |   20GB SSD |      1TB |
    | DO                     | 512MB |     1 |   20GB SSD |      1TB |
    | VULTR                  | 768MB |     1 |   15GB SSD |      1TB |
    | Hetzner (virtual)      |   1GB |     1 |   25GB SSD |      2TB |
    | OVH                    |   2GB |     1 |   10GB SSD |      ∞TB |
    | Scaleway (virtual)     |   2GB |     2 |   50GB SSD |      ∞TB |

$10/month tier:

    | Provider               | RAM   | Cores | Storage    | Transfer |
    | ---------------------- | ----- | ----- | ---------- | -------- |
    | LightSail              |   1GB |     1 |   30GB SSD |      2TB |
    | DO                     |   1GB |     1 |   30GB SSD |      2TB |
    | VULTR                  |   1GB |     1 |   20GB SSD |      2TB |
    | Linode                 |   2GB |     1 |   24GB SSD |      2TB |
    | Hetzner (virtual)      |   2GB |     2 |   50GB SSD |      5TB |
    | OVH                    |   4GB |     1 |   20GB SSD |      ∞TB |
    | Scaleway (virtual)     |   8GB |     8 |  200GB SSD |      ∞TB |
    | Online.net (dedicated) |   4GB |     2 |  120GB SSD |      ∞TB |


$20/month tier:

    | Provider               | RAM   | Cores | Storage    | Transfer |
    | ---------------------- | ----- | ----- | ---------- | -------- |
    | LightSail              |   2GB |     1 |   40GB SSD |      3TB |
    | DO                     |   2GB |     2 |   40GB SSD |      3TB |
    | VULTR                  |   2GB |     2 |   45GB SSD |      3TB |
    | Linode                 |   4GB |     2 |   48GB SSD |      3TB |
    | Hetzner (virtual)      |   4GB |     2 |  100GB SSD |      8TB |
    | OVH                    |   8GB |     2 |   40GB SSD |      ∞TB |
    | Scaleway (dedicated)   |  16GB |     8 |   50GB SSD |      ∞TB |
    | Online.net (dedicated) |  16GB |     8 |  250GB SSD |      ∞TB |

$40/month tier:

    | Provider               | RAM   | Cores | Storage    | Transfer |
    | ---------------------- | ----- | ----- | ---------- | -------- |
    | LightSail              |   4GB |     2 |   60GB SSD |      4TB |
    | DO                     |   4GB |     2 |   60GB SSD |      4TB |
    | VULTR                  |   4GB |     4 |   45GB SSD |      4TB |
    | Linode                 |   8GB |     4 |   96GB SSD |      4TB |
    | Hetzner (virtual)      |  16GB |     4 |  400GB SSD |     20TB |
    | OVH                    |   8GB |     2 |   40GB SSD |      ∞TB |
    | Scaleway (dedicated)   |  32GB |     8 |   50GB SSD |      ∞TB |
    | Online.net (dedicated) |  32GB |     8 |  750GB SSD |      ∞TB |

$80/month tier:

    | Provider               | RAM   | Cores | Storage    | Transfer |
    | ---------------------- | ----- | ----- | ---------- | -------- |
    | LightSail              |   8GB |     2 |   80GB SSD |      5TB |
    | DO                     |   8GB |     4 |   80GB SSD |      5TB |
    | VULTR                  |   8GB |     6 |  150GB SSD |      5TB |
    | Linode                 |  12GB |     6 |  192GB SSD |      8TB |
    | Hetzner (virtual)      |  32GB |     8 |  600GB SSD |     30TB |
    | Hetzner (dedicated)    |  64GB |     8 | 1024GB SSD |     30TB |
    | OVH                    |   8GB |     2 |   40GB SSD |      ∞TB |
    | Scaleway (dedicated)   |  32GB |     8 |   50GB SSD |      ∞TB |
    | Online.net (dedicated) |  64GB |     8 | 1500GB SSD |      ∞TB |
Gist available here: https://gist.github.com/justjanne/205cc548148829078d4bf2fd39...

It must be said that Hetzner has much cheaper offers in their server auction: https://robot.your-server.de/order/market

There's also no setup cost for these dedicated servers.

They're also worryingly keen on non-ECC machines at the low end, though.

The OVH numbers are wrong. For example, in their public cloud, for $80 you get 60GB RAM, 4 vcores, and 400GB disk (or 200GB SSD, no RAID).

Could you link the offer? I couldn't find anything in their VPS or dedicated offers better than the $16 VPS (which I then used for the $40 and $80 tiers, too).

I edited the gist, but sadly can’t edit the post anymore. Thanks, btw. I completely missed those.

Thank you for this.

Well, you are comparing shitty atom cores to E5 cores. These are just some random numbers without context.

Well, I'm just extending the previous poster's list.

But Amazon will lower your CPU quota as well if you use it for too long, so it's not like Amazon's own numbers really mean anything either.

Same story with storage performance: Amazon's is horrible, but it is network-attached.

My question is why does no one talk about prgmr?

Do you have experience with prgmr? Is it reliable enough for a small project?

I've used it and am happy with it. It's reasonably reliable, but has more downtime than the big providers. I wouldn't run mission-critical software that needs 5 9's uptime on it, but for anything else it's fine, certainly for personal projects. They're transparent with any outages, so you can check up on the outage history on their blog: https://prgmr.com/blog/

It's a small business run by a few people (though it's been around for 10 years, so a pretty stable one), which has the pros and cons that go along with that. The tech staff is good, techies who know what they're doing and generally assume that you do also. So if you send a request or problem report, you aren't going to get a form reply that asks if you tried turning it off and back on again. But it's just a handful of people, so if there's a major issue, fixing things is pretty manual and slower than at places that have armies of 24/7 devops staff.

One specific thing I really like about it: it gives you SSH access to a proper text console, in case you want to install a custom OS, recover a broken install, etc. Most VPS providers give you console access, but most do it via VNC in the browser, which is not my favorite way to do sysadmin work.

> in case you want to install a custom OS

Do they let you do that? They don't say that in the purchases page.

Also, how is uptime relative to vultr?

Yeah, you have a choice of using their prebuilt disk images, running one of the officially supported OS installers from the console menu, or downloading your own installer. The list of prebuilt images and supported installers is here: http://wiki.prgmr.com/mediawiki/index.php/Distributions

I haven't used vultr so can't comment on that.

[Disclaimer: I work at prgmr.com]

You can install a custom OS. But it can be difficult to use an installer we don't provide right now because we only allow serial console access, not VNC. This means most installers won't work out of the box. Worst case you can dd an image to the disk using ssh from the rescue image.

FYI we don't do overage charges right now. For network, if we can't throttle your traffic down then we will shut your service off.

Our blog is a little misleading these days: for downtime on individual servers, we started emailing customers directly rather than posting to the blog. This is because we want to make sure customers see the downtime notice. We also sometimes got confused responses to blog posts wondering whether a given service was affected; if we email directly, there is no such confusion.

I think our worst case downtime barring about 5 services this year has been the following:

* 0.75 hour network outage, unplanned - 2016-03-16 (gave proportional credit)

* ~2.5 hours unplanned downtime due to hardware failure requiring new components - 2016-04-03 (gave 15% month credit)

* 2.6 hours downtime from start of maintenance window, planned due to security upgrade - 2016-07-23 (gave proportional credit)

* 2 hours or less downtime, planned due to security upgrade - sometime around 2016-09-01 (gave proportional credit)

* 1.5 hour network outage, unplanned - 2016-09-09 (gave proportional credit)

* 1.3 hour network outage, unplanned - 2016-11-06 (gave proportional credit)

* 2.04 hours downtime from start of maintenance window, planned due to security upgrade - 2016-11-18 (gave proportional credit)

This is a total of up to 12.69 hours of downtime over the year so far, assuming downtime started at the beginning of each maintenance window (it usually started later). Of that, 6.05 hours, or less than half, was unplanned.

So far this year there have been about 336 days, or 8064 hours. 12.69 hours of downtime out of 8064 works out to 99.84% uptime overall, which is significantly lower than we would like. For some servers the uptime has so far been significantly better, in that there were no hardware failures, one of the security upgrades was unnecessary, and the turnaround time for the remaining security upgrades was much faster than for this particular server.
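In code, that works out to:

```python
# Sanity-check the uptime figure quoted above.
hours_so_far = 336 * 24          # ~336 days into the year = 8064 hours
downtime = 12.69                 # total hours of downtime, worst case

uptime_pct = (1 - downtime / hours_so_far) * 100
print(round(uptime_pct, 2))      # 99.84

# For comparison, "three nines" (99.9%) over the same period
# would have allowed roughly 8 hours of downtime.
print(round(hours_so_far * 0.001, 1))  # 8.1
```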

For this particular server, the largest downtime contributors were security upgrades and network outages, in that order. On the network side, we got around to setting up our second upstream, but there are a number of single points of failure we should take care of in 2017. There is also some additional scripting we should probably do that would cut down network downtime a lot, such as automatically taking down BGP if connectivity beyond the first hop is lost.

For the security update downtime, I think our most realistic bet right now is to get ourselves on the latest version of Xen once it comes out. That will hopefully have a stable implementation (not a technology preview) for live patching.

"EDIT: Anyone cares to explain his reasons behind a downvote?"

I downvoted you because you talked about your downvotes.

Don't interrupt the discussion to meta-discuss the scoring system.

Write your post and live with the results.

It's probably because the ones listed above are the typical go-to VPSes for cheap hosting. The ones you mentioned are lesser known.

Are cores really comparable between DO and Lightsail? Are we sure that 1 core isn't something less than a real core, already over-allocated on the assumption of under 1 core of actual usage? We'd need to know the actual over-allocation factor to really figure out whether they're comparable.
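One rough way to check from inside the guest is CPU "steal" time: the share of time the hypervisor ran someone else's workload while your vCPU wanted to run. A sketch for Linux (it parses /proc/stat, so it is Linux-only, and on a quiet host the number may simply be 0):

```python
def cpu_steal_fraction(stat_path="/proc/stat"):
    """Fraction of total CPU time 'stolen' by the hypervisor.

    Parses the aggregate "cpu" line of /proc/stat, whose fields are:
    user nice system idle iowait irq softirq steal guest guest_nice.
    A persistently high value suggests an over-allocated host.
    """
    with open(stat_path) as f:
        fields = f.readline().split()
    assert fields[0] == "cpu"
    values = [int(v) for v in fields[1:]]
    steal = values[7] if len(values) > 7 else 0  # older kernels lack it
    total = sum(values)
    return steal / total if total else 0.0


print(cpu_steal_fraction())
```

It won't tell you the provider's over-allocation ratio, but sampling it under load on a $5 instance from each provider would at least show who's packing tenants tighter.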

To me this is what's important. I mainly use VPSes because I'm lazy! I have a bunch of $5 droplets that I use for development, and sometimes even just to move things around the net more easily. For my particular use case, I don't need to change unless Lightsail offers me a less crowded core.

Really, it just seems like AWS is fighting DO on this one, to get a share of their profits. My impression is they're looking for DO & AWS customers to stay on an Amazon-only stack. The comparison made by the commenter above actually makes me consider Vultr and Linode :)

That's exactly what makes Lightsail attractive to someone like me. I have production services on AWS and Linode, and I have only positive things to say about Linode, but it would be very nice to manage everything in one place.

More of the same with Ramnode. Really not a lot of competition, is there? Every plan starts around $5/mo for a KVM instance, and each tier increases all of the specs, while doubling the monthly fees each time. No ability to customize to your specific requirements.

What if I need a lot of CPU power, but not much bandwidth? What if I want lots of RAM, but don't need much disk space? What if I'd rather have an HDD with more storage than a faster SSD? There's nobody offering a "configure your own VPS specs" plan.

Google now has "Custom Machine Types" https://cloud.google.com/custom-machine-types/

Well, there's Amazon Web Services (EC2 / EBS), MS Azure, Google Cloud Platform (Compute Engine) to name but three.

Dediserve offer this


Since Oct 2014 I've paid $79.90 per month for 16 GB memory, 24 cores, 1 TB hard drive, 128 GB SSD, unmetered i/o, static IP, Windows O/S, on a dedicated physical machine.

I have no idea why people think Amazon pricing is worth it.

I'd love to see a comparison on core performance and SSD disk performance

I use VPSdime: $15-20 worth of VPS for $7. Seven months in, zero problems.

I'd be grateful if you used my link https://vpsdime.com/aff.php?aff=1272
