- No way of limiting the expenses AFAIK. I don't want the possibility of having a huge bill on my name that I cannot pay. This unfortunately applies to other clouds.
- The risk of being locked out. For many, many reasons (including the above), you can get locked out of the whole ecosystem. I depend on Google for both Gmail and Android, so being locked out would be a disaster. To use Google Cloud, I'd basically need to migrate out of Google in other places, which is a huge initial cost.
Both of those are basically risks. I'd much rather overpay $20-50/month than accept a small risk of having to pay $100k or of being locked out of Gmail/my phone. I cannot pay a $100k bill; it'd destroy everything I have.
Also, I haven't needed it so far. I've had a Node.js project on the front page of HN, with 100k+ visits, and the Heroku hobby server used 30% of the CPU with peaks at 50%. Writing the software decently does pay off.
In fact, I recently shut down a personal App Engine service I had been using for myself for a few years just because of this paranoia. The service was not doing anything illegal, just crawling a few websites (forums, ...) I like and sending me emails when there are interesting updates. But you never know if they might determine my outbound traffic is suspicious. I also started the long process of moving my main email from a @gmail.com to a @custom.domain that currently forwards to gmail, just in case I get locked out.
It is quite bizarre that this is the reputation Google has gained for themselves.
Agreed. I often second-guess my usage of various Google apps and services since I don't want to trigger some process that I would have no way of ever knowing about.
The recent case of someone getting banned from using Apple Pay comes to mind (https://news.ycombinator.com/item?id=20841586)
Apple says that their virtual card numbers protect your privacy because they're untraceable. OK, but that also means that your use of Apple Pay is mostly indistinguishable from credit card fraud.
But, you ask, does Google really have to worry that much about fraud? Do people really phish known-good Google accounts, add a stolen card, and then buy a whole bunch of ads?
Well.... yeah. That's actually one of the primary uses for stolen credit cards.
If only they had some process where a customer could agree to have Google publicly explain why an account was banned, I think we'd see many more explanations along the lines of "This customer was using Google Cloud to launch DDoS attacks" or "This customer sent bomb threats to the president".
Considering these links are all about people with valid businesses and apps, I doubt your examples apply.
The only thing that's preventing Google from telling their side of the story is their own refusal to engage human-to-human with individual customers.
> That's still your private mail. They can't go looking at it,
They definitely subject it to all kinds of automated scanning for spam and potentially abuse. The nearest thing to a public statement from Google on the subject seems to be: “very specific cases where you ask us to and give consent, or where we need to for security purposes, such as investigating a bug or abuse”
i.e. there may be abuse cases where they read your mail without asking.
I am willing to bet there would be zero explanations along those lines.
“Digital Ocean killed our company”
Big G's policy is to lock you out, provide no further comment, and offer no contact link.
What's the alternative you're proposing? Are you just saying Cloud Bad?
Google's algorithm can flag a chargeback as suspicious, leading you to be locked out.
(GOOG's market cap as I write this is $985B...)
Source: I work there.
Also: I would exhaust all my escalation options before going the cc route. With any retailer.
> Afaiu this doesn't cancel the contract
There is no contract if Google didn't send the phone. The commenter doesn't owe them money for something they failed to send.
> Then, teaching all the fraud detection systems that your exact usage pattern leads to fraud seems unpleasant too.
Or (hear me out, this might sound insane): a fucking human could talk to the customer and flag it as "not fraud". This is how every other company does it.
The solution to getting screwed by an algorithm is not to give in to the algorithm. It's to talk to a human to override it.
The ultimate solution, I hope, is that the next iteration of the federal government is pro-consumer and enforces our UCC rights and/or breaks up Google.
Don't know about your jurisdiction, but in most countries I've lived in, the law works otherwise. The moment you check out, you have a contract. The seller not sending out the goods is a failure of their contractual obligations. But that failure does not cancel the contract; it only allows you to execute the appropriate clauses of it (some of which usually lead to refund and cancellation). On the other hand, delivery of the goods doesn't usually prevent you from cancelling the contract (you usually have an obligation to return the goods in that case). The clauses injected by law tend to make this very consumer-friendly... but the ones I remember require you to deliver a notice of cancellation, and I'm not sure if withholding payment counts as such (IANAL, I don't even know your jurisdiction, yadda yadda).
Packages get lost all the time. They have a process to file it with their shipping partner.
You can also rate limit some APIs.
> You might cap costs because you have a hard limit on how much money you can spend on Google Cloud. This is typical for students, researchers, or developers working in sandbox environments. In these cases you want to stop the spending and might be willing to shutdown all your Google Cloud services and usage when your budget limit is reached.
Or if the rise in costs is something you can mitigate on your end - such as a bad deployment - you might want time to respond yourself, rather than your site going offline.
More generally, few site reliability engineers are looking to add extra ways for the site to be taken offline.
Of course, if you're large enough to be in those situations, your round-the-clock operations staff will be monitoring the billing situation as carefully as they monitor page load times and error rates and database load so an unexpected bill will be very unlikely.
SRE 101 is rate limiting everything and protecting against DDoS. With cloud and auto-scaling, the risks of DDoS are less about uptime and more about getting a bill that will bankrupt the business.
If your company has anything close to "reliability engineers", then you already have legal and finance teams too that can sort terms out.
The discussion is about companies that do not even have such departments to begin with.
Apart from the cases already mentioned in sibling comments, at some scale you start adding in outage switches in many cases. Basically quick ways to take parts of your service offline if something starts misbehaving.
Google prefers you be on the hook rather than them.
I also don't understand why this is being framed as a uniquely Google problem. Other cloud providers with serverless services have similar hazards and similar methods to manage the risk.
I don't think the alternatives are nearly that expensive. Vultr, DigitalOcean and others have virtual private servers for only $5 per month. They're small instances but totally fine for side-projects. I run a cheap $5/month VPS and it was able to withstand my project making the HN front page without issues. I don't use Google cloud hosting for the same reasons as you, I don't want to have too many eggs in one basket.
In our modern cloud age, I think we've forgotten how much a single box can actually handle (and how few of us actually _need_ to "scale" from day one). Hacker News front page isn't really that much traffic in the grand scheme of things. My $5 DO instance handled it without a sweat. Hell, even "real" projects can still work under this approach. A $20/mo DO box, sqlite, and a few shell scripts can get you shockingly far ^_^
I have 15 containers connected to 10 Postgres instances running right now handling tens of thousands of views per month for $20/mo, AND I have Heroku-like convenience to deploy with a "git push dokku master", without having to pay a minimum of $7/mo for each app I deploy. I can deploy a new app at no extra cost.
Sure, I have to patch my own OS (minimal effort but still effort) and backups/DR/HA is on me to provide, so it might not be for everyone. But I have a mantra that all my side projects combined need to be able to pay for all my side projects combined to keep me from spending too much, so keeping costs low is important. And that $20/mo would be over $100/mo on Heroku. For me it was a no-brainer. One low-revenue side project pays the bills for all my just-for-fun projects.
For backups: I have a bash script set up every night to run a pg_backup and send it to an S3 bucket where I store the last 7 days of backups. All static files (images mostly) are hosted on S3 with no real backup but that works fine for my particular use case.
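In case it's useful to anyone, here's a minimal sketch of that kind of nightly job (assuming pg_dump and the AWS CLI; the bucket name and paths are made up):

    #!/usr/bin/env bash
    # Nightly DB backup: dump, compress, ship to S3, expire after 7 days.
    set -euo pipefail

    STAMP=$(date +%F)
    pg_dump "$DATABASE_URL" | gzip > "/tmp/backup-$STAMP.sql.gz"
    aws s3 cp "/tmp/backup-$STAMP.sql.gz" \
      "s3://my-backup-bucket/db/backup-$STAMP.sql.gz"

    # Drop the dump from 7 days ago (an S3 lifecycle rule works too).
    aws s3 rm "s3://my-backup-bucket/db/backup-$(date -d '7 days ago' +%F).sql.gz" || true

Cron it at some quiet hour and test a restore once in a while; a backup you've never restored is a hope, not a backup.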
I just looked up DO's guides for Dokku and it seems like they're redirecting to the deploy page... that's a shame, they were quite good. In case it's just a bug on my side, here's the link: https://www.digitalocean.com/community/tags/dokku?type=tutor...
And Dokku's documentation, which is quite good as well: http://dokku.viewdocs.io/dokku/deployment/application-deploy...
For deploying, it works basically the same as Heroku except there's no GUI for it. Following Dokku's deploy guide top to bottom works perfectly. Look into Dokku plugins for things you might want/need (database support is a plugin, for example) and it uses a system called "herokuish" to allow Heroku buildpacks to work if you have weird stacks like React on Rails. Or you can bring your own Dockerfiles and avoid buildpacks altogether. Ultimately Dokku just manages Docker containers like a lightweight, single host Kubernetes.
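For the curious, the whole Dokku flow really is just a handful of commands. A sketch with made-up host/app names (the plugin install line is the one from Dokku's docs):

    # on the server
    dokku apps:create myapp
    sudo dokku plugin:install https://github.com/dokku/dokku-postgres.git postgres
    dokku postgres:create myapp-db
    dokku postgres:link myapp-db myapp    # exposes DATABASE_URL to the app

    # on your machine
    git remote add dokku dokku@myserver.example.com:myapp
    git push dokku master                 # builds (buildpack or Dockerfile) and deploys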
Eventually I'm going to have to migrate to Kubernetes... Dokku's lack of built-in HA/DR/load balancing is its main drawback. But it's served me well for years with very minimal maintenance. I hardly ever even think about my infrastructure stack because it just gets out of the way. Which is incredible because it's so small and lightweight, built mostly with Bash scripts.
We also tend to think that "scale" can fix crap software. Pick/write decent web apps and you can worry far less about scale.
- Hobby server, at $7/month
- Database, there are many but add another $5-$15/month
- Redis, either $0/month or $15/month, depending on the needs
If your hobby project goes down for a while, nobody will remember it; having to pay $100k, you'll remember for an eternity.
Also, I know I make mistakes, both in coding and in setting things up, which can easily trash things and put a 10x-100x multiplier on the cost. The risk is small, but the consequences are horrible, so I prefer to avoid this risk.
Edit: also note that even a $10k bill would be horrifying for most personal projects, and $1,500 is more than what most programmers in most of the world save monthly.
I've published stats on my few posts that reached the HN front page here: https://thehftguy.com/2017/09/26/hitting-hacker-news-front-p...
> The service will create more and more instances of your application up to the limit you defined
The Cloud Run docs confirm an instance maximum can be set and the price per instance can be less than $5/month.
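Assuming the flags still work the way the docs describe, capping a deployment looks something like this (service and image names invented):

    gcloud run deploy my-service \
      --image gcr.io/my-project/my-image \
      --max-instances 3 \
      --concurrency 80

With a hard ceiling of 3 instances, a traffic spike degrades into queueing or errors instead of an open-ended bill.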
Understanding the cost of these services is not easy at all, especially for extreme cases/situations. And a calculator/estimator won't fix the problem. That is why I love a fixed $X/month where there's no room for surprises.
Keep your stuff backed up.
I think some folks may be overestimating their ability to put a dent in Google's infrastructure.
- 80 concurrent requests per container instance
- 1,000 ms execution time per request
- 5 KB outbound network bandwidth per request
- 100 million requests per month
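Rough math on those assumptions, using the per-second rates quoted elsewhere in the thread, a 2 GiB instance, $0.40 per million requests, and ~$0.12/GB egress (ballpark, not a quote):

- Billable instance time: 100M requests × 1 s ÷ 80 concurrent ≈ 1.25M instance-seconds
- CPU: 1,250,000 × $0.00002400 ≈ $30
- Memory: 1,250,000 × 2 × $0.00000250 ≈ $6
- Requests: 100M × $0.40/million = $40
- Egress: 100M × 5 KB ≈ 500 GB × $0.12 ≈ $60

That lands around $135/month before free-tier deductions, in the same ballpark as the ~$120 figure used below.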
What if we bump it up to 100 KB per request? In my experience only initial requests end up being enormous, especially in single-page apps. But to be fair, some folks may not have time to optimize. That still only brings the monthly bill to $1,071.48.
Then again that second estimate probably isn't relevant since I typically host my static data on a CDN.
If you give me $1,071 - $120 = $951/month forever for optimizing an app once, I'll optimize it for you :)
I don't know how optimizing comes into the equation. The app you build and deploy to Cloud Run leverages their infrastructure. A fiber optic cross-connect (with just one telecommunications company) would probably cost close to $1,000/month alone, but Google is probably peered with every telecommunications company in the entire world, and they don't have just one fiber optic line connecting to each of them. So it's not really a one-time optimization when you put your work on Google. It's like paying Google $1,071 to rent the infrastructure of their entire network to receive and distribute data in my name.
In just minutes I had passed the free quota, and I was lucky here because I happened to check the console.
If I had left that version running for a few hours (and I was sure it was just an innocent commit), I would have been in for a surprise.
For me, this is the main reason I try not to use any Google products, except for Gmail and Android. For mail, I've started to migrate away from Google to reduce the risk of being locked out.
It’s what I’ve done for years: basically, for every customer I work with, I create a new account and even share the credentials with the customer (if they want them).
Same with Amazon accounts and AWS bills for that matter.
I understand the concern though... and using separate accounts is probably best practice.
This pattern seems common among business people: prioritizing things working in the common case over things working in corner cases; it’s how you end up with consumer Windows running critical machines. I’m always shocked and moderately disturbed when I see it, but I guess we all need to accept the reality that most people are very pragmatic. It makes sense: most people’s intuition comes from “the real material world”, where you have to be pragmatic. I think many of them fail to realize that on a computer you don’t have to give up certainty the way you do in “the real world.”
> Humans may be hardwired to be loss averse due to asymmetric evolutionary pressure on losses and gains: for an organism operating close to the edge of survival, the loss of a day's food could cause death, whereas the gain of an extra day's food would not cause an extra day of life (unless the food could be easily and effectively stored).
For lots of companies an accidental over-use of tens or hundreds of thousands of dollars is an annoyance, but for a single person that could bankrupt them. I generally avoid programmatically interacting with cloud providers on my own time for exactly this reason. One mistake in a loop can get expensive fast.
Also, show us how you would rack up a $100k bill for a side project that receives a traffic spike. It's simply not realistic.
Auto-scaling magic is lovely in theory, but in practice it is hard.
Anecdata: a video streaming startup here in Newcastle that enabled DJs to stream live sets was used to illegally stream some football games. The subsequent bandwidth bill killed the startup. Yes, they got some things wrong with their tech, and security, but that's the danger that puts people off using "clever" services.
Personally I don't think I can go straight up Heroku or DO because I like things like firestore/dynamo, S3, etc etc. But this is pushing me to move everything I do over to AWS. The only thing is I am very comfortable in GCP, so that would kind of suck. bleh.
I wonder how hard it would be to run a script that checks your balance e.g. every 15 minutes, and shuts down public access to your services when a certain threshold is triggered. I wonder if a ready-made service for that exists in cloud providers' offerings.
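A DIY version on GCP could look roughly like this, assuming you've enabled the billing export to BigQuery (project, dataset, and table names are all made up):

    #!/usr/bin/env bash
    # Hypothetical cron job (*/15 * * * *): compare month-to-date spend
    # against a limit and pull the billing plug if it's exceeded.
    LIMIT=50  # dollars
    SPEND=$(bq query --nouse_legacy_sql --format=csv '
      SELECT ROUND(SUM(cost), 2)
      FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`
      WHERE invoice.month = FORMAT_DATE("%Y%m", CURRENT_DATE())' | tail -n 1)
    if (( $(echo "$SPEND > $LIMIT" | bc -l) )); then
      # Unlinking billing shuts down all paid resources in the project.
      gcloud beta billing projects unlink my-project
    fi

The catch: the billing export itself lags by hours, so this is a coarse safety net rather than a real-time cap.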
Welcome to US healthcare. Happens daily! :)
It's surprising that no major cloud provides a prepaid option, which would be handy for cases like this.
It is a best-practice to have a GSuite account instead of a consumer-grade Gmail account to manage an associated GCP account.
It is a bit onerous for a hobbyist, admittedly. But if it's anything more ambitious than that, do you _really_ want Google scraping the contents of your email while you build the Next Great Thing? Try not to use a consumer account.
They force-upgraded the Java version. The problem was that their own libraries didn’t work with the new version and we had to rewrite a ton of code.
It ended up being insanely expensive at scale.
We were totally locked-in to their system and the way it did things. This would be fine but they would also deprecate certain things we relied upon fairly regularly so there was regular churn to keep the system running.
Support was extremely weak for some parts of the system. Docs for Java were outdated compared with the Python docs.
Support (that we paid for) literally said to us “oh... you’re still using appengine?”
Finally, they can jack up the pricing at any time and there really isn’t anything you can do - you can’t switch to an alternative appengine provider.
Oh, and when we got featured on a bunch of news sites, our “scalable” site hit the billing threshold and stopped working. No problem, just update the threshold, right? Except it takes twenty-four hours (!) for the billing stats to actually update. So we were down on the one day that “unlimited scaling” actually mattered to us.
I’m never again choosing a single-vendor lock-in solution. Especially since it’s not limited to appengine - Google once raised the fees for the maps API from thousands a year to eight figures (seriously) a year with barely any notice.
App Engine was the very first PaaS; it came out before Docker and did things in a very unusual way in order to allow only scalable apps. App Engine standard has to explicitly create special environments for each of its runtimes, and that's slow and expensive. Services like Datastore and Memcache were tightly coupled.
Cloud Run fixes all that. It's just a Docker container that listens on the PORT env variable. Use whatever runtime you want. Run the same container locally, or on another cloud provider. The other services like Firestore or Memorystore (Redis) are truly optional and external.
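That contract is easy to verify locally, which is the whole point. Something like (image name invented):

    docker build -t my-app .
    docker run -e PORT=8080 -p 8080:8080 my-app   # same PORT contract Cloud Run uses
    curl http://localhost:8080/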
Cloud Run is what lets you avoid single-vendor lock-in but still get scale-from-zero.
Cloud Run, however, uses standard containers, so as long as you don't use Google proprietary stuff on the backend it's relatively easy to move. As the article mentions, it's useful for low-traffic projects, and if they pick up you can move them to full-time instances.
Certain early decisions are still biting GCP.
Well, there's an open-source API-clone of appengine...
Interestingly, AWS will not cut you off for non-payment (we had an issue with finance and were $250k in the red by the time we got the first "Are there any issues over there?" email).
If your only payment method is a credit card, IMHO allowing a past-due account to continue generating cost is a very risky move and an easy target for abuse (imagine stolen credit cards, etc.).
Disclaimer: I work for Google Cloud as an SRE and have first-hand experience dealing with past-due accounts (it was highly manual and not automated).
(I work on GCP but not this product)
Or think again about whether you really need "infinitely scalable blah blah" for a personal project on a small budget.
Since it wasn't really worth debugging what went "wrong", it got chalked up as a learning experience and moved back to the laptop, where it's happily running. If it dies, whatever, it's no big deal: just git clone; ./runme; and it rebuilds the whole database and creates another web-facing interface.
The IaaS guys are masters of marketing. They have convinced everyone that their offerings are less expensive and more reliable, which is repeatedly proven false in all but the most dynamic of situations. In this case it's saving $7.99 a month over an unlimited-site shared hosting godaddy/whatever plan. Just in case it might need to scale a lot, in which case you're going to be paying hundreds if not thousands in surge pricing.
People should not be scared off by these anecdotes, however true they may be. It’s perfectly possible to run a very cost-effective business on AWS.
Sure, to most people it will never happen. But the risk is always there. Reminds me of https://en.wikipedia.org/wiki/Survivorship_bias
If you are a programmer, I find the comment kind of puzzling; maybe I am reading you wrong, but you seem to be saying that you write code for some company and are happy to commit it without caring whether the company loses money if your code is bad? That is, you would not bet on your work yourself, but you do not mind your employer paying you for it and thereby betting some of its financial future on your work? Sorry if I misunderstood your comment.
Even if you are hyper-competent and can probably get all of this right, you can't rest easy. You simply don't know whether you did everything correctly or not. Just one dumb mistake can saddle you with an enormous bill.
This is just like gun safety: don't point your gun at anything you're not intending to shoot. Mistakes happen and the consequences of it can be catastrophic.
If you never pay then expect some aggressive account shutdown, bans across all connected user accounts, and calls from debt collections.
Seems like this is a huge dealbreaker. You cannot be expected to perfectly audit your side project for security. Suppose someone finds a remote code execution and starts mining Monero on it. Or someone just points a botnet at it. You could be on the hook for an unlimited amount of money.
Might as well just install Kubernetes on a monthly-fee VPS and pretend you're using GCP.
It boggles the mind that AWS and GCP don't offer a payment cap - for that reason alone, I wouldn't go near them for side projects.
I have a side project that uses AWS at the moment, and while stuff like serverless RDS instances is really cool, it scares me that somehow Amazon is going to have a bug which empties my checking account. I've read as much of the documentation as I can find and have done everything I can to prevent this from happening according to AWS's documentation, but it still worries me.
In fact, here is a banking feature I'd love to see: per-merchant daily spending limits. I would love to be able to tell my bank that Amazon is allowed access to up to $20/day or so, after which they get rate limited and I have to intervene.
Uh oh I better stop before I start advocating for the blockchain, haha
1. notify me so I can ok continued service
2. start throttling usage
3. turn off if #1 has not responded and set new limits etc.
EDIT: This comment is based on the assumption that the parent was trying to start a side business. I'm certainly not advocating fraud.
The IRS, for what it's worth, treats a single-member LLC as a sole proprietorship and all losses/gains are considered personal income.
As always, check your state laws and consult a real, in-person lawyer.
There are nuances to this, obviously fraud is fraud, regardless of using a shell company to perpetrate it, and a new business is unlikely to get a credit limit large enough to be a worthwhile avenue for fraud.
BUT if you have "malicious intent" -- meaning you build an LLC on purpose to overrun costs and get free stuff -- you CAN and WILL be sued and face penalties, because that's just cut-and-dried fraud.
(Incidentally, it's fairly common to structure a pair of ltds with the assets in one and the liabilities in another. If the whole thing crashes and burns, at least the IP doesn't go with it).
Note that this is a vague, general answer and most certainly does NOT constitute legal advice.
If you do allow some kind of ordered shutdown of service, what is the UX going to look like? What is the API going to look like?
Having a cap is not an easy feature to design or implement. And on a backlog, it is going to be a “multi quarter, low impact” item—meaning it will never get built.
Still, I bet there are a lot of companies that might be more upset with the provider shutting everything down even if they had set up a cap.
Maybe just a simple array with the list of services to be shut down, in the order provided? This should be enough for hobby and smaller projects; bigger projects will have plans/people in place anyway.
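Something as simple as this would cover the hobby case (all names hypothetical, stop_service standing in for whatever the provider's stop call would be):

    # Shut services down in this order once the spending cap is hit:
    SHUTDOWN_ORDER=(frontend workers cache database)
    for svc in "${SHUTDOWN_ORDER[@]}"; do
      stop_service "$svc"
    done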
It is not that they can't do it - we are talking about a company with insane technical ability and resources. It is just that they don't want to do it.
>Warning: This example removes billing from your project, shutting down all resources.
This is the digital equivalent of literally pulling the plug. I wonder how long it takes for GCP to register that you've pulled billing, though? In some cases I'm sure even milliseconds could make a large difference.
Not only do I have to worry about bad code causing a huge bill, I also have to worry about the quality of code (or config) that emergency stops the billing.
I wonder if you can sign up with a pre-paid credit card and use a fake name.
I'm luckily still on Azure's MSDN tier... which is hard-capped at 150 bucks.
The only complaint I have with Cloud Run now (after many usability updates since the initial release) is that there is no IP rate-limiting to prevent abuse, which has been the primary cause of unexpected costs. (due to how Cloud Run works, IP rate-limiting has to be on Google's end; implementing it on your end via a proxy eliminates the ease-of-use benefits)
How to keep a Cloud Run service “warm”?
You can work around "cold starts" by periodically making requests to your Cloud Run service, which can help prevent the container instances from scaling to zero.
Use Google Cloud Scheduler to make requests every few minutes.
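e.g. something like (job name, schedule, and URL are placeholders):

    gcloud scheduler jobs create http keep-warm \
      --schedule "*/5 * * * *" \
      --uri "https://my-service-abc123-uc.a.run.app/"

Note you're then paying for those requests, and a deploy or traffic dip can still cold-start you; this narrows the window rather than closing it.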
Does my application get multiple requests concurrently?
Contrary to most serverless products, Cloud Run is able to send multiple requests to your container instances to be handled simultaneously.
Each container instance on Cloud Run is (currently) allowed to handle up to 80 concurrent requests. This is also the default value.
What if my application can’t handle concurrent requests?
If your application cannot handle this number, you can configure this number while deploying your service in gcloud or Cloud Console.
Most of the popular programming languages can process multiple requests at the same time thanks to multi-threading. But some languages may need additional components to handle concurrent requests (e.g. PHP with Apache, or Python with gunicorn).
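For example, dialing an existing service down to one request at a time (service name invented):

    gcloud run services update my-service --concurrency 1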
If you're familiar with AWS, would using AWS Batch exclusively with Spot pricing (or Fargate with Spot pricing) and Aurora Serverless have been cheaper than Cloud Run + Cloud SQL?
Say the service runs for 5 mins every hour for 30 days. The respectable a1.large instance would cost $0.50 per month; the cheapest t3.nano would cost around $0.19 per month.
Say the service stores a rolling 10 GiB/month ($1.00) and does about 1,000 calls per 5 minutes every hour ($0.20) using 2 ACUs ($3.60). This would cost $4.80 per month.
From the pricing (https://cloud.google.com/run/pricing):
12 s of vCPU time (12 × $0.00002400) + 12 s of 2 GiB memory (12 × 2 × $0.00000250) = $0.000348 per text
...and that's assuming you go over the free tier limit.
Just reading this line makes you suspicious:
"I have built hundreds of side projects over the years "
And then below:
"I am yet to have a side project go ‘viral’"
Out of hundreds of projects over the years, none of them went viral?
And if you look at his "blog" you will see it has 3 entries in total: https://alexolivier.me/
> Please don't post insinuations about .. shilling.. It degrades discussion and is usually mistaken. If you're worried about abuse, email us and we'll look at the data.
Also: every blog starts somewhere. Plenty of stuff in the author's GH and linkedin, both linked from TFA.
They mention Cloud SQL, which is of course instance-based and would run into scaling issues if your app suddenly got hammered. Not to mention, the cost isn't $0 if your app gets 0 traffic; you are going to have to pay to keep it running around the clock.
I realize some applications are very heavy on the app side and light on needing to hit the DB, but in my experience, that isn't very common.
So if you are serving up just a few megabytes and hit a few million page views you may find an AWS bill for a few thousand dollars.
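To put rough numbers on that: at AWS's ~$0.09/GB internet egress rate, 3 MB per page view × 2 million views ≈ 6 TB out, or around $540 in bandwidth alone; 5 MB × 5 million views is ~25 TB and ~$2,250.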
To me, with my limited understanding, this seems like just another step.
Now granted, when it comes to work I'm using Docker with specifics: I know why I would want it and what to specify with Docker... but for personal projects it just never comes up for me.
Yet on the other hand, I see more and more examples that involve Docker where I'm not sure it needs to be there / what the advantage is.
Obviously there must be some strategic choices / advantages I'm missing.
You can use containers on Heroku, by the way.
So far so good. Managed to get GitLab, Prometheus, Grafana and Ghost working this weekend, which I'm pretty chuffed about.
Not as clean as OP's, but the intention was learning, so sacrifices on convenience are acceptable.
You’d be surprised how much you can handle with a single beefy VPS or dedicated.
I'm paying $7 for a 16 GB / 4-core Ryzen I snagged on a special. That's a good 10x off what big cloud would give me.
...downside is it needs to be engineered for "provider may disappear overnight", so the backup strategy needs to be on point.
Another advantage not stated by others is that Cloud Run (mostly) adheres to the Knative API spec. This allows you to lift and shift your application from Cloud Run to a Kubernetes cluster with the Knative plugin installed, a feature you can’t find with any other serverless product.
Not sure why you write "of course" and "nothing's for free". If you use dedicated hosting with something like Hetzner, "suddenly hit a big traffic" doesn't automatically mean more costs for you. The website might get slower, or even crash, but your pricing will still be the same as it was before.
€2.99 for 2 vCPUs, 2 GB RAM, 200 Mbit/s bandwidth.
See the "Traffic" section on https://www.scaleway.com/en/object-storage/