Hacker News
Deploy your side-projects at scale for basically nothing – Google Cloud Run (alexolivier.me)
722 points by alex-olivier on Jan 12, 2020 | 380 comments

I have used it for work-related reasons and indeed the service is quite nice. But I don't use Google Cloud Run for personal projects for two reasons:

- No way of limiting the expenses AFAIK. I don't want the possibility of having a huge bill on my name that I cannot pay. This unfortunately applies to other clouds.

- The risk of being locked out. For many, many reasons (including the above), you can get locked out of the whole ecosystem. I depend on Google for both Gmail and Android, so being locked out would be a disaster. To use Google Cloud, I'd basically need to migrate out of Google in other places, which is a huge initial cost.

Both of those are basically risks. I'd much rather overpay $20-50/month than have a small risk of a $100k bill or of being locked out of Gmail/my phone. I cannot pay a $100k bill; it'd destroy everything I have.

Also I haven't needed it so far. I've had a Node.js project on the front page of HN, with 100k+ visits, and the Heroku hobby server used 30% of the CPU with peaks at 50%. Trying to do the software decently does pay off.

I had the same thoughts: even if I like Google Cloud a lot (I use it extensively at work), I don’t feel it’s safe for me to use it at home, since I don’t want to risk having my entire google account locked due to “suspicious activity”, whatever that might mean.

In fact, I recently shut down a personal App Engine service I had been using for myself for a few years just because of this paranoia. The service was not doing anything illegal, just crawling a few websites (forums, ...) I like and sending me emails when there are interesting updates. But you never know if they might determine my outbound traffic is suspicious. I also started the long process of moving my main email from a @gmail.com to a @custom.domain that currently forwards to gmail, just in case I get locked out.

It is quite bizarre that this is the reputation Google gained for themselves.

> since I don’t want to risk having my entire google account locked due to “suspicious activity”, whatever that might mean.

Agreed. I often second guess my usage of various Google apps and services since I don't want to trigger some process that I would have no way of ever knowing.

The recent case of someone getting banned from using Apple Pay comes to mind (https://news.ycombinator.com/item?id=20841586)

Sounds like self-censorship in a dystopia full of secret laws. I'm glad Google isn't running a country.

Their reputation has turned dramatically. Back in the day Penn and Teller's episode of BS about the death penalty said they might not mind the death penalty if Google were in charge of it. Maybe they were somehow being presciently ironic?

Not only Google: just visit any online space and you will see a lot of arbitrariness, inconsistent rules, obscure decision making, etc. I honestly think that the best people to rule a country are the politicians: they are corrupt, narcissists and dangerous, but at least they are somewhat professional in what they do.

Sounds a lot more like abuse detection signals firing based on Apple Pay using virtual card numbers.

Apple says that their virtual card numbers protect your privacy because they're untraceable. Ok, but that also means that your use of Apple Pay is mostly indistinguishable from credit card fraud.

But, you ask, does Google really have to worry that much about fraud? Do people really phish known-good Google accounts, add a stolen card, and then buy a whole bunch of ads?

Well.... yeah. That's actually one of the primary uses for stolen credit cards.

I used to use google docs until they randomly locked one of the docs I was working on for a week due to one of their "suspicious activity" scripts. Really hammered in the message that if you don't host it then you don't own it.

That's actually kinda nice of them. Instead of waiting till we were totally and completely locked in to play big bad wolf, they've done it earlier while there's still time to get the message out.

I think the issue is you aren’t paying for it, so you don’t own it. Paid hosted services cannot pull this crap.

Paid G suite can and does pull this crap. There was a comment on HN a year-ish ago, when someone's entire ~100 person company (almost?) went out of business because Google flagged the personal Gmail of the domain admin, this "spread" to their company email, Google closed it and losing the admin account made the entire domain get deleted. Not "blocked" or "pending review" - deleted! IIRC even pulling personal favors at Google couldn't save them.

Do you have a link to this story?

Google's main problem here is they can't tell their side of the story.

If only they had some process where a customer could agree to have Google publicly explain why an account was banned, I think we'd see many more explanations along the lines of "This customer was using Google cloud to launch Ddos attacks" or "This customer sent bomb threats to the president".

How is that a problem? What's stopping them? Google can easily write blog posts or release post mortems or have the dozens of PMs that visit HN talk about it.

Considering these links are all about people with valid businesses and apps, I doubt your examples apply for violations.

What law do you think is preventing them from doing that already? Especially where US residents with no general privacy law are concerned?

The only thing that's preventing Google from telling their side of the story is their own refusal to engage human-to-human with individual customers.

Both the law and Google's privacy policy stop them from telling the world if you sent bomb threats to the president. That's still your private mail. They can't go looking at it, let alone tell the world about it.

> the law

Which law?

> That's still your private mail. They can't go looking at it,

They definitely subject it to all kinds of automated scanning for spam and potentially abuse. The nearest thing to a public statement from Google on the subject seems to be: “very specific cases where you ask us to and give consent, or where we need to for security purposes, such as investigating a bug or abuse”

i.e. there may be abuse cases where they read your mail without asking.

Downvote - Google could reply here on HN, which is one of (if not THE) top developer site in the world.

> more explanations along the lines of

I am willing to bet there would be zero explanations along those lines.

Paid hosted services can and do pull this crap, and it’s not just Google.

“Digital Ocean killed our company” https://news.ycombinator.com/item?id=20064169

At least DO was responsible and transparent, and disclosed the bug to the public. After reading through the post, I felt much more confident about using their services, especially after this comment: https://news.ycombinator.com/item?id=20119939

Big G's policy is to lock you, provide no further comments and no contact link.

“Transparency” due to blowup of bad publicity on HN. They’re simply not big enough to ignore this audience. They wouldn’t have given a damn if they were, just like they ignored many cases that didn’t blow up.

Sure, fine, but Google blows up HN here and there with horror stories, and because of search — and, I guess, the fact that 2024 is when they're possibly nuking Cloud anyway — the companies in question still got deleted.

What's the alternative you're proposing? Are you just saying Cloud Bad?

Quite a lot of paid services have done exactly this. They use vague all-encompassing terms of service designed to give them complete control. Pretty much anything can be used as a violation of those terms, allowing them to keep your money while also blocking all access and even deleting your data. Very few customers have the legal and financial resources to unblock this if it does happen.

I ordered a phone from Google that's been lost in delivery. I have Gmail/documents/photos/music... Should I do a charge-back? Sue them in small claims court? I should never have done business with them.

Absolutely don't do a chargeback. Try to settle it with the package carrier.

Google's algorithm can flag a chargeback as suspicious, leading you to be locked out.

This is a delightfully horrifying response... "Don't poke the trillion dollar bear, he might bite you and lock you out of your account!"

(GOOG's market cap as I write this is $985B...)

It is horrifying and it should be illegal to provide so little customer service when you encourage deep, irreplaceable investments into your platform.

At least have all your account recovery lined up before you try a chargeback. It is taken as an extremely strong signal that the account has been compromised and is being used to defraud the rightful owner.

Source: I work there.

Also: I would exhaust all my escalation options before going the cc route. With any retailer.

Having thought about this some more, I really can't recommend doing a chargeback with any company you want to keep taking your money. Afaiu this doesn't cancel the contract, so you owe them the money unless you manage to void the contract, in which case they should send the money back anyway. Then, teaching all the fraud detection systems that your exact usage pattern leads to fraud seems unpleasant too. Just too many ways for this to backfire, even if the company doesn't take offense.

This is a response from a dystopian anti-consumer future that we seem to be living in because Google thinks customer service has no value.

> Afaiu this doesn't cancel the contract

There is no contract if Google didn't send the phone. The commenter doesn't owe them money for something they failed to send.

> Then, teaching all the fraud detection systems that your exact usage pattern leads to fraud seems unpleasant too.

Or (hear me out, this might sound insane): a fucking human could talk to the customer and flag it as "not fraud". This is how every other company does it.

The solution to getting screwed by an algorithm is not to give in to the algorithm. It's to talk to a human to override it.

The ultimate solution, I hope, is that the next iteration of the federal government is pro-consumer and enforces our UCC rights and/or breaks up Google.

> There is no contract if Google didn't send the phone. The commenter doesn't owe them money for something they failed to send.

Don't know about your jurisdiction, but in most countries I've lived in, the law works otherwise. The moment you check out, you have a contract. The seller not sending the goods is a failure of their contractual obligations. But that failure does not cancel the contract; it only allows you to execute the appropriate clauses of it (some of which usually lead to refund and cancellation). On the other hand, the delivery of goods doesn't usually prevent you from cancelling the contract (you usually have an obligation to return the goods in that case). The clauses injected by laws tend to make this very consumer friendly... but the ones I remember require you to deliver a notice of cancellation, and I'm not sure failing to pay counts as one (IANAL, I don't even know your jurisdiction, yadda yadda).

Just contact them and they'll send you another one, like pretty much any retailer.

Packages get lost all the time. They have a process to file it with their shipping partner.

Same thing happened to me once. I called Google support and they sent me another phone.

Same here. It seems very unlikely they would not resend a phone lost in delivery. When it happened to me, they immediately ordered a replacement device.

Absolutely don't do a chargeback. They will lock out anything that requires payment, and possibly more.

The “oh noes!” lock-in arguments are comical. Everything is lock-in. And it's unlikely Google makes some radical anti-consumer change to screw people: that would hurt its efforts to be the #1 or #2 player. If we just focus on building on the things these cloud providers have built, we can stop being leery, focus on the product, and quit wasting cycles on things that don't matter, like the lock-in fear.

You may have misinterpreted the comment chain. It sounds like you're talking about vendor lock-in. They're talking about being locked out of their Google account due to a Google bot incorrectly categorizing their work as spam or abuse. The implication of being locked out is that they can't use anything related to their Google account. That could include their personal phone, personal email account and personal cloud services.

You can set up alerts if you exceed a budget, and you can program a response to turn off billing on a project. Google has a guide for doing exactly what you want. It's not a particularly clean fix, but it is fairly easy (copy paste) and can be done. It also allows for fine-grained control, e.g. you could kill an expensive backend process but keep the frontend running.

You can also rate limit some APIs.
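For anyone curious what that looks like in practice, here's a minimal sketch with the gcloud CLI. The billing account and project IDs are placeholders, and on older SDKs the budgets commands may live under `gcloud beta billing`:

```shell
# Create a $25/month budget with alerts at 50%, 90% and 100% of the cap
gcloud billing budgets create \
  --billing-account=000000-AAAAAA-BBBBBB \
  --display-name="side-project-cap" \
  --budget-amount=25USD \
  --threshold-rule=percent=0.5 \
  --threshold-rule=percent=0.9 \
  --threshold-rule=percent=1.0

# The "nuclear option" a budget notification handler can trigger:
# detaching billing from the project, which shuts down paid services
gcloud billing projects unlink my-side-project
```

Note that budget alerts are delayed (spend is reported with some lag), so this bounds the damage rather than enforcing a hard cap.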


> You might cap costs because you have a hard limit on how much money you can spend on Google Cloud. This is typical for students, researchers, or developers working in sandbox environments. In these cases you want to stop the spending and might be willing to shutdown all your Google Cloud services and usage when your budget limit is reached.

Is this not typical for...literally every use case? No one has an unlimited budget.

Depends - if your profits increase in line with your expenses, you might be raking in the ad revenue/views/sales/whatever because your product 'went viral' and you might prefer to keep it up.

Or if the rise in costs is something you can mitigate on your end - such as a bad deployment - you might want time to respond yourself, rather than your site going offline.

More generally, few site reliability engineers are looking to add extra ways for the site to be taken offline.

Of course, if you're large enough to be in those situations, your round-the-clock operations staff will be monitoring the billing situation as carefully as they monitor page load times and error rates and database load so an unexpected bill will be very unlikely.

> More generally, few site reliability engineers are looking to add extra ways for the site to be taken offline.

SRE 101 is rate limiting everything and protection against DDoS. With cloud and auto scaling risks of DDoS are less about uptime but more about getting a bill that will bankrupt the business.

> few site reliability engineers are looking to add extra ways for the site to be taken offline

If your company has anything close to "reliability engineers", then you already have legal and finance teams too that can sort terms out.

The discussion is about companies that don't even have such departments to begin with.

> few site reliability engineers are looking to add extra ways for the site to be taken offline.

Apart from the cases already mentioned in sibling comments, at some scale you start adding in outage switches in many cases. Basically quick ways to take parts of your service offline if something starts misbehaving.

Sure, but typically a business will have specific resources they’d be willing to shut down rather than the entire billing account.

Google doesn't have good enough billing systems to guarantee a limit on your spend. Lots of billing computations are only done daily, for example, meaning you could spend millions of dollars before the billing run at the end of the day.

Google prefers you be on the hook rather than them.

Google themselves recommend using the limits like max instances to mitigate the risk of out of control costs.

I also don't understand why this is being framed as a uniquely Google problem. Other cloud providers with serverless services have similar hazards and similar methods to manage the risk.
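For Cloud Run specifically, those limits are deploy-time flags. A hedged example with the gcloud CLI (the service, project, and image names are made up):

```shell
# Cap the blast radius: at most 2 instances, each taking up to
# 80 concurrent requests, so worst-case spend is roughly bounded
gcloud run deploy my-side-project \
  --image=gcr.io/my-project/my-side-project \
  --max-instances=2 \
  --concurrency=80 \
  --memory=256Mi
```

With a traffic spike beyond 2 × 80 in-flight requests, excess requests queue or get 429s instead of spawning more billable instances.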

> I'd much rather overpay $20-50/month

I don't think the alternatives are nearly that expensive. Vultr, DigitalOcean and others have virtual private servers for only $5 per month. They're small instances but totally fine for side-projects. I run a cheap $5/month VPS and it was able to withstand my project making the HN front page without issues. I don't use Google cloud hosting for the same reasons as you, I don't want to have too many eggs in one basket.

+1 to just running something on a cheap Digital Ocean style box.

In our modern cloud age, I think we've forgotten how much a single box can actually handle (and how few of us actually _need_ to "scale" from day one). Hacker News front page isn't really that much traffic in the grand scheme of things. My $5 DO instance handled it without a sweat. Hell, even "real" projects can still work under this approach. A $20/mo DO box, sqlite, and a few shell scripts can get you shockingly far ^_^

A few years ago I set up a $5 DO droplet with the Dokku image they provide. Years later it's still running all of my side projects in production, even though I moved from the $5/mo plan to the $20/mo plan as my business grew and my needs increased.

I have 15 containers connected to 10 Postgres instances running right now, handling tens of thousands of views per month for $20/mo, AND I have Heroku-like convenience to deploy with a "git push dokku master", without having to pay a minimum of $7/mo for each app I deploy. I can deploy a new app at no extra cost.

Sure, I have to patch my own OS (minimal effort but still effort) and backups/DR/HA is on me to provide, so it might not be for everyone. But I have a mantra that all my side projects combined need to be able to pay for all my side projects combined to keep me from spending too much, so keeping costs low is important. And that $20/mo would be over $100/mo on Heroku. For me it was a no-brainer. One low-revenue side project pays the bills for all my just-for-fun projects.

Are you hosting the database on the same instance? Also, how are you doing automatic backups?


Yes the database is on the same instance. The biggest downside to that is my droplet gets low on space as the database grows but so far it hasn’t been too much of an issue. The growing need for SSD space has pretty much matched the growing need for RAM as I increase the droplet size.

For backups: I have a bash script set up every night to run a pg_backup and send it to an S3 bucket where I store the last 7 days of backups. All static files (images mostly) are hosted on S3 with no real backup but that works fine for my particular use case.

Dokku's great, but it doesn't support ARM. Anyone who wants that can try Piku, which is even smaller!


Sounds like a great setup. Do you have any documentation on it that others could follow to build something similar or links to tutorials that helped you?

Luckily Digital Ocean and the Dokku project have pretty great documentation on their own. Here's the one-click image for DO: https://marketplace.digitalocean.com/apps/dokku

I just looked up DO's guides for Dokku and it seems like they're redirecting to the deploy page... that's a shame, they were quite good. In case it's just a bug on my side, here's the link: https://www.digitalocean.com/community/tags/dokku?type=tutor...

And Dokku's documentation, which is quite good as well: http://dokku.viewdocs.io/dokku/deployment/application-deploy...

For deploying, it works basically the same as Heroku except there's no GUI for it. Following Dokku's deploy guide top to bottom works perfectly. Look into Dokku plugins for things you might want/need (database support is a plugin, for example) and it uses a system called "herokuish" to allow Heroku buildpacks to work if you have weird stacks like React on Rails. Or you can bring your own Dockerfiles and avoid buildpacks altogether. Ultimately Dokku just manages Docker containers like a lightweight, single host Kubernetes.
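Concretely, the flow looks something like this — app, database, and host names are hypothetical; the commands come from Dokku's own docs, so double-check them against the version you're running:

```shell
# On the droplet (Dokku preinstalled by the DO one-click image)
dokku apps:create myapp
sudo dokku plugin:install https://github.com/dokku/dokku-postgres.git postgres
dokku postgres:create myapp-db
dokku postgres:link myapp-db myapp   # exposes DATABASE_URL to the app

# On your development machine
git remote add dokku dokku@my-droplet-ip:myapp
git push dokku master                # builds and deploys, Heroku-style
```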

Eventually I'm going to have to migrate to Kubernetes... Dokku's lack of built-in HA/DR/load balancing is its main drawback. But it's served me well for years with very minimal maintenance. I hardly ever even think about my infrastructure stack because it just gets out of the way. Which is incredible because it's so small and lightweight, built mostly with Bash scripts.

> I think we've forgotten how much a single box can actually handle

And also we think that "scale" can fix crap software. Pick/write decent web apps and you can worry far less about scale.

Granted, but those are not 1-to-1 with the Google Cloud Run setup mentioned in the article. I normally use Heroku, and the typical "production" server for me consists of:

- Hobby server, at $7/month

- Database, there are many but add another $5-$15/month

- Redis, either $0/month or $15/month, depending on the needs

Cloud hosting is great for businesses, that's why I believe that every web developer should experiment with those services. It's not great for personal use for the reasons you've stated - the risk of your service going down due to it being viral is much easier to bear than the risk of having to pay outrageous amounts of money for those services in that event.

If your hobby project goes down for a while nobody will remember that and having to pay $100k will make you remember that for an eternity.

Years ago I ran a side project that went viral. It grew to 60k visitors a day and my monthly cloud expenses were around $1,500. I cannot fathom a scenario where you get a surprise $100k bill from a viral hit. Anytime my project went down due to scaling, it was painful. You don’t get many chances at going viral.

60k is on the low end for HN-only traffic, where I've normally seen 100k+. I saw around 60k on each of 4-5 of my projects hitting the front page.

Also I know I make mistakes, both at coding and at setting things up, which can easily trash things around and make a 10x-100x multiplier for the cost. The risk is small, but the consequences are horrible so I prefer to avoid this risk.

Edit: also note that even a $10k would be horrifying to spend in most personal projects, and $1500 is more than what most programmers are saving monthly in most of the world.

That depends a lot on the article and the views from reports on reddit/twitter/facebook/linkedin. 100k is rather toward the top of the curve.

Here I've published stats on my few posts that reached first page on HN. https://thehftguy.com/2017/09/26/hitting-hacker-news-front-p...

$1500/month is a massive cost for a side project, I'd rather have it go offline by far.

Side project turned business. It’d paid for itself and more.

I’ve seen surprise bills not far off that, not from a viral hit, but from bugs in the firmware for connected devices which suddenly switched them from taking 1 action/10 minutes to 1 action/3 seconds. Needless to say firmware QA has become much more focused since that incident.

What was that cost for? 60k visitors is less than 1 req/sec, something a terribly small server should handle with relative ease. Were there a lot of static assets not served by a CDN or cache?

It was a SaaS. Required heavy database and memory work. But it supports my point that a surprise $100k is a far fetched concern.

I was getting $3-5k bills from Azure and AWS several times, without going viral, just because I enabled some wrong features. Luckily they refunded them. I don't want that crap anymore. We also tried to run on AWS EC2 for a while and it was costing us 10 times more than the dedicated server we got later on. Ridiculous. Now I have a backup server on Azure just because they give me $50 in credits, and it's a basic VM with a slow 1TB disk attached, and they manage to charge me $70 for this, when I can buy a box at https://www.kimsufi.com/en/servers.xml for 5-7 EUR. I think cloud is for idiots.

You're right. I went with the $100k assumption from the other comment.

From the article:

> The service will create more and more instances of your application up to the limit you defined

The Cloud Run docs confirm an instance maximum can be set and the price per instance can be less than $5/month.

Cost is not only based on the number of instances. From a quick search:


Understanding the cost of these services is not easy at all, especially for extreme cases/situations. And a calculator/estimator won't fix the problem. That is why I love the fixed $X/month where there's no room for surprises.

I hope GCP/AWS add a max spend ability. Until then it's complexity of pricing model vs time spent on OS and database management. To each their own.

It's why I'm on digital ocean. Cheap, but good quality service that's comparable to the big boys and it's capped. $5/month is perfect for me

I use DO too, but they have deleted a lot of customer data.



Keep your stuff backed up.

I use Heroku for these situations where everything is managed automatically. The only con here would be paying more, $20-50/m (Node.js+DB+Redis?) instead of $5/m for DO, but I'm happy to pay for that and spend no time on manual management.

You can definitely set limits on GCP

The trouble with this is what do you do when your "max spend" is reached? Shut everything down? Shut parts of it down? Most "real world" systems aren't built to have the stool kicked from underneath them like that, so there will be data/business loss and pissed off customers (and in the case of Cloud also pissed off customers' customers).

The conversation was about side-projects where it's better to have it shut down than drain the owner's checking account. The alternative being an overloaded web server that becomes unavailable.

I just did some math on their calculator: https://cloud.google.com/products/calculator/

I think some folks may be overestimating their ability to put a dent in Google's infrastructure.

  1 CPU
  2GB memory
  80 concurrent requests per container instance
  1000ms execution time per request
  5kb outbound network bandwidth per request
  100 million requests per month
$120.19 per month

What if we bump it up to 100kb per request? In my experience only initial requests end up being enormous, especially in single page apps. But to be fair some folks may not have time to optimize. That still only brings the monthly bill to $1,071.48

Then again that second estimate probably isn't relevant since I typically host my static data on a CDN.

I'm not from the US, and the thought of "only" paying $1071/m for a side project which is most likely not generating revenue is mindblowing.

If you give me $1071-$120 = $951/month forever for optimizing an app once I'll optimize it for you :)

I can't say it's "only" $1071/month. I definitely wouldn't go broke because of it though. Especially if it's only a side project I probably would look for ways to reduce cost for subsequent months.

I don't know how optimizing comes into the equation. The app you build and deploy to Cloud Run leverages their infrastructure. A fiber optic cross-connect (with just one telecommunications company) would probably cost close to $1000/month alone, but Google is probably peered with every telecommunications company in the entire world, and they don't have just one fiber optic line connecting to each of them. So, it's not really a one-time optimization when you put your work on Google. It's like I would be paying Google $1071 to rent the infrastructure of their entire network to receive and distribute data in my name.

Silly me: last week I created an infinite loop on Firestore. An update to a document triggered a Cloud Function, which updated the document again — and Firestore is fast.

In just minutes I had blown past the free quota, and I was only lucky because I happened to check the console.

If I had left that version running for a few hours (and I was sure it was an innocent commit), I would have been in for a surprise.

Your setup is unrealistic. 100 million requests per month with 80 concurrent requests? $1000 is cheap if you try to run wikipedia.com fully managed.

A lot of web stacks optimize for concurrency. Based on 2GB memory, which ends up being ~25MB per request (which for me is extremely high), I don't see it as being unrealistic. Especially for the use case I'm considering. Typically the only reason these boxes exist is to allow a web browser to gain access to data in a database, so most of the 1000ms per request wouldn't be spent in CPU, it will be spent waiting for the database to return a response.

Just loading the Javascript for a basic React app is way over 100kb though.

It's 6kb for React and 25kb for React-DOM if you do a production build. Still a lot in terms of JS, but not quite as much as you're saying.

Wouldn't it be better to do that over S3 / cloud storage? And would be much smaller once minified and compressed

I agree about being too scared to do business with Google. FWIW, I have a project that monitors an AWS S3 hosted web site every minute and takes it down if the charges exceed a quota. It ends up costing me 44 cents a month to run it as a lambda function but I think of it as insurance. Pull requests are welcome from whoever understands Terraform better than me because I couldn't figure out how to automate everything about the deployment.


> The risk of being locked out

For me, this is the main reason I try not to use any Google products, except for Gmail and Android. For mail, I started to migrate away from Google to reduce risk of being locked out.

Why not just make another Google Account just for this project?

It’s what I’ve done for years: basically, for every customer I work with, I create a new account and even share the credentials with the customer (if they want them).

I’ve heard about Google correlating these accounts (through billing methods, contact methods, access patterns) and banning them together when one infringes on something. I’m not sure it’s the protection you think it is.

Yes they do. If you want to bypass these rules, make a ton of accounts, wait a couple of months, and then use those accounts with whatever credit card you have (keep in mind that you should get another card from your bank, because Google can tell when your card is a generated one like privacy.com's).

Won’t they just correlate the name and address on your different credit cards?

I don't recall ever giving my real name/address for any billing name/address, because most if not all of the time, they don't really care (or at least I've never seen a difference).

Has anyone actually been locked out of Gmail because of a google cloud bill? I don't think they are connected in the sense that your Gmail/Youtube/Android etc account will stop working if you don't pay.

Same with Amazon accounts and AWS bills for that matter.

I understand the concern though... and using separate accounts is probably best practice.

All those accounts are "related". Made from the same PC, used from the same phone, etc.

It's against the ToS to create another account to circumvent a ban.

> No way of limiting the expenses

This pattern seems common among business people: things working in the common case vs. things working in corner cases. It's how you end up with consumer Windows running critical machines. I'm always shocked and moderately disturbed when I see it, but I guess we all need to accept the reality that most people are very pragmatic. It makes sense: most people's intuition comes from "the real material world," where you have to be pragmatic. I think many of them fail to realize that on a computer you don't have to give up certainty the way you do in "the real world."

Loss aversion[1] describes the phenomenon pretty well:

> Humans may be hardwired to be loss averse due to asymmetric evolutionary pressure on losses and gains: for an organism operating close to the edge of survival, the loss of a day's food could cause death, whereas the gain of an extra day's food would not cause an extra day of life (unless the food could be easily and effectively stored).

For lots of companies an accidental over-use of tens or hundreds of thousands of dollars is an annoyance, but for a single person that could bankrupt them. I generally avoid programmatically interacting with cloud providers on my own time for exactly this reason. One mistake in a loop can get expensive fast.

[1] https://en.wikipedia.org/wiki/Loss_aversion

That's not quite what I'm describing. It's more like how people will ignore git error messages and randomly fiddle with things, thinking "that's just how it is and I can't understand it", rather than figuring out what's actually broken.

These are really non-issues. You can create a new Google account and set up a limit on the number of instances, plus alerts.

Also, show us how you would rack up a $100k bill for a side project that receives a traffic spike. It's simply not realistic.

I don't know how I would get a $100k bill from a spike. That ignorance is enough for me to avoid using the service - unless I know how the billing works, what my maximum monthly bill is going to be (with a hard limit that cannot be crossed), and exactly where all the gotchas are then I won't use the platform.

Auto-scaling magic is lovely in theory, but in practise it is hard.

Anecdata - a video streaming startup here in Newcastle that enabled DJs to stream live sets was used to illegally stream some football games. The subsequent bandwidth bill killed the startup. Yes, they got some things wrong with their tech and security, but that's the danger that puts people off using "clever" services.

Wow is the ban thing a real risk that anyone can substantiate? That's horrifying and I had the same thought that everyone else did about using a burner email, which is apparently impossible.

Personally I don't think I can go straight up Heroku or DO because I like things like firestore/dynamo, S3, etc etc. But this is pushing me to move everything I do over to AWS. The only thing is I am very comfortable in GCP, so that would kind of suck. bleh.

If I were to run any public-facing project on Google Cloud, I would definitely use a separate account created just for that. You never know what might happen to that account. I thought everybody did this.

I wonder how hard it would be to run a script that checks your balance, e.g. every 15 minutes, and shuts down public access to your services when a certain threshold is triggered. I wonder if a ready-made service for that exists in cloud providers' offerings.
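A minimal sketch of that idea, assuming GCP's budget notification format (a base64-encoded JSON document with costAmount and budgetAmount fields, delivered over Pub/Sub); the threshold value here is made up:

```python
import base64
import json

THRESHOLD_USD = 50.0  # hypothetical monthly cap

def over_budget(pubsub_envelope: dict, threshold: float = THRESHOLD_USD) -> bool:
    """Return True when the spend reported in a budget notification
    has crossed the cap, i.e. when public access should be cut."""
    payload = json.loads(base64.b64decode(pubsub_envelope["data"]).decode("utf-8"))
    return payload["costAmount"] >= threshold
```

In practice you would subscribe a Cloud Function to the budget's Pub/Sub topic and, when this returns True, call the Cloud Billing API to detach the billing account from the project, which stops all paid resources.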

Google somehow is able to link your newly created account to your personal/regular account. So if you do some shady stuff with the new account, your other account is at risk of being locked out, too.

Has anyone gone down the account-per-side-project route and not gotten into a problem? And is there a warning before the banning sledgehammer falls?

> - No way of limiting the expenses AFAIK. I don't want the possibility of having a huge bill on my name that I cannot pay. This unfortunately applies to other clouds.

It's surprising that no major cloud provides a prepaid option, which would be handy for cases like this.

Why not make a burner gmail instead of migrating out of Google

Google can (and frequently does) link burner gmail accounts with your real e-mail account.

What if you use a different google account only for this?

> I cannot have a $100k bill to pay, it'd destroy everything I have.

Welcome to US healthcare. Happens daily! :)

That makes me happy to be a European

I'm not in the US, and would not dream of going there without health insurance.

Happens even with insurance.

The max out of pocket is about $8k under the ACA. That is a lot of money for a lot of people, but it isn't nearly $100k

There are a bunch of costs that could increase this number: having to deal with networks, having an emergency and having the max amount for a procedure exceeded, and, in some cases, your deductible.

Your deductible counts towards the out of pocket max

> The risk of being locked out.

It is a best-practice to have a GSuite account instead of a consumer-grade Gmail account to manage an associated GCP account.

It is a bit onerous for a hobbyist, admittedly. But if it's anything more ambitious than that, do you _really_ want Google scraping the contents of your email while you build the Next Great Thing? Try not to use a consumer account.

I won't use this for the simple reason that I bought into the Google App Engine stack in the past and it really bit me for several reasons:

They force-upgraded the Java version. The problem was that their own libraries didn't work with the new version and we had to rewrite a ton of code.

It ended up being insanely expensive at scale.

We were totally locked-in to their system and the way it did things. This would be fine but they would also deprecate certain things we relied upon fairly regularly so there was regular churn to keep the system running.

Support was extremely weak for some parts of the system. Docs for Java were outdated compared with the Python docs.

Support (that we paid for) literally said to us “oh... you’re still using appengine?”

Finally, they can jack up the pricing at any time and there really isn’t anything you can do - you can’t switch to an alternative appengine provider.

Certain pages in the management console were completely broken due to a JS error (on any browser). In order to use them I had to manually patch the JavaScript. Six months after reporting it several times, it was still broken.

Oh, and when we got featured on a bunch of news sites, our “scalable” site hit the billing threshold and stopped working. No problem, just update the threshold, right? Except it takes twenty-four hours (!) for the billing stats to actually update. So we were down on the one day that “unlimited scaling” actually mattered to us.

I’m never again choosing a single-vendor lock-in solution. Especially since it’s not limited to appengine - Google once raised the fees for the maps API from thousands a year to eight figures (seriously) a year with barely any notice.

You're outlining all the reasons why Cloud Run is the successor to App Engine.

App Engine was the very first PaaS; it came out before Docker and did things very uniquely in order to try to only allow scalable apps. App Engine standard has to explicitly create special environments for each of its runtimes, and that's slow and expensive. Services like Datastore and Memcache were tightly coupled.

Cloud Run fixes all that. It's just a Docker container that listens on the PORT env variable. Use whatever runtime you want. Run the same container locally, or on another cloud provider. The other services like Firestore or Memorystore (Redis) are truly optional and external.
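For illustration, that whole contract is roughly this (a stdlib-only Python sketch; the handler and response text are arbitrary):

```python
import os
from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from a container\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def make_server():
    # The only Cloud Run contract: listen on the port given by the
    # PORT env variable (defaulting here for local runs).
    port = int(os.environ.get("PORT", "8080"))
    return HTTPServer(("", port), Handler)
```

Run it with `make_server().serve_forever()`; Cloud Run sets PORT when it starts the container, and the same container runs unchanged locally or on another provider.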

Cloud Run is what lets you avoid single-vendor lock-in, but still gives you scale-from-zero.

My understanding is that the 'flexible' version of App Engine runs on the same infrastructure as Cloud Run (and GKE for that matter). IMO running a regular Node Express app on Flexible App Engine isn't that different from running on Cloud Run, and I'm not really using any specific App Engine services (e.g. I don't use Datastore, just a regular Postgres DB on Cloud SQL). It lets me get up and running quickly with the knowledge I can easily containerize it if needed.

App Engine is indeed problematic. I have an important app on it and various forced upgrades gave me a big headache. It's been stable for several years now, so I'm okay with it, but as much as I like the idea, and indeed the execution, of App Engine I'm not going to do anything new on it because of the lock-in factors.

Cloud Run, however, uses standard containers, so as long as you don't use Google proprietary stuff on the backend it's relatively easy to move. As the article mentions, it's useful for low-traffic projects, and if they pick up you can move them to full-time instances.

These products remind me of that one colleague who doesn't understand why people don't like them. Uh, you've systematically fucked them over for years without even realizing it, and when you get caught you never even apologize; that's why they don't like you.

It’s a good point. A reputation is really hard to build up, but easily torn down in an instant.

Certain early decisions are still biting GCP.

> you can’t switch to an alternative appengine provider.

Well, there's an open-source API-clone of appengine...


My 6-7 year old app has been running almost for free with about 2-3 updates. I think it's for bigger projects that it starts to become problematic.

The thing I really want out of these services is the ability to set a payment cap. It's probably never going to be an issue, but I have anxiety, and I can't sleep easily knowing that if I fuck up, or if someone sinister abuses my application or whatever, I may be stuck with a giant bill.

That's not a bug... that's a 'feature'.

Interestingly, AWS will not cut you off for non-payment (we had an issue with finance and were $250k in the red by the time we got the first 'Is there any issues over there?' email)

This sounds like short-term 'thinking'. If there are a few stories on the internet (and there are some in this very thread) about developers getting burned by this 'feature', it will turn off a lot of potential users in the future. It would make sense to be transparent and give tools to strictly control usage/billing, both in terms of trust and money.

It is. I'm using the free tier there to build and host my projects. If/when I exceed the free tier, I'm not creating a billing account unless I can set a max bill. I'll sooner rewrite everything to run on bare metal with a fixed monthly cost.

I pay my $5 for linode (expanded swap to 2GB, it's SSD backed) and just run Caprover... I'd rather not deal with AWS's billing (even their billing dashboards are delayed by 24 hours)

I've been using GC/Firebase. It's very easy to use, with acceptable documentation. I've been able to get dangerous quickly without even trying to. If I weren't on the free tier I would've racked up unaffordable bills. Instead I exhausted my API limits and had to wait. Perfectly acceptable behavior for development. I'll also add some logic to the finished project to handle resource exhaustion. Should I start exhausting resources, and it's not a bug, I'm not going to continue with infinitely scalable/infinitely billable GC. I guess if your target customers are companies with 9-12 figure budgets, getting a 9 figure bill wouldn't be the end of the world. If I got a high 4 figure bill I'd have to declare bankruptcy.

If you have a payment agreement, GCP won't cut you off, provided your account has been in good standing in the past.

If your only payment method is a credit card, IMHO allowing a past-due account to continue generating cost is a very risky move and an easy target for abuse (imagine stolen credit cards, etc.).

Disclaimer: I work for Google Cloud as an SRE and have first-hand experience dealing with past-due accounts (it was highly manual and not automated).

And even as an employee, if you tell them over and over again that customers keep asking for that feature, they disagree with you over and over again.

You can deploy with a --max-instances flag to prevent infinite scaling. This has been a feature since the Cloud Run beta and has been in GAE forever.
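For example (the service and image names here are hypothetical):

```shell
# Cap the service at 3 container instances; beyond that, requests
# queue or are rejected instead of scaling the bill.
gcloud run deploy my-service \
  --image gcr.io/my-project/my-image \
  --max-instances=3
```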

One issue in GAE is that it can take 24 hours to update the billing cap. So if you happen to get a ton of fantastic press coverage one day, your site will go down and there is literally nothing you can do to fix it.

Seems unrelated to capping instance counts, which I assume would go into effect very quickly.

(I work on GCP but not this product)

Set budget alerts. And check what's going on if they start going off.

Or think again about whether you really need 'infinitely scalable blah blah' for a personal project on a small budget.

I do not want the failure mode to be the expensive case.

AWS Lightsail and DigitalOcean are great alternatives for personal projects

Real answer here.

Here's a story about why you are right to be anxious: https://hackernoon.com/how-we-spent-30k-usd-in-firebase-in-l...

These stories are everywhere. A couple of months ago, it hit an associate of mine pretty hard: he moved a small Python monitoring and statistics application off a laptop in the office to AWS. A couple of weeks later he came back and discovered that it had burned up a few thousand dollars in storage and transfer fees, for what normally was a couple hundred MBs of data a day being passed around and some database updates.

Since it wasn't really worth debugging what went "wrong", it got chalked up as a learning experience and moved back to the laptop, where it's happily running. If it dies, whatever, it's no big deal: just a git clone; ./runme; and it rebuilds the whole database and creates another web-facing interface.

The IaaS guys are masters of marketing. They have convinced everyone that their offerings are less expensive and more reliable, which is repeatedly proven false in all but the most dynamic of situations. In this case it's saving $7.99 a month over an unlimited-site shared hosting GoDaddy/whatever plan, just in case it might need to scale a lot, in which case you're going to be paying hundreds if not thousands in surge pricing.

No thanks...

Not sure what happened to your associate but this sounds way, way out there. I run a fairly resource intensive SaaS on AWS (lots of background jobs generating data) and we barely go over $600 a month.

People should not be scared off by these anecdotes, however true they may be. It's perfectly possible to run a very cost-effective business on AWS.

That's great. But you are always just one unfortunate bug or mis-configuration away from having it happen.

Sure, to most people it will never happen. But the risk is always there. Reminds me of https://en.wikipedia.org/wiki/Survivorship_bias

It's possible to write bug-free code, but would you bet your financial future on your ability to write bug-free code?

Starting to think the reason we aren't working for Google/AWS is we make mistakes, whereas engineers at those companies just don't make mistakes, therefore they assume that billing caps are not necessary. Such is our lot in life.

I think the bigger difference (other than a potential skill difference) is rather that they're more willing to take these risks. The ones that are good at it and succeed will rise to the top!

Bug-free is not realistic, but if you are a programmer, and definitely if you are a tech founder, you should be willing to bet your financial future on writing reasonably low-bug code.

If you are a programmer, I find the comment puzzling; maybe I am reading you wrong, but you seem to be saying that you are writing code for some company while being happy to commit it and not care if the company loses money because your code is bad? As in, you would not bet on your own work, but you do not mind your employer paying you for it and thereby betting some of its financial future on your work? Sorry if I misunderstood your comment.

When you work for somebody else (or with somebody else) then you try to do as good of a job as possible, but the ultimate responsibility still lies elsewhere - might be your boss or could even be the group. There will be other people who interact with your code and might spot errors. There will be people who are trained, in some capacity, to figure out ways to mitigate against accidentally generating very large bills. It is exceedingly unlikely that these points hold true for a solo developer working on a main project, let alone a side project.

Even if you are hyper competent and can probably get all of this correctly, you can't rest easy. You simply don't know whether you did everything correctly or not. Just one dumb mistake can saddle you with an enormous bill.

This is just like gun safety: don't point your gun at anything you're not intending to shoot. Mistakes happen and the consequences of it can be catastrophic.

You are right, but at some level you are thinking 'you can do it' right? Otherwise you would be pretty miserable I would imagine. But I agree with the rest you wrote. You meant it slightly more nuanced than I read it!

Virtual cards with limits are an answer to this. Works very well.

Not really. You're still responsible for the bill even if your payment method is capped. They don't just forget about it.

If you never pay then expect some aggressive account shutdown, bans across all connected user accounts, and calls from debt collections.

You are absolutely right; no idea why I typed that (thinking hard about which service I actually had in mind). I was thinking of something else entirely. No one try this; bad advice; they will indeed come after you.

This was discussed on HN (237 comments): https://news.ycombinator.com/item?id=17661391

It seems absolutely within Amazon's technical ability to allow you to prepay for usage and then evaluate your use on a per-hour basis.

I have a side project that uses AWS at the moment, and while stuff like serverless RDS instances is really cool, it scares me that somehow Amazon is going to have a bug which empties my checking account. I've read as much of the documentation as I can find and have done everything I seem to be able to do to prevent this from happening according to AWS's documentation, but it still worries me.

In fact here is a banking feature I'd love to see: per merchant daily spending limits. I would love to be able to tell my bank that amazon is allowed access to up to $20/day or something of money until they get rate limited and I have to intervene.

Uh oh I better stop before I start advocating for the blockchain, haha

In Europe revolut has virtual cards with monthly spending limits. Cash App must have a similar feature in the US.

Check out privacy.com; you can set per-merchant daily, monthly, or yearly spending limits.

Google cloud doesn’t allow those cards.

It's very ironic to see the word "privacy" in a thread about "serverless" stuff.

oh all the fanboy downvotes.

I believe you could accomplish this with Brex using virtual cards and per-card spending limits.

Are there any cloud products that have hard $$$ caps? Even non-big 3. Any at all?

Seems like this is a huge dealbreaker. You cannot be expected to perfectly audit your side project for security. Suppose someone finds a remote code execution and starts mining Monero on it. Or someone just points a botnet at it. You could be on the hook for an unlimited amount of money.

Might as well just install Kubernetes on a monthly-fee VPS and pretend you're using GCP.

Yes, Azure lets you set a per-Subscription spending limit.

It boggles the mind that AWS and GCP don't offer a payment cap - for that reason alone, I wouldn't go near them for side projects.

It would be nice to have three caps:

1. Notify me so I can OK continued service

2. Start throttling usage

3. Turn off if #1 has not responded and set new limits, etc.

It's cheap insurance to create an LLC to protect yourself from extreme situations for this kind of thing. Worst case the LLC goes defunct, but you won't be personally liable.


EDIT: This comment is based on the assumption that the parent was trying to start a side business. I'm certainly not advocating fraud.

Good luck telling the judge your single person LLC with no investors shouldn't be pierced to pay debts incurred by its only owner and only employee.

Works fine if you didn’t co-mingle personal usage of the business accounts.

This VERY heavily depends on your state and I'm sure a lot of other factors. Years ago when I set up my single-member LLC my lawyer was quick to point out a thousand ways the LLC would be useless in court. Mingling bank accounts was only one of them.

The IRS, for what it's worth, treats a single-member LLC as a sole proprietorship and all losses/gains are considered personal income.

As always, check your state laws and consult a real, in-person lawyer.

Wut? Are you suggesting that if there's a cost overrun you just shut down the LLC? That sounds like a loophole that couldn't possibly be true. When you sign the ToS you agree to be responsible for service fees. If being an LLC indemnified against these sorts of charges, it would also indemnify you against other legitimate fees. Take out a business credit card and a cash advance and then close the LLC? Free money!

It basically does work like that, however the law is almost never that cut and dry. The situation you're describing would have the person benefiting from the money as being personally liable because that person presumably would be taking the cash advance for self-interest instead of taking the loan in the interest of the business.

Yes. This is exactly what the “limited liability” in LLC means. You are not personally liable, only the business is.

There are nuances to this, obviously fraud is fraud, regardless of using a shell company to perpetrate it, and a new business is unlikely to get a credit limit large enough to be a worthwhile avenue for fraud.

more or less, yes. bankruptcy courts vary by location. the whole point of a 'limited liability corporation' is you not being personally liable.

BUT if you have 'malicious intent' -- meaning you build an LLC on purpose to overrun costs and get free stuff -- you CAN and WILL be sued and face penalties, because that's just cut-and-dried fraud.

Companies do this all the time and are not prosecuted (or are prosecuted and not found guilty); they just do not make it obvious. So the 'will' is not that cut and dried, really; even in countries where bankruptcies are uncommon and frowned upon (like NL), there are many companies created only to spend money and killed off when it runs out. On paper they look like real businesses. It is also not unusual to put employees in a separate LLC and kill that when the money runs out; again, that happens a lot, even with exactly that intent. As long as it looks good on the outside, it works. I cannot stand it personally, but there are not many solutions: you cannot read the minds of the founders to get their real intentions.

Which is exactly why banks require 2+ years of business history, collateral, or a personal guarantee. They will also review your financials when you apply for additional credit facilities.

Banks often ask for personal guarantees from ltd owners for precisely this reason. I've just skimmed the T&Cs and I can't see any indemnity clauses, but that document's massive and I'm just some guy, so who knows.

(Incidentally, it's fairly common to structure a pair of ltds with the assets in one and the liabilities in another. If the whole thing crashes and burns, at least the IP doesn't go with it).

As someone who used to work as an attorney in a past career: there's a doctrine called "piercing the corporate veil" to handle cases like this. Basically, if the only reason an LLC exists is to protect the owner from the consequences of these sorts of shenanigans, the courts can pretend the LLC doesn't exist for purposes of financial responsibility.

Note that this is a vague, general answer and most certainly does NOT constitute legal advice.

A “cap” feature isn’t as easy to implement as you’d think. What happens when you hit the cap? Does it start shutting down instances? If yes, which ones and in what order? And don’t say “I don’t care” because you do care—you are allowing your provider to basically cause an outage in your service.

If you do allow some kind of ordered shutdown of service, what is the UX going to look like? What is the API going to look like?

Having a cap is not an easy feature to design or implement. And on a backlog, it is going to be a “multi quarter, low impact” item—meaning it will never get built.

Why not allow people to set a cap that disables everything set to that billing method once it's been reached? That seems to be the use case that people are saying is missing from cloud offerings.

Yeah after I posted I realized that once you hit your cap, that’s it you are done.

Still, I bet there are a lot of companies that might be more upset with the provider shutting everything down, even if they had set up a cap.

I imagine most of those companies can afford to have some overages built into their caps. They should also be able to afford the developer time to create fallback behavior, and graceful degradation of services.

Things like storage (disks or S3) continue to incur costs even if the instance is shutdown. Should they just delete your data too?

> What is the API going to look like?

Maybe just a simple array with the list of services to be shut down, in the order provided? This should be enough for hobby and smaller projects; bigger projects will have plans/people in place anyway.

It is not that they can't do it - we are talking about a company with insane technical ability and resources. It is just that they don't want to do it.

What would you like to do with your VM disks and/or object storage when you hit the cap? Delete them?

Do-able today with GCP Billing

I thought GCP only has alerts, not capping?

The 'official' method is programmatically DIY!

>Warning: This example removes billing from your project, shutting down all resources.

This is the digital equivalent of literally pulling the plug. I wonder how long it takes for GCP to register that you've pulled billing, though? In some cases I'm sure even milliseconds could make a large difference.

This is ridiculous. I'm guessing a huge revenue source is going after businesses for long-tail billing spikes.

Not only do I have to worry about bad code causing a huge bill, I also have to worry about the quality of code (or config) that emergency stops the billing.

I wonder if you can sign up with a pre-paid credit card and use a fake name.

Damn that's ugly.

I'm luckily still on Azure's MSDN tier...which is hardcapped to 150 bucks.

I wonder how friendly the UI is to accomplish that

It's pretty complicated to do that for every service: imagine that for each one they would have to call an API to get the $$ remaining, and compute before scaling/creation whether you have enough, etc.

If you wanted to be that precise, sure. But something like an hourly ‘calculate what I owe’ and compare it to a cap is not a big stretch of the imagination, and would suffice for many use cases.

An ML app could scale massively in an hour.

Then don’t get infinitely scaling services and get a Vultr instance, where your costs are fixed.

This makes me anxious as well.

I've been using Cloud Run for my GPT-2 text generation apps (https://github.com/minimaxir/gpt-2-cloud-run) in order to survive random burst, and also for small Twitter bots (https://github.com/minimaxir/twitter-cloud-run/tree/master/h...) which can be invoked via Cloud Scheduler to utilize the efficiency benefits. It has been successful in those tasks.

The only complaint I have with Cloud Run now (after many usability updates since the initial release) is that there is no IP rate-limiting to prevent abuse, which has been the primary cause of unexpected costs. (due to how Cloud Run works, IP rate-limiting has to be on Google's end; implementing it on your end via a proxy eliminates the ease-of-use benefits)

I'm currently serving an API that uses a 500 MB ResNet v2 model. The bootup takes too long, so now I have a single instance that can't handle any peaks and costs too much. Doesn't your model take too long to spin up before being able to serve a request?

From: https://github.com/ahmetb/cloud-run-faq#how-to-keep-a-cloud-...


How to keep a Cloud Run service “warm”?

You can work around "cold starts" by periodically making requests to your Cloud Run service which can help prevent the container instances from scaling to zero.

Use Google Cloud Scheduler to make requests every few minutes.
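For instance, a job like this (the job name and service URL are hypothetical) pings the service every five minutes:

```shell
# Hit the service on a cron schedule so at least one instance stays warm.
gcloud scheduler jobs create http keep-warm \
  --schedule="*/5 * * * *" \
  --uri="https://my-service-abc123-uc.a.run.app/" \
  --http-method=GET
```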

Does my application get multiple requests concurrently?

Contrary to most serverless products, Cloud Run is able to send multiple requests to be handled simultaneously to your container instances.

Each container instance on Cloud Run is (currently) allowed to handle up to 80 concurrent requests. This is also the default value.

What if my application can’t handle concurrent requests?

If your application cannot handle this number, you can configure this number while deploying your service in gcloud or Cloud Console.

Most of the popular programming languages can process multiple requests at the same time thanks to multi-threading. But some languages may need additional components to do concurrent requests (e.g. PHP with Apache, or Python with gunicorn).
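If your app is single-threaded, the concurrency can be pinned at deploy time (hypothetical service/image names):

```shell
# One request per container instance; Cloud Run scales out instead
# of sending concurrent requests to the same container.
gcloud run deploy my-service \
  --image gcr.io/my-project/my-image \
  --concurrency=1
```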

Yes unfortunately, but that's the caveat of services-on-demand. I'm looking more into more efficient/cheap model deployment workflows. (it might be just running the equivalent of Cloud Run on Knative/GKE, backed by GPUs)

Does GCP have an equivalent of Aurora Serverless? If so, would choosing that over Cloud SQL have been cheaper?

If you're familiar with AWS, would using AWS Batch exclusively with Spot pricing [0] (or Fargate with Spot pricing) and Aurora Serverless [1] have been cheaper than Cloud Run + Cloud SQL?

[0] Say, the service runs for 5 mins every hour for 30 days. The respectable a1.large instance would cost $0.50 per month, the cheapest t3.nano would cost around $0.19 per month.

[1] Say, the service stores rolling 10 GiB/Month ($1.00) and does about 1000 calls per 5 minutes every hour ($0.20) using 2 ACUs ($3.60). This would cost $4.80 per month.

TensorFlow Serving, or whatever name Google Cloud has given their managed version.

You can run it on GKE with autoscaling.

How much do you end up paying on average for a tweet-sized generated text?

I haven't created a service for auto-generated tweets yet (just human-curated ones), but for a similar service which outputs tweet-length text (w/ a 2 GiB RAM size), it takes about 30s on a cold boot (which makes sense, as it has to load the model into RAM), and ~12s to generate text after a cold boot.

From the pricing (https://cloud.google.com/run/pricing):

12 * ($0.00002400) + 12 * (2 * $0.00000250) = $0.000348 per text

...and that's assuming you go over the free tier limit.
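Spelling the arithmetic out (prices copied from the quoted pricing page; the 12 s figure is the generation time mentioned above):

```python
# 12 s of compute per text: 1 vCPU at $0.00002400/vCPU-second,
# plus 2 GiB of RAM at $0.00000250/GiB-second.
seconds = 12
vcpu_price = 0.000024   # $ per vCPU-second
mem_gib = 2
mem_price = 0.0000025   # $ per GiB-second

cost = seconds * vcpu_price + seconds * mem_gib * mem_price
print(f"${cost:.6f} per text")  # → $0.000348 per text
```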

Can't you see this article is a paid advertisement for Google Cloud? Just like those 1-hour-long videos on YouTube where they show how pilots fly an airplane for a specific company and how well it is all organized, or a 1-hour-long video of a German car factory.

Just reading this line makes you suspicious: "I have built hundreds of side projects over the years "

really? Hundreds?

And then below:

"I am yet to have a side project go ‘viral’"

Out of hundreds of projects over the years, none of them went viral?

And if you look at his "blog" you will see it has 3 entries in total: https://alexolivier.me/

Bummer to see comments like this so high up the page after so many hours.

HN guidelines:

> Please don't post insinuations about .. shilling.. It degrades discussion and is usually mistaken. If you're worried about abuse, email us and we'll look at the data.

Also: every blog starts somewhere. Plenty of stuff in the author's GH and linkedin, both linked from TFA.

This article fails to mention the issue of needing a database. It doesn't matter how seamlessly your application can scale if your data backend won't scale with it.

They mention Cloud SQL, which is of course instance-based and would run into scaling issues if your app got suddenly hammered. Not to mention, the cost isn't $0 if your app gets 0 traffic; you are going to have to pay to keep that running around the clock.

I realize some applications are very heavy on the app side and light on needing to hit the DB, but in my experience, that isn't very common.

AWS has Aurora Serverless (https://aws.amazon.com/rds/aurora/serverless/) for that purpose with MySQL and Postgres compatible engines. But I haven't heard anyone using it for side projects yet.

Note that you can either run Aurora Serverless constantly (at a cost of about $43 a month) or have ~30 second startup times if the instance has timed out (~15 minutes).

They also have RDS Proxy (https://aws.amazon.com/rds/proxy/) that lets you pool connections from tons of lambda instances in order not to overwhelm your DB when scaling up.

I wonder where the threshold is that the proxy is beneficial. MySQL is actually really good at handling new connections.

A colleague keeps reminding me that in the end, if you don't need ACID, you can just use S3 as a key-value database and never pay more than pennies a month (and you get effectively infinite scaling). It just depends on whether you need a DB for a few minor use cases or the app fundamentally depends on one.
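The S3-as-key-value idea above can be sketched in a few lines. This is a minimal sketch, not a production library: the class name and bucket name are made up, and the client is injected so any boto3-style S3 client (or a stub) can be used.

```python
class S3KeyValue:
    """Tiny key-value wrapper over an S3-compatible object store.

    `client` is anything exposing boto3-style put_object/get_object,
    so a stub can stand in during tests.
    """

    def __init__(self, client, bucket):
        self.client = client
        self.bucket = bucket

    def put(self, key, value: bytes):
        # Each key maps to one object; each write replaces the whole object.
        self.client.put_object(Bucket=self.bucket, Key=key, Body=value)

    def get(self, key) -> bytes:
        resp = self.client.get_object(Bucket=self.bucket, Key=key)
        return resp["Body"].read()
```

With the real SDK you would construct it as something like `S3KeyValue(boto3.client("s3"), "my-bucket")` (bucket name hypothetical). Note there is no transactional read-modify-write here, which is exactly the "if you don't need ACID" caveat.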

Agree. Except I would keep in mind that AWS data egress is very expensive, so if there is any heavy media involved, you would also want to put a CDN in front of s3 with proper caching or you could find yourself with a hell of a bill from AWS.

So if you are serving up just a few megabytes and hit a few million page views you may find an AWS bill for a few thousand dollars.

See https://docs.aws.amazon.com/AmazonS3/latest/dev/optimizing-p... for more specific guidelines on S3 scaling. It used to be you had to be careful and use randomized prefixes... It looks like that might no longer be the case. Also note there is a limit (arguably pretty high) per prefix on request rates.

As a noob I have a question as to what advantages I have using docker compared to just a service like Heroku where I just push the application to them... and I don't bother with docker?

To me with my limited understanding this seems like just another step.

Now granted, when it comes to work I'm using Docker for specific reasons I understand and can specify... but for personal projects it just never comes up for me.

Yet on the other hand, more and more of the examples I see involve Docker, and I'm not sure why it needs to / what the advantage is.

Obviously there must be some strategic choices / advantages I'm missing.

The first obvious advantage is you can test your app locally in the exact same environment it will run, including any system dependencies and native modules. On any OS.

Heroku build packs only work on Heroku. Containers work everywhere and they support any linux application.
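To make the "same environment everywhere" point concrete, here's a hedged sketch of a Dockerfile for a small Node app (the image tag, entry point `server.js`, and port are illustrative assumptions, not anything from the article):

```dockerfile
# Same base image locally and in production, on any host OS.
FROM node:12-alpine
WORKDIR /app
# Copy the manifests first so dependency installs cache between builds.
COPY package*.json ./
RUN npm ci --only=production
COPY . .
# Cloud Run and most PaaS platforms hand you the port via $PORT.
ENV PORT=8080
CMD ["node", "server.js"]
```

The same image you `docker run` on your laptop is what Cloud Run, Fargate, or a plain VPS executes, which is the portability being described.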

The first thing that comes to my mind is lock-in. if you push the application to Heroku and "it just works" then when you need to deploy it somewhere else you still have that hurdle to cross. Heroku's pricing adds up quickly.

... or you just deploy it on any VPS or bare metal with dokku installed: http://dokku.viewdocs.io/dokku/

Containers are great for packaging an app, ensuring dependencies will be met and avoiding lock-in.

You can use containers on Heroku, by the way.
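For reference, Heroku's container support is driven by a `heroku.yml` manifest rather than a buildpack; a minimal sketch (on a repo whose Dockerfile is assumed to exist) looks like:

```yaml
# heroku.yml — build and run the app from its Dockerfile
# (the app must be on the container stack: `heroku stack:set container`)
build:
  docker:
    web: Dockerfile
```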

Your liability is not capped anywhere. I'd better stick to a $5/mo DO droplet for my side projects.

You can actually specify maximum number of instances https://cloud.google.com/run/docs/configuring/max-instances as well as max requests handled by instance at a time (https://cloud.google.com/run/docs/about-concurrency).
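Per the docs linked above, both limits can be set from the CLI; a sketch (service name, region, and values are placeholders):

```shell
# Cap how far Cloud Run will scale out, so a traffic spike
# can't run up an unbounded bill.
gcloud run services update my-service \
  --max-instances=3 \
  --concurrency=80 \
  --region=us-central1 \
  --platform=managed
```

With a hard instance cap, worst-case spend is roughly (max instances) × (price of one busy instance), rather than open-ended.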

From the article:

> The service will create more and more instances of your application up to the limit you defined

The docs confirm an instance maximum can be set and the price per instance can be less than $5/month.

It takes a lot of virality to go over $5/mo for a smaller side project in Cloud Run. (Essentially, if you got to the point where it would go over, the additional surplus traffic would be well worth it.)

Or a $7 Heroku dyno if you don’t want to manage disks, networking, patching etc. and can afford the extra $2

Interesting. I've been looking at options too and opted for essentially the opposite: get a big(ish) VPS and stack everything on top of each other with Docker behind an nginx reverse proxy.

So far so good. Managed to get gitlab, prometheus, grafana and ghost working this weekend, which I'm pretty chuffed about.

Not as clean as OP's, but the intention was learning, so sacrifices on convenience are acceptable.
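The reverse-proxy side of a setup like this is one small server block per containerized app; a rough sketch (hostname, port, and paths are made-up examples, and TLS config is elided):

```nginx
# /etc/nginx/conf.d/ghost.conf — one server block per app.
# Each upstream is a Docker container publishing a port on localhost.
server {
    listen 80;
    server_name blog.example.com;

    location / {
        proxy_pass http://127.0.0.1:2368;   # the ghost container
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```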

This is the advice I give early stage startups... don’t waste cycles learning the AWS stack, and getting locked in. Just pay for a cheap VPS, and scale it vertically as you grow. By the time you outgrow vertical scaling you should have the revenue or funding to figure out your at scale architecture.

You’d be surprised how much you can handle with a single beefy VPS or dedicated.

Yup. The other aspect that surprised me is how crazy the relative costs are.

I'm paying $7 for a 16 GB / 4-core Ryzen VPS I snagged on a special. That's a good 10x off what big cloud would charge me.

The downside is it needs to be engineered for "provider may disappear overnight", so the backup strategy needs to be on point.

A slight twist on this is a cheap, beefy colo box with nothing installed except something like k3s. This way all the YAML bureaucracy is already paid down, in case that dreamy future with millions of users finally manifests.

Had a VPS for a side project at Vultr. First problem: I could not send sign-up emails from it.

Sounds like the problem was with the networking or application code. A VPS can do anything when it's configured properly.

What’s the advantage of GCR over AWS Fargate/ECS? I’ve been running an app on ECS for a couple months now and have been pretty happy with the ease of set-up, load-balancing, auto-scaling etc, though there are still kinks I’m figuring out (SSHing into containers to perform database management, for example, or deploying updated tasks without downtime). Is the main selling point of GCR just its price? I haven’t found ECS pricing to be an issue (but I’m also not running anything at scale, and I do pay more than a few cents a month — but still under 10 bucks).

Disclaimer: I work on Cloud Run

Another advantage not stated by others is that Cloud Run (mostly) adheres to the Knative API spec. This allows you to lift and shift your application from Cloud Run to any Kubernetes cluster with Knative installed, a feature you can't find with any other serverless product.
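That portability claim boils down to both platforms accepting (roughly) the same Knative Service manifest; a minimal sketch, with the service name and image path as placeholders:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-app                              # placeholder name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/my-app   # placeholder image
          ports:
            - containerPort: 8080
```

Roughly the same YAML can be applied to a Knative-enabled cluster with `kubectl apply` or deployed to Cloud Run, which is the lift-and-shift path being described.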

Thanks. What are the usual cold start and warm start times for Cloud Run? Is there potential for those start times to go down to a few milliseconds in the near-future?

GCR runs containers like Fargate, but scales to zero like lambda, so you only pay for usage

GCR is the container registry and does nothing but store your artifacts.

The article and comments are about Google Cloud Run

I don't think you can scale up and scale down Fargate based on Http requests

I'm not sure if you can reasonably scale to 0, but I believe you can attach Autoscaling Groups to Fargate.

Most likely it would be based on CPU/Memory and not requests

Most definitely, but you can auto scale on custom metrics with a bit of glue. Even a very small node app has a start time in the 10-30 sec range. Every auto scaling system, even cloud run, can struggle to keep up with extremely bursty traffic. If you want no timeouts, you have to overprovision.

AFAIK, PaaS solutions like heroku have a similar way of working, at least for side-projects. Here, you deploy a container and Google runs it somewhere and Heroku containerizes your application every time you push it. Similar to here, Heroku's free hobby containers also go to sleep in ~30min inactivity.

Heroku's "hobby" tier is $7/mo while "free" is $0/mo. There's no "free hobby" tier.


This is exactly the service that Azure needs and doesn't seem to have: while there is a consumption plan for functions, that's about it, and App Service is incredibly expensive for what you get.

As @GordonS mentions, Azure Container Instances is about as close as you’ll get with Azure, but it’s only close on ease-of-use: rather different from a deployment/scaling characteristics POV.

I’m generally very underwhelmed by Azure offerings, but ACI is one of the few that actually lives up to what is advertised.

Yeah, I did look at that, and I think you're right on ease-of-use - it's a deploy and done deal, which is nice. I think it suffers from two things:

* the cost seems net-net similar to App Service, but more granular. So I can go this route and get slightly more flexible pricing, but at the cost of some arbitrary limits on assignable CPU/memory.

* it's easy to wander into Azure Container Service stuff accidentally, and that has "deprecated" plastered all over it. Shame the naming doesn't separate them more clearly.

Have you seen Azure Container Instances (I just posted another comment about it just now, before I saw your comment).

An aside, but I totally agree that App Service is way too expensive for the piffling amount of CPU/RAM they provide, even though I acknowledge the platform side of the offering is excellent.

If very few people visit a side project, cold starts will be frequent, which is probably bad for Google search: its crawler will see slow response times, and Google can penalize you in search results.

What a complimentary business strategy :)

More like unintended consequences. As in many real-world cases, each policy is reasonable, sound and fair in its own right, but when you combine them they can look designed with exploitative motives.
