Hacker News
The Free Stack – Running Your Application for Free on AWS (agnihotry.com)
329 points by pagnihotry on July 24, 2018 | 97 comments

There are ways to actually run your application for free using the big players' free services. One route I've looked at for e-commerce is storing product data on Stripe, hosting product pages with Netlify, pulling product data during static site build, triggering a build when Stripe data gets updated, and using Netlify's wrapper around AWS Lambda for free FaaS (AWS Lambda can be free forever, but you also need AWS API Gateway which isn't free forever). This results in the only fee being a CC fee.
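To make the flow concrete, here's a rough sketch (mine, not from the demo) of the webhook glue: a handler looks at the Stripe event type and decides whether to POST to a Netlify build hook so the static site rebuilds with fresh product data. The build-hook URL and the exact event set are placeholders.

```python
NETLIFY_BUILD_HOOK = "https://api.netlify.com/build_hooks/<hook-id>"  # placeholder

# Stripe event types taken to mean "product data changed" (assumed set)
REBUILD_EVENTS = {"product.created", "product.updated", "product.deleted"}

def rebuild_target(event):
    """Given a parsed Stripe webhook event, return the Netlify build
    hook to POST to, or None if the event doesn't affect the catalog."""
    if event.get("type") in REBUILD_EVENTS:
        return NETLIFY_BUILD_HOOK
    return None
```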

I've put together a demo over at libra-shop.org. It definitely still needs some work, but I think most of the important bits are all sorted out.

Of course, the biggest issue with using free tiers for your entire stack is that you have no guarantee that the services will be running forever. (e.g., Netlify could stop providing a free AWS Lambda connection.)

Is all of this work and complete dependency on third-party services worth it? Isn't it much easier to create a lightweight app in, say, Go and SQLite, and pay a monthly fee so low as to be insignificant?

I wonder what amount of money would be sufficient to create the digital equivalent of a foundation to ensure the uptime of a website/project. It would be kinda cool to put away a few grand to ensure something stays up "forever".

The price of forever ranges from a few hundred to millions depending on how resilient you want to be.

In reality, it would probably be more effective to dedicate nearly $0 to infrastructure and $N,000.00 to an endowment for 1-3 hours of engineering time each year.

Figuring out how to host a static site for free and moving the site to that service requires almost no time for a competent engineer. But finding a solution that is guaranteed to exist in 100 years with zero human intervention would probably cost millions...

That's along the lines of what I was thinking. The www.mywebsite.com foundation, worth a whopping $10k, invests its $300 annual interest in updating the hosting for www.mywebsite.com.
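The sizing here is just interest arithmetic: principal = annual cost / interest rate, so $300/yr at an assumed 3% return implies the $10k figure.

```python
# Endowment sizing: fund a recurring cost forever from interest alone.
def endowment_needed(annual_cost, rate):
    return annual_cost / rate

print(endowment_needed(300, 0.03))  # 10000.0
```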

Well, Network Solutions seems(?) to offer 100-year domain registration for $1k ($999).

Thinking about it, a holistic solution to this kind of problem would [categorically need to] be extremely expensive in order to ratelimit demand. Because it's not just "host this for me for a while" with an implicit sort of "maybe let it fall over one day and see if I yell at you" - that 2nd bit is completely removed, and the expectation is 100.0% (amortized, aggregate) uptime... indefinitely.

It's interesting how not even graves last forever. They decay over time (hundreds of years). Not all are maintained.

What sort of "forever" are you talking about? "Most Important Thing™ when the paperclip maximisers take over"?

Would just having a pre-paid credit card work? I guess not, also given how every once in a while you may need to accept new terms and conditions.

I agree. There is a diminishing return on service reliability as you approach 'free', and at some point, you finally pay $9.78/mo and survive anyway.

For most businesses, probably not. I have a few friends that have Etsy shops as hobbies. They don't consistently make enough money from them to justify paying for a Shopify or similar site. So I figured I'd see if I could get the costs down to just the credit card transaction fee.

Netlify free tier is magical. It's allowed me to stop spending on hosting altogether. My blog, personal website, and all other sites I own are static, so I don't have to pay for my droplet anymore.

> Of course, the biggest issue with using free tiers for your entire stack is that you have no guarantee that the services will be running forever.

That's not guaranteed for paid tiers either; no product offering, free or paid, is guaranteed to be offered forever.

This is a nice write-up. My only concern is that these days we think the alternative to free is $40 to hundreds of dollars a month for something decent.

In the days of shared hosting, you could drop $3 to $5 to host your small website and expect it to perform decently. With a DO/Linode droplet, you can expect excellent performance for the same price.

Only last month my two $5 droplets handled 5 million web requests from a viral post.

For €5 per month, a cloud VPS at Hetzner comes with 2 CPUs and 4 GB RAM. That's 8 or maybe 16 times the RAM that Facebook probably launched with, plus extra CPU and SSD disk speed. All these cobbled-together free tiers may be slightly cheaper, but the added complexity I can do without. It just seems like tech for tech's sake.

You'd have to be doing something really computationally expensive or be running a very large operation to really need something expensive.

Simple budget web hosting packages can easily handle tens of thousands of requests per hour for like $4/month.

You're absolutely right.

I'm running a $5 (+$2 for backups) 1 GB Linode and getting around 50k views a month on a properly configured WordPress with caching and 30 plugins, running on nginx on CentOS 7. Sometimes 25-50 people at the same time. The site is pretty fast (faster than most, maybe because of the lack of a dozen trackers). With 2 GB RAM and one more core it would be super fast. I'm using the Centmin Mod StackScript for managing nginx.

There is a lot of naysaying in the comments, but I think it's a nice write-up, and if you are interested in doing something like this, I can say for sure it is possible based on my own experience. One of my side projects, https://cmdchallenge.com, is set up almost identically: CloudFront / API Gateway / DynamoDB. It's covered a bit in this blog post: https://about.cmdchallenge.com/building-cmdchallenge.html

My last month's AWS bill (not in the free tier anymore) was only $2.23. For the runner that runs arbitrary shell commands, I used to run a t2.micro in AWS, but when the free tier expired I moved that over to GCP and I'm burning through their credits. I'm also using GitLab CI/CD to periodically renew the Docker instance that runs shell commands.

It only works because there isn't much traffic obviously, and api requests are not made on every page load.

Wow, just want to say nice work on whatever validation method you are using...

  # You have a new challenge!
  # There is a file named "access.log" in the
  # current directory. Print the contents.
  bash(0)> tac access.log | tac - - [09/Jan/2017:22:29:57 +0100] "GET 
  /posts/2/display HTTP/1.0" 200 3240 - - [09/Jan/2017:22:30:43 +0100] "GET 
  /posts/foo?appID=xxxx HTTP/1.0" 200 1116 - - [09/Jan/2017:22:34:33 +0100] "GET 
  /pages/create HTTP/1.0" 500 3471 - - [09/Jan/2017:22:35:30 +0100] "GET 
  /posts/foo?appID=xxxx HTTP/1.0" 500 2477 - - [09/Jan/2017:22:38:03 +0100] "GET 
  /bar/create HTTP/1.0" 200 1116 - - [09/Jan/2017:22:42:18 +0100] "GET 
  /posts/1/display HTTP/1.0" 200 2477 - - [09/Jan/2017:22:44:25 +0100] "POST 
  /posts/1/display HTTP/1.0" 200 3471 - - [09/Jan/2017:22:49:02 +0100] "GET 
  /posts/foo?appID=xxxx HTTP/1.0" 200 2477 - - [09/Jan/2017:22:52:31 +0100] "DELETE 
  /posts/2/display HTTP/1.0" 404 2477 - - [09/Jan/2017:22:57:11 +0100] "GET 
  /posts/foo?appID=xxxx HTTP/1.0" 200 3471
  #     Correct!
  # You have a new challenge!
  # Print the last 5 lines of "access.log".

He probably just validates the output. You can do "whatever" as long as it generates the output.
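For what it's worth, output-only validation is easy to sketch. This is a guess at the approach, not cmdchallenge's actual code: run the submitted command and compare its stdout to the expected answer, leaving containment of the command to something like the Docker runner mentioned elsewhere in the thread.

```python
import subprocess

def validate(command, expected_output):
    """Run the submitted command in a shell and compare its stdout to
    the expected answer, ignoring trailing whitespace. The real site
    would run this inside a sandbox; none is shown here."""
    result = subprocess.run(command, shell=True,
                            capture_output=True, text=True, timeout=5)
    return result.stdout.strip() == expected_output.strip()
```

Any command that produces the right output passes, which is why `tac access.log | tac` works just as well as `cat access.log`.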

> api gateway / dynamodb

It's funny—this is essentially what Google App Engine's standard environment gives you (equivalents to), but nobody has ever been very excited about using it.

Hi! I really like your project. Perhaps others have suggested it already, but it would be nice to read the man pages directly from the app (currently it gives a "man: command not found" error for me).

I don't want this to sound like advertising, but I pay €3.08 for a VPS (KVM with exposed AES-NI) at Hetzner. You can get a lot of bang for the buck these days.

I just wanna say, that's an awesome project

I read this, and I think: wow, that's extraordinarily expensive. The stack he designs is $20/mo for 1,000 user sessions/day.

Back in the days of cPanel/LAMP shared hosting, you'd have similar capability for $5/mo.

With a basic DO droplet, you can still do that for $5/month, and hey, if it takes off your app won't depend on a bunch of expensive proprietary software. Also I'm guessing that if cost is a concern, denial of service is preferable to blowing the bank under unanticipated high loads. Maybe I need to drink more AWS/GCP kool-aid though and see the light.

Recently I've been fascinated by the gap between server providers these days. It seems each provider is either:

- Enterprise public cloud, AWS/GCP/Azure, expensive but scalable and enterprise friendly

- Developer public cloud, Linode/DO, cheap and easy to use

Although I say that AWS/GCP/etc are expensive, they obviously have negotiable prices for large customers. I doubt the smaller providers do that.

But it makes me wonder why people use AWS/GCP when the other providers are so much cheaper. How do Linode/DO offer such good prices? Would they kick me off if I actually maximized the server capacity they offer, like a shared cPanel host would do, back in the day?

Hetzner.com is half the price of Linode/Digital Ocean.

The cheapest AWS EC2 reserved instance is about the same $/month as the cheapest DO instance, both can be used for a small website. You’ll pay more for on-demand pricing.

The cheapest DO instance lets you pin the CPU to 100% for your whole usage cycle. The cheapest EC2 reserved instance will throttle you if you attempt to do so. So if your "small website" is actually a web app of any complexity, the DO instance is almost always a better choice.

(If, on the other hand, the "small website" is just a website, why not just put it in an S3 bucket and put Cloudflare in front? Even cheaper!)
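The throttling claim comes down to CPU credits. Assuming the published t2.micro numbers (6 credits earned per hour, one credit = one minute of full use of one vCPU), the sustainable baseline works out to 10% of a core; pin the CPU harder than that for long enough and the credit balance drains, then you're throttled to the baseline.

```python
CREDITS_PER_HOUR = 6     # t2.micro credit earn rate
MINUTES_PER_HOUR = 60    # one credit = one minute of 100% of one vCPU

baseline = CREDITS_PER_HOUR / MINUTES_PER_HOUR
print(f"t2.micro sustainable CPU: {baseline:.0%}")  # 10%
```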

https://aws.amazon.com/lightsail/pricing/ is a good middle ground. Under the covers it's T2 + EBS + a nice bandwidth allowance, but wrapped in a simplified UI and nice flat pricing.

I recently had to spin up on AWS an instance of an app I work on, and having never really used AWS outside some light S3 usage, Lightsail is a godsend. At work, everything we have is hosted on our own (on-prem) servers, which are either firewalled off or restricted to company login, hence why I had to spin something up on AWS for demos or whatever they want to do with it. For my personal/hobby stuff, I just have a Swarm cluster set up on DO. As soon as I went to create an EC2 instance (I think that's what I was looking for), I realized I had no idea what was going on. Since this was a quick little thing that was going to get almost no real usage, I didn't want to spend several hours figuring out the options and what kind of instance I needed. I just wanted to go back to good ol' DO where I could have a new droplet spun up in like three clicks.

And then I found Lightsail while clicking through the million different services, and I was like holy shit, this is familiar, and like three clicks later, I had a server running! Obviously, it doesn't have all the features DO offers (because I imagine those can be done with other AWS services), but I didn't need/use them (aside from DO's DNS). Really, my only gripe so far is the OS offerings. All my servers (personal & work) are CentOS, so I was slightly annoyed when I found out I couldn't get a CentOS server on Lightsail, so I just settled for Ubuntu because that's fine and I'm familiar enough with it.

So, if you need something hosted on AWS for whatever reason, and are not familiar with it at all, but are with something like DO/Linode, lightsail is something to look into. I think the pricing was fairly comparable to DO, maybe a little more expensive, but I’m not paying for it so I didn’t really look.

Word on the street is we’re gonna start moving a lot of our infra to AWS at work though, so I guess I should probably at least figure out how the hell EC2 works...

EC2 has a lot of options indeed. I just searched YouTube for "EC2 free tier setup" and clicked what the guy clicked (t2.micro, no more than 30GB block storage, and you can get an Elastic IP* for free, as long as you immediately attach it to the instance).

* - when you stop and start an instance, it will change IP address. But if you attach an Elastic IP to it, it stays the same.
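The conditions in that recipe can be written down as a checklist. Here's a toy checker using the limits as they stood at the time (750 t2.micro hours/month, 30 GB of EBS, and an Elastic IP free only while attached to a running instance); treat the specific limits as assumptions:

```python
def within_free_tier(instance_type, ebs_gb, eip_attached, hours=750):
    """Return (ok, reasons) for the first-12-months EC2 free tier."""
    reasons = []
    if instance_type != "t2.micro":
        reasons.append("only t2.micro is free-tier eligible")
    if ebs_gb > 30:
        reasons.append("more than 30 GB of EBS")
    if not eip_attached:
        reasons.append("an unattached Elastic IP is billed hourly")
    if hours > 750:
        reasons.append("over 750 instance-hours per month")
    return (not reasons, reasons)

print(within_free_tier("t2.micro", ebs_gb=30, eip_attached=True))
```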

Amazon Lightsail proves there is a huge market for DO and Linode. And DO/Linode aren't exactly cheap; there are lots of players in the VPS market at half the price or less.

I think what is missing is a middle-ground approach that scales both ways. Or a Heroku built on top of DO/Linode.

I wouldn't consider VPS providers part of "the cloud". You simply rent a (virtual) server. With AWS and the like, you pay for their automation of services. You don't need to manually deploy load balancing, CDNs, DDoS mitigation, security hardening, and the like. The big pitch is that you're paying a bit more in order to phase out your IT team.

> The big pitch is that you're paying a bit more in order to phase out your IT team.

Really? I have to say I don't understand it. I remember the "NoOps" movement from a few years ago and I just find the whole concept around it to be almost hilarious; kind of like Salesforce's old "No Software" logo (which is nowhere to be found in their newer marketing).

As I see it, as long as your organization has people using IT in any capacity at all, you will need someone in charge of IT Operations. Whatever you may want to call it: DevOps, or SRE, or PE, or even if you just decide it's something that each of your regular devs is going to be doing, it's a function that needs to be done.

Someone needs to be able to set up the systems, monitor them, scale them, secure them, and troubleshoot issues. It doesn't matter how well-engineered and maintained and automated the components you rely on are, they will break in various compound fractures and you will need to deal with the downtime and potential corruption to your composite system.

I'll eat my hat if you can point me towards a single non-trivial software service that has been running continuously without any "IT".

> As I see it, as long as your organization has people using IT in any capacity at all, you will need someone in charge of IT Operations. Whatever you may want to call it: DevOps, or SRE, or PE, or even if you just decide it's something that each of your regular devs is going to be doing, it's a function that needs to be done.

That's true, but I don't need to worry about the server that my database, load balancer, queuing system, RedisCache, storage, etc. is running on. I only have to worry about my applications and the actual database. If I need to provision more hardware, it's a click of a button (well actually updating my CloudFormation template).

There is an entire level of both hardware and operating system maintenance that I don't have to worry about.

>There is an entire level of both hardware and operating system maintenance that I don't have to worry about.

Right. So you traded [relatively] inexpensive OS and infrastructure maintainers for [relatively] expensive application and tier maintainers.

My interpretation of devops/noops is that it marks the lack of major distinction in hiring because both ops and development are seen as part of the same process, as opposed to simply pretending you don’t have infrastructure.

This is certainly part of the reasoning for things like Terraform, CloudFormation, even Chef/Puppet/Ansible—people want a single process for developing, reviewing, and executing code.

>the "NoOps" movement

I always pronounced that as "nnnn..oops"

With DO and Linode adding things like block storage volumes, object storage, and load balancing, they're becoming more than just simple VPS providers.

They definitely don't offer anywhere near as many services as Google, MS, Amazon, or even Alibaba's cloud offerings. But they're becoming pretty decent 'cloud lite' providers who cover what's needed for plenty of folks and companies.

And - at least DO - they've been offering these things for quite a while now, too

>I wouldn't consider VPS providers part of "the cloud". You simply rent a (virtual) server. With AWS and the like, you pay for their automation of services.

Might want to go recheck the API documentation for providers like DO et al

fwiw ... DO, Vultr, and Hetzner have never kicked me off - regardless of how heavily I hit the systems

Come to father Bezos, he will enlighten you

Hi malchow and anothergoogler,

Yup. That would work, but it takes out a couple of things from the original requirements I had:

1. Scalability: it should be able to stretch, especially for bursty traffic situations.
2. I should not have to pay if the app or service is not being used.

Yeah, $5 for a single point of failure in one data center of possibly questionable quality. Versus $20 a month, all-in, for an app spread across availability zones, resilient to failure of single machines, etc.

A single downtime event on a single host will likely have costs that can never be actuarially recovered from, in this head-to-head. Pay the extra $15.

You say that, but IME it takes lots of configuration effort and lots more AWS services in order to really get the kind of resiliency that, say, Amazon.com enjoys. And core services in core regions in AWS do go down. It doesn't not happen.

I think AWS would do well to expand massively its free tier. Talented startups in my world are starting to look askance at AWS fees.

Respectfully, I disagree. If you use the services listed in the article, you're not really talking about many more components than you'd run on your own box. You've got a DB (DynamoDB), an app server (Lambda), and an HTTP load balancer / content server (S3 / API Gateway). And then you have to set up DNS anyway, so why not Route 53? I don't see how this is a dramatically larger configuration haul compared to:

1. Nginx config & tuning
2. Postgres / MySQL config & tuning
3. App server config & tuning

It is dramatically different config & tuning, so we might exist in a time where people overvalue familiarity & it colors their impressions of effort, but I don't think it's a long-term viable crutch to lean on either.

As to core services going down, I read you loud & clear; it does happen. From my experience, it happens way less often than outages on a single box though, or even a collection of boxes you're managing. And I don't know if there has been a time when there were simultaneous region failures in AWS -- and with DynamoDB global tables it's easier than ever to get a multi-region app launched.

Pricing is the kicker for sure though. Your bill line is going to be higher. You're also priced in though which from a biz standpoint is huge. It's very difficult to budget failure, it's probably a hidden component in your engineering team costs. For startups this can be mitigated through AWS' accelerator-linked programs though, otherwise, I'd encourage any CTO / CEO / founder team to really think about how cost is not measured in the data center bill, but in that cost, plus engineer cost, plus customer retention costs, plus opportunity cost.

Also, couldn't you just set a budget limit in AWS? That, and alarms on events, etc.?

Bad title. It's not free, and after 12 months it's even very expensive compared to alternatives, considering what one gets for the price.

A couple of cheap VPSs would be cheaper and have way more headroom for the price.

Yup. It's not free even for the first 12 months, even assuming upper bounds on load.

$6.35 != $0. (For any numerical value of $ not equal to zero.)

What's wrong with using RDS in production? I'd rather do that than build around a vendor-lock-in Amazon offering when not strictly necessary. I'd also rather do that than administer a database myself.

"...AWS RDS Service. It offers t2.micro for free during the first 12 months. It is fine for development. However, I will not recommend using it in production."

I think maybe they were talking specifically about not using the t2.micro instance size in production.

Do you know what the downsides would be of using a t2.micro in production (for a low volume of requests)?

Hi ameliaquining, like bootlooped pointed out, I am suggesting to not use t2.micro in production and not RDS.

Hi, thanks for the write-up, very informative! What would be the lowest instance type you would recommend for a similar setup? I have an app I'm thinking of porting to AWS...

C/M should do it for general workloads in production, R/I on RDS. If you are doing something specific, you may have to look closely at the resource bottlenecks on those instances. t2.micro should be fine for development.

Nothing is "wrong" with it, you just can't feasibly do it for free.

I think this is the wrong way to go about building a free stack, given the vendor lock-in problem. I use a lot of free services from Netlify, for example, but I'm cautious about relying on anything that I can get only from Netlify. I feel a lot better if I can list 3+ providers who all provide a free tier that meets my needs (even if some aren't quite as pleasant to use). That way, I know I have somewhere else to go if I need to, and I feel better that I won't need to, because the provider knows I have other options.

What Netlify alternatives have you prepared? E.g., GitHub Pages

With all the costs that go into building a business (directly in $ or indirectly in hours), is it worth going down the path of saving money to this degree?

Set up Heroku in <1 hour for <$100/month, forget about hosting, and go build your product.

If $100/month breaks you, I think you're on the wrong path anyways.

You're discounting hobby projects or small side businesses that don't generate a lot of revenue. For those, much more than a few bucks a month may be a deal-breaker.

I've been called a mercenary before, but the only reason I spend time working on software development "hobby projects" is with the sole aim of learning new skills that will make me more money on my job/contract work or at least keep me up to date with my skills. For that reason alone, I use AWS for side projects. Companies pay well for "AWS Architects". If I want to do a hobby project and save money, I could just as easily set up my own server and use my own gigabit home internet.

You can set up a FaaS app just as fast as Heroku.

I find it even more efficient, since Go, AWS SAM, and Lambda offer a dev/build/prod simplicity that Heroku never figured out.

I open sourced and documented the stack that I use instead of Heroku here:


I am not convinced that you really need Lambda and all that complexity. I also don't really like the characteristics of Lambda: taking forever to wake up, and limited debugging. A $5 DO/Hetzner instance can already do a lot if you pay a bit of attention to performance. I run a small Golang API on DO for $5, using DynamoDB and some Cloudflare caching (free), and it can handle quite a lot of traffic.

I think the complexity of AWS's pricing is what led me to the simplicity of AWS's Lightsail offering.

Lightsail has no free tier, but the regular free tier is fraught with potential for overrun. Lightsail is not.


That looks a lot like Digital Ocean's offering - anyone tried both who can compare Lightsail vs DO?

edit: Blog post that says they are about equal; Lightsail has AWS integration but DO is a bit more full-featured as of the writing (https://cloudacademy.com/blog/amazon-lightsail-vs-digital-oc...)

I wonder how this compares with App Engine's free tier? That's what I use for hobby projects that I don't want to pay a monthly fee on.

I do the same but haven't calculated just what the gae free tier would cover in the way the author did

I always enjoy articles like this, but must counterbalance with the vast cost savings of working with a stack with which you're already familiar.

Is it generally reliable to ask Amazon support what capacity you need for a particular use case, so you're not buying more than needed? E.g., a service that might be used once per day for a minute, and another day it might get used a dozen times for maybe an hour, and only during fixed business hours. And there are general purpose, compute optimized, storage optimized, etc. But what if the service is more sensitive to network latency than to either storage or compute?

Or are there 3rd party estimators that do a better job of telling you what kind of instance to get? Or just pick a general purpose instance, run it for a week, and then tweak it?

I am on GCP but I usually request/allocate more and once in production for a few weeks, I can turn some things down and others up. That is if your product/setup is new and you aren't sure how everything will interact once live (in terms of perf).

Otherwise, for Lambda there is a repo on GitHub with a script that will publish your code in every possible configuration, run stress tests, and keep only the optimum (perf/cost) one.

Hi, could you share the link of said repo?

I searched around and closest I found was https://github.com/Nordstrom/serverless-artillery

> Is it generally reliable asking Amazon support what capacity you need for a particular use case

Everyone's use case is different, they are not experts at your application, you are.

Spend some time learning about the various services, do a deep-dive and maybe even a prototype on the ones that look interesting. There are literally dozens of ways to build any application, depending on what your goals are (low cost, low latency, low maintenance, etc)

> run it for a week

Run it for an hour, you should be able to quickly get a cost/benefit.

There are a ton of instance types, but you generally only need to test 2 or 3 families, and 2-3 sizes. (Do you need lots of RAM? CPU? Disk? FPGAs? GPUs?). It's worth the time to automate this, so you can periodically test it. (Yes, it will cost you a dollar or two.)
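Once you have benchmark numbers from those few families and sizes, the comparison itself is one line: divide measured throughput by hourly price and take the max. The instance names, prices, and throughputs below are made up for illustration:

```python
def best_value(candidates):
    """candidates: {name: (price_per_hour, ops_per_second)}.
    Return the instance with the most ops per dollar."""
    return max(candidates, key=lambda n: candidates[n][1] / candidates[n][0])

# Hypothetical measurements from running your own workload on each type
measurements = {
    "t3.small": (0.02, 1200),
    "m5.large": (0.10, 7000),
    "c5.large": (0.09, 9000),
}
print(best_value(measurements))  # c5.large
```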

> But what if the service is more network latency sensitive

Don't forget your own part of the stack here. Writing in a scripting language can add milliseconds, as can normalizing your data (i.e., NoSQL usually prefers denormalizing, which trades more duplication for lower latency).

You can also pay (extra) for no hypervisor.

I think you will have to buy premium support https://aws.amazon.com/premiumsupport/ or spin up an instance around what usage you expect and then analyze your bottlenecks to swap it later on.

What I do is first automate setting up an instance from scratch, then launch a variety of instance types and benchmark their performance at the task I want them to perform.

I recently went through this, though with the (arbitrary) goal of running for free indefinitely. This is tougher, since you can't use API Gateway and need to throttle your dynamo operations.

SQS ended up being the key to making this work: everything is asynchronous, and the js SDK is used to enqueue messages directly from the frontend.

That setup is described in https://www.simonmweber.com/2018/07/09/running-kleroteria-fo..., if you're interested.

TL;DR: Not free, but ~$6/year for the first year, ~$20/year thereafter

Yup, also note in there - "You will have $0 bill in first 12 months if your S3 storage is below 5GB, CloudFront traffic stays below 50GB/Month and API Gateway requests are less than 1 million a month, or you get 666 user sessions a month compared to the 1,000 user sessions I picked while running these calculations."
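The way numbers like these fall out: for each metered resource, divide the monthly free allowance by per-session usage, and the smallest quotient is the binding constraint. The free limits below are the ones quoted; the per-session figures are placeholders, not the article's:

```python
FREE_LIMITS = {                # monthly free allowances from the quote
    "s3_gb": 5,
    "cloudfront_gb": 50,
    "apigw_requests": 1_000_000,
}

PER_SESSION = {                # assumed usage per user session
    "s3_gb": 0.0001,
    "cloudfront_gb": 0.002,
    "apigw_requests": 50,
}

def max_free_sessions(limits, usage):
    """Sessions per month before the first free allowance runs out."""
    return int(min(limits[k] / usage[k] for k in limits))

print(max_free_sessions(FREE_LIMITS, PER_SESSION))  # 20000: API Gateway binds first
```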

It's $20/month, not year.

I think I lose more coins out of my pocket on the street yearly

If you're losing $20 worth of coins out of your pocket somewhere on the street on a yearly basis, you're either dealing with a hell of a lot more coins than the average person or you're one clumsy individual.

Nah, it's just that my country uses a lot more cash. The most common coins are worth $1 and $2.

It depends. US coins larger than 25c are uncommon.

But other countries have $1 and $2 coins, with paper bills only starting at $5.

Transit tokens in Toronto are tiny and worth $3. I’ve dropped a few because they catch on my phone case edge.

Recently I thought I'd lost a €5 banknote because some toll machines in Italy send cash[0] flying as if they were doing it with contempt.

I found it eventually, alongside a €2 coin somebody had left there. On my way back I also found a €1 coin and a 50c one. I guess people lose a measurable amount of cash this way.

[0] Non-Italian cards don't work on Italian motorways, only cash. Knowledge I acquired through an error of my own, so now other people don't have to.

Huh. How'd that work then? Did they just fine you or send you a bill later?

Once they notice that someone's clogging the toll booth, you receive a receipt with payment details, and you have 14 days to pay, after which interest starts accumulating.

In one instance in Venice, though, the employee preferred to argue with me over the speakerphone, insisting that I "insert euro" even though I didn't have any cash on me (there are no ATMs on the motorways, something I learned that day).

His line of thinking was that I want the receipt so that I can avoid paying by never returning to Italy.

How would this work with a more standard web page? For example, Laravel (PHP) on the backend with a relational database like MySQL?

AWS Lambda does not support PHP. You will need to spin up an EC2 instance for running PHP and another instance in RDS for MySQL. For development, you can spin up t2.micro on EC2 and RDS and pay $0 but pick another instance type when you go live to production.

Why is everyone so concerned with $20/mo? If you're running a business, this should be the least of your concerns. Building on AWS means easy scaling for the future.

Who wants to redesign their entire project if it takes off?

Not only is this stack free, it’s easy to develop and set up and operate.

Here’s a boilerplate Go app that sets up this stack:


That's a rather misleading title. More like, Running Your Application for Free on AWS for the first 12 months (and then $20/mo after...)

I'm obsessive about optimizing cost on cloud platforms. I tend to make many, small hobby/experimental/etc projects. So I've long needed to find every way I can to ensure the costs for these projects remains small. $20/month is fine for a single project, but 20 projects? Yeah...

I know a lot people just look at cloud service pricing from the perspective of a startup. But I can't be the only one who uses cloud services for small, hobby/experimental/etc projects. So perhaps my insights will be helpful.

TL;DR: Use Google Cloud. App Engine Standard if you can.

When it comes to cost at small and medium scale, it's _really_ hard to beat Google Cloud. Check their Always Free tier: https://cloud.google.com/free/ That'll cover pretty much everything you need for small scale projects. You can grab a tiny VM, some Storage, some Datastore, all for the low, low price of $0/mo "forever". On top of that I've found their service pricing to either be on-par with AWS, cheaper, or if it isn't cheaper it's more granular than AWS so your off-the-lot prices end up cheaper anyway. For example AWS's managed NoSQL service requires you to allocate processing bandwidth up-front, which means there is a minimum cost no matter how little you use. Gcloud's is just charged based on usage. Don't use it? It's free.
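That pricing difference reduces to simple arithmetic: provisioned-capacity billing has a cost floor even at zero traffic, while usage-based billing genuinely goes to $0 when idle. The unit prices below are illustrative placeholders, not quotes from either provider:

```python
HOURS_PER_MONTH = 730

def provisioned_monthly(read_units, write_units,
                        read_price_hr=0.00013, write_price_hr=0.00065):
    # DynamoDB-style: pay for allocated capacity whether or not it's used
    return HOURS_PER_MONTH * (read_units * read_price_hr +
                              write_units * write_price_hr)

def usage_based_monthly(reads, writes,
                        read_price=4e-7, write_price=2e-6):
    # Datastore-style: billed purely per operation, so idle really is $0
    return reads * read_price + writes * write_price

print(provisioned_monthly(5, 5) > 0)    # cost floor even when idle
print(usage_based_monthly(0, 0))        # 0.0
```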

Azure is interesting and has _some_ always-free tier like Google Cloud. Their prices have come down a _lot_ in the past few years. They're worth a look if you haven't checked in a while. But they're still up-and-coming in a lot of ways. E.g. their Container Registry is charged by how much disk you've allocated, rather than how much you've used. GCloud CR charges based only on usage, and gets roped into your Storage usage so it's part of the Always Free tier.

Gcloud's App Engine Standard Environment is a _beast_ for cost optimization. If you can fit your project into an App Engine + Datastore shaped hole, your project will absolutely, no strings attached, cost _nothing_ at the small scale. And when your project suddenly gets rocket fuel, it'll scale automatically with no effort on your part at reasonable cost.

(Do not even look at Flexible Environment. Its pricing is ludicrous.)

The BIG CAVEAT to Google Cloud is the usual Google failings. Their services tend to be unreliable (not the case for App Engine), their customer support is atrocious, their automated systems may randomly ban you and nuke your projects, and they may increase service pricing 25x with short notice when you least expect it.

I can't emphasize those caveats enough. Tread carefully. I'm sure others will chime in with their horror stories for AWS/Azure/etc. But Gcloud comes up on the HN news feed more often than the other cloud providers for a reason.

(This is all for people who want general cloud infrastructure. If you just need servers, there's of course the usual Vultr, et. al. with just low cost VMs.)

EDIT: For reference, I'm currently running some 12 or so small, personal projects in GCloud right now. Some are "dead" projects, others are actively used by myself, and a rare few are actively used by a small user base. My monthly costs for all those projects are currently ~$1/mo all-in. A lot of that stuff lives on App Engine, which means I've had to do no maintenance on them in... well, some of them have been there for _years_ now without me touching them.

One of my previous companies transitioned from AWS to GCloud for cost reasons. That move cut the cloud expenses down to 25% of what they were, and also enabled us to add new features (because it was not possible to deploy them to AWS without it costing absurd amounts of money). But I'll also note that the transition was incredibly painful due to GCloud's various failings [luckily foisted on one of my new hires, who enjoyed the learning experience].

Hi fpgaminer,

The post is intended to analyze the cost of running on AWS; your point about comparing other cloud services is completely valid but out of scope for the post.

The $20 per month is for 1,000 sessions per day after 1 year. I assume that getting 1,000 or even 100 users on a daily basis for a whole year is pretty good for an app/service to move beyond bootstrapping and start generating some revenue or raising some money, so that $20 bill seems small. The upside is: if you don't go big, you don't pay.

Note that that tiny VM/compute instance only comes with 1 GB of transfer, which you might use up just doing a speed test. If anything close to normal transfer for a public-facing site traverses that VM, you will see charges for sure.

Yeah, bandwidth is exceedingly expensive on all the cloud providers. If you're going to use general cloud infrastructure, that's impossible to avoid. (Again excluding simpler providers like Digital Ocean where you get a terabyte or two of bandwidth included).

I've taken to throwing CloudFlare in front of any of my personal services that serve static content and which are in danger of using a lot of bandwidth. Bandwidth served through CloudFlare is free, so as long as you're hitting their cache you won't see any charges at your cloud.

Good point. However, you may as well just use Netlify if we're just talking about static assets. Anyway, the 600 MB always-free VM instance is a pretty awesome freebie for sure. There's plenty you can do with it, even if you end up paying some cents for bandwidth.

Is there a similar offer on Azure?
