I've put together a demo over at libra-shop.org. It definitely still needs some work, but I think most of the important bits are all sorted out.
Of course, the biggest issue with using free tiers for your entire stack is that you have no guarantee that the services will be running forever. (e.g. Netlify could stop providing a free AWS Lambda connection.)
In reality, it would probably be more effective to dedicate nearly $0 to infrastructure and $N,000.00 to an endowment for 1-3 hours of engineering time each year.
Figuring out how to host a static site for free and moving the site to that service requires almost no time for a competent engineer. But finding a solution that is guaranteed to exist in 100 years with zero human intervention would probably cost millions...
Thinking about it, a holistic solution to this kind of problem would [categorically need to] be extremely expensive in order to rate-limit demand. Because it's not just "host this for me for a while" with an implicit sort of "maybe let it fall over one day and see if I yell at you" - that second bit is completely removed, and the expectation is 100.0% (amortized, aggregate) uptime... indefinitely.
It's interesting how not even graves last forever. They decay over time (hundreds of years). Not all are maintained.
What sort of "forever" are you talking about? "Most Important Thing™ when the paperclip maximisers take over"?
That's not guaranteed for paid tiers either; no product offering, free or paid, is guaranteed to be offered forever.
In the days of shared hosting, you could drop $3 to $5 to host your small website and expect it to perform decently. But with your DO/Linode, you can expect excellent performance for the same price.
Only last month my two $5 droplets handled 5 million web requests from a viral post.
Simple budget web hosting packages can easily handle tens of thousands of requests per hour for like $4/month.
You're absolutely right.
My last month's AWS bill (not in the free tier anymore) was only $2.23. For the runner that runs arbitrary shell commands, I used to run a t2.micro in AWS, but when the free tier expired I moved that over to GCP and I'm burning through their credits. I'm also using GitLab CI/CD to periodically renew the Docker instance that runs shell commands.
It only works because there isn't much traffic, obviously, and API requests are not made on every page load.
# You have a new challenge!
# There is a file named "access.log" in the
# current directory. Print the contents.
bash(0)> tac access.log | tac
184.108.40.206 - - [09/Jan/2017:22:29:57 +0100] "GET
/posts/2/display HTTP/1.0" 200 3240
220.127.116.11 - - [09/Jan/2017:22:30:43 +0100] "GET
/posts/foo?appID=xxxx HTTP/1.0" 200 1116
18.104.22.168 - - [09/Jan/2017:22:34:33 +0100] "GET
/pages/create HTTP/1.0" 500 3471
22.214.171.124 - - [09/Jan/2017:22:35:30 +0100] "GET
/posts/foo?appID=xxxx HTTP/1.0" 500 2477
126.96.36.199 - - [09/Jan/2017:22:38:03 +0100] "GET
/bar/create HTTP/1.0" 200 1116
188.8.131.52 - - [09/Jan/2017:22:42:18 +0100] "GET
/posts/1/display HTTP/1.0" 200 2477
184.108.40.206 - - [09/Jan/2017:22:44:25 +0100] "POST
/posts/1/display HTTP/1.0" 200 3471
251.111.109.143 - - [09/Jan/2017:22:49:02 +0100] "GET
/posts/foo?appID=xxxx HTTP/1.0" 200 2477
220.127.116.11 - - [09/Jan/2017:22:52:31 +0100] "DELETE
/posts/2/display HTTP/1.0" 404 2477
18.104.22.168 - - [09/Jan/2017:22:57:11 +0100] "GET
/posts/foo?appID=xxxx HTTP/1.0" 200 3471
# You have a new challenge!
# Print the last 5 lines of "access.log".
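One way to answer that follow-up, assuming the same interactive session shown above:
bash(0)> tail -n 5 access.log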
It's funny: this is essentially what Google App Engine's standard environment gives you (or equivalents to), but nobody has ever been very excited about using it.
Back in the days of cPanel/LAMP shared hosting, you'd have similar capability for $5/mo.
- Enterprise public cloud, AWS/GCP/Azure, expensive but scalable and enterprise friendly
- Developer public cloud, Linode/DO, cheap and easy to use
Although I say that AWS/GCP/etc are expensive, they obviously have negotiable prices for large customers. I doubt the smaller providers do that.
But it makes me wonder why people use AWS/GCP when the other providers are so much cheaper. How do Linode/DO offer such good prices? Would they kick me off if I actually maximized the server capacity they offer, like a shared cPanel host would do, back in the day?
(If, on the other hand, the "small website" is just a website, why not just put it in an S3 bucket and put Cloudflare in front? Even cheaper!)
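For the curious, that setup is only a few AWS CLI calls; a rough sketch (the bucket name and region are placeholders, and you'd still point a proxied Cloudflare DNS record at the website endpoint):

# create a bucket and turn on static website hosting
aws s3 mb s3://example-small-site --region us-east-1
aws s3 website s3://example-small-site --index-document index.html --error-document 404.html

# upload the site and make the objects publicly readable
aws s3 sync ./public s3://example-small-site --acl public-read

# then put the bucket's website endpoint, e.g.
# example-small-site.s3-website-us-east-1.amazonaws.com,
# behind an orange-clouded CNAME in Cloudflare so cached hits never touch S3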
And then I found Lightsail, when I was clicking through the million different services, and I was like holy shit, this is familiar, and like three clicks later I had a server running! Obviously, it doesn't have all the features DO offers (because I imagine those can be done with other AWS services), but I didn't need or use them (aside from DO's DNS). Really, the only gripe I've had with it so far is the OS offerings. All my servers (personal & work) are CentOS, so I was slightly annoyed when I found out I couldn't get a CentOS server on Lightsail, so I just settled with Ubuntu because that's fine and I'm familiar enough with it.
So, if you need something hosted on AWS for whatever reason, and are not familiar with it at all, but are with something like DO/Linode, Lightsail is something to look into. I think the pricing was fairly comparable to DO, maybe a little more expensive, but I'm not paying for it so I didn't really look.
Word on the street is we’re gonna start moving a lot of our infra to AWS at work though, so I guess I should probably at least figure out how the hell EC2 works...
* - when you stop and start an instance, it will change IP address. But if you attach an Elastic IP to it, it stays the same.
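For reference, doing that from the AWS CLI is two calls (the instance ID below is a placeholder; the allocation ID comes back from the first command):

# allocate an Elastic IP, then bind it to the instance so the
# address survives stop/start cycles
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0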
I think what is missing is a middle-ground approach that scales both ways. Or a Heroku that is built on top of DO / Linode.
Really? I have to say I don't understand it. I remember the "NoOps" movement from a few years ago and I just find the whole concept around it to be almost hilarious; kind of like Salesforce's old "No Software" logo (which is nowhere to be found in their newer marketing).
As I see it, as long as your organization has people using IT in any capacity at all, you will need someone in charge of IT Operations. Whatever you may want to call it: DevOps, or SRE, or PE, or even if you just decide it's something that each of your regular devs is going to be doing, it's a function that needs to be done.
Someone needs to be able to set up the systems, monitor them, scale them, secure them, and troubleshoot issues. It doesn't matter how well-engineered and maintained and automated the components you rely on are, they will break in various compound fractures and you will need to deal with the downtime and potential corruption to your composite system.
I'll eat my hat if you can point me towards a single non-trivial software service that has been running continuously without any "IT".
That's true, but I don't need to worry about the server that my database, load balancer, queuing system, RedisCache, storage, etc. is running on. I only have to worry about my applications and the actual database. If I need to provision more hardware, it's a click of a button (well actually updating my CloudFormation template).
There is an entire level of both hardware and operating system maintenance that I don't have to worry about.
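As a rough illustration (the stack, template, and parameter names here are made up, not the commenter's actual setup), "provisioning more hardware" that way ends up being a parameter change plus one deploy:

# bump the sizes/counts in the template or its parameters, then:
aws cloudformation deploy \
  --stack-name my-app \
  --template-file template.yaml \
  --parameter-overrides InstanceType=m5.large DesiredCapacity=4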
Right. So you traded [relatively] inexpensive OS and infrastructure maintainers for [relatively] expensive application and tier maintainers.
This is certainly part of the reasoning for things like Terraform, CloudFormation, even Chef/Puppet/Ansible: people want a single development process for reviewing and executing code.
I always pronounced that as "nnnn..oops"
They definitely don't offer anywhere near as many services as Google, MS, Amazon, or even Alibaba's cloud offerings. But they're becoming pretty decent 'cloud lite' providers who cover what's needed for plenty of folks and companies.
Might want to go recheck the API documentation for providers like DO et al
Yup. That would work, but it takes out a couple of things from the original requirements I had:
1. Scalability: it should be able to stretch, especially for bursty traffic situations.
2. I should not have to pay if the app or service is not being used.
A single downtime event on a single host will likely have costs that can never be actuarially recovered from, in this head-to-head. Pay the extra $15.
I think AWS would do well to expand massively its free tier. Talented startups in my world are starting to look askance at AWS fees.
1. Nginx config & tuning
2. Postgres / MySQL config & tuning
3. App server config & tuning
It is dramatically different config & tuning, so we might exist in a time where people overvalue familiarity & it colors their impressions of effort, but I don't think it's a long-term viable crutch to lean on either.
As to core services going down, I read you loud & clear, it does happen. From my experience, it happens way less often than outages on a single box though, or even a collection of boxes you're managing. And I don't know if there has been a time where there were simultaneous region failures in AWS -- and with DynamoDB global tables it's easier than ever to get a multi-region app launched.
Pricing is the kicker for sure though. Your bill line is going to be higher. You're also priced in, though, which from a biz standpoint is huge. It's very difficult to budget for failure; it's probably a hidden component in your engineering team costs. For startups this can be mitigated through AWS' accelerator-linked programs; otherwise, I'd encourage any CTO / CEO / founder team to really think about how cost is not measured in the data center bill, but in that cost, plus engineer cost, plus customer retention costs, plus opportunity cost.
A couple of cheap VPSs would be cheaper and have way more headroom for the price.
$6.35 != $0. (For any numerical value of $ not equal to zero.)
I think maybe they were talking specifically about not using the t2.micro instance size in production.
Set up Heroku in <1 hour for <$100/month, forget about hosting, and go build your product.
If $100/month breaks you, I think you're on the wrong path anyway.
I find it even more efficient, since Go, AWS SAM and Lambda offer dev/build/prod simplicity that Heroku never figured out.
I open sourced and documented the stack that I use instead of Heroku here:
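Not that stack specifically, but the general Go + SAM + Lambda loop being described looks roughly like this sketch (commands only; the template contents are whatever your stack defines):

# compile the Go handler(s) and package them per template.yaml
sam build

# exercise the same template locally before shipping it
sam local start-api

# create/update the Lambda functions, API Gateway, IAM roles, etc.
# (--guided prompts for the stack name, region, and deploy bucket the first time)
sam deploy --guided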
No free tier, but the regular AWS free tier is fraught with potential for overrun; Lightsail is not.
edit: Blog post that says they are about equal; Lightsail has AWS integration but DO is a bit more full-featured as of the writing (https://cloudacademy.com/blog/amazon-lightsail-vs-digital-oc...)
Or are there 3rd party estimators that do a better job of telling you what kind of instance to get? Or just pick a general purpose instance, run it for a week, and then tweak it?
Otherwise, for Lambda there is a repo on GitHub with a script that will publish your code in every configuration possible, run stress tests, and only keep the most optimal (perf/cost).
Everyone's use case is different, they are not experts at your application, you are.
Spend some time learning about the various services, do a deep-dive and maybe even a prototype on the ones that look interesting. There are literally dozens of ways to build any application, depending on what your goals are (low cost, low latency, low maintenance, etc)
> run it for a week
Run it for an hour, you should be able to quickly get a cost/benefit.
There are a ton of instance types, but you generally only need to test 2 or 3 families, and 2-3 sizes. (Do you need lots of RAM? CPU? Disk? FPGAs? GPUs?). It's worth the time to automate this, so you can periodically test it. (Yes, it will cost you a dollar or two.)
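A crude sketch of automating that comparison with the AWS CLI, assuming you've baked your benchmark into an AMI that runs it on boot (the AMI ID and instance types below are placeholders):

# launch one instance per candidate type, let the benchmark report, then clean up
for type in t3.medium c5.large r5.large; do
  id=$(aws ec2 run-instances --image-id ami-0123456789abcdef0 \
    --instance-type "$type" --count 1 \
    --query 'Instances[0].InstanceId' --output text)
  echo "benchmarking $type on $id"
  # ...collect the benchmark results however your AMI publishes them, then:
  aws ec2 terminate-instances --instance-ids "$id"
done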
> But what if the service is more network latency sensitive
Don't forget your own part of the stack here. Writing in a scripting language can add milliseconds, as can normalizing your data (i.e. NoSQL usually prefers de-normalizing, which trades off more duplication for lower latency).
You can also pay (extra) for no hypervisor.
SQS ended up being the key to making this work: everything is asynchronous, and the js SDK is used to enqueue messages directly from the frontend.
That setup is described in https://www.simonmweber.com/2018/07/09/running-kleroteria-fo..., if you're interested.
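The enqueue itself is a single call; roughly the CLI equivalent of what the frontend does through the js SDK (the queue URL and payload here are made up):

aws sqs send-message \
  --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/example-queue \
  --message-body '{"action":"signup","email":"user@example.com"}'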
But other countries have $1 and $2 coins, with paper bills only starting at $5.
Transit tokens in Toronto are tiny and worth $3. I’ve dropped a few because they catch on my phone case edge.
I found it eventually along a €2 coin somebody left there. On my way back I also found a €1 coin and a 50c one. I guess people lose a measurable amount of cash this way.
 Non-Italian cards don't work on Italian motorways - only cash. Knowledge I acquired through error of my own so now other people don't have to.
In one instance in Venice though, the employee preferred to argue with me over the speakerphone, insisting that I "insert euro" even though I didn't have any cash on me (there are no ATMs on the motorways - something I learned that day).
His line of thinking was that I want the receipt so that I can avoid paying by never returning to Italy.
Who wants to redesign their entire project if it takes off?
Here’s a boilerplate Go app that sets up this stack:
I'm obsessive about optimizing cost on cloud platforms. I tend to make many small hobby/experimental projects, so I've long needed to find every way I can to ensure the costs for these projects remain small. $20/month is fine for a single project, but 20 projects? Yeah...
I know a lot of people just look at cloud service pricing from the perspective of a startup. But I can't be the only one who uses cloud services for small hobby/experimental projects. So perhaps my insights will be helpful.
TL;DR: Use Google Cloud. App Engine Standard if you can.
When it comes to cost at small and medium scale, it's _really_ hard to beat Google Cloud. Check their Always Free tier: https://cloud.google.com/free/ That'll cover pretty much everything you need for small scale projects. You can grab a tiny VM, some Storage, some Datastore, all for the low, low price of $0/mo "forever". On top of that I've found their service pricing to either be on-par with AWS, cheaper, or if it isn't cheaper it's more granular than AWS so your off-the-lot prices end up cheaper anyway. For example AWS's managed NoSQL service requires you to allocate processing bandwidth up-front, which means there is a minimum cost no matter how little you use. Gcloud's is just charged based on usage. Don't use it? It's free.
Azure is interesting and has _some_ always free tier like Google Cloud. Their prices have come down a _lot_ in the past few years. They're worth a look if you haven't checked in a while. But they're still up-and-coming in a lot of ways. E.g. their Container Registry is charged by how much disk you've allocated, rather than how much you've used. GCloud CR charges based only on usage, and gets roped into your Storage usage so it's part of the Always Free tier.
Gcloud's App Engine Standard Environment is a _beast_ for cost optimization. If you can fit your project into an App Engine + Datastore shaped hole, your project will absolutely, no strings attached, cost _nothing_ at the small scale. And when your project suddenly gets rocket fuel, it'll scale automatically with no effort on your part at reasonable cost.
(Do not even look at Flexible Environment. Its pricing is ludicrous.)
The BIG CAVEAT to Google Cloud is the usual Google failings. Their services tend to be unreliable (not the case for App Engine), their customer support is atrocious, their automated systems may randomly ban you and nuke your projects, and they may increase service pricing 25x with short notice when you least expect it.
I can't emphasize those caveats enough. Tread carefully. I'm sure others will chime in with their horror stories for AWS/Azure/etc. But Gcloud comes up on the HN news feed more often than the other cloud providers for a reason.
(This is all for people who want general cloud infrastructure. If you just need servers, there's of course the usual Vultr et al. with just low-cost VMs.)
EDIT: For reference, I'm currently running some 12 or so small, personal projects in Gcloud right now. Some are "dead" projects, others are actively used by myself, and a rare few are used actively by a small user base. My monthly costs for all those projects are currently ~$1/mo all-in. A lot of that stuff lives on App Engine, which means I've had to do no maintenance on them in ... well some of them have been there for _years_ now without me touching them.
One of my previous companies transitioned from AWS to Gcloud for cost reasons. That move cut the cloud expenses down to 25% of what they were, and also enabled us to add new features (because it was not possible to deploy them to AWS without it costing absurd amounts of money). But I'll also note that the transition was incredibly painful due to Gcloud's various failings [luckily foisted on one of my new hires, who enjoyed the learning experience].
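For anyone who hasn't touched App Engine Standard, the day-to-day surface area really is tiny; deploying one of those hobby projects is roughly this (the project ID is a placeholder):

# deploy whatever app.yaml describes to the project
gcloud app deploy app.yaml --project example-hobby-project

# then tail logs or open it in a browser
gcloud app logs tail
gcloud app browse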
The post is intended to analyze the spend of running on AWS; your point about comparing other cloud services is completely valid but out of scope for the post.
The $20 per month is for 1,000 sessions per day after 1 year. I assume that getting 1,000 or 100 users on a daily basis for a whole year is pretty good for an app/service to move past bootstrapping and start generating some revenue or raise some money, so that $20 bill seems small. The upside is that if you don't go big, you don't pay.
I've taken to throwing CloudFlare in front of any of my personal services that serve static content and which are in danger of using a lot of bandwidth. Bandwidth served through CloudFlare is free, so as long as you're hitting their cache you won't see any charges at your cloud.