
How I use the good parts of AWS - DVassallo
https://twitter.com/dvassallo/status/1154516910265884672
======
ookblah
I think this is really great for getting stuff up and running as quickly as
possible, maybe if you're just starting out, but I'm really surprised at some
of the things said. Maybe it makes sense if this is a site you run on the side.

Doing test/staging on a nano and then pushing to production on an m5? What?
Like, get ready to troubleshoot random issues completely unrelated to your
code. And then equating different AMIs to containers is vastly oversimplifying
things.

With docker I can just get up and running on my local box and every dev I
share this with has the exact same environment and config.

I can package up those images and send them up to ECS/Kubernetes and not deal
with the same headaches. The learning curve is a bit steeper, but absolutely
worth it.

I walked the same path here (starting out with single VMs for environments,
installing services locally on my laptop), and the headaches down the line are
not worth it; it isn't even a case of over-thinking the solution. You don't
need to ride the cutting edge, but modern tooling saves a ton of time.

It was a PITA migrating our old VM stuff over, but absolutely worth it. If you
don't want to deal with maintaining systems there are other solutions
mentioned ranging from full PaaS to something like GKE.

~~~
randallsquared
> _With docker I can just get up and running on my local box and every dev I
> share this with has the exact same environment and config._

Well, that's the dream. The reality is like that sometimes, but also sometimes
like "I ran `docker-compose up -d` like you said, and the local-dynamodb
container seems fine, but the app container output 'Cannot find
/volumeforsomething' and died...?" and then there's a slack thread for a
couple hours about which version of Docker and is it native or Docker for Mac,
and whether to try upgrading Docker for Mac first or just `docker volume rm -f
volumeforsomething` or even `docker volume prune -f`...

~~~
diminoten
Yeah but some of us have paid those dues over time and don't have that level
of issue with Docker any longer.

I used to really struggle getting Docker and docker-compose to do what I want,
but after a few years working with it, I'm not blocked by the various volume
or network or what-have-yous that used to come up.

Alternative proposal: run your containers in docker-compose environments on an
EC2 instance. Everyone wins?

~~~
wanabroboticist
If it takes years to figure out how to do things "properly" with Docker, then
maybe it isn't very good.

~~~
diminoten
Maybe, but now that I have, it's very very good.

It's also totally reasonable to say, "It shouldn't have taken you years." It
probably shouldn't have!

------
merlincorey
I am in agreement with everything generally except for CloudFormation and
setting up Route53 zones manually.

I use Terraform to set up my infrastructure as code on AWS, including Route53
zones.

After the latest 0.12 upgrade to the language, Terraform is quite a bit more
user-friendly than CloudFormation, and importantly, it is not locked down to
just Amazon -- it supports multiple clouds and on-premises solutions for
declarative orchestration of resources.

~~~
joshpadnick
My company[1] has written hundreds of thousands of lines of Terraform code as
part of a commercially maintained library of prod-grade Terraform modules.
We've also used Terraform to set up 100+ teams on AWS with prod-grade infra,
and I can confirm that Terraform works very well for robustly launching on AWS.

There's also a fast-growing ecosystem around Terraform: lots of open source
modules, automated testing frameworks, and a growing number of tooling
solutions. In the early years of Terraform, bugs and stability were major
issues. With 0.12, the maturity factor is becoming very compelling.

On a separate note, I'm surprised the author endorses plain old EC2 over
Docker. I get the point of "choose boring tech," but it seems like launching
your app on EC2 requires a whole bunch of rework around automation that's
already done for you in ECS, EKS, or dare I say even Elastic Beanstalk +
Docker.

[1] [https://gruntwork.io](https://gruntwork.io)

~~~
rukenshia
What are the downsides of using Terraform? We are currently in the process of
redoing a lot of our infrastructure and are considering Terraform. We had some
bad experiences in the past (probably 12-18 months ago) with AWS and Terraform,
especially for environments where manual changes to resources for testing
purposes are common (think changing security group rules, for example). It
resulted in us having a broken state and being unable to apply changes to our
Terraform deployment without tracing the manual changes and undoing them, so
I'm a bit cautious about moving forward with Terraform. Have you experienced
this recently? I'm intrigued by your comment and would love it if you could
expand on it.

~~~
scaryclam
Ideally, don't allow manual changes to happen. It's not that hard to set up for
different environments and testing, so IME it's not been much of an issue.

However, if you really can't change your ways of working, which I understand,
then try out the "terraform refresh" command. I've been importing state
recently to move some of our own infrastructure over to TF, and have found it
to be quite useful for things like manual security group changes. Basically,
I'm building things up bit by bit, and when one of my states gets out of sync
I've been updating the local config and running that command, which brings the
state back in line.

In general, once you get your workflows sorted out and running for a while,
you're unlikely to have any major issues with Terraform. Just make sure to use
remote states and version them whenever you can (for example, turn on
versioning on the S3 bucket if you use S3 as the remote).

------
manigandham
Completely agree on just using a server instead of the various lambda-style
systems. All modern languages have great web/app frameworks that make it
incredibly easy to build, whether it's a single endpoint or a giant app. The
ability to just include whatever code you need and deploy it atomically is
massively underrated. Also agreed on scale: servers are fast and cheap, and the
savings from Lambda rarely pay off given the extended effort. When you do
scale, Lambda becomes more expensive anyway.

I do recommend using Docker, though. Containers are more portable and easier to
deploy and replace on a running server, along with the ability to run multiple
instances, mount volumes, set up ports and local networks, and eventually
migrate to something like ECS/K8S if you really need it.

~~~
tus88
Extended effort to push up a lambda function and not have to worry about
automating deployment and configuration and patching and monitoring and
upgrading and failover and, yes, scaling? Maybe it's just me, but I'd rather
not see the backend of a server ever again for anything other than
development.

~~~
manigandham
That's why I recommend containers, because automating deployment and config
would be the same regardless of destination, right? Monitoring also seems to
be the same if you're using built-in cloud stuff.

As for scale, I think that's massively overstated. Servers are really fast and
most apps aren't anywhere near capacity. Even a $10 digitalocean server is
plenty of power, and there's no cold starts. Even YC's advice is to focus on
features and dev speed, and worry about scaling when it truly becomes an
issue.

~~~
AmericanChopper
But a lambda is just a container that you don’t have to manage.

I don’t get this sort of anti-serverless sentiment. If you have even one good
SRE, then it’s an absolute breeze. Writing a lambda function is writing
business logic, and almost nothing else. I can’t see how you could possibly do
any better in terms of development velocity. I don’t get this ‘testing
functions is hard’ trope either. Writing unit tests that run on your local
machine is easy.
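
To make that concrete, here is a minimal sketch (the handler and test are
hypothetical, not from any real project): the handler is plain Python, so a
unit test can call it directly with a dict and no AWS in sight.

    # a hypothetical, minimal Lambda handler -- plain Python
    def handler(event, context):
        name = event.get("name", "world")
        return {"statusCode": 200, "body": "hello " + name}

    # a local unit test (e.g. run under pytest) -- no AWS involved
    def test_handler_greets_by_name():
        result = handler({"name": "dev"}, None)
        assert result["statusCode"] == 200
        assert result["body"] == "hello dev"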

~~~
takeda
Your code becomes AWS specific, it is more expensive if you need to scale, it
has higher latency, it is harder to test locally etc etc.

IMO lambda is awesome to handle infrastructure automation.

~~~
AmericanChopper
> Your code becomes AWS specific

Not really, aside from the other AWS services you consume (KMS, parameter
store...). A cloud function takes an event, executes your business logic, and
returns a response. The structure of the event can change slightly, but
they’re remarkably portable, and I’ve moved them before. If you’re doing it
right, most of your API gateway config will be an OpenAPI spec, and equally
portable.
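
To make "remarkably portable" concrete, a sketch of the shape I mean (all
names here are hypothetical): keep the business logic ignorant of the event
shape, and let a thin provider-specific adapter unpack it.

    import json

    # business logic: no cloud dependency, portable as-is
    def quote_price(sku, quantity):
        return {"sku": sku, "total": round(quantity * 9.99, 2)}

    # the only AWS-shaped part: unpack an API Gateway proxy event,
    # call the core logic, wrap the response
    def handler(event, context):
        body = json.loads(event["body"])
        result = quote_price(body["sku"], body["quantity"])
        return {"statusCode": 200, "body": json.dumps(result)}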

> it is more expensive if you need to scale

This is context specific.

> it has higher latency

Again context specific, and likely not something actually worth caring about.

> it is harder to test locally

This is one I simply cannot understand. You can run your functions locally,
they’re just regular code. I’ve never had a problem testing my functions
locally. If anything I’d say it’s easier.

There are upsides and downsides to any architecture design. Serverless models
have their downsides, but these anti-serverless discussions tend to miss what
the downsides actually are, and kinda strawman a bunch of things that aren’t
really downsides.

I’d say the most common downside with serverless is that the persistence layer
is immature. If you want to use a document database, it’s great, if you want
to use a relational one, you might have to make a few design compromises. But
that said, this is something that’s improving pretty quickly.

------
jamestimmins
Perhaps I'm alone here, but I'd say 50% of my time with AWS goes to
(attempting to) properly configure the VPC as well as the IAM roles. For the
majority of small-ish projects, the hardware isn't nearly as important or
difficult as properly configuring access rules between services and outside
parties.

~~~
rjurney
IAM is a killer. You’re absolutely right. Are GCP permissions easier? I
haven’t used them as much.

~~~
jsmeaton
I’ve stopped worrying about minimising IAM permissions and tend to just use
the built in AWS roles for most things now.

~~~
DVassallo
Yes, I do the same. For service roles I just use the PowerUser managed role. I
don’t see the need to put access control on Amazon’s ability to call its own
services. I only restrict my EC2 instance profile, since that’s a bit more
vulnerable, and I tend to know very precisely what it should have access to.
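
For what it’s worth, a sketch of what that restriction looks like in practice
(the bucket, table, and role names are placeholders): the instance role gets an
inline policy listing exactly the calls the box makes.

    import json
    import boto3

    # placeholder least-privilege policy for an EC2 instance role
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": "arn:aws:s3:::my-app-bucket/*",
            },
            {
                "Effect": "Allow",
                "Action": ["dynamodb:GetItem", "dynamodb:PutItem"],
                "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/my-app-table",
            },
        ],
    }

    iam = boto3.client("iam")
    iam.put_role_policy(
        RoleName="my-ec2-instance-role",
        PolicyName="my-app-least-privilege",
        PolicyDocument=json.dumps(policy),
    )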

~~~
poxrud
What if you have a lambda with a full admin role that is not sanitizing its
inputs? Or maybe it's using an outdated file parsing library (csv/yaml) with a
vulnerability. Now your entire AWS account could potentially be compromised.

~~~
DVassallo
Yes, I would use a restricted role for Lambda too. Anything that gets creds in
user space gets restricted permissions: EC2, Lambda, ECS, etc.

------
benjaminwootton
“Step 1: Forget that all these things exist: Microservices, Lambda, API
Gateway, Containers, Kubernetes, Docker.

Anything whose main value proposition is about “ability to scale” will likely
trade off your “ability to be agile & survive”. That’s rarely a good trade
off.”

I don’t see these as being about scalability. Rather, they’re about fast time
to market and the ability to change. Moving up the stack and adding managed
services such as API Gateway will definitely give your product a better chance
of survival.

~~~
DVassallo
The disadvantages I’m highlighting are about the restrictions of the Lambda
abstraction.

What if you want to send telemetry to a third party? Or use a cache? Or deploy
something bigger than 250MB? Or handle WebSockets (without having to
read/write state in DDB on every message)? Or buffer something on the
filesystem? Or run something for more than 15 mins? etc etc.

How do all those things not impact agility? (In the web app/service space at
least.)

~~~
nexuist
I've been working with Lambda for about two years now, I'd like to answer all
of your concerns. I don't work for AWS but I do love Lambda and I think it has
its place amongst everything else. Previously I built node.js+Postgres apps
hosted on DigitalOcean for ~4 years.

>What if you want to send telemetry to a third party?

Can't you do this from your own Lambda code? Sure, your code could crash
before it can reach your telemetry service - but isn't that a concern in a
server-based app as well?

>Or use a cache?

Elasticache or Mongo or whatever NoSQL 3rd party service you want to use works
straight from Lambda. If you're talking about caching Lambda responses, you
can again add custom code, which you would also have to do in a server
environment.
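
One trick worth knowing here (a sketch, with hypothetical names): module-level
state survives across invocations on a warm container, so a dict declared
outside the handler works as a best-effort response cache.

    # best-effort per-container cache: each concurrent container keeps its
    # own copy, and it is lost on a cold start
    _cache = {}

    def handler(event, context):
        key = event.get("query", "")
        if key not in _cache:
            # stand-in for the real, expensive work
            _cache[key] = {"statusCode": 200, "body": "computed for " + key}
        return _cache[key]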

>Or deploy something bigger than 250MB?

Yeah, you're SOL here. 250MB is huge for any non-GUI software and it would
take a long time to get set up on Lambda's containers. If you're in this spot,
I wholeheartedly recommend ditching Lambda for EC2. However, don't count
Lambda out entirely - you can still have it take over repetitive, simple tasks
so the server hosting your monster 250MB backend doesn't get overwhelmed!

>Or handle WebSockets (without having to read/write state in DDB on every
message)?

I'm sure when PHP introduced sessions there were a bunch of devs complaining
about having to maintain a MySQL table for session keys. Ultimately, if you're
making a web app, 95% of its routes are probably glorified Excel formulas, so
you'd need to pull and push state through a database anyways.

Where else do you store WebSocket state? In memory? What happens when you've
got millions of connections at once (think slither.io scale)? At the end of
the day you have to put it into something that can scale. I'm assuming if
you're using Lambda you care about scaling up - otherwise you could literally
run your backend on an IoT toaster with a MySQL database hosted on some IoT
coffee maker that had "admin" as its root password, and nobody would tell the
difference. Again, if this is you, Lambda wasn't meant for your use case. Go
buy a coffee maker.
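
For reference, the usual shape of the DDB approach (the table and function
names are hypothetical): API Gateway's WebSocket $connect/$disconnect routes
invoke handlers that record the connection id.

    import boto3

    dynamodb = boto3.client("dynamodb")
    TABLE = "websocket-connections"  # hypothetical table keyed on connectionId

    def connect_handler(event, context):
        # API Gateway WebSocket events carry the connection id here
        conn_id = event["requestContext"]["connectionId"]
        dynamodb.put_item(TableName=TABLE, Item={"connectionId": {"S": conn_id}})
        return {"statusCode": 200}

    def disconnect_handler(event, context):
        conn_id = event["requestContext"]["connectionId"]
        dynamodb.delete_item(TableName=TABLE, Key={"connectionId": {"S": conn_id}})
        return {"statusCode": 200}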

>Or buffer something on the filesystem?

I'm not really sure how to answer this one. There is obviously no permanent
file system on Lambda unless you count S3 (although I'm sure you know that).

I did some searching and found this:
[https://stackoverflow.com/a/31660175](https://stackoverflow.com/a/31660175)
If you're talking about uploading huge files, direct upload to S3 seems like
your best bet.

>Or run something for more than 15 mins?

Lambda wasn't meant for this. Set up a server and schedule a cron job. I'm
guessing if it takes >15 min it's probably some kind of backup, statistical
analysis, ML model training, database dump parsing, yadda yadda yadda... Lambda
is for handling events. Everything I mentioned seems like an internal business
operation the users have no part in, so that also seems like a good candidate
for just having one server instance floating around and throwing all your odd
long jobs onto it.

I haven't used Step Functions, but I vaguely recall hearing something about
being able to run long tasks with those? Not sure. I wouldn't bother, though;
I'd just head straight for a server. (The example Amazon gives is starting a
job to retrieve a specific item in a warehouse, sending an order to an inbox,
and waiting for a warehouse worker to mark the item as retrieved... I don't
know why Amazon chose that specific example, but it sounds like they have a
very specific target audience!)

>How do all those things not impact agility?

I'm of the opinion that tools do not impact agility; decisions do. If you
decide to use Lambda in a situation where a server would prevail, you're
wasting time on the wrong thing. If you decide to use a server in a situation
where Lambda would prevail, you're... not really doing anything wrong, I think.
It'll likely cost you more than a Lambda function, but those numbers only start
to matter once you actually have to care about scale.

Like I said, it's ultimately about what you're trying to do. Don't put a
square hole through a rectangular screw, or something like that.

~~~
DVassallo
Thanks for writing and explaining all of that! I’m sure one way or another
there’s a workaround for everything. But my point was exactly that. The fact
that you need workarounds hinders agility. I don’t disagree with the benefits.

BTW, about this “What happens when you got millions of connections at once
(think slither.io scale)? At the end of the day you have to put it into
something that can scale.”

API Gateway has a hard limit of 500 WebSocket connections per second. It can’t
be increased!
[https://docs.aws.amazon.com/apigateway/latest/developerguide...](https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html)
-- That’s about the capacity of 1 C5.4XL instance :)

The whole scalability argument for API Gateway and Lambda is highly overrated
IMO. There are all sorts of soft and hard limits, and you still have to monitor
your concurrency utilization and invocation frequency and manually request
limit increases when approaching them. Doesn’t sound much different from using
EC2.
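
As a sketch of the kind of babysitting I mean (ConcurrentExecutions is a real
account-level Lambda metric; the rest is illustrative), you end up polling
CloudWatch for concurrency utilization much like you’d watch CPU on EC2:

    import boto3
    from datetime import datetime, timedelta

    cw = boto3.client("cloudwatch")
    resp = cw.get_metric_statistics(
        Namespace="AWS/Lambda",
        MetricName="ConcurrentExecutions",
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Maximum"],
    )
    for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["Maximum"])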

~~~
timf
> API Gateway has a hard limit of 500 WebSocket connections per second

That particular limit is new connections per second, not total concurrent
connections. Starting from zero, you could have 1.8M connections after an hour
(500 × 3,600 = 1,800,000), per account, per region.

------
rjurney
I can’t really imagine going back to scripting servers remotely to boot up
without Docker. It sucked. And if you’ve got Docker you want something like
ECS, 'cause why manage your own servers? It’s a bit rough to learn, but man,
life sucked before PaaS. You had to build your own. Now I write a simple
install script for Docker on my machine, edit a menu, and as many servers as I
want run that app.

Otherwise I dig.

~~~
DVassallo
Author here. Docker is the only one in that list I’m on the fence on. However,
I feel that the EC2 AMI can be the equivalent of the container image, and
Docker would only be adding another layer and another OS to deal with. Sure,
the AMI is not as portable as a docker image, but all in all I prefer working
directly on the EC2 VM just for the sake of reducing layers.

~~~
sien
Very interesting point.

But what about maintenance?

Working in a place where we have hundreds of small apps, the issue of
maintaining umpteen servers is a great pain. Docker reduces the footprint of
what has to be maintained substantially.

~~~
greyskull
Curious, what does "maintenance" mean for you?

~~~
sien
Patching servers can be part of it. Also fixing minor issues when no major
development is being paid for.

~~~
greyskull
FWIW, there is tooling in place such that you can just recycle the hosts in
your fleet and they'll be brought up with the latest image, so patching becomes
just another scheduled and automated event.

I don't quite follow the latter point.

------
k__
_Anything whose main value proposition is about “ability to scale” will likely
trade off your “ability to be agile & survive”. That’s rarely a good trade
off._

Interesting that he sees micromanaging infra on AWS as more agile than using
managed services.

Also, serverless systems don't cost you money when you(r customers) don't use
them, so that's a huge plus on survival.

Finally, code you write is always a liability. If you have a good architecture
(hexagonal, etc.) you can just swap out one service that saved you time in the
past for another service that will save you time in the future.

~~~
freehunter
>serverless systems don't cost you money when you(r customers) don't use them

All the horror stories I see surrounding huge surprise Lambda bills always
bring me back to this point. If I have to pay $5/mo for a server I only use
for two hours once every few months, _that's_ something that should go on
Lambda. If it's something that I'm using constantly, all day every day, a
server will be cheaper.

If I only use my car once every few weeks, Uber makes a lot of sense. If I use
it every day back and forth to work and the grocery store, Uber's gonna be a
lot more expensive.

Lambda is for small tasks that don't execute often. And using it right can
save startups gobsmackingly large amounts of money.

~~~
k__
Yes, it all comes down to risk assessment.

Maybe using serverless technology is too hard for your corp because you don't
have the skills, so it could lead to problems in the future (surprise Lambda
bills).

But it could also be that your competitors get a huge advantage by investing
in the serverless paradigm and leave you behind in the future.

To me that Twitter thread sounded too much like a guy who has invested in some
tech over the last 11 years and is now trying to convince his potential
customers to use the tech he knows about.

He could be right, he could be wrong. I don't know. I started back-end
development with serverless, so I'm biased in the other direction, haha.

------
fovc
Here's the thread link more readably:
[https://tttthreads.com/thread/1154516910265884672.html](https://tttthreads.com/thread/1154516910265884672.html)

------
obulpathi
Or use Google Cloud, which has 90% good parts. Documentation can be a bit of a
pain, but the services themselves are rock solid. There aren't 3 or 4 queuing
services, just one. GKE rocks! Cloud Console is a breath of fresh air compared
to AWS. Cloud Shell makes it easy to bypass firewalls for logging into
instances, with no messing with public keys; it's all managed for you. Use
Firebase if you are looking specifically for web and mobile apps. Scaling to
millions of users or petabytes of data is no big deal, and you don't have to
rearchitect every time your customer base grows by 10x.

~~~
rjurney
The Google services are just way easier to use, but once you’ve got
expert-level proficiency at AWS it’s tough to let go.

~~~
evilmushroom
I have to use both. I find Google easier for simple things, but I find it
lacks some of the flexibility AWS has for less simple things.

Well, that and Google's managed k8s solution was down for multiple days a
while back when I was doing a comparison. Another reason I use EKS atm...
despite thinking GKE is a bit better.

~~~
obulpathi
I don't know when that happened. Give GKE a try, it's really amazing. Blows
EKS out of the water. As far as flexibility is concerned, once you learn how
to use Google services, you get the flexibility along with the simplicity. AWS
services are too complex, and even things like billing require a PhD in
finance to optimize for anything non-trivial.

------
cosmotic
Step one: don't use Twitter for this sort of thing

~~~
xtracto
Thank you! I thought I was the only one who hated the format of these things.
Why not write a proper article and link it on Twitter? It would be way better.

------
nijave
I tend to disagree with the EC2 sentiment. Any cost savings from using VMs over
a higher abstraction will likely be wiped out by the time wasted becoming a
sysadmin, unless you just like doing sysadmin stuff (patching, access
management, config).

~~~
ericcholis
This is a very solid point and something to consider as part of the entire
infrastructure planning stage. The opportunity cost could be huge, or net
zero, depending on your needs.

------
etaioinshrdlu
I find the characterization of RDS Aurora as unproven or unstable odd.

I thought it looked like one of the more mature-seeming choices on the market.

Any thoughts?

~~~
vasco
From the point of view of someone who's used Aurora Postgres since it came
out: though you are sometimes exposed to bugs, AWS support has always been
great, and we never faced anything super serious. It's been almost a year
since we had any problems, though. This is on a somewhat large Aurora instance
at around 5TB, so it's already representative for a lot of people.

~~~
takeda
Right now if you use Aurora PostgreSQL 9.6 you don't have an easy way to
migrate to 10, and 11 is not even available. They are supposedly working on a
solution, but won't disclose when it will be available.

------
greyskull
The CDK[0] will hopefully help with the CloudFormation story.

[0] [https://github.com/aws/aws-cdk](https://github.com/aws/aws-cdk)

------
whycombagator
And if you want something even easier (albeit with more fixed/upfront costs)
just use Heroku.

~~~
DVassallo
I agree. I have no direct experience with Heroku so I can’t comment on the
experience/restrictions, but if a PaaS works well for what you’re doing, I’d
go for it.

~~~
bubble_talk
I was also thinking the same, especially considering that you start with this
as your main reason:

"Anything whose main value proposition is about “ability to scale” will likely
trade off your “ability to be agile & survive”."

------
foxhop
I'm a core dev of stacker; it's worth taking a look at if you maintain
CloudFormation templates. It makes life a lot better.

At Remind (remind.com) we have nearly 600 separate CloudFormation stacks which
build and maintain our stage and prod environments. This would be insane
without stacker.

As for this tweet, I'm fairly confident he is talking about side projects or
early startups, and I agree with most of what he said.

Personally, I don't use AWS for side projects; I use offerings from Linode,
Digital Ocean, and Vultr. They are cheaper and scale up vertically with the
click of a button, which is really what you need for the first couple of
years, unless you hit the growth/scaling lottery, which isn't typically the
case.

For example, over the last two weekends I was able to use Digital Ocean Spaces
(alternative to AWS S3) to build my wife a secure digital downloads store.

All uploads and downloads use presigned POST and GET URLs created via Boto3! I
was surprised by how faithfully Digital Ocean implemented S3's API in their
Spaces offering.

[https://russell.ballestrini.net/pre-signed-get-and-post-for-digital-ocean-spaces/](https://russell.ballestrini.net/pre-signed-get-and-post-for-digital-ocean-spaces/)
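
For anyone curious, the gist (a sketch; the endpoint, bucket, and keys are
placeholders) is that Boto3 just needs the Spaces endpoint swapped in:

    import boto3

    # point boto3 at Spaces instead of S3 by overriding the endpoint
    s3 = boto3.client(
        "s3",
        endpoint_url="https://nyc3.digitaloceanspaces.com",
        aws_access_key_id="SPACES_KEY",
        aws_secret_access_key="SPACES_SECRET",
    )

    # presigned GET: a time-limited download link
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-store", "Key": "downloads/file.zip"},
        ExpiresIn=3600,
    )

    # presigned POST: lets the browser upload directly to the bucket
    post = s3.generate_presigned_post("my-store", "uploads/file.zip", ExpiresIn=3600)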

------
nijave
Lambda is absolutely essential for gluing together Amazon services, and many
Amazon articles recommend it (for instance, getting your ASG to drain ECS
instances before scaling down). It's also helpful for building
alerting/monitoring workflows, since CloudWatch/SNS is pretty simplistic on
its own.
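
That glue usually ends up looking something like this (a hypothetical sketch
of an SNS-triggered alerting function; the fields follow the
CloudWatch-alarm-to-SNS message shape):

    import json

    def handler(event, context):
        # SNS delivers its payload under Records[].Sns.Message
        for record in event["Records"]:
            message = json.loads(record["Sns"]["Message"])
            # stand-in for the real glue: page someone, file a ticket, hit an API
            print("alarm", message.get("AlarmName"), "is", message.get("NewStateValue"))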

I think the sentiment is more "don't build your entire app on lambda"

~~~
wongarsu
I think "don't have any customer facing Lamdas" is a good rule of thumb. It's
great for glue code, if-this-then-that code and cron jobs.

------
aequitas
> But Autoscaling is still useful. Think of it as a tool to help you spin up
> or replace instances according to a template. If you have a bad host, you
> can just terminate it and AS will replace it with an identical one
> (hopefully healthy) in a couple of minutes.

I really started to enjoy AWS when I got into the habit of deploying all EC2
instances using Autoscaling groups. Even single-node instances and
configurations that don't need any scaling: just everything. Autoscaling is
free, and it forces you into the immutable/disposable infrastructure paradigm,
which makes administration of the nodes so much easier. And for all the
stateful stuff there is RDS, S3, Dynamo, SQS, etc.
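
A minimal sketch of that habit (the group name, launch template, and subnets
are placeholders): min = max = desired = 1 gets you no scaling at all, just
automatic replacement of a bad host.

    import boto3

    autoscaling = boto3.client("autoscaling")
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="my-app-asg",
        MinSize=1,
        MaxSize=1,
        DesiredCapacity=1,
        LaunchTemplate={"LaunchTemplateName": "my-app-template", "Version": "$Latest"},
        VPCZoneIdentifier="subnet-aaaa,subnet-bbbb",  # comma-separated subnet ids
    )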

------
fnord77
> Forget that all these things exist: ... Containers, Kubernetes, Docker.

Not sure how this guy can give this advice.

Containers make rapid development so much easier. There's no distinction
between your dev environment and your prod environment except for a couple of
properties/env variables and some scaling factors.

Also "no" to cloudformation and "yes" to terraform.

And if I were to write a simple application, I'd probably try with lambda next
time around.

------
abiro
I don't think this is good advice in general. OP is obviously used to doing
things in a certain way, and good for him for working the way he is most
productive. But looking at things objectively, everything has a learning
curve, and once you get past that, Lambda is way more efficient even when
working alone, due to the agility it allows and the time saved on maintenance.

------
mgamache
It all depends on what your goal is. Are you looking for large scale-out? Or
are you iterating fast and don't want to mess with managing databases, CDNs,
KV stores, etc.? Note: a version of premature optimization is premature
scaling, so beware of offerings that require you to use a certain technology
to enable AWS scale-out; it can slow you down.

------
cwyers
Why not just use Linode or Digital Ocean at this point?

------
philwelch
I wouldn’t recommend using single EC2 instances in production ever, but the
rest seems solid.

------
otabdeveloper2
> 1/25

Is there a more user-hostile webapp than Twitter? I think not.

------
diminoten
Sort of related to this, but can someone explain to me why Zappa[1] isn't a
bigger deal? It _completely_ delivers, as far as I can tell, on the promise of
deploying a WSGI app into a Lambda in a way that requires minimal
configuration.

And yet, the project seems to have stagnated somewhat. Maybe it just did
everything it set out to do? I'm not sure, but when people say "forget about
Lambda because you'll have to roll your own shit" I wonder if they've heard of
Zappa or not.

[1] [https://github.com/Miserlou/Zappa](https://github.com/Miserlou/Zappa)
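
For anyone who hasn't seen it, the pitch really is that small (a sketch; the
bucket and region are placeholders): an ordinary WSGI app plus a few lines of
config.

    # app.py -- an ordinary Flask/WSGI app; Zappa wraps it unchanged
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        return "hello from lambda"

    # zappa_settings.json (created by `zappa init`) is roughly:
    # {"dev": {"app_function": "app.app",
    #          "aws_region": "us-east-1",
    #          "s3_bucket": "my-zappa-deploys"}}
    # then `zappa deploy dev` / `zappa update dev`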

~~~
The_Amp_Walrus
I've tried to use Zappa for a non-trivial personal project to run a Django app
and I found:

- Super nice, low config API

- Easy to deploy

- Great out-of-the-box support for async tasks (no Celery! woo!)

but:

- It was a nightmare getting certain Python libraries (eg. Pillow, psycopg) to
work in the AWS Lambda environment

- It really sucked having to deploy to AWS in order to debug issues which
cannot be reproduced locally (eg. the library issues above)

- It seems hard to get away from using AWS tools to observe your code in prod
(eg. CloudWatch for logs)

- I still needed a database to maintain state, DynamoDB didn't work with
Django's ORM and was surprisingly expensive, and if I'm going to shell out
$10/mo for a Postgres RDS instance then I may as well run the whole thing on
EC2 anyway

I think Zappa is a really nice tool for some niche use cases, and I'd
definitely turn to it if I needed to stand up some small, stateless serverless
service, but I would hate to support and debug it as a web app in prod.

project: [https://memories.ninja](https://memories.ninja)

zappa version: [https://github.com/MattSegal/family-photos/tree/lambda-hosting](https://github.com/MattSegal/family-photos/tree/lambda-hosting)

ec2 version: [https://github.com/MattSegal/family-photos](https://github.com/MattSegal/family-photos)

------
foobar_
Docker is bullshit. AWS images are alright IME. Spinning up multiple large
instances will give you more bang for the buck than dealing with tiny Docker
containers.

