
The Fargate Illusion - ingve
http://leebriggs.co.uk/blog/2019/04/13/the-fargate-illusion.html
======
scarface74
The author is purposefully making things more difficult.

- using Terraform instead of CloudFormation

- not using the default "create a VPC" wizard, like anyone new would.

- he said if he wanted to use lambda functions he would still have to set up
networking. It depends on what he was doing. If he used a regular database,
yes, he would have had to set up a subnet for his RDS cluster, but if he used
DynamoDB, he wouldn't need to do that. But even then, the default VPC wizard
that most people would have used would have been good enough. Either way,
that has nothing to do with K8s vs Fargate.

Note that even when you run a lambda "inside your VPC", you don't get more
security. The lambda is never running inside your VPC. It is running inside an
AWS VPC and connecting to your VPC via a network interface (ENI).

- If he were to use lambda, he would not have had to worry half as much. He
wouldn't have needed a load balancer. Also, he was using a third-party SSL
certificate when he could have had a free one that was automatically managed
by AWS with AWS Certificate Manager. Using API Gateway with a lambda proxy
would have done the same thing. No, there is no "lock-in" from using lambda
and a lambda-proxy interface. You write your same C#/WebAPI, Node/Express,
Python/Django code just like you always would and add the proxy on top of it.
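
To illustrate the lambda-proxy point: API Gateway hands your function the raw
HTTP request as an event and expects a status/headers/body dict back, which is
the contract that adapters like serverless-http wrap around an existing web
app. A minimal sketch (the handler and field choices here are illustrative,
not from the article):

```python
import json

def handler(event, context):
    """Minimal API Gateway Lambda proxy handler (hypothetical example).

    API Gateway delivers the HTTP request details in `event` (query
    strings, headers, body) and expects a dict with statusCode/headers/
    body back. Proxy adapters do this translation around an existing
    Express/Django/WebAPI app, which is why there is little lock-in.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Since the handler is just a thin shim, swapping API Gateway out later means
replacing only this translation layer, not the application code underneath.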

~~~
mirceal
this is spot on. drop terraform and go with cloudformation. initially set it
up through the console to learn what needs to be done, and after that write
the template, or even better, find one someone has already written.

pro level startup: set it up manually and use CloudFormer.

+1 on the comments regarding wiring and lambda in non-VPC vs VPC contexts

also, if you want K8s, there is this thing called EKS.

~~~
KaiserPro
> even better find one someone has already written.

Alas, that never really works with multi-account setups. It's sometimes useful
for seeing which magic line one is missing, but the level of incorrect
boilerplate out there on GitHub is staggering.

~~~
scarface74
I have multi-account CodePipeline/CloudFormation setups for deploying
lambdas and related resources. Once I got the hang of it (with a lot of help
from AWS support, admittedly), it's not that bad.

I probably could have figured it out, but why bother when we are paying for
the business support plan?

~~~
jjeaff
Funny, I was just talking to an experimental quantum physicist today, and one
of the other people there asked him, "Doesn't the complexity just make your
head spin?" He said, "No, once you get the hang of it and you are dealing with
it every day, it's not that bad."

~~~
scarface74
Oh, I completely agree. I felt completely incompetent when it came to AWS 18
months ago. But now, with a lot of studying for 5 certifications (as part of
an organized, company-paid study plan), a lot of late nights beating my head
on my desk, some green-field initiatives, and abuse of our business support
plan with AWS's excellent live chat support, I'm pretty comfortable with most
of the non-obscure parts of AWS outside of the Docker and Big Data/ML parts.
I'll be working on those over the next year.

~~~
mirceal
I personally "grew up" while AWS was born and grew, and have been slowly
exposed to most of the services as they were released / enhanced.

I agree that it can be confusing as f for a newcomer. This is definitely an
opportunity for training courses/labs/MOOCs to figure out how to make the
learning curve less steep.

------
haxterstockman
Nice breakdown from a personal perspective, but I see the author struggling
against full cloud-native. Terraform will never be good for leading-edge AWS
technologies. If you need something more advanced than Cloudformation to
define your infrastructure as code, try Troposphere or CDK. I think we're past
the point where Terraform can exhibit the "cloud arbitrage" value that was
previously sold to us.

Working through permissions in AWS does suck, and the transparency problem in
why a container fails to start in Fargate is especially frustrating.

Serverless, and devops in general, isn't about putting ops folks out of a job.
It's about leveling the playing field on both sides of the coin, so that
someone who focuses on application development and someone who focuses on
networking and infrastructure can collaborate more easily, with more solid
contracts and interfaces.

So long as the more adversarial perspective persists, we'll see a lot more
digging in of heels around Kubernetes. As usual, I implore my friends in ops
not to rely for job security on the bus factor inherent in mastering
Kubernetes.

~~~
robrtsql
> If you need something more advanced than Cloudformation to define your
> infrastructure as code, try Troposphere or CDK

The author has already hit a use case that is not currently supported by
CloudFormation (creating a SecureString SSM parameter)--that's something that
Troposphere or the CDK won't help you with (unless you use them to create the
resource via the API rather than CloudFormation).

~~~
scarface74
He was willing to use external Terraform modules, so why not a CF custom
resource? A quick Google search found this:
A quick Google search found this:

https://svdgraaf.nl/2018/04/13/CloudFormation-ssm-secure-string-support-boto3-custom-resource.html

~~~
Varriount
Pre-built custom resources (like the one linked) are not all that common. The
fact that one exists for this particular situation I would ascribe more to
luck than to a generous amount of public material.

~~~
scarface74
Not being able to create a secure string parameter was the first problem I ran
into with CloudFormation. That’s how I happened to know that a custom resource
for it existed.

But creating a custom resource is relatively easy. I've had to create a few
for things that are really "custom" to our environment.

I used these as templates:

https://github.com/stelligent/cloudformation-custom-resources

One issue that really seemed like an oversight is that you can't add an event
subscription to an existing S3 bucket. I had to write a custom resource to do
it.
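
For context, a custom resource is just a Lambda: CloudFormation invokes it
with a RequestType (Create/Update/Delete) and a pre-signed ResponseURL, the
function does the work, then PUTs a status document back. A sketch of that
contract (the actual S3-notification work is elided; the response field names
follow the documented custom resource response format):

```python
import json
import urllib.request

def build_response(event, status, data=None, physical_id=None):
    """Build the JSON body a custom resource must PUT back to
    event["ResponseURL"] so the stack operation can proceed."""
    return {
        "Status": status,  # "SUCCESS" or "FAILED"
        "Reason": "See CloudWatch logs for details",
        "PhysicalResourceId": physical_id or event["LogicalResourceId"],
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": data or {},  # values retrievable via Fn::GetAtt
    }

def handler(event, context):
    # Do the real work here based on event["RequestType"], e.g. add a
    # notification configuration to an existing S3 bucket via boto3.
    body = json.dumps(build_response(event, "SUCCESS")).encode()
    req = urllib.request.Request(
        event["ResponseURL"], data=body, method="PUT")
    urllib.request.urlopen(req)  # tell CloudFormation we are done
```

If the PUT never happens, the stack hangs until it times out, which is the
classic failure mode when writing these by hand.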

~~~
tie_
> Not being able to create a secure string parameter was the first problem I
> ran into with CloudFormation.

How about 'NoEcho' type of CFN parameters?

~~~
scarface74
What do you mean?

~~~
tie_
CloudFormation supports the "NoEcho" option specifically to allow password-
type parameters, which are not inspectable. How is that not a secure string
parameter?

~~~
scarface74
I realize this response is late.

The term “parameters” is unfortunately overloaded.

CloudFormation parameters are used within CF. We were referring to parameters
in Parameter Store.

https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html

But then, how do you get the secret value from CF to Parameter Store? If you
put the value of the parameter in your template, then it is stored unencrypted
in a template that is probably in source control.

For that, I use a combination of NoEcho in CF and use that user-entered value
as a !Ref when creating the Parameter Store parameter. Run the template
manually one time, and then you can have it default to the existing value.

But you need a custom resource to create a SecureString type.
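
A rough sketch of the NoEcho-plus-!Ref pattern described above (resource and
parameter names are hypothetical; note that AWS::SSM::Parameter here can only
create a String type, which is why the SecureString case still needs a custom
resource):

```yaml
Parameters:
  DbPassword:
    Type: String
    NoEcho: true            # masked in console and describe-stacks output

Resources:
  DbPasswordParam:
    Type: AWS::SSM::Parameter
    Properties:
      Name: /myapp/db-password   # hypothetical name
      Type: String               # SecureString needs a custom resource
      Value: !Ref DbPassword
```

The secret is typed in at stack-create time rather than committed to the
template, which is the whole point of the pattern.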

------
jacobkg
The author is on point with regards to the overall complexity of the setup.

I wanted to add that we switched from ECS without Fargate (managing our own
EC2 instances) to ECS with Fargate, and it has been a pure improvement. It's
simpler and easier to manage, with fewer moving parts. I would recommend that
anyone currently on ECS give Fargate a try.

~~~
l33tman
But can you specify spot instance pricing for Fargate containers? I don't
think you could, last I checked. Spot instances are often around 0.25x the
cost of on-demand EC2 instances, so ECS with Spot launch configs is super
cheap.

~~~
village-idiot
Fargate offers no variable pricing to my knowledge. No spot instances, no
reserved. You pay per CPU and memory unit per hour, that’s it.

------
robrtsql
ECS/Fargate and Kubernetes are different, for sure, but it seems like the
author is conflating the one-time setup costs of Fargate with the continual
maintenance costs of running Kubernetes on VMs which you maintain yourself,
which doesn't seem like a fair comparison to me.

~~~
jkaplowitz
He acknowledges your point in the article but says that most people don't have
to administer their own cluster VMs - GKE makes it easiest but he also
mentions a smooth experience with DigitalOcean's managed Kubernetes, for
example.

~~~
maldeh
Fargate's purpose was never to magically make container orchestration go away,
though - it's just a managed compute layer you plug into via the other
orchestration frameworks (ECS or EKS). Really, the apples-to-apples comparison
would be DigitalOcean's managed Kubernetes against EKS (and there are a lot of
valid points to raise there, to be sure).

The real value-add of Fargate is pay-for-what-you-use: when your task queue is
idle, your bill drops to zero with no extra orchestration (something that the
article seemingly completely missed).

~~~
YawningAngel
The level of orchestration required to achieve this on GKE is pretty minimal.

------
kolanos
There's a nice CLI tool that makes Fargate feel like a PaaS [0]. It configures
the ALB, ECR, and your Fargate task definitions and services for you. All you
need to deploy a Fargate service is a Dockerfile and some code. Hoping this
project gets more love.

[0]: https://github.com/jpignata/fargate

------
reilly3000
This mirrors my experience. Infrastructure as code is hard. I’ve tried to rely
on VSCode extensions for some help getting configuration values, but at the
time the best that was available was some snippets. I want the editing
experience to have less memorization and alt-tabbing to docs, and more
expression and validation.

Fargate isn't less config than a Helm chart, but you're on your own, in that
there isn't much of a public config ecosystem that I'm aware of. I also wrote
it off due to 2.3-3x instance pricing vs EC2.

One of the biggest factors that intimidates developers in my experience is
needing to determine resources. It would seem that such a task could be done
with automatic testing, but I haven’t found or developed a technique to do so.
In theory I would have my script test different RAM/CPU values against a load
tester, have live cost data and compute an optimum. I suppose one can come
from the opposite angle and determine the program’s needs based on data
structures and libraries. That is beyond my skills.
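
The brute-force version of that idea is simple enough to sketch: load-test
each candidate sizing, discard the ones that miss the throughput target, and
take the cheapest survivor. The per-hour rates below are Fargate's published
us-east-1 prices at the time; the throughput numbers are invented inputs you
would get from a load tester:

```python
CPU_PER_HOUR = 0.04048   # $ per vCPU-hour (published Fargate rate)
MEM_PER_HOUR = 0.004445  # $ per GB-hour

def hourly_cost(vcpu, mem_gb):
    """Fargate hourly cost for a given task sizing."""
    return vcpu * CPU_PER_HOUR + mem_gb * MEM_PER_HOUR

def cheapest_config(measurements, target_rps):
    """measurements: list of (vcpu, mem_gb, measured_rps) tuples from
    load tests. Returns (cost, vcpu, mem_gb) for the cheapest sizing
    that sustains target_rps, or None if no candidate qualifies."""
    viable = [(hourly_cost(c, m), c, m)
              for c, m, rps in measurements if rps >= target_rps]
    return min(viable, default=None)

# Invented load-test results for three Fargate sizings.
configs = [(0.25, 0.5, 40), (0.5, 1, 95), (1, 2, 210)]
print(cheapest_config(configs, 90))
```

This ignores tail latency, warm-up effects, and per-request memory growth, so
it is only a first cut; but it turns the "what size do I pick" question into a
measurable loop rather than guesswork.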

I suppose network interface throughput would be another factor, but
unfortunately with Fargate and Lambda that is largely opaque ( and I believe
linked to CPU allocation ).

This just goes to say that in addition to managing the complexity of
configuration code, the underlying abstractions are harder to grok than a VM
or even Kubernetes nodes and resource allocations.

------
hnzix
Trying to migrate my $work from hosting on manually configured onprem physical
servers, I evaluated OpenShift, K8s, Docker Swarm and AWS.

I had drunk the Docker kool-aid pretty hard, and spinning up individual nodes
was easy, until I started projecting forward what the final architecture would
look like - patching, failover, backup, disaster recovery, etc.

Eventually I convinced my boss to pay for Heroku and went home early.

------
etaioinshrdlu
So I migrated everything to Fargate.

Then I started questioning: is this any better than 1 container per EC2
instance, running Docker myself?

Assume that 1 Fargate task definition == 1 EC2 instance.

I can't really think of any way that Fargate would win on cost.

The management overhead for EC2 seems to be limited to starting the Docker
daemon and launching your containers.

Should I migrate off of Fargate onto EC2?

~~~
sitharus
If you're running the tasks 100% of the time and they take enough resources to
require one EC2 instance worth of resources then running ECS targeting EC2 is
more cost effective. In general most of AWS's "serverless" solutions are only
cost effective if you can switch them off at some point.

Where Fargate shines is situations that require rapid scaling or where the
infrastructure can be more on-demand. The startup time for Fargate is lower
than EC2 in my experience.

I would advocate for continuing to use ECS instead of self-managed docker,
just use reserved EC2 instances.

~~~
etaioinshrdlu
Interesting. Most of my containers clock in at around 4 GB total, stored in
ECR.

Average task start time is about 3 minutes on Fargate, and there is no
built-in caching at all.

I wish it were more like 5 seconds... :(

Really I wish I could run my app on Lambda but my app is too large at the
moment, and has a lot of large/streaming/otherwise unusual HTTP requests that
may be a bear to integrate with lambda...

------
leowoo91
Just a quick note the article didn't mention: the smallest Fargate instance
you can get is 512 MB, around $15 per month (+ VPC costs). But if you have a
compose file, you can fit multiple containers in a single instance.

~~~
benmanns
Not sure if that is based on their earlier pricing (I know there was a price
reduction), but I'm calculating ~$9 per month at 0.25 vCPU + 0.5 GB of
memory.

(0.25×$0.04048+0.5×$0.004445)×24×30=$8.89

https://aws.amazon.com/fargate/pricing/#Pricing_Details
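
The figure checks out if you redo the arithmetic over a 30-day month:

```python
# Redoing the calculation above: 0.25 vCPU + 0.5 GB at the quoted
# per-hour rates, over 24 hours x 30 days.
vcpu_rate = 0.04048    # $ per vCPU-hour
mem_rate = 0.004445    # $ per GB-hour
monthly = (0.25 * vcpu_rate + 0.5 * mem_rate) * 24 * 30
print(round(monthly, 2))  # -> 8.89
```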

~~~
leowoo91
It might be, I relied on a google search. Thanks for the update.

------
mark242
I can't help but think that using the Now framework would have made the author
way, way happier. Point Now at your Dockerfile, and bam, everything is running
in a minute.

