
How Far Out Is AWS Fargate? - kiyanwang
https://read.iopipe.com/how-far-out-is-aws-fargate-a2409d2f9bc7
======
KaiserPro
The problem with Fargate, like all AWS "compound tools" meant to be an answer
to competitor X, is that it's painful to use if you don't have an infra team.

Take Elastic Beanstalk: it's meant to be a competitor to Heroku (that's how it
was pitched to us in $big company), but my word it misses the mark.

Lambda and API Gateway are a massive faff to set up manually, but Serverless and
Zappa make it really, really simple to do. That and its cheapness is why it's
caught on: it's fast (to iterate), simple enough, and super cheap.

Fargate is kinda aimed at people who have outgrown Lambda, but then you will
be evaluating the whole hosting ecosystem. But if you are evaluating hosted
K8s, I don't know why you wouldn't just plump for GKE.

Mind you, avoiding K8s and sticking with Lambda + ECS for long-running stuff is
far simpler to understand, even if updating it with CF is a massive ball ache.

~~~
tootie
I'm pretty sure Beanstalk predates Heroku. I'm curious how you think it misses
the mark though. I've found it to be really easy to operate although that's
mostly low-traffic systems.

~~~
sokoloff
Heroku was 2007. Beanstalk was 2011.

Salesforce purchased Heroku in 2010 for $212MM in cash before Beanstalk was
even announced.

------
londons_explore
The best way to describe a service, rather than 1000 words of buzzword-heavy
text, is a service diagram, a hello world example of the config, and a more
complex example showing off the best/main features.

I don't want to hear that 'it's a bit like kubernetes but not quite'. I want
to try it inside 30 seconds and see for myself.

~~~
brento
> I want to try it inside 30 seconds and see for myself.

So let me get this straight, you want to try something as complicated as
Kubernetes, inside of 30 seconds?

I'm thinking you might have some unrealistic expectations.

~~~
joseph
I think it is a realistic expectation. Think of EC2 in 2006. Getting started
was almost immediate if you already had an Amazon account. You downloaded a
CLI tool and your credentials, and started a VM. Prior to this, it might have
seemed unrealistic to provide such a streamlined experience.

I would love to see the same done for Kubernetes. What I want is a kubeconfig
file that links me to a paid account somewhere, and whenever I run `kubectl
apply -f foo.yml`, I pay by the millisecond for whatever resources get
created. Zero ops for me the customer, and all the complexity will be on the
side of the company offering this service.
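
To make that concrete, the `foo.yml` in that sketch could be any ordinary
manifest — for example, a minimal Deployment (names and image here are
placeholders, not anything from an actual service):

```yaml
# Hypothetical foo.yml: the ordinary manifest a pay-per-use hosted
# Kubernetes would bill you for. All names/images are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.17
```

The point being: nothing cluster-specific appears in it, so the provider could
own all the ops complexity behind `kubectl apply -f foo.yml`.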

~~~
yebyen
I think Okteto is an example of what you're describing. (This is for K8s, I
just learned about this service yesterday and it does what you're describing,
kind of.)

okteto.com

I'm not sure this is relevant, but it's a great example of a service that has
a tutorial that covers multiple use cases, takes about 2 minutes, and leaves
you with a pretty clear understanding of what else you're meant to do.

This is how all elevator demos should be.

~~~
joseph
Thanks for the link, I will check it out.

~~~
yebyen
See also, k3sup

Since it was already easy to get a cloud machine on the Internet!

------
holografix
Disclaimer: I work for Salesforce, Heroku’s parent company.

When Fargate was released I was very curious about it as it seemed like AWS
was moving up the ladder towards PaaS and I wanted to know how it compares to
Elastic Beanstalk and also Heroku.

I started going through this AWS written tutorial:

https://aws.amazon.com/getting-started/projects/build-modern-app-fargate-lambda-dynamodb-python/module-two/

The app they use as an example is of course meant to demonstrate how a
plethora of AWS services can work together.

I still couldn’t help but be surprised at the sheer number of different things
I had to learn and configure, sometimes involving editing yaml.

For long time AWS users: is that tutorial representative of how you really
build apps in AWS?

~~~
013a
AWS couldn't build a Heroku-like PaaS if their entire business depended on it.
They're organizationally incapable of something so simple, beautiful, and
productive.

I legitimately believe there's a checklist at the bottom of every new AWS
service launch, and among their internal requirements includes a line like "Is
this product so amazingly incomprehensible that customers will take weeks to
integrate with it and be forced to upgrade to a technical support plan?"

The closest any major cloud player has gotten is App Engine. And it's pretty
close; it has issues, and some inherent complexity, but generally I recommend
any new startups look to either App Engine or Heroku for hosting. Avoid AWS
like the plague.

~~~
auslander
> Avoid AWS like the plague

Autoscaling: Heroku - 2017, AWS - 2009.

------
codeulike
Watching this from a distance, I vaguely understood Docker. Then everyone was
talking Kubernetes, and I vaguely grasped it was some kind of meta solution
for something or other. And evidently really complicated but really buzzwordy.
And then came Serverless which seems to really mean 'sort of stateless' and
reminds me of MTS, an early Microsoft effort that let you run COM objects as a
service on an NT server, but only if they were stateless. And now this thing,
which I can't begin to fathom. In short, I am lost, but I'm beginning to
wonder if all the cloud complexity might one day wrap back around to something
very simple again.

~~~
goatinaboat
_wonder if all the cloud complexity might one day wrap back around to
something very simple again_

Yes, all these solutions are gradually converging back to CGI.

~~~
twic
CGI? CICS!

------
mullingitover
> "Compare that with AWS EKS pricing where the Kubernetes cluster alone is
> going to cost you $144 a month. A cluster isn’t terribly useful without
> something running on it, but just wanted to highlight that Fargate doesn’t
> come with a cost overhead."

That $144 a month is a deal considering it's just the price of a few instances
that you'd normally provision anyway if you were rolling your own k8s cluster,
and it manages the control plane administration for you. The moment your hand-
rolled k8s cluster goes the way of so many horror stories [1], that savings
evaporates into hours of expensive engineering labor.

[1] [https://k8s.af](https://k8s.af)

~~~
symfrog
EKS has its own share of problems, and in several cases they can't be
mitigated due to the limitations that EKS imposes. For example:

- Service cluster IP range is not definable, making it difficult to integrate
with an existing network topology [1]

- Limited choice of CNIs, e.g. Calico can not be used

- No accessible etcd snapshots for recovery

I tend to avoid EKS and prefer to create most clusters by using kubeadm as the
foundation.

[1] https://github.com/aws/containers-roadmap/issues/216

~~~
markbaikal
Calico can be used on EKS!?
[https://docs.aws.amazon.com/eks/latest/userguide/calico.html](https://docs.aws.amazon.com/eks/latest/userguide/calico.html)

~~~
micah_chatt
EKS Engineer here.

Calico policy can be used with the AWS VPC CNI, but you can remove the default
CNI and install Calico or any other CNI plugin you’d like.

~~~
symfrog
In theory, you could replace the CNI on worker nodes, but is that something
that is practically useful (when it can't be done on master nodes in EKS) and
supported? How would the kube-apiserver, for example, communicate to the
metrics-server if it is not connected to the Calico network?

~~~
micah_chatt
You are correct that the API server is only aware of the VPC network, and not
any overlays. One solution to the metrics-server or other webhooks is to use
host-networking mode so the API server can have connectivity.
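
For anyone hitting this: the workaround described amounts to a patch on the
metrics-server deployment roughly like the following (a sketch only — the
image tag is an assumption, and field placement follows the standard pod spec):

```yaml
# Sketch: run metrics-server on the host (VPC) network so the EKS
# API server can reach it even when pods live on a Calico overlay.
spec:
  template:
    spec:
      hostNetwork: true
      containers:
        - name: metrics-server
          image: k8s.gcr.io/metrics-server-amd64:v0.3.6  # tag is illustrative
```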

------
lukasLansky
As much as I like the described move from "functions" to "containers",
solutions like Knative / Cloud Run are still inherently more hackable than
solutions like Fargate.

It's good to have a properly defined, stable, open interface for compute
workloads (containers as defined by OCI), so we don't have to lock ourselves
into whatever shape of runtime environment our current cloud provides for
their flavor of FaaS, and so we don't have to learn cloud-provider-specific
tooling or get certifications for handling various tasks at AWS, becoming the
2010s variant of the Cisco-certified network engineers of the previous age.

But it's even better to have a properly defined, stable, open interface for
orchestration too. To be able to run stuff locally. To be able to extend
things. To ease the lock-in cloud providers currently have. To be able to
actually understand what happens under the hood. And last but not least: to
enable rise of open source solutions for higher-level abstractions, like
KubeDB.

~~~
k__
Funny that both sides have their gripes with Fargate.

For a serverless proponent like me it's still too much config.

For a no lock-in proponent like you it's not enough config.

~~~
StavrosK
The way to satisfy both camps is sane defaults.

~~~
shabda
“A good compromise is when both parties are dissatisfied”

~~~
jjoonathan
A bad compromise leaves both parties dissatisfied, too.

------
chadash
Fargate is a very useful service for anyone hopping on the serverless/lambda
train.

Here's a simple use case: say you have a simple banking website with a
frontend and an API and whatever. People log in and check their balances, etc.
Typically, this requires requests that take on the order of milliseconds, so
lambda works for this [0]. However, every month, you want to go in and
calculate what you owe in interest for everyone's account. Say that you have
100,000 users and it takes 0.01 seconds to do each calculation, so this job
will take 1,000 seconds, or 16 minutes to run. Lambda functions are
automatically killed after 15 seconds, so that won't work.

That's where Fargate comes in. You dockerize your environment and then run a
command in Fargate. Now, it runs for 16 minutes for the task itself + 1 or 2
minutes to wind up and then it automatically kills itself when it's done. You
pay for the 16 minutes of run time and call it a day.
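
The arithmetic in that example, sketched out with the illustrative numbers
above (note Lambda's actual execution cap is 15 minutes, per the correction
in the replies):

```python
# Back-of-envelope check for the monthly interest batch job described above.
ACCOUNTS = 100_000
SECONDS_PER_ACCOUNT = 0.01
LAMBDA_LIMIT_SECONDS = 15 * 60  # Lambda's maximum execution time

total_seconds = ACCOUNTS * SECONDS_PER_ACCOUNT
print(f"job takes {total_seconds / 60:.1f} min")  # ~16.7 min
print(total_seconds <= LAMBDA_LIMIT_SECONDS)      # False: too long for one Lambda
```

Which is exactly the gap a one-shot Fargate task fills: same container, no
execution-time ceiling, pay only for the minutes it runs.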

[0] One issue with lambda/serverless is dealing with cold starts when your
application hasn't been used in a while, or you need more concurrent instances
running, so it takes some time to "warm up". This is a good reason _not_ to
use serverless frameworks in many cases, but I won't get into that.

~~~
alpha_squared
Mild correction, but AWS lambdas are limited to 15 minutes of execution time
as of October 2018 [0]. Prior to that, they were limited to 5 minutes.

> One issue with lambda/serverless is... when you need more concurrent
> instances running...

Lambda can have 1,000-3,000 concurrent burst invocations for most of the
popular regions and 500 for others[1]. Are you saying that's not enough?

[0] https://aws.amazon.com/about-aws/whats-new/2018/10/aws-lambda-supports-functions-that-can-run-up-to-15-minutes/

[1] https://docs.aws.amazon.com/lambda/latest/dg/scaling.html

~~~
chadash
> Mild correction, but AWS lambdas are limited to 15 minutes of execution

Whoops that's what I meant

> Lambda can have 1,000-3,000 concurrent burst invocations for most of the
> popular regions and 500 for others[1]. Are you saying that's not enough?

No, that's plenty for most use cases. I'm referring to "cold starts". When you
haven't used a lambda function in a while, it takes a few seconds for it to
get going. But when you hit it a second time, it responds more quickly.

------
mharroun
I really don't get the hate or disinterest toward ECS or Fargate.

It took me all of 4 days to come up with a CI/CD pipeline/production
environment using ECS and Fargate for the startup I was the CTO for.

We are now looking at Kubernetes or reserved instances to save some money as
we have scaled angel -> seed -> series A. Though tbh it's looking like the
extra cost is worth it given the out-of-the-box 1-click scaling, blue/green
deployments, and "container as a server" level monitoring and logging.

~~~
bastardoperator
What took you 4 days takes seconds using tools like Jenkins X and Kubernetes.
Just sayin...

~~~
freehunter
If you don't know Jenkins or Kubernetes it definitely will not take you
seconds to learn them, spin them up, configure them, and put them into
production.

------
mosselman
Fargate is pretty useful and it works well... when it works. One of the most
annoying things is that you can't (as far as I know and I hope someone will
correct me and show me the light) see the errors from the docker engine. This
means that if there is some issue while spinning up your docker instances that
you'd normally see in the docker engine's log, you won't know what is going on
when using fargate. This makes debugging these issues a nightmare.

Again, I hope I am just stupid and that there is some obvious way with which I
CAN see the logs and someone will point out to me how.

~~~
mr-karan
If your container fails to even run, then no logs are generated. Logs are only
generated after your container spawns. So if you get errors in your
`ENTRYPOINT` or `CMD` commands, the only way is to override these commands
with something like [1] and debug why it's failing.

[1]: `command: ["/bin/sh", "-c", "sleep 36000"]`
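
In an ECS task definition, that override is the `command` field of the
container definition — something like the fragment below (the name and image
are placeholders):

```json
{
  "containerDefinitions": [
    {
      "name": "app",
      "image": "my-app:latest",
      "command": ["/bin/sh", "-c", "sleep 36000"]
    }
  ]
}
```

With the container kept alive like this, you can exec in (or redeploy with
extra logging) to see why the real entrypoint was failing.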

~~~
zomglings
"tail -f /dev/null"

~~~
mosselman
That is a nice trick.

------
joemccall86
One thing I'm keeping my eye on is the fargate virtual kubelet [1]. It seems
to offer a way to use the familiar kubernetes tooling with a managed
"clusterless" offering like fargate.

[1] https://github.com/virtual-kubelet/aws-fargate

~~~
kylek
Why deal with the complexity of one when you can have both!

(/s...)

------
lvh
We use Fargate a lot for internal tooling. It's pretty great! Being able to
schedule containers instead of Lambdas is nice, and generally not having the
restrictions of a Lambda environment is nice.

We have, no joke, Lambdas that kick off Fargate tasks to do "actual work". Why
do we have Lambdas? Because that's the integration point AWS has everywhere.
"Run code when X happens" on AWS means a Lambda, so that's what we use.
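
A minimal sketch of that pattern, assuming boto3 in the real Lambda and
placeholder cluster/task/subnet names — the handler here just builds the
`run_task` parameters; the actual ECS call is shown in a comment:

```python
# Sketch: a Lambda that kicks off a Fargate task to do the "actual work".
# Cluster name, task definition, and subnet IDs below are placeholders.

def fargate_run_task_params(cluster, task_definition, subnets):
    """Build the keyword arguments for ECS run_task with the Fargate launch type."""
    return {
        "cluster": cluster,
        "launchType": "FARGATE",
        "taskDefinition": task_definition,
        "networkConfiguration": {
            "awsvpcConfiguration": {
                "subnets": subnets,
                "assignPublicIp": "ENABLED",
            }
        },
    }

def handler(event, context):
    params = fargate_run_task_params("tools", "worker:1", ["subnet-0abc"])
    # In the real Lambda you'd then do:
    #   import boto3
    #   boto3.client("ecs").run_task(**params)
    return params
```

The Lambda stays tiny (it's just the integration point AWS gives you), and all
the heavy lifting lives in the container the Fargate task runs.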

~~~
poxrud
I think that this should be the only use case for lambdas. They should respond
to events and act as the glue between aws services. If you're doing anything
that requires a running time of over 200ms you should be using a proper app
server, probably inside a docker container.

------
joshstrange
Small note if OP is the author or the author is in the thread: there are two
typos of the word "Firecracker" spelled as "Firecrracker". It's spelled
correctly in between and right after the second misspelling.

> Under the hood, they share the same virtualization technology called
> Firecrracker. Firecracker is a KVM-based virtualization layer that creates
> and manages minimalistic container-like virtual machines called “microVMs”.
> Firecrracker is still relatively new and was designed from the ground up to
> address issues identified with the previous generation of AWS Lambda.

~~~
adjohn
Fixed, thanks for the heads up!

------
scarface74
My experience with Fargate:

I had a Node/Express microservice running in lambda using the lambda proxy
integration to send and receive files from/to S3.

I knew in advance that the 6MB request limit for lambda was going to hit us,
but it was good enough for what we needed right then.

When it came time, I knew I was going to have to convert to Fargate but I
didn’t know anything about Docker or Fargate.

It took less than a day for me to get the API up and running in Fargate
following this tutorial.

https://medium.com/@ariklevliber/aws-fargate-from-start-to-finish-for-a-nodejs-app-9a0e5fbf6361

And automating it with Cloud Formation

https://github.com/1Strategy/fargate-cloudformation-example

To be fair, I did have experience with the other fiddly bits of AWS and
CloudFormation so I knew how to modify the template to work in our own
environment.

------
jknoepfler
I stopped reading when the author claimed Fargate gave Kubernetes a run for
its money, which is blatantly false. They don't even occupy the same niche.
Also, no, a shitty Lambda for containers does not give a hyperscaling
container / virtual networking / etc. orchestrator designed to manage millions
of persistent containers a "run for its money".

------
blowski
What's the Azure equivalent of Fargate? I have to migrate a bunch of small
containerised PHP apps from AWS ECS into Azure, but while ECS has been easy,
I'm sceptical of our ability to work with Kubernetes. I've looked at App
Service, but I don't know anyone who's had experience with it.

If anyone has any feedback or advice I'd happily hear it.

~~~
tripue
There's also a service called Azure Container Instances (ACI), which is pretty
much the same thing as Fargate.

------
JakaJancar
Approachability-wise, wouldn't an even better approach be to drop ECS Fargate
(in terms of API) and simply add to EC2 the ability to execute Docker images,
instead of AMIs?

Feels like if AWS started with dedicated boxes, they'd now be calling EC2 the
"Elastic VM Service Fargate".

~~~
jon-wood
Running Docker images directly on EC2 instances is also supported by ECS; in
fact, the first iteration of ECS required you to run a cluster of EC2
instances. The API to do so is identical to the one used for Fargate.

~~~
digianarchist
Right but with ECS on EC2 you are still responsible for maintaining the
underlying AMI. Security patches etc.

This was the major reason we wanted to move to Fargate in our org.

------
yebyen
> I’ll also introduce you to a CLI tool that is Fargate’s equivalent to
> Kubernetes’ kubectl that can take you from a Dockerfile to a running web
> service in just two commands.

Is anyone aware of this tool? Is it something I can check out now, or do I
have to wait for the next installment? This sounds compelling, and even if I
don't want to use Fargate, it will help me to relate with people that have
used Fargate, and give us more common ground so we can talk on equal footing.

(Subscribe me to your newsletter!)

~~~
kolanos
Author here, the CLI tool is:
[https://github.com/jpignata/fargate](https://github.com/jpignata/fargate)

Part 2 is still planned and is now being fast tracked since there's interest
in Fargate.

~~~
yebyen
Thanks, this is awesome!

I am not a fan of Fargate, based mostly on personal bias towards K8s and the
sour taste I got when Fargate and EKS were first announced together (with
Fargate "available now" and EKS coming soon, it felt very desperate, but
that's neither here nor there...).

Was very impressed by this screencast and will definitely try Fargate now
thanks to your work here.

When I got to the point in the screencast where you build the image locally, I
was on the edge of my seat hoping it would include some CodeBuild setup
automation too; maybe you have already planned that, (or maybe that would ruin
its simplicity.) That was a great demo, even without this part.

------
guidedlight
AWS Fargate seems very similar to what Hyper.sh used to offer (before they
shut down).

~~~
sejtnjir
are you implying Fargate will suffer the same fate?

------
hendry
Iterations are still crazy slow compared to AWS Lambda. :(

https://s.natalian.org/2019-08-15/cdk-containers.mp4

------
ewindisch
AWS Serverless Community Hero and CTO/Co-founder of IOpipe here.

AMA!

------
auslander
You can run pure ECS without using Fargate; all it takes is writing a bit of
extra CloudFormation code. Worked for me, with autoscaling.
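
A rough shape of that template (a sketch only — resource names, image, and
sizes are placeholders, and a real stack also needs the EC2 container
instances, networking, and IAM roles):

```yaml
# Sketch: ECS on the EC2 launch type in CloudFormation (no Fargate).
Resources:
  Cluster:
    Type: AWS::ECS::Cluster
  TaskDef:
    Type: AWS::ECS::TaskDefinition
    Properties:
      ContainerDefinitions:
        - Name: app
          Image: my-app:latest  # placeholder image
          Memory: 512
  Service:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref Cluster
      TaskDefinition: !Ref TaskDef
      LaunchType: EC2
      DesiredCount: 2
```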

