
Building blocks of Amazon ECS - ifcologne
https://aws.amazon.com/blogs/compute/building-blocks-of-amazon-ecs/
======
ajsharp
"Let's begin with an analogy"

[4 paragraphs of fantasy explorer game references]

All I've learned is my brain hurts.

~~~
kinkrtyavimoodh
Yeah seriously what the hell.

And it's not as if the game references are universal.

All this obfuscation for an obtuse and needless analogy.

~~~
linkmotif
This is like the whole GraphQL Star Wars schtick. I really don’t enjoy Star
Wars, so reading their documentation was a total drag.

~~~
RexM
There are dozens of us!

~~~
linkmotif
So good to know. <3

------
dionian
Even though I skipped the whole spaceship analogy section (I did), all in all I'm
really glad to see more easily accessible explanations of AWS technology than
the main documentation. I love how approachable this is, and the diagrams
help.

------
nikkwong
What this post doesn't explain well is the benefit of using ECS rather than
just uploading a docker image to an EC2 instance and running it.

~~~
sudhirj
Even the smallest EC2 instances can run multiple containers if your containers
are small enough, and either way it makes sense to add a level of abstraction
over your pool of EC2 instances - i.e. treat them as a pool as opposed to a set
of discrete blocks. ECS lets you specify exactly what resources each container
needs, and then lets you shrink or grow your pool as necessary. The pool can
also be a heterogeneous mixture of instance types, even a combination with on
premise (?) ones. It could also be spot instances, which are way cheaper - or
whichever spot instance types happen to be cheapest at the moment.

So yeah, being able to pool your EC2 instances is useful.
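
As a rough sketch (the family name and image URI here are made up), the
per-container resource requirements live in a task definition:

```json
{
  "family": "web-api",
  "containerDefinitions": [
    {
      "name": "web-api",
      "image": "example.dkr.ecr.us-east-1.amazonaws.com/web-api:latest",
      "cpu": 256,
      "memoryReservation": 512,
      "essential": true,
      "portMappings": [{ "containerPort": 8080 }]
    }
  ]
}
```

You register it with `aws ecs register-task-definition --cli-input-json
file://web-api.json`, and the scheduler then bin-packs tasks onto whatever
instances in the pool have the CPU and memory to spare.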

~~~
matwood
> And could also be spot instances, which are way cheaper.

This is what we are currently using ECS for. We keep a set of on-demand
instances to handle a base load and scale out on spots. Any compute that is not
time dependent is on spots 100%.

We have been using AWS for a long time, and wrote/still have a system to scale
out instances with AMIs. That system is heavyweight, though, because multiple
AMIs cannot share an instance.

Kubernetes is our next step.

------
turdnagel
I've had a lot of good experiences with ECS so far, except for their scheduled
tasks system. With cron, you check /var/mail. When a scheduled task doesn't
run on ECS... you're SOL.

~~~
dguo
I ran into this problem a few weeks ago and couldn't believe how poor the
error handling is. I eventually found the FailedInvocations graph in
CloudWatch, but I didn't see any way to actually find out what went wrong.
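
For reference, that metric can also be pulled from the CLI; the rule name below
is a placeholder for whatever your scheduled task's CloudWatch Events rule is
called:

```shell
# FailedInvocations is reported per CloudWatch Events rule.
# It only tells you *that* invocations failed, not why.
aws cloudwatch get-metric-statistics \
  --namespace AWS/Events \
  --metric-name FailedInvocations \
  --dimensions Name=RuleName,Value=my-scheduled-task \
  --start-time 2018-01-01T00:00:00Z \
  --end-time 2018-01-02T00:00:00Z \
  --period 3600 \
  --statistics Sum
```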

~~~
turdnagel
How did you solve it?

~~~
dguo
I haven't yet. It wasn't a priority, so I stopped looking into it. I did find
this forum thread though:
[https://forums.aws.amazon.com/thread.jspa?threadID=269884](https://forums.aws.amazon.com/thread.jspa?threadID=269884)

I tried tweeting @AWSSupport, but they just referred me to the scheduled tasks
docs that I had already read.

I suspect the issue is that I set up my tasks to run in Fargate mode, which I
didn't realize at the time was brand new. Maybe it's not compatible with
scheduled tasks yet.

------
drewjaja
For those looking for a more in-depth explanation of Amazon ECS, I highly
recommend the videos here: [https://awsdevops.io/p/hitchhikers-video-guide-aws-docker](https://awsdevops.io/p/hitchhikers-video-guide-aws-docker)

------
hardwaresofton
Why even use ECS when you can just start an Elastic Beanstalk docker-based
cluster, and get superior, more focused web UI (at least the UI for EB is
superior at present) along with much easier configuration?

If I sound bitter, it's because I am -- I recently spent about 2 days straight
trying to reliably build an ECS cluster with CloudFormation, and while I must
admit I was newer than normal to CloudFormation templates, the number of
errors I ran into (and the incredibly slow provision/creation/update times),
along with the broken edge cases (try and use an existing EC2 bucket with
CloudFormation) was infuriating. Don't read any further if you don't want to
read a rant.

While ECS is great in theory (or if you set it up by clicking tons of buttons
in AWS's UI), it's significantly harder to deploy automatically than Beanstalk
is, from where I'm standing. All I have to do is get a container with the eb
command line tool installed, do a bunch of configuration finagling (which
honestly is ridiculous as well -- just let me give you a file with all my
configuration in it, or read it from ENV, and stop trying to provide
interactive CLIs for everything, for fuck's sake), and I can run `eb deploy` to
push a zip file with nothing but a docker container (multi-container setups
are also allowed) specified up to the cloud. Later I'm going to look into
using CloudFormation to set up EB, but I know even that will be simpler,
because I won't have to try to make any ECS::Service objects or
ECS::TaskDefinitions.

Trying to use ECS made me so glad that Kubernetes exists. Unfortunately I
can't use it at work currently, because that would mean the rest of the team
stopping to learn it, but CloudFormation + ECS is a super shitty version of
setting up a Kubernetes cluster and using a Kubernetes resource config file. I
think the best part about Kubernetes is that if the community keeps going the
way it's going, cloud providers will finally be reduced to the status of
hardware vendors -- all I'll need from EC2 is an (overpriced) machine, network
access, and an IP, not to try to keep up with its sometimes asinine
contraptions and processes, or be locked in to their ecosystem.

~~~
boogiewoogie
My understanding is ECS can run multiple tasks and services, while an Elastic
Beanstalk environment just runs one.

~~~
hardwaresofton
Yeah, maybe it just wasn't for me -- with RDS as the database and ElastiCache
for Redis, all we needed was to run the API server (one container; we don't
even have any queue/job workers or anything). When I started looking into
how to run containers on AWS, ECS seemed like the best fit.

------
ravenstine
ECS is pretty nice. I wish they did what Rancher did by allowing a
docker-compose.yml plus an extra YML file for Rancher-specific setup.

~~~
bdcravens
ecs-cli lets you do that: you specify a compose file that's mostly
compatible with docker-compose.yml, and an additional yml file for ECS-
specific parameters:

[https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cmd-ecs-cli-compose.html](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cmd-ecs-cli-compose.html)
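
As a minimal sketch, an ecs-params.yml sits next to your docker-compose.yml
and carries the ECS-only settings (the values here are illustrative, not
recommendations):

```yaml
version: 1
task_definition:
  ecs_network_mode: awsvpc
  task_size:
    cpu_limit: 256
    mem_limit: 0.5GB
```

Then something like `ecs-cli compose --file docker-compose.yml --ecs-params
ecs-params.yml up` deploys the compose services as an ECS task.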

~~~
mcheshier
Except it doesn't work with anything complicated, like volumes, in my
experience.

~~~
dsmithatx
We use one docker-compose.yml for local dev and another for deploying to ECS.
Locally the docker-compose.yml configures volumes; with ECS we do it in the
task definition. For shared volumes to gather logs we are using EFS, which
seems to work and mount in the same way as NFS.
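
Roughly, the task-definition side looks like this (the volume name, host path,
and container details are hypothetical; the host path is wherever the instance
mounts EFS):

```json
{
  "volumes": [
    { "name": "shared-logs", "host": { "sourcePath": "/mnt/efs/logs" } }
  ],
  "containerDefinitions": [
    {
      "name": "web",
      "image": "web:latest",
      "memoryReservation": 256,
      "mountPoints": [
        { "sourceVolume": "shared-logs", "containerPath": "/var/log/app" }
      ]
    }
  ]
}
```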

------
fapjacks
So many things are janky in ECS that it ends up costing too much time and
energy to get anything done. I've posted at length before about all the
specific problems I hit while building on ECS, but lately I've just been
telling people that it's like that janky toolbox in the funny old "PHP is a
fractal of bad design" blog post [0]. Of all the AWS services I've ever used,
I find ECS _by far_ the most terrible -- even worse than Amazon's
documentation. So this really stunted and awkward spaceship analogy is pretty
hilarious, given that it's about ECS.

[0] [https://eev.ee/blog/2012/04/09/php-a-fractal-of-bad-design/](https://eev.ee/blog/2012/04/09/php-a-fractal-of-bad-design/)

~~~
sudhirj
That's probably why EKS is well under way. ECS was built at a time when there
was no clear winner in the orchestration game; now there is. AWS has pretty
much admitted that they will throw themselves behind hosted Kubernetes going
forward.

~~~
NathanKP
AWS employee on the container services team here.

We are definitely building out a hosted Kubernetes solution with EKS, but it's
important to realize that Kubernetes is just one part of the container
services ecosystem at AWS, living alongside ECS.

Just like we have multiple database solutions (DynamoDB, MySQL/Postgres in
RDS, Aurora, Redshift, etc) we also have multiple orchestrators (EKS managed
Kubernetes and ECS).

ECS is like DynamoDB: it is the AWS native solution to the problem. Built to
be incredibly scalable, and built for AWS patterns and best practices that we
know will scale for our customers.

EKS is like RDS: it's the AWS-managed hosting for the open source software,
designed to solve some of your scaling and configuration problems. It gives
you more flexibility, but that flexibility also gives you the power to do
things that may not scale as well.

In ECS you may not be able to do everything you can do with Kubernetes (just
like DynamoDB limits what you can do compared to the capabilities of SQL), but
just like DynamoDB can scale far past SQL because of its limits, ECS is more
scalable than Kubernetes because of its limits.

You can run a much larger cluster in ECS than you can in Kubernetes. We would
actually still recommend ECS over EKS to our largest customers if they have
any significant scale. The thing to realize is that ECS is a multi-tenant
control plane: it already keeps track of every single container on every host
in every cluster from every ECS customer per region, and that scale is
tremendous, far larger than anything Kubernetes can do. For reference, at
re:Invent we shared that we have millions of EC2 instances under ECS
management each month, organized into more than 100,000 customer clusters, and
we launch hundreds of millions of containers every week.

With ECS we can add a new customer with >2k instances into the multi-tenant
control plane with ease. On the other hand, configuring and managing
Kubernetes to handle this is a fun challenge.
([https://blog.openai.com/scaling-kubernetes-to-2500-nodes/](https://blog.openai.com/scaling-kubernetes-to-2500-nodes/))
A lot of orgs on AWS would rather have the boring, simple power of ECS.

Anyway I hope this provides some perspective on it. We are excited about
Kubernetes and excited to offer EKS, but ECS is still our recommendation for
the largest customers.

~~~
fapjacks
But with the inefficiencies and limitations of ECS versus something like Kube
or Swarm or whatever, you're scaling your costs way up, too. Also,
preemptively: I realize I came off pretty hostile in my parent comment, and I
want to apologize and say it's not meant personally against anybody working on
ECS. Having interacted with ECS engineers from quite early on in the lifetime
of ECS, I could tell that you folks were under a lot of pressure to get
something out the door as quickly as possible.

~~~
NathanKP
No problem! I'd love to hear more from you about which aspects of ECS you find
most inefficient or limiting, though, because obviously one of our goals is
for ECS to reduce your costs -- that's why it is a free service, after all.

Every bit of feedback we get from the community helps us make the service
better. Please email me at peckn@amazon.com (and anyone else who reads this
and wants to chat feel free to as well).

------
dkobran
This makes me want to die

------
cagenut
I happen to be working on this and since there's a thread why not ask...

Has anyone gotten "multi-tenant" (default-restricted) overlay network(s)
working on ECS in a way that they like?

