Having used ECS quite a bit, I do not recommend that anyone build a new stack on it. Kubernetes solves everything ECS solves, but usually better and without several of the issues mentioned here. Last time I checked, AWS was still lagging behind Azure and GCP on Kubernetes, but I have a strong feeling they're prioritizing improving EKS over ECS.

If you're already invested in ECS it's a different story, of course.




AWS employee here. I don't think there is anything fundamental that makes Kubernetes avoid autoscaling issues. Just like with ECS, if you don't set up the right horizontal pod autoscaler settings in Kubernetes, you can easily end up with under- or over-scheduling of your pods. Ultimately, no matter whether you use ECS or EKS, you will need to do some fine-tuning and testing to make sure your autoscaling setup matches your real-world traffic patterns.
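
For illustration, the ECS-side tuning looks roughly like this boto3 sketch using Application Auto Scaling target tracking (the cluster, service name, and thresholds below are made-up assumptions, not recommendations):

    import boto3

    autoscaling = boto3.client("application-autoscaling")

    # Register the service's DesiredCount as a scalable target
    # (cluster/service names are hypothetical).
    autoscaling.register_scalable_target(
        ServiceNamespace="ecs",
        ResourceId="service/my-cluster/my-service",
        ScalableDimension="ecs:service:DesiredCount",
        MinCapacity=2,
        MaxCapacity=20,
    )

    # Target tracking: aim for ~60% average CPU, with cooldowns to damp
    # flapping; these values still need testing against real traffic.
    autoscaling.put_scaling_policy(
        PolicyName="cpu-target-tracking",
        ServiceNamespace="ecs",
        ResourceId="service/my-cluster/my-service",
        ScalableDimension="ecs:service:DesiredCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 60.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
            },
            "ScaleOutCooldown": 60,
            "ScaleInCooldown": 120,
        },
    )

The analogous exercise on Kubernetes is tuning the HorizontalPodAutoscaler's metrics and thresholds; neither platform picks sensible values for you.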

AWS is committed to improving both ECS and EKS. You can see our public roadmap with many in-progress improvements for both container orchestration platforms here: https://github.com/aws/containers-roadmap/projects/1

Feel free to add your own additional suggestions, or vote on existing items to help us prioritize what we should work on faster!


We use a ton of ECS: batch processing for our data pipeline, 4ish small internal webapps/microservices, and our Jenkins testing compute.

Some of the problems we're seeing: task placement and waiting is too hard (we had to write our own jittered waiter to avoid overloading the ECS API endpoints when asking if our tasks are ready to place). Scaling the underlying EC2 instances is slow. The task definition => family => container definition hierarchy is not great. Log discovery is a bitch.
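
A jittered waiter along those lines can be sketched in a few lines of boto3; this is an illustration with made-up parameters, not the actual code mentioned above:

    import random
    import time

    import boto3

    ecs = boto3.client("ecs")

    def wait_for_running(cluster, task_arns, timeout=600):
        """Poll describe_tasks with full-jitter exponential backoff
        instead of hammering the ECS API in a tight loop."""
        delay = 1.0
        deadline = time.time() + timeout
        while time.time() < deadline:
            resp = ecs.describe_tasks(cluster=cluster, tasks=task_arns)
            statuses = [t["lastStatus"] for t in resp["tasks"]]
            if statuses and all(s == "RUNNING" for s in statuses):
                return
            if any(s == "STOPPED" for s in statuses):
                raise RuntimeError("a task stopped before reaching RUNNING")
            time.sleep(random.uniform(0, delay))   # full jitter
            delay = min(delay * 2, 30.0)           # capped backoff
        raise TimeoutError("tasks did not reach RUNNING before the timeout")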

Are these all solved under K8S? I've no experience with kubernetes, but if so, might need to rethink where we run our containers. ECS was just so easy, and then so hard.


> Scaling the underlining EC2 instances is slow.

Do you mind if I ask how long this takes for you right now, and how long it would need to take for you to consider it fast?


Disagree. We've been running on ECS for years and it's a very economical and reliable way to run containers on AWS. The service itself is free, the agent has become very reliable over time, and the integrations with AWS services like ELB and Cloud Map are seamless.
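
Those integrations are declared once on the service and ECS keeps them in sync as tasks cycle; roughly, with boto3 (the names and ARNs here are placeholders):

    import boto3

    ecs = boto3.client("ecs")

    # Attach an ALB target group and a Cloud Map registry at service
    # creation; ECS registers/deregisters tasks in both automatically.
    ecs.create_service(
        cluster="my-cluster",
        serviceName="web",
        taskDefinition="web:1",
        desiredCount=3,
        loadBalancers=[{
            "targetGroupArn": "arn:aws:elasticloadbalancing:<region>:<account>:targetgroup/web/placeholder",
            "containerName": "web",
            "containerPort": 8080,
        }],
        serviceRegistries=[{
            "registryArn": "arn:aws:servicediscovery:<region>:<account>:service/placeholder",
        }],
    )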

It's significantly more complex to host Kubernetes infrastructure yourself, and EKS is significantly more expensive.

ECS would be a first choice, with EKS a second choice if my needs dictated it (perhaps a hybrid or multicloud scenario).


Kubernetes is way more complicated if you just need to run one or two services using Docker, and Fargate is brand new, so it has a lot of things to prove...


I think that Fargate is the “improvement” for ECS. I never understood the appeal of ECS in the first place, seemed (and still does) really half baked.


AWS employee here. Sorry to hear that you feel ECS is half baked. Feel free to reach out directly using the details in my profile info if you have any feedback you'd like me to pass on to the team.

To clear up the confusion on the relationship between Fargate and ECS, think of Fargate as the hosting layer: it runs your container for you on demand and bills you per second for the CPU and memory your container reserved. ECS, on the other hand, is the management layer. It provides the API that you use to orchestrate launching X containers, spreading them across availability zones, and hooking them up to other resources automatically (like load balancers, service discovery, etc.).

Currently you can use ECS without Fargate, by providing your own pool of EC2 instances to host the containers. However, you cannot use Fargate without ECS, as the hosting layer doesn't know how to run your full application stack without being instructed to by the ECS management layer.
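
To make that concrete, the same ECS API call can target either hosting layer; a minimal sketch (the cluster name, task definition, and network IDs are made up):

    import boto3

    ecs = boto3.client("ecs")

    # Identical orchestration API; only the launch type changes.
    # Fargate requires awsvpc networking, hence the networkConfiguration.
    ecs.run_task(
        cluster="my-cluster",
        taskDefinition="my-task:3",
        launchType="FARGATE",   # or "EC2" to place onto your own instance pool
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-placeholder"],
                "securityGroups": ["sg-placeholder"],
                "assignPublicIp": "ENABLED",
            }
        },
    )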


From my perspective Fargate offers the functionality I would have expected from ECS in the first place. What ECS provides OOB requires too much janitoring and ultimately isn't terribly different in effort compared to running your own k8s or mesos infra on EC2 instances you provisioned yourself. You still basically needed an orchestration layer over ECS.

Which is why, I assume, Fargate is now listed as an integral feature of ECS on the product page.


Yeah, to be clear, the Fargate style of container hosting was always the vision for ECS, from the very first internal proposal to build this system. But it's necessary to first build something that keeps track of container state at scale, and that is ECS. We built ECS so that it can keep track of container state both in Fargate and in containers running on your own self-managed EC2 hosts. This gives you the most flexibility if you have really specific needs for your container hosts that Fargate can't cover for you.


Fargate is almost what the marketing team said ECS was going to be.


Agreed. ECS has several limitations that you don't really discover until you are fully into the weeds. Don't use it if you are just starting with AWS unless your use case is a straightforward website stack. Do not use it for complex microservice architectures.



