
Amazon Elastic Container Service now supports Amazon EFS file systems - ifcologne
https://aws.amazon.com/blogs/aws/amazon-ecs-supports-efs/
======
txcwpalpha
I see a lot of notes about EFS's performance in the comments. I figured it's
at least worth noting, for anyone considering using ECS with EFS, that just
last week EFS had the read operations limit on its General Purpose tier
increased by 400%.

That probably won't solve all EFS performance issues, but it's a pretty big
boost and a nice announcement to come alongside ECS support.

https://aws.amazon.com/about-aws/whats-new/2020/04/amazon-elastic-file-system-announces-increase-in-read-operations-for-general-purpose-file-systems/

------
finaliteration
This is great news!

Yes, these containers are supposed to be stateless, but I was tasked with
converting an app at my previous job over to using ECS on Fargate and we hit
so many issues because of the limits on storage per container instance. We
ended up having to tweak the heck out of nginx caching configurations and
other processes that would generate any "on disk" files to get around the
issues. Having EFS available would have made solving some of those problems so
much easier.

I've also been wanting to use ECS on Fargate for running scheduled tasks with
large files (50GB+), but it wasn't really possible given the previous 4GB
limit on storage.

~~~
dekhn
Containers shouldn't necessarily be stateless; most existing code doesn't know
how to, or want to, talk to services via RPC interfaces. In some sense, a
mounted remote filesystem is just a standard API the OS provides for accessing
state in a convenient way that happens to be high performance, indexed, etc.

~~~
lsaferite
Yeah, I think they are conflating stateless and ephemeral in this case.

~~~
finaliteration
> conflating stateless and ephemeral in this case

You're totally right, I was mixing up stateless and ephemeral. My mistake and
thanks for pointing it out!

------
jboggan
Oh man, awesome. We had a rather janky workload where ECS would spin up an EC2
instance that would then mount an EFS volume and then write a file over to S3.
This is going to make that so much easier and cleaner.

If you're wondering why you'd ever have to do something like that, the answer
is SAP.

------
koolba
This is going to make a _lot_ of container workloads that were possible, but
inconvenient to set up, suddenly trivial to deploy. Very nice!

------
mark242
This is the single biggest blocker to running something like Postfix in ECS.
This is a huge, huge win.

~~~
sciurus
I think postfix would perform horrifically on EFS, which has abysmal latency
and is terrible for workloads with lots of random I/O.

~~~
geertj
(One of the product managers on the Amazon EFS team here.) We have many
customers that use EFS for a wide variety of use cases, including hosting
Postfix. As with all applications, performance needs are relative. Use EFS if
your application requires consistent low single-digit-ms latencies, a shared
POSIX file system, and a pay-as-you-go elastic usage model. As with all AWS
services, EFS is continually launching greater performance capabilities,
including higher IOPS, greater throughput, and lower latencies, to meet the
needs of our customers. As an example, on 4/1, EFS launched a 400% improvement
in read IOPS for its General Purpose performance mode, from 7,000 to 35,000.
Given the type of file system operations that Postfix performs, it should
benefit nicely from this improvement.

------
zapita
How's the performance on EFS? Has anyone used it in production that is willing
to share their experience?

We evaluated it for a relatively simple use case, and the performance seemed
abysmal, so we didn't select it. I'm hoping that we made a mistake in our
evaluation protocol, which would give me an excuse to give it another try.

~~~
codeduck
It's terrible. Very slow when we tried to use it. There are ways to work
around this, and ways to tune the performance, but honestly it was not worth
it for our use case and instead we found a way to make EBS work.

EFS is a great way to get a lot of iowait on your cpu graphs. Would not
recommend it for anything that had to be fast.

~~~
mwcampbell
> we found a way to make EBS work.

Can you say more about what you did with EBS? It seems like it would be
necessary to make some compromises in availability and disaster recovery
because any given EBS volume is restricted to the availability zone where it
was created.

~~~
codeduck
We were hosting third-party software in an EKS cluster and needed a way to
share state between components of this system. We tried EFS initially, but it
actually killed the EKS cluster with iowait under load. We found a way to
divert most of the system's requirements to local emptyDir volumes, leaving
only infrequently accessed media files on EFS.
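For anyone unfamiliar with that workaround, an emptyDir volume is node-local
scratch space that is deleted along with the pod. A minimal sketch of what
such a pod spec might look like (the names and image here are made up, not
from the actual system described above):

```yaml
# Hypothetical pod spec: fast node-local scratch via emptyDir.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-scratch
spec:
  containers:
    - name: app
      image: nginx:latest
      volumeMounts:
        - name: scratch
          mountPath: /var/cache/app   # heavy random I/O lands here, not on EFS
  volumes:
    - name: scratch
      emptyDir: {}   # lives on the node; removed when the pod goes away
```

The tradeoff is exactly the one described: you get local-disk latency, but the
data is ephemeral and not shared across pods.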

------
geerlingguy
Technically ECS supported this before, but you had to configure everything
manually (or with your own automation). Having it native is a lot nicer, and
brings provisioning of NFS-style volumes up to par with the current Kubernetes
experience.
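For anyone curious what the native wiring looks like, here's a rough sketch of
the task-definition structure, which now accepts an `efsVolumeConfiguration`
alongside the usual bind-mount volumes. The family name, image, and file
system ID are hypothetical, and the actual boto3 call is left commented out
since it needs real AWS credentials and resources:

```python
# Sketch: an ECS task definition with an EFS-backed volume.
# fs-12345678 is a placeholder EFS file system ID.

volumes = [
    {
        "name": "shared-storage",
        "efsVolumeConfiguration": {
            "fileSystemId": "fs-12345678",  # hypothetical
            "rootDirectory": "/",
        },
    }
]

container_definitions = [
    {
        "name": "app",
        "image": "nginx:latest",
        "mountPoints": [
            # The container sees the EFS file system at /mnt/efs.
            {"sourceVolume": "shared-storage", "containerPath": "/mnt/efs"}
        ],
    }
]

# With credentials configured, registration would look roughly like:
# import boto3
# ecs = boto3.client("ecs")
# ecs.register_task_definition(
#     family="efs-demo",
#     requiresCompatibilities=["FARGATE"],
#     networkMode="awsvpc",
#     cpu="256",
#     memory="512",
#     containerDefinitions=container_definitions,
#     volumes=volumes,
# )

print(volumes[0]["efsVolumeConfiguration"]["fileSystemId"])
```

Before this launch you had to mount the file system yourself (e.g. an NFS
mount in user data, or a sidecar); now the wiring lives in the task definition.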

------
WatchDog
This is going to make running TeamCity or Jenkins from Fargate much simpler.

------
rkwasny
EFS performance is just horrible. Running containers on it is asking for
problems.

My advice: stick to EC2 + EBS; it works.

------
djstein
FINALLY!!! edit: thanks a lot ECS team

------
nnx
Hope this is added to Lambda soon. EFS scalability would shine with Lambda.

