Absolutely worth the effort to learn how to build a Docker image and put together bash scripts to push the image and update an ECS service. We have a base project which includes nginx and uwsgi set up to simply serve the Django app and static files on port 80.
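For anyone curious, the push/update scripts really don't need to be fancy. A rough sketch of ours (the account ID, region, repo, cluster, and service names here are all placeholders, and `DRY_RUN` defaults to on so nothing is pushed by accident):

```shell
#!/usr/bin/env bash
# Sketch of a build/push/deploy helper for an ECS service.
# All names below are placeholders -- substitute your own.
set -euo pipefail

AWS_ACCOUNT_ID="${AWS_ACCOUNT_ID:-123456789012}"   # placeholder account
AWS_REGION="${AWS_REGION:-eu-west-1}"
REPO="myapp"                                       # hypothetical ECR repo
TAG="${1:-latest}"
REGISTRY="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com"
IMAGE="${REGISTRY}/${REPO}:${TAG}"

echo "Deploying ${IMAGE}"

if [ "${DRY_RUN:-1}" = "0" ]; then
  # Log in to ECR, then build and push the image
  aws ecr get-login-password --region "$AWS_REGION" \
    | docker login --username AWS --password-stdin "$REGISTRY"
  docker build -t "$IMAGE" .
  docker push "$IMAGE"

  # Tell the service to roll its tasks onto the new image
  aws ecs update-service --cluster myapp-cluster --service myapp-web \
    --force-new-deployment --region "$AWS_REGION"
fi
```

The `--force-new-deployment` approach works when the task definition points at a mutable tag like `latest`; if you tag per release you'd register a new task definition revision instead.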
Having an ECS Docker cluster with spare capacity makes getting new projects up and running much quicker, especially if we can run the test site under an existing domain managed in AWS. Then we can just add new rules to an existing ELB and piggyback on its SSL certificate.
With this workflow we spend the time getting a good image running locally (with local environment vars/secrets), and we know the deployment side is taken care of and can be scaled if and when needed. This also has the benefit of forcing dependencies and environments to be fully documented from the start (in the Dockerfile).
The next step, if the project gains traction, is to move the Docker build/push/service update into CI.
We've largely moved away from Ansible now. There's certainly still a use case for it in more complex setups, but I'm interested in what Kubernetes can do to help there.
Agree with your tips. I'm personally hoping EKS becomes good enough that we can just skip the ECS step, but honestly translating basic ECS to Compose to Kubernetes (or any variation on that) is all easy, so any of them works great to start.
May I ask how you’re managing secrets with this setup? For example, the secret key or database URL.
Our current setup deploys an env file which is sourced before starting the process. I'm less sure how this translates to a production Docker deployment.
Docker shouldn't affect your secrets management, because secrets shouldn't be baked into your image anyway. I tend to expose them as environment vars, and currently use Kubernetes to expose a keyvault URL/password so the app can automatically grab all its secrets.
We manage them with env vars now. But we use salt (encrypted) to deploy a secrets file onto the server which is sourced before the app is started. With something like ECS I didn’t know if you could feed it an env file, and if you could, how you could make it available.
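For what it's worth, the "source an env file" pattern can carry over into a container entrypoint more or less unchanged. A rough sketch (the file path and values here are made up):

```shell
# Hypothetical secrets file, as a tool like salt might deploy it
# (values are fake placeholders)
cat > /tmp/secrets.env <<'EOF'
SECRET_KEY=not-a-real-key
DATABASE_URL=postgres://app:pass@db:5432/app
EOF

# Entrypoint wrapper: source the file before starting the app
set -a                 # auto-export everything the sourced file sets
. /tmp/secrets.env
set +a
# exec uwsgi --ini uwsgi.ini   # the app now sees SECRET_KEY etc. in its env
```

On the ECS side, I believe the task definition can also inject environment variables directly, and more recent task definitions can pull values from SSM Parameter Store / Secrets Manager via a `secrets` field, which avoids shipping the file at all.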
I've only been using Docker/ECS on side projects, but I generate the production build Docker image with the production .env included, push it to a private repo on Amazon ECR, then pull it in to ECS from there.
My understanding is the best practice for a Docker/ECS production deployment is to create Docker images containing the full app build, rather than managing the app deployment separately from image deployment. As opposed to development images, which rely on docker-compose to mount the host filesystem into the container, the production Dockerfile uses COPY instructions so the production build files are baked into the image itself.
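Something like this, as a minimal sketch (base image, paths, and the uwsgi config name are illustrative, not prescriptive):

```dockerfile
# Production image sketch: app code is COPY'd in, not volume-mounted.
FROM python:3-slim

WORKDIR /app

# Install dependencies first so this layer caches across code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Bake the full application build into the image
COPY . .

RUN python manage.py collectstatic --noinput

EXPOSE 80
CMD ["uwsgi", "--ini", "uwsgi.ini"]
```

The same Dockerfile can still back a dev setup; docker-compose just overlays a bind mount on top of the COPY'd code.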