Writing Dockerfiles for Node.js Web Apps (hasura.io)
82 points by praveenweb 42 days ago | 33 comments

I completely get the purpose and benefit of Docker. More and more, though, I'm becoming convinced that for many settings and problems Docker actually creates more problems and meaningless ritual, because Docker is rapidly becoming the standard way of deploying software, with hardly anyone asking about the reasons anymore.

Docker recreates many of the benefits the Java platform has had from the start (in addition to some other aspects, but that's the main selling point). So, if you have a non-Java application, Docker might indeed be a worthwhile choice. The thing is, even with Java, platform independence and deployment standardization have always been a minor selling point for web apps / SaaS-type applications.

Those features are certainly important if you need to rapidly provision and scale your application on many different machines in a distributed environment you don't completely control.

In reality, for many applications today this simply isn't the case. The main motivator behind that feature, after all, was the concept of desktop applications that are deployed to each user individually.

So, with Docker we now indiscriminately add an additional layer of complexity on top of existing applications regardless of the actual problems this might solve.

> So, with Docker we now indiscriminately add an additional layer of complexity on top of existing applications regardless of the actual problems this might solve.

I agree that it adds an extra layer of complexity, but it also provides a lot of things out of the box (like isolation, separation of concerns, platform-agnostic deployments, reproducibility and a lot more) which makes the life of a developer much easier. If he wants to deploy his application, instead of SSHing into the server, installing the dependencies and managing the lifecycle of his application, he just needs to dockerize it and rest assured that it will run irrespective of the platform it is put to run on, as long as that platform has Docker running.

I think what it boils down to is what you expect a tool to do. If it really solves some hard-hitting problems you have in your development life and takes away your nightmares, then it is totally fine to invest in it and chill.

The problem lies with the "just needs to dockerize his application" part. These days I see many developers regularly servicing Docker containers when they should rather be adding value to the product they're creating. It almost seems as if Docker has become an end in itself rather than just a means.

In some ways, with Docker you also just move concerns from one layer to another. While in the past you had to manage dependencies on the target machine, you now have to manage Docker dependencies and environments on that machine, on top of the dependency management for your application.

Fair point. Again, it is a call that the developer has to take depending upon certain parameters. If he is shipping multiple applications built with Python, with one requiring Python 2.7 and the other requiring Python 3.4, he has to somehow make sure that both applications will run peacefully together, or he can just create two Docker images on his machine, one with Python 2.7 installed and the other with Python 3.4 installed, and deploy them the Docker way without any hassle. At the end of the day, it is a call he has to take depending upon how much time each workflow is going to consume and which one makes his life easier.
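
For illustration, a minimal sketch of that second approach, assuming two hypothetical app directories (app-py2, app-py3) and the official python:2.7 and python:3.4 images:

  # Each app gets its own interpreter; nothing Python-related needed on the host.
  # Directory names and main.py are placeholders.
  docker run --rm -v "$PWD/app-py2:/app" -w /app python:2.7 python main.py
  docker run --rm -v "$PWD/app-py3:/app" -w /app python:3.4 python main.py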

Isn't the selling point more about the dev perspective?

The problem is more about your dev machine being different from your deployment machine.

Not that your deployment machines are all different from each other.

I like Docker because it makes it more explicit what dependencies my software has. Most Dockerfiles start with an apt-get update, but I prefer specifying the exact versions of the packages to install. This way I notice changes before they break my stuff.
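
As a sketch of what that pinning can look like (the package names and version strings below are placeholders, not recommendations):

  FROM node:carbon

  # Pin exact apt package versions so upstream changes show up as build failures
  # rather than silent behaviour changes. The versions below are placeholders;
  # substitute whatever `apt-cache policy <pkg>` reports for your base image.
  RUN apt-get update && apt-get install -y --no-install-recommends \
      git=1:2.11.0-3+deb9u4 \
      curl=7.52.1-5+deb9u9 \
   && rm -rf /var/lib/apt/lists/*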

Ideally I'd like the same Dockerfile to be used for both local dev and deployment. To do this, we're using something similar to the first Dockerfile, with the `nodemon` install.

Then, to allow for debugging as well, we add this in a docker-compose.override.yml:

    command: nodemon --inspect /code/bin/www
We delete that .override.yml file as part of the CI build/deploy.
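
For context, the rest of that override file is only a few lines; a sketch, where the service name `web`, the mounted path and the inspector port are assumptions:

  # docker-compose.override.yml -- picked up automatically by `docker-compose up`
  # and deleted in CI so production uses only the base compose file.
  version: "3"
  services:
    web:
      command: nodemon --inspect /code/bin/www
      volumes:
        - .:/code            # mount the source so nodemon sees edits
      ports:
        - "9229:9229"        # expose the Node inspector to attach a debugger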

Nice post, and very close to my experiences with Dockerfiles for node.

To the author: could you elaborate on the differences between the carbon and alpine base images, please? I can obviously go look this up, but this was the one part of the guide that was new to me, and your description wasn't enough to give me a complete understanding. Thanks.

carbon is the latest LTS (Long Term Support) version of Node. Instead of using the latest version, which can have breaking changes and may lose support in the future, it is recommended to use the LTS version, just to ensure you are on a stable, well-supported release. alpine base images are generally used to cut down the image size for production, because they are very minimal and work well for most use-cases. But you would generally use carbon, or a specific full-blown version of Node, in development, because you can install system dependencies for bundling code.
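
Concretely, the difference is mostly the base image line; a sketch:

  # Development: full Debian-based image, with bash, apt and build tooling available.
  FROM node:carbon

  # Production: Alpine-based image, much smaller, using musl libc instead of glibc.
  FROM node:carbon-alpine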

Thanks! You've given me something new to try out :)

No mention of using a non-root user? Any reason why or just oversight?

With Debian based images like node:carbon you could do something like this:

  ENV HOME /home/nodejs
  RUN groupadd -r nodejs \
  && useradd -r -g nodejs nodejs \
  && mkdir -p $HOME \
  && chown nodejs:nodejs $HOME
  USER nodejs
With alpine something like this:

  RUN addgroup -S nodejs && adduser -S -G nodejs nodejs
  USER nodejs

I'm surprised they haven't included a process manager for the production build. In my experience it allows for much easier zero-downtime updates and load balancing.

We're using pm2 [1] for this if anyone's interested.

[1] https://github.com/Unitech/pm2
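
For containers specifically, pm2 ships pm2-runtime, which is meant to run as the container's main process; a minimal sketch of the Dockerfile change, assuming the entry point is src/server.js:

  # Run the app under pm2 instead of calling node directly: pm2-runtime stays in
  # the foreground as PID 1 and restarts the app if it crashes.
  CMD ["pm2-runtime", "src/server.js"]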

If you use a container orchestration platform like Kubernetes, this is not required, as restarts, load balancing, etc. are handled by the orchestrator.

Does Kubernetes support graceful reload of containers, with 0 downtime? I'd be very interested if so.

Yes, it does. Kubernetes can perform rolling updates [1].

One scenario is that when you roll out a new version, the old container will only be killed when the new one is up and running. If there are many replicas running, Kubernetes will replace them with new versions one by one. And all of this behaviour is highly configurable.

[1] https://kubernetes.io/docs/tutorials/kubernetes-basics/updat...
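
A sketch of what that looks like on a Deployment (the names, replica count and image tag are illustrative):

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: node-app
  spec:
    replicas: 3
    selector:
      matchLabels:
        app: node-app
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 1   # take down at most one old pod at a time
        maxSurge: 1         # start at most one extra new pod during the rollout
    template:
      metadata:
        labels:
          app: node-app
      spec:
        containers:
          - name: node-app
            image: myorg/node-app:1.2.3   # bumping this tag triggers a rolling update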

Thanks! I think for our scenario it's a bit overkill as we're only running one container and pm2 seems a better fit for that.

It would be interesting to know how this works under the hood though, is this just built on top of docker swarm load balancing?

Kubernetes will definitely be overkill if you run only one container. But if your application follows a microservice architecture and has multiple containers, Kubernetes is the best solution to run them.

Kubernetes architecture is entirely different from that of Docker Swarm.

A Kubernetes Service object [1] can load balance traffic to various Kubernetes Pods (think containers) [2] as defined by its "pod selectors". It can choose to direct traffic only to running containers.

[1] https://kubernetes.io/docs/concepts/services-networking/serv... [2] https://kubernetes.io/docs/concepts/workloads/pods/pod/

I like the multi-stage Dockerfile concept in later Docker versions. It makes it easy to define Dockerfiles that both build and run your code, while still separating build and run dependencies. I have used this for both Node and Go projects. As long as the build output is not too spread out in terms of paths, copying your build artifacts over to the other Dockerfile stages should not be a big issue.
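
A minimal sketch of that pattern for a Node app, assuming a "build" npm script that emits to dist/:

  # Stage 1: build with the full image, which has npm and any build tooling.
  FROM node:carbon AS build
  WORKDIR /app
  COPY package.json package-lock.json ./
  RUN npm install
  COPY . .
  RUN npm run build    # assumes a "build" script that emits to /app/dist

  # Stage 2: run from a small Alpine image with only production dependencies.
  FROM node:carbon-alpine
  WORKDIR /app
  COPY package.json package-lock.json ./
  RUN npm install --production
  COPY --from=build /app/dist ./dist
  CMD ["node", "dist/server.js"]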

> As you can see above, we are using the npm package serve to serve static files.

I would recommend using nginx or apache instead. You get more performance and better configurability.
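
One way to do that is a multi-stage build that hands the static output to nginx; a sketch, assuming the build output lands in build/:

  # Build the static assets with Node, then let nginx serve them.
  FROM node:carbon AS assets
  WORKDIR /app
  COPY . .
  RUN npm install && npm run build   # assumes the build lands in /app/build

  FROM nginx:alpine
  # The stock nginx config serves /usr/share/nginx/html on port 80.
  COPY --from=assets /app/build /usr/share/nginx/html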

> root@id:/app# nodemon src/server.js

Why is this needed when "CMD [ "nodemon", "server.js" ]" already exists?

CMD will work in detached mode, i.e. "docker run -d". In interactive mode, i.e. "docker run -ti", you end up with shell access inside the container, where you can run commands / perform other tasks as required. In this case, we are running nodemon.
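
Roughly, the two invocations differ like this (the image name my-node-app is illustrative):

  # Detached: the image's CMD (nodemon) runs as the container's main process.
  docker run -d my-node-app

  # Interactive: override CMD with a shell, then run nodemon by hand.
  docker run -ti my-node-app bash
  root@id:/app# nodemon src/server.js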

Or not writing Dockerfiles at all and using stock Docker images, as in this article: http://engineering.opensooq.com/on-the-fly-ad-hoc-docker-com...

Instead of writing Dockerfiles, this approach requires me to write a docker-compose.yaml file. How is it more advantageous?

> Using an appropriate base image (carbon for dev, alpine for production).

Isn't using the same container everywhere the actual selling point for containers?

Depends on the use-case. For development, you would need to install some system dependencies, have access to bash for debugging, etc., hence the full-blown base image. In production, you want the image sizes to be minimal, hence the alpine base image. It also reduces the attack surface.

But why are you doing minute-to-minute development within a container? What's so hard about installing the official Node package for your development system's distribution? When the developer is finished working on a feature, then and only then build a production Docker image from the codebase that now implements that feature, maybe run a few smoke tests, and push to CI. Building special containers for development purposes is sheer madness.

I am developing in a container most of the time, as this allows me to run the (micro-)service within its defined ecosystem, so there is no need to mock any service boundaries.

Note: New services and major functionalities are usually developed locally with unit-tests ensuring compliance with the spec. But once this is done and stuff needs to be integrated with the real system - or debugged later on - there is IMHO nothing that can beat a complete copy of the real system running locally where one can easily manipulate whatever is needed.

You're right; plus, docker-compose supports extending services across compose files. In the projects I dockerized, I have a docker-compose.yml, docker-compose.dev.yml, docker-compose.test.yml and docker-compose.prod.yml, and no override files.
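
For reference, choosing which files apply is just a matter of stacking -f flags; a sketch:

  # Development: base definitions plus the dev-only overrides.
  docker-compose -f docker-compose.yml -f docker-compose.dev.yml up

  # Production: base definitions plus the prod-only overrides, detached.
  docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d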

When does it make sense to run Node on docker vs Heroku dynos or something similar? Is it just a cost thing at a certain number? Are these significantly specialized containers? The types mentioned in the doc seem like basic web apps.

I've used Docker for things like complicated java dev environments. But it seems over-applied for basic web apps.

> But it seems over-applied for basic web apps.

Generally your app will be composed of multiple services (like Java, Node.js, some database, etc.) and there are benefits to running them as microservices (isolation, easy to develop, easy to scale), hence Docker makes sense in those use cases. Though the app seems basic and Docker might look like overkill, the architecture enforces this and adds the above-mentioned benefits.

I had the impression Heroku is generally more expensive.


FROM life:latest

CMD ["kill", "-n", "self"]

