Docker recreates many of the benefits the Java platform has had from the start (in addition to some other aspects, but that's the main selling point). So, if you have a non-Java application, Docker might indeed be a worthwhile choice. The thing is, even with Java, platform independence and deployment standardization have always been minor selling points for web apps / SaaS-type applications.
Those features are certainly important if you need to rapidly provision and scale your application on many different machines in a distributed environment you don't completely control.
In reality, though, for many applications today this simply isn't the case. The main motivation behind that feature, after all, was desktop applications that get deployed to each user's machine individually.
So, with Docker we now indiscriminately add an additional layer of complexity on top of existing applications, regardless of whether it actually solves a problem we have.
I agree that it adds an extra layer of complexity, but it also provides a lot of things out of the box (isolation, separation of concerns, platform-agnostic deployments, reproducibility, and a lot more) that make a developer's life much easier. To deploy an application, instead of SSHing into the server, installing dependencies, and managing the application's lifecycle by hand, the developer just needs to dockerize it and can rest assured that it will run wherever it's put, as long as that machine runs Docker.
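To make that concrete: "dockerizing" a simple Node.js app can be as small as the following sketch (the file names and entry point are illustrative assumptions, not from any particular project):

    FROM node:carbon
    WORKDIR /app
    # install dependencies first so they're cached as a separate layer
    COPY package*.json ./
    RUN npm install
    # then copy the application source
    COPY . .
    CMD ["node", "server.js"]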
I think what it boils down to is what you expect a tool to do. If it really solves some hard-hitting problems in your development life and takes away your nightmares, then it is totally fine to invest in it and chill.
In some ways, Docker also just moves concerns from one layer to another. While in the past you had to manage your application's dependencies on the target machine, you now have to manage Docker's dependencies and environment on that machine, on top of the dependency management for your application.
The problem is more that your dev machine is different from your deployment machine, not that your deployment machines are all different from each other.
Then in a docker-compose.override.yml,
command: nodemon --inspect=0.0.0.0:5858 /code/bin/www
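For context, the complete override file would look roughly like this (the service name "web" and the host port mapping are illustrative assumptions):

    version: "3"
    services:
      web:
        command: nodemon --inspect=0.0.0.0:5858 /code/bin/www
        ports:
          - "5858:5858"  # publish the inspector port so a debugger on the host can attach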
To the author: could you elaborate on the differences between carbon and alpine base images please? I can obviously go look this up, but this was the one part of the guide that was new to me. Your description wasn't enough to give me a complete understanding, so perhaps you could explain a bit more about carbon and alpine? Thanks.
node:carbon is the Node 8 LTS image built on Debian, while node:alpine is built on the much smaller Alpine Linux, which uses musl libc and BusyBox instead of the usual GNU userland. That difference shows up in small things like creating an unprivileged user. With Debian-based images like node:carbon you could do something like this:
ENV HOME /home/nodejs
RUN groupadd -r nodejs \
    && useradd -r -g nodejs nodejs \
    && mkdir -p $HOME \
    && chown nodejs:nodejs $HOME
With Alpine-based images like node:alpine, the same thing uses the BusyBox equivalents instead:

RUN addgroup -S nodejs && adduser -S -G nodejs nodejs
We're using pm2 for this if anyone's interested.
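In case it's useful, here's a minimal sketch of running a Node.js app under pm2 in a container (the server.js entry point is an assumption; pm2-runtime is pm2's container-friendly launcher that stays in the foreground):

    FROM node:carbon
    WORKDIR /app
    COPY . .
    # install app dependencies plus pm2 itself
    RUN npm install && npm install -g pm2
    CMD ["pm2-runtime", "server.js"]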
One scenario is that when you roll out a new version, the old container will only be killed once the new one is up and running. If there are many replicas running, Kubernetes will replace them with new versions one by one. And all of this behaviour is highly configurable.
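For example, that behaviour is controlled by the Deployment's update strategy. A sketch (the name, image, and surge/unavailability numbers are illustrative assumptions):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 1        # allow one extra pod during the rollout
          maxUnavailable: 0  # never kill an old pod before its replacement is ready
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: myapp:2.0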
It would be interesting to know how this works under the hood, though. Is this just built on top of Docker Swarm load balancing?
Kubernetes architecture is entirely different from that of Docker Swarm.
A Kubernetes Service object can load balance traffic to various Kubernetes Pods (think containers) as defined by its "pod selectors". It only directs traffic to Pods that are up and ready.
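Roughly, a Service selects Pods by label. A minimal sketch (the names and ports are illustrative assumptions):

    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector:
        app: web            # traffic goes to Pods labelled app=web
      ports:
        - port: 80          # port the Service listens on
          targetPort: 3000  # port the Pods' containers listen on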
I would recommend using nginx or Apache instead. You get more performance and better configurability.
Why is this needed when CMD ["nodemon", "server.js"] already exists?
Isn't using the same container everywhere the actual selling point for containers?
Note: new services and major functionality are usually developed locally, with unit tests ensuring compliance with the spec. But once this is done and things need to be integrated with the real system - or debugged later on - there is IMHO nothing that beats a complete copy of the real system running locally, where one can easily manipulate whatever is needed.
I've used Docker for things like complicated Java dev environments. But it seems over-applied for basic web apps.
Generally your app will be composed of multiple services (a Java service, a Node.js service, some database, etc.), and there are benefits to running them as separate containers (isolation, ease of development, ease of scaling), so Docker makes sense in those use cases. Even if the app itself seems basic and Docker looks like overkill, the architecture encourages it and brings the benefits mentioned above.
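As an illustration, a minimal docker-compose sketch of such a multi-service setup (the service names, images, and ports are illustrative assumptions):

    version: "3"
    services:
      web:
        build: .             # the Node.js app, built from the local Dockerfile
        ports:
          - "3000:3000"
        depends_on:
          - db
      db:
        image: postgres:10   # the database runs as its own isolated service
        environment:
          POSTGRES_PASSWORD: example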
CMD ["kill", "-n", "self"]