
Lessons from Building a Node App in Docker - JohnHammersley
http://jdlm.info/articles/2016/03/06/lessons-building-node-app-docker.html?r=0
======
z3t4
The article should mention _why_ you would want to run something in Docker.
What many forget is that when you put stuff in a container, you create future
work for yourself: you now have to manage not only your own stuff, but also
all the dependencies inside the container. If you're just after isolation,
that could be accomplished with Linux namespaces and AppArmor.
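
For example, a rough sketch of namespace-based isolation with util-linux's
unshare, no Docker involved (the AppArmor profile name here is made up):

    # fresh PID, mount, and network namespaces (requires root);
    # --net gives an empty network namespace, so a real server
    # would still need veth/bridge setup for connectivity
    sudo unshare --pid --fork --mount-proc --net -- node server.js

    # or confine with an AppArmor profile:
    sudo aa-exec -p my-node-profile -- node server.js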

~~~
jdleesmiller
(Author here.) That's a good question. In this case, the main benefit I was
after was easy setup of consistent development environments. We have in the
past used (and still do use) Vagrant + Ansible for the same purpose, but
Dockerfiles are a lot simpler, and people have been less afraid of changing
them than the slightly crazy Ansible playbooks.

I think it's worked pretty well so far, though we have had a few
difficulties:

- Particularly in the early stages of development, we've been changing
dependencies a lot, and that requires a lot of image rebuilds. They are very
consistent, but also a bit tedious.

- For those of us on Macs, using Docker Machine for development hasn't been
all that great, because inotify doesn't work for automatic code reloading
(watchify, nodemon, etc.). However, they're hard at work on that with the new
Docker for Mac.

I'm hopeful that technologies like kubernetes will make it easier to deploy
these containers, too, but I haven't really got there yet. Maybe another
article some day!

~~~
hillbilly
Check out docker-osx-dev which uses rsync to more efficiently synchronize
container volumes with your local files and enable watchers.

~~~
saronia
That seems to work only w/ boot2docker. Is it still applicable in the
docker-machine era?

------
misterdai
I think it's brilliant that this gives security more prominence by setting
up an unprivileged user. Pretty much every Docker post / article I've seen
tends to skip over details like that.

~~~
antihero
What exactly do you achieve with this? It's running in a container. What's a
hacker to do? Screw up the app in the container, which they could do with the
app user anyway?

~~~
jdleesmiller
(Author here.) So far as I can tell, it's not that there are known, specific
things that one can do to break out of a docker container as root; it's just
that the space of possible things you can do is larger, so there is more
surface area for you to attack. So, following the principle of least privilege
[1], you should avoid running as root, if you reasonably can, and in most
cases it's not that hard to do.
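
For example, a minimal Dockerfile sketch of dropping root (user and file
names here are illustrative, not taken from the article):

    FROM node:4.3.2
    # create an unprivileged user to run the app
    RUN useradd --user-group --create-home --shell /bin/false app
    COPY . /home/app
    RUN chown -R app:app /home/app
    WORKDIR /home/app
    # everything from here on runs unprivileged
    USER app
    RUN npm install
    CMD ["node", "index.js"]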

[1]
[https://en.wikipedia.org/wiki/Principle_of_least_privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege)

------
glifchits
This is incredibly clear and well-written. Great article!

------
rhinoceraptor
If your NPM installs are mysteriously slow in Docker, try adding this line
before 'npm install':

    RUN npm config set registry https://registry.npmjs.org/

I don't know why it works, but it does.

~~~
opendomain
Can anyone tell me what this does? It looks like a configuration file in
JSON, but why do these settings make it work in Docker? If my Docker image
size is different, which parameter in this would I change?

~~~
bhanu423
Well, this just tells the npm client to download modules from the official
registry, which is [https://registry.npmjs.org](https://registry.npmjs.org).
Extra info: if you want even faster downloads within an organization (or
wherever recurring npm installs might occur), I would recommend setting up a
local lazy mirror. PS: I maintain one such lazy npm mirror app:
[https://github.com/bhanuc/alphonso](https://github.com/bhanuc/alphonso). It
only stores the modules that have been requested, which makes subsequent
installs much faster.
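
For example, pointing npm at a mirror is a one-liner (the mirror URL below
is hypothetical):

    # in a Dockerfile, before `npm install`:
    RUN npm config set registry http://npm-mirror.internal:4873/

    # or per-install, without changing the saved config:
    RUN npm install --registry=http://npm-mirror.internal:4873/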

------
voltagex_
[https://imagelayers.io/?images=node:4.3.2](https://imagelayers.io/?images=node:4.3.2)
- I wonder how small we could get an image that's still capable of running
node and having an extra user (Buildroot is root-only by default).

~~~
okramcivokram
If image size is your primary concern, there are many Alpine Linux images
which excel at this, for example:
[https://github.com/mhart/alpine-node](https://github.com/mhart/alpine-node)
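
For example (the tag is illustrative; note that Alpine uses BusyBox's
adduser rather than Debian's useradd, which also covers the extra-user
question above):

    FROM mhart/alpine-node:4.3
    # -S system user, -D no password, -h home directory
    RUN adduser -S -D -h /home/app app
    USER app
    WORKDIR /home/app
    CMD ["node", "index.js"]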

~~~
joshmanders
mhart's images are great, but I have found one issue with them. They do not
contain the dependencies needed to build node-gyp based packages, more
specifically and critical bcrypt package. I have fixed this issue and have
pushed and maintain up-to-date node images that are built on alpine and have
make, g++, etc that is needed.
[https://github.com/stackci/node](https://github.com/stackci/node)

~~~
voltagex_
Should g++/make really be included in a container that's used in production
though?

~~~
joshmanders
If you use the bcrypt node package you need g++/make to compile its native
extensions, so there's really no way to avoid it. You can remove them after
you run `npm install` in your own Dockerfile, but it won't help the image
size at all: files deleted in a later layer still exist in the earlier
layers, so they still count toward the image's size.
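
On Alpine, one way around that is to install and remove the toolchain inside
a single RUN instruction, so it never occupies a layer of its own (a sketch;
the base image tag and package list may need adjusting):

    FROM mhart/alpine-node:4.3
    COPY package.json /app/
    WORKDIR /app
    # build deps are added and deleted within one layer, so the final
    # image keeps node_modules but not make/g++/python
    RUN apk add --no-cache --virtual .build-deps make g++ python \
        && npm install \
        && apk del .build-deps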

~~~
okramcivokram
I'm solving this problem by having a `dev` container with make, gcc, g++,
python, and nodemon included, which is then used to install node_modules
with `ENV NODE_ENV dev` set. When I'm ready to deploy, node_modules gets
deleted and reinstalled in production mode, and a production container gets
built. For the dev container, node_modules is a volume, and it gets copied
into the production container so that it's self-contained.
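
Roughly, the dev side looks like this, simplified (names are illustrative;
the production build then reinstalls node_modules with NODE_ENV=production
and copies it into the final image):

    # Dockerfile.dev -- watcher and build tools, for development only
    # (the official node image already ships compilers via buildpack-deps)
    FROM node:4.3.2
    RUN npm install -g nodemon
    ENV NODE_ENV dev
    WORKDIR /app
    # source and node_modules are mounted as volumes at run time
    CMD ["nodemon", "index.js"]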

------
tayo42
Any reason why you wouldn't use some kind of supervision for your process when
running it in production?

~~~
ploxiln
Docker itself can restart containers when the parent process inside the
container exits for any reason. That behavior is not enabled by default but it
can trivially be enabled for a container (started via "docker run", or in the
compose yml file similarly).

[https://docs.docker.com/compose/compose-file/#cpu-shares-cpu-quota-cpuset-domainname-hostname-ipc-mac-address-mem-limit-memswap-limit-privileged-read-only-restart-shm-size-stdin-open-tty-user-working-dir](https://docs.docker.com/compose/compose-file/#cpu-shares-cpu-quota-cpuset-domainname-hostname-ipc-mac-address-mem-limit-memswap-limit-privileged-read-only-restart-shm-size-stdin-open-tty-user-working-dir)
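
E.g. (image name illustrative):

    # restart whenever the process exits, including after a reboot
    docker run --restart=always my-app

    # or only on non-zero exit, with a retry limit
    docker run --restart=on-failure:5 my-app

In a Compose file it's a `restart: always` line on the service.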

~~~
Matt3o12_
The last time I tried to use that, it failed when coupling containers, i.e.
nginx requires my web app, which requires the database. Every process but
the database failed because they all depended on each other. I ended up
using shell scripts with an arbitrary 2-second wait after starting each
process; that fixed the problem so that the containers came up after a
reboot. Has Docker fixed that yet?

~~~
ploxiln
I think the implementation of links between containers did change
significantly in the past year and a half.

But that "links break when containers restart or are replaced" aspect was a
deal-breaker for me when I started using docker "near production". I just use
--net=host ... so I don't use docker network-related functionality at all. For
my purposes server-level firewall settings are fine.
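
E.g. (image name illustrative):

    # share the host's network stack: no docker-managed links,
    # bridges, or port mappings involved
    docker run --net=host my-app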

