
Show HN: DevLab – Docker Containerization for Local Development - ksafranski
https://github.com/TechnologyAdvice/DevLab
======
nate908
I stumbled upon Docker Compose yesterday, so please excuse my ignorance since
I'm only familiar with it at a high level, but Docker Compose looks very
similar to DevLab.

How is DevLab different from Docker Compose?

[https://docs.docker.com/compose/](https://docs.docker.com/compose/)

~~~
TomFrost
Member of the DevLab team here :)

Compose is actually part of the reason we wrote this tool. We tried hard to
make it work for a TDD workflow, but it was always cumbersome.

Compose wants you to build your application into a container, and build and
run that container every time you have a task to run. This takes time and a
fair amount of cleanup, especially when you want a clean environment to run
your tests that doesn't persist to the next run. DevLab just wants you to
specify the environment to plug your application into, and doesn't build a
container at all.

The result is:

\- No manual cleanup

\- No pile of leftover images or processes stacking up

\- Your project doesn't have to be a Docker project. It doesn't need a
Dockerfile. You can use this for something you plan to clone on an EC2 node
and run from an upstart script.

\- No Vagrant/Ansible/Chef/Puppet, server config, anything. You pick out an
image that matches your environment's needs (node:4.2, wordpress, go, etc.,
hub.docker.com is a great place to start) and DevLab plugs your app into it.

\- More development-oriented features beyond the pure Docker ecosystem that
Compose stays confined to. For example, Mac users running docker-machine will
soon get to enjoy ports bound to localhost, much like Linux Docker users
do [1].

To get this functionality outside of Compose, we used to maintain a
monolithic makefile that handled all of this for us. It wasn't DRY and it
wasn't smooth. I hope you enjoy DevLab!

[1]
[https://github.com/TechnologyAdvice/DevLab/pull/10](https://github.com/TechnologyAdvice/DevLab/pull/10)
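The idea above, plugging your source into an off-the-shelf image instead of
building your own container, can be sketched with plain docker. The image tag
and paths here are just examples:

```shell
# Run tests inside a throwaway node:4.2 container, no Dockerfile required:
# mount the current project at /app, work from there, and remove the
# container when the command exits (--rm).
docker run --rm -v "$(pwd)":/app -w /app node:4.2 npm test
```

Anything the container writes outside the mounted volume disappears when it
exits, which is roughly the clean-environment-per-run behavior described
above; DevLab automates this kind of invocation.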

~~~
kanzure
> build and run that container every time you have a task to run

You don't have to rebuild each time; just use docker-compose rm -f and then
run again. The old containers will be nuked, and built Docker images will be
kept.
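That cycle, assuming a standard docker-compose project, looks like:

```shell
# Remove stopped service containers without prompting; built images stay cached.
docker-compose rm -f
# Bring the services back up; cached images are reused, so nothing is rebuilt.
docker-compose up -d
```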

~~~
TomFrost
Fair point; I was speaking more toward the case of needing a rebuild any time
your container gets changed, due to the change-on-run ethos of DevLab versus
the change-on-build ethos of Compose.

But where cleanup really comes into play is when you have multiple tasks. You
can specify multiple command paths in Compose, but each one will be a newly
built image. With DevLab, you could `lab install` a node project, followed by
a `lab lint`, or a `lab test`. You could `lab build` to compile your app at
any time. And when you do this, whether it's the first time or the twenty-
first time, overhead versus what you'd experience locally is just a second --
and the only image you have is your environment container. No build process,
ever.
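A rough sketch of what such a task setup might look like. The file name and
keys here are assumptions based on this description, not verified against
DevLab's docs:

```yaml
# devlab.yml (hypothetical sketch): one environment image, many tasks,
# no image build step for the project itself
from: node:4.2
tasks:
  install: npm install
  lint: npm run lint
  test: npm test
  build: npm run build
```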

~~~
kanzure
Sounds like you could get the same benefits by installing sshd into each
container and using ssh to execute inside of a single container, rather than
using docker/compose "commands". This would allow you to keep your `pip
install` updates, and only when you run `docker-compose rm` would the changes
be nuked. Would that fit the workflow you've been talking about? Thanks!
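A sketch of that setup, assuming an image that already has sshd installed and
your key authorized (the image name, user, and port are made up):

```shell
# Start one long-lived container running sshd, exposed on host port 2222,
# with the project mounted at /app.
docker run -d --name devbox -p 2222:22 -v "$(pwd)":/app my-sshd-image

# Run tasks inside the same container; installed packages persist between runs.
ssh -p 2222 dev@localhost 'cd /app && pip install -r requirements.txt'
ssh -p 2222 dev@localhost 'cd /app && py.test'

# State is only lost when you remove the container yourself.
docker rm -f devbox
```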

------
awalGarg
Sorry, I am pretty much a noob (especially at containerization) compared to
most of the peeps here, but I can't resist asking: what is the need for this?
What exactly does the author mean by "services (like databases) without
needing to setup and maintain these environments manually"? Don't we just
install node/mongo once, followed only by updates?

My question is not just about this particular tool, but about the whole
"let's dev in a container" category: python's virtualenv, npm's "local hard
copy of all the modules", etc. Does it really happen that your node binary or
$language interpreter/compiler gets infected/changed somehow? Or that one of
your crucial modules gets borked? And if it does, shouldn't we be
investigating _why_ such a thing happened in the first place instead of
duplicating everything?

As for maintenance, what kind of maintenance does node (or any $compiler or
$interpreter) require? Don't they just sit waiting for invocation with some
code to return the output/compiler binary? The only thing I have ever needed
is stuff like nvm/rvm for using multiple versions of the same
package/interpreter/compiler. Can anyone educate me?

~~~
Natanael_L
As far as I can tell, the entire point is to ensure your code is platform
independent from the start instead of making subtle assumptions derived from
how your development boxes are configured. Making the test environment as
similar to the deployment environment as you can.

~~~
awalGarg
Begging for an example ;)

I do understand this. And I do agree. But the more I think about it, the more
I find that those arguments don't matter _in real life_. Code written in high
level languages/platforms like node/ruby/python/etc. is _meant_ to be platform
independent since the underlying VM abstracts the low-level details. Can you
give me an example of the "configuration" differences that you (can?)
encounter on different systems while developing on such platforms? (Except for
the obvious and documented ones which containers can't solve either).

~~~
drumdance
Here's an example:

A few weeks ago we had an issue where a db migration would not work on some of
our developer machines (it worked fine on mine and in staging and in
production). After a lot of sleuthing, we found that it was because their
version of the database was slightly different because they had installed it
months after I did. Furthermore, it was a real pain to roll back to the
appropriate version.

A container that locked down the db version would have avoided all that.
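Pinning the version with an exact image tag in something like a Compose file
is enough to rule that class of drift out; the tag here is just an example:

```yaml
# docker-compose.yml: every developer runs this exact database build
db:
  image: postgres:9.4.5   # exact tag; a floating tag like "latest" drifts over time
```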

~~~
maccard
We just have the particular versions of the tools we rely on checked into
Perforce; when there's an upgrade, nobody has to do anything different and
everyone gets the upgrade.

------
cabirum
I think it just adds complexity by introducing yet another external dependency
to the dev environment. I'm not sure it will save time on projects with custom
services (not available on Docker Hub with the required configuration) versus
running a traditional VM.

