

Docker at Lyst - Peroni
http://developers.lyst.com/devops/2014/12/08/docker/

======
fndrplayer13
I'll repost my question from the blog here (And thanks again Lyst team!):

Really enjoyed the write-up, Lyst team! Thanks for the info.

As a member of another large-ish Python dev shop with a number of
dependencies, one of the things that has kept us from moving fully in the
direction of Docker is total deployment time. Since Docker is a
1-process/1-container model, our dependency sizes can be large because we run
a number of processes on a single host. This means the network transfer into a
newly launched host would, in theory, be very large. How did Lyst deal with
that? Or was the flexibility, despite the slight slowness, just preferable to
something like shipping pre-built AMIs?

~~~
boffbowsh
We (hi, I work at Lyst too!) see Docker images as fairly similar to pre-built
AMIs conceptually, just a lot more lightweight. We bake our Docker host AMIs
relatively regularly and use them in the ASG that runs the Dockerized website.
Building AMIs (and booting new instances from them on each deploy) takes a lot
longer than building a new Docker image and launching containers from it.

I'm not entirely sure I understand what you're asking though. We've so far
only Dockerized the main Python application, and not many of the supporting
applications. We don't think we'd go down the route of running Postgres,
ElasticSearch, Redis etc under Docker.

To give you an example of times taken, Pull Requests take about 10 minutes to
test, a full master build (including uploading build artifacts) takes around
15 minutes, and a deployment to 8 Dockerized web hosts is about 5 minutes,
including a smoke test on each host.
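For readers wondering what a per-host smoke test step might look like, here is
a hypothetical sketch (the health URL, retry count, and sleep interval are all
assumptions, not Lyst's actual script): poll an HTTP endpoint on the freshly
started container and only count the host as deployed once it responds.

```shell
# Hypothetical per-host smoke test: poll a health endpoint a few times
# after starting the new container, and fail the deploy for this host
# if it never responds. All names and timings are made up.
smoke_test() {
  url=$1
  for attempt in 1 2 3; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "smoke test passed: $url"
      return 0
    fi
    sleep 1
  done
  echo "smoke test failed: $url"
  return 1
}

# In a real deploy the URL would be something like http://localhost:8080/health;
# a file:// URL stands in here so the sketch runs anywhere curl does.
smoke_test "file:///dev/null"
```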

EDIT: Just realised what you're asking. If you're running many processes on
each host then yes, it can take a while to deploy. We deploy our Celery
workers in separate Docker containers, and these can take a while to stop and
start. For the website, though, we run uWSGI in forked mode, so it's only one
container per host. This will probably change in the future; we're evaluating
Mesos + Marathon for a more PaaS-like deployment where we can say "14 web
processes please!"
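For the unfamiliar, "uWSGI in forked mode" means a single uWSGI master forking
and supervising worker processes inside one container, rather than one
container per process. A minimal sketch of such a config (all values are
illustrative, not Lyst's):

```ini
[uwsgi]
; one master process forks and supervises the workers
master = true
; all workers live inside the same container
processes = 8
http-socket = :8080
; hypothetical application entry point
module = app:application
```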

Hope that answers your question.

~~~
fndrplayer13
Thanks so much for the answer!

------
zwischenzug
"Dockerfiles are great for simple cases but need a lot of external help to use
in more complex situations. Even simple variable expansion would be helpful."

Interesting that many who use docker "for real" have the same problems we did.
This is why we ended up building this:

[http://ianmiell.github.io/shutit/](http://ianmiell.github.io/shutit/)

There's a long thread on this here:

[https://groups.google.com/forum/#!topic/docker-user/3pcVXU4hgko](https://groups.google.com/forum/#!topic/docker-user/3pcVXU4hgko)
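To make the "simple variable expansion" complaint concrete: a Dockerfile
cannot interpolate build-time variables on its own, so a common workaround is
to render a template before handing it to `docker build`. A sketch, with
hypothetical file and variable names:

```shell
# Render a Dockerfile from a template before building. The @PYTHON_VERSION@
# placeholder is a made-up convention, not a Docker feature.
PYTHON_VERSION="2.7"

cat > Dockerfile.tmpl <<'EOF'
FROM python:@PYTHON_VERSION@
COPY . /app
CMD ["python", "/app/main.py"]
EOF

# Expand the placeholder into a concrete Dockerfile:
sed "s/@PYTHON_VERSION@/${PYTHON_VERSION}/" Dockerfile.tmpl > Dockerfile
# docker build -t myapp .   # commented out: needs a running Docker daemon
```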

~~~
mpdehaan2
Here's a nice way to do Docker builds with Ansible, which allow using the
template engine and all of that good stuff:

[https://github.com/ansible/ansible-docker-base](https://github.com/ansible/ansible-docker-base)

(Disclaimer: wrote Ansible)

~~~
__alexs
I attempted to do something like this when figuring out our build problems but
it didn't really work for me. It doesn't solve the problem of actually getting
configuration into the build very well because it still relies on Dockerfiles
and has the added side effect of making the cache mostly useless. Maybe I was
just doing it wrong?

I also attempted to try Ansible for deploying containers and had issues with
both of these bugs:

[https://github.com/ansible/ansible-modules-core/issues/27](https://github.com/ansible/ansible-modules-core/issues/27)
[https://github.com/ansible/ansible-modules-core/issues/188](https://github.com/ansible/ansible-modules-core/issues/188)

Do people actually use Ansible to control Docker in practice? It would be
great to have something that was a step up from building random Python scripts
from scratch :)

(Disclaimer: I'm not very good at Ansible.)

~~~
zwischenzug
Our experience was that Dockerfiles were almost completely useless except for
the most trivial of cases. We effectively ditched them, spitting out
Dockerfiles that implant the ShutIt functionality and no more, e.g.:

    curl https://raw.githubusercontent.com/ianmiell/shutit/master/library/osquery/dockerfile/Dockerfile | docker build --no-cache -

------
jonatron
Hello Lysters from the side of the square to your left! Can you give an
example of what you replace in your Dockerfiles using Makefiles? We haven't
needed to modify ours dynamically.

~~~
boffbowsh
_waves_. We don't change anything in the Dockerfiles, it's more that we use
the dependency logic in Make to ensure that x is done before y and before z.
We also use it to actually run a lot of our containers with the ability to
switch in configs for production-like databases if the developer needs it.

Make was the perfect tool for the task at the beginning, but it's getting a
little unwieldy and will probably need rewriting in something else soon,
probably Python.
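The "x is done before y and before z" ordering described above is just Make's
dependency graph. A toy sketch with hypothetical target names: asking for
`deploy` forces `base` and then `image` to run first, in order.

```shell
# Toy Makefile demonstrating dependency ordering; target names are
# hypothetical. The `target: ; command` form avoids tab-indented recipes.
cat > Makefile.demo <<'EOF'
base: ; @echo "build base image"
image: base ; @echo "build app image"
deploy: image ; @echo "launch containers"
EOF

make -f Makefile.demo deploy
```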

------
ecaron
It is amazing how many in this community are able to just dismiss this with
"Oh yeah, boot2docker performance is still awful." If Docker were a product by
Google or Microsoft, we'd have the pitchforks out by this point. Yet somehow
Docker is still able to skate by on having incredibly subpar Mac OS X
performance.

"How Docker Got The Apple Community To Love It, Despite Horrible Performance"
<\- will someone please write that post, since that's what I really want to
know.

~~~
tedchs
If I understand correctly, Docker's performance on OS X is a function of
running in a Linux virtual machine, because Docker requires Linux. Is there
something about Docker that you feel inhibits performance beyond the VM
itself?

~~~
shykes
One area of possible improvement is that boot2docker's default mechanism for
sharing volumes with the Mac is VirtualBox Guest Additions, which are
notoriously slow. Beyond that, to my knowledge performance is simply a matter
of the underlying VM config.

tldr: boot2docker should expose ways to leverage faster VM configurations.

~~~
boffbowsh
I'd love for it to work over NFS. It should be easily doable, but I guess it
means adding another dependency to the boot2docker ISO.

~~~
shykes
We're mostly looking at supporting remote volume sync in Docker itself, with a
pluggable transport - could be smb, nfs, periodic rsync, sftp... bradfitz
started an awesome contribution with a full remote-FUSE-protocol sync :) but
we haven't been able to merge it yet because the volume system needs to be
made sufficiently clean and modular first.

The advantage of doing it in Docker proper is that it works in all distros
without outside dependencies, instead of just boot2docker.

Note: we are looking for help on this, if you feel willing and able. Even
better if you can invest some recurring time to maintain and help others
contribute.

------
arthurcolle
Are Lyst and Lyft friends?

~~~
boffbowsh
We haven't yet had the pleasure.

