

Drupal on Docker - kstaken
http://robknight.org.uk/blog/2013/05/drupal-on-docker/

======
gexla
The article touches on this towards the end, but I think the Docker (or
container) way would be to create separate containers for each process. I
could very well be wrong on this though. I'm still trying to figure out the
best way to use containers.

One container would contain the Drupal files along with an Nginx or Apache
process.

One container would run PHP (probably PHP-FPM.)

One container would contain MySQL.

This would allow for better scaling options. You could then run multiple
containers for PHP processes and have the web server do a round robin between
PHP containers. You could do the same for the container which has the Drupal
files and put a load balancer in front of them. Any file based caching options
would probably be best switched to something like Memcache so that the cached
items are in sync. I suppose some content management systems would
accommodate this setup better than others.
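In later tooling, the split described above maps fairly directly onto a Compose-style file. This is a hypothetical sketch (Compose postdates this thread, and the image names, paths, and password are illustrative assumptions, not anything from the article):

```yaml
# Hypothetical three-container split: web server, PHP-FPM, MySQL.
web:
  image: nginx
  ports:
    - "80:80"
  volumes:
    - ./drupal:/var/www/html    # the Drupal files
  links:
    - php
php:
  image: php:fpm                # PHP-FPM container; scale this one out
  volumes:
    - ./drupal:/var/www/html
  links:
    - db
db:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: secret # illustrative only
```

Running several copies of the `php` service and letting the web server round-robin across them via an nginx `upstream` block would realize the load-balancing idea above.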

I'm not yet sure of the best way to do development in the above setup. I suppose
one way would be to run an SSH server on the Drupal container and then perhaps
mount the files you need access to as a drive using SSHFS or something. I
haven't played with this yet, but you can also create volumes on the host OS
to be shared with the container (I think I got that right.) Perhaps you could
share the files that you need access to.
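The host-volume option mentioned above does work roughly as described; a minimal sketch with the docker CLI (the host path and base image are illustrative, not from the thread):

```shell
# Mount a host directory into the container with -v host_path:container_path.
# /home/me/drupal is a made-up path; any host directory works.
docker run -v /home/me/drupal:/var/www/html -i -t ubuntu /bin/bash

# Edits made on the host under /home/me/drupal are immediately visible
# inside the container at /var/www/html, so you can develop with your
# normal editor on the host instead of SSHing into the container.
```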

Am I right on this? Does anyone else do things differently? This project moves
so fast it's hard to keep up with.

Edit: I also wonder if containers create a performance hit for MySQL. You
would also need to make backups. Maybe you would be better off running a big
DB on its own dedicated server / VPS outside a container?

I would also save all user generated files (for something like Drupal, this
would be mostly images) to something like S3.

------
FooBarWidget
Docker is wonderful. I'm using it to build a continuous integration system
that can run my unit tests as root. It's completely isolated and it uses
almost no resources compared to spinning up an entire VM. There are still
some bugs and security issues here and there but they're coming along quickly.

Finally a sane Linux answer to FreeBSD jails. It's about time.

------
teekert
As a newb on this, can I use this on Arch Linux for example to give people a
sort-of chrooted SSH environment with their own Nginx instance?

In this way I could still let people SSH into my box and put up their website
with zero risk of them ever seeing the things I have on my SMB share? (Yes,
this can be achieved with groups, but this seems better.)

~~~
willvarfar
Hmm no. Although Linux was built as a multi-user operating system, it is not a
secure one. Privilege escalation exploits are routinely found and fixed, and
there are likely a lot more out there.

So if your stated aim is to use LXC as a speed bump for hackers, don't rely
on it.

~~~
FooBarWidget
Heroku uses LXC for privilege separation between customers.

------
mmgutz
Docker I get. What's the point of Vagrant? Isn't it easier to just download a
premade VBox image, and then you don't have to install all of Vagrant's
dependencies? Moreover, Vagrant has opinions built into its images, nothing
like the bare OS I get in the cloud.

I use ansible to provision my box into its role.

~~~
kstaken
Vagrant is intended to make starting, provisioning and then destroying and
rebuilding virtual machine instances quick and easy. It makes it much easier
to bring up VMs in different configurations for different projects and
provides a way to store the runtime environment configuration with the code
for a particular project. It's primarily a tool for software developers, but
as it gains more provisioning capability its role seems to be expanding, and
you can use ansible as a tool to provision a Vagrant VM.
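As a rough illustration of that combination, a minimal Vagrantfile that hands provisioning off to ansible might look like this (the box name and playbook path are assumptions, not from the comment):

```ruby
# Illustrative Vagrantfile; box name and playbook path are made up.
Vagrant.configure("2") do |config|
  config.vm.box = "precise64"          # any base box will do

  # Once the VM is up, let ansible bring it into its role.
  config.vm.provision :ansible do |ansible|
    ansible.playbook = "playbook.yml"
  end
end
```

`vagrant up` then builds and provisions the VM, and `vagrant destroy` throws it away, which is the quick rebuild cycle described above.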

------
Nux
I'm a long-time sysadmin and I don't get all the hype about Docker. Is this
bad? I have nothing against the project, on the contrary, but it's as if
people haven't yet heard about LXC (which Docker indeed uses), OpenVZ, UML or
Linux-VServer. Some of these have been around for almost a decade. :-/

~~~
FooBarWidget
OpenVZ and Linux-VServer require kernel patches. UML does not, but it doesn't
perform terribly well, and requires the user to run a sub-kernel, leading to
slightly higher resource usage. It is unknown how well-maintained UML is; I
remember that Linode used it in the past but they moved away from it.

Docker is also, pardon my French, more "web 2.0"-ish software than the other
three:

* The Docker website is clean, modern, and attractive. It clearly tells me how to get started. They even invested time/money in a proper logo. Now look at the other websites. They look old, archaic, too formal, and looking at them doesn't give me the feeling that I should try out their software. The UML website gives off the feeling that it is unmaintained. I realize this isn't a purely technical reason, but presentation is important.

* Docker feels like it has more community around it. It uses GitHub and you can see that it's very active there. Docker also has an easy-to-use image repository. Apart from being extremely useful for pulling in an image with a single command, it gives off the good feeling that there's an active community around it and that it's actively developed.

