

Docker + Joyent + OpenVPN = Bliss - jpetazzo
http://blog.docker.io/2013/09/docker-joyent-openvpn-bliss/

======
motiejus
I never tried Docker (looking around now), but the approach is extremely dirty
for a few reasons.

1\. Process supervision should be handled by Docker (or something that is
designed to do supervision), not by a `while true` loop. The idea of hanging on
*.log files is terrible. Maybe there is a reason to stop the container when the
application stops?

2\. There must be a better way to handle docker logs of multiple programs. If
not, run both instances in different containers.

3\. The thing serves configuration files to arbitrary clients that ask for
them. It doesn't even log multiple downloads of the same key (though it does
serve the private keys over SSL, which makes the purpose of SSL here
hilarious).

Point 3 is OpenVPN-specific and might be acceptable in this particular case
(though I am still lost as to why bother with SSL). However, points 1 and 2
show how to seriously misuse Docker. Either Docker or the setup is flawed (I
suspect the latter).

Please do not take this article as an example of how to do things in Docker.
There must be a better way at more or less every step.

~~~
jpetazzo
Thanks for your feedback!

Agreed, the approach is far from perfect.

1\. Process supervision is kept very simple, because there are many other
tutorials and demos out there using Supervisor and other systems. I'd love to
know what is inherently wrong with using `while true` and hanging on logs.
Personally, replacing Supervisor (or monit, or god, or
$yourfavoriteprocessmonitor) with 3 lines of shell script is fine with me,
especially as the focus is to show that Docker can now run OpenVPN.
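
For the record, such a loop can be as simple as this (the paths here are
placeholders, not the exact script from the article):

    # restart openvpn whenever it exits; sleep briefly to avoid a tight crash loop
    while true; do
      openvpn --config /etc/openvpn/udp.conf >> /var/log/openvpn.log 2>&1
      sleep 1
    done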

2\. Handling multiple log streams: indeed! I considered running two separate
containers, and the only reason I didn't was to simplify the setup process.
That option (using separate containers for the TCP and UDP connections) works
almost the same; it looks like this:

    docker run openvpn genkeys
    docker run -volumes-from $CID openvpn tcp
    docker run -volumes-from $CID openvpn udp
    docker run -volumes-from $CID openvpn serveconfig

Hopefully, the way the project is laid out makes it fairly trivial for someone
to change it to run each openvpn process in its own container.

3\. Here again, it's a (not-so-good) trade-off. In many cases, you need
something simple and straightforward to move the configuration file to the
device, e.g. when the device is a phone or tablet. Downloading a combined file
over HTTPS is the simplest solution I found (even if in the Android case, it
is slightly crippled by the Download Manager bug). It uses SSL to prevent
eavesdropping. It protects against a system which would passively record e.g.
WiFi traffic. It doesn't protect against a system which would actively detect
such a transfer, and immediately initiate the same transfer, before you have
the opportunity to stop the `serveconfig` container. I considered adding some
HTTP basic auth, but that would have required a "real" HTTP server in
`serveconfig` (or to hack further with socat, but I'm already quite ashamed of
the current hack, TBH).
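
(For the curious, that kind of socat hack amounts to something like the
following; the port, the cert bundle, and the file name are assumptions:

    # hypothetical: serve a single file over TLS with a hand-rolled HTTP response
    socat OPENSSL-LISTEN:8080,cert=server.pem,verify=0,fork,reuseaddr \
      SYSTEM:'printf "HTTP/1.0 200 OK\r\n\r\n"; cat client.ovpn'

...which is exactly the kind of thing I'd rather not defend.)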

I agree that the setup can be greatly improved (what can't?), but I disagree
with "there must be a better way at every step". This is certainly not the
setup you want if you have a large number of clients, or if you are on an
extremely compromised network (then again, it's only a concern during key
exchange), but I don't see why it can't be acceptable for 99% of the personal
users out there.

------
noonespecial
Very nice. I do something very similar.

One tiny nitpick: you might want something like _dhcp-option DNS 8.8.8.8_ to
go with your _redirect-gateway def1_.

This is because all of your DNS traffic will be redirected over your VPN as
well. If you happened to have been assigned a local-only DNS by your home
router or cable/DSL provider, DNS will be broken when your VPN connects. Use a
globally accessible one like 8.8.8.8 and the dhcp-option to tell OpenVPN to
switch your DNS on connect.
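
In the server config, pushed to clients, that looks something like this (a
sketch; the rest of the config is elided):

    # send all traffic through the VPN, and hand clients a public resolver
    push "redirect-gateway def1"
    push "dhcp-option DNS 8.8.8.8"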

------
biturd
There has been lots of talk re: Docker lately.

I don't fully get it, but am trying to follow along. Basically, it's a small
box you can install 1-x Linux software apps on, and deploy on another machine,
or inside a VM on that machine?

It's not a real box, or hardware, but a small chunk of software that
essentially is a pre-made .iso with whatever single or multi-packaged goody
you desire?

I don't get it: with spinning up a VPS being a few clicks, what is the
advantage? Can't you make images of your VPS on Amazon, or any of the other
cloud providers, and save a snapshot of your config? What is the difference?

These are totally portable? Where do they get their real hardware resources
from, such as RAM and drive space? If I set up a LAMP server in a Docker
container, allocate 50GB of memory to Apache, and drop that container into a
micro instance on Amazon, what is going to happen?

~~~
Ixiaus
There are numerous reasons why containerization (even containerization on top
of a VM) is a wonderful thing.

The first is that containers allow you to securely isolate software and its
configuration without impacting that software's usage of resources the way a
VM does.

Isolation of configuration is extremely valuable here: you now have a
collection of containers that aren't polluting your host system with their
dependencies and configuration, and containers can inter-operate with each
other over known protocols and upgrade/downgrade their configuration or
software without any impact on the surrounding host system.

Security is a big one too: containerization allows you to completely isolate
the permissions of a given application. If something is compromised, only that
one thing and its container are compromised. Fix the issue, throw away the old
container, build a new one, deploy it, voila!

The second reason containers are awesome is the ability to build, test, and
"deploy" your software. With Docker, your build process (if you've actually
worked on it!) becomes part of your automated testing, build, and packaging
cycle. When you, say, tag a release, an automated build kicks off that builds
the whole application in a Docker image with the explicitly listed
dependencies (as they may be updated!); if that succeeds without error, it
goes on to automated/QA testing; if that passes, it's packaged as a Docker
image ready to be installed. You drop it on the server and you're ready to go!
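
As a rough sketch of that cycle (the image name and test command are made up):

    # build the image from its explicitly listed dependencies
    docker build -t myapp .
    # run the test suite inside the freshly built image
    docker run myapp make test
    # if the tests pass, the very same image is what gets deployed
    docker run -d myapp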

[EDIT] I use a similar set-up to the author's, but I'm on FreeBSD. There we
have ZFS and ezjail, which provide most of the same functionality as Docker
(but slightly more "cool" because of ZFS) as a set of shell scripts, and you
can have certain jails start up with the host system.

I run OpenVPN, Tor, a web server with ownCloud on it, and a few other
experiments on the machine, and it's a joy to be able to throw away a jail and
build a new one, or create new jails whenever I want to experiment, without
ever worrying about polluting the host system or about the performance impact
on my cheap hardware.
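
That churn boils down to a couple of ezjail-admin commands (the jail names and
address are mine):

    # throw away one experiment and start a fresh one
    ezjail-admin delete -w experiment1        # -w also wipes the jail's files
    ezjail-admin create experiment2 10.0.0.2
    ezjail-admin start experiment2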

~~~
biturd
Thanks for the info, I appreciate it. Sorry about my terrible spelling and
grammar. Not my day I suppose.

------
zeckalpha
I was hoping this would be a mapping between containers and zones.

~~~
polynomial
I don't see how you could map containers to zones without actually porting
docker to Solaris.

~~~
jpetazzo
... That's something that I would _love_ to do once Docker has pluggable
backends.

However (please don't throw stones at me if I'm wrong) I thought it wasn't
possible to do "zones within zones"; did that change?

------
res0nat0r
This looks fun, but if this is the only thing you are going to be using on
your extremely small Joyent VM, why waste time and complexity putting your
OpenVPN setup inside Docker? Seems unnecessary.

~~~
jpetazzo
That's not the only thing I'm doing on this VM; as I pointed out, I'm
consolidating lots of little hacks and projects that are running all over the
place in different environments. I put each thing in its own container.
Benefits:

\- I can shut down one of those "experiments" without affecting the others
(see the sketch below).

\- If I want to migrate one of those things elsewhere (because it turns out to
need more resources), it's already "Dockerized" (i.e. trivial to redeploy
elsewhere).

\- When I remove one of those things, I don't have to worry about leftover
packages, dependencies, files, etc. lying around.
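
For instance, retiring one experiment is just this (using $CID as elsewhere in
this thread):

    docker stop $CID   # stops that one experiment; the others keep running
    docker rm $CID     # removes it, leaving nothing behind on the host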

Otherwise it would indeed be a poor use of my time :-)

------
j_s
Funny to see this specifically w/ Joyent, the SmartOS zone gurus.

------
zenocon
> Joyent Ubuntu image comes with an “optimized kernel”. It might be optimized,
> but it doesn’t have AUFS support, so you want to install an official Ubuntu
> kernel instead

Why AUFS?

~~~
res0nat0r
Docker currently is dependent on AUFS I believe.

~~~
polynomial
y

------
dingaling
IPSec is one of the few instances where I have encountered consistent (
irregular ) kernel panics and when a panic ensues from a containerized app..
it is of course the 'host' kernel that is panicking. So all your containers
are hosed.

I only run VPNs through virtualized kernel instances now; if they fail, the
hypervisor restarts them. Nothing else affected.

~~~
jpetazzo
Fair point; but this is not using IPSec. It's using OpenVPN, which relies on
TUN/TAP.

------
zobzu
oh look, the daily HN spam from docker blogs

/has karma, uses it.

------
somberinad
Just trying to understand Docker, hence the question. How is it different from
the HP-UX or Solaris package managers? Maybe this question itself shows my
age :)

------
iancarroll
What about DigitalOcean? 2x the RAM, cheaper.

~~~
DarkStar851
A lot of the Docker community use DO, myself included, but I'm sure jpetazzo
had his reasons. Too many eggs in one basket, perhaps.

