
Docker and Canonical Partner on Commercially-Supported Docker Engine for Ubuntu - AJAlabs
https://www.docker.com/docker-news-and-press/docker-and-canonical-partner-cs-docker-engine-millions-ubuntu-users
======
taeric
I really want to like docker for end user applications. However, until the
problem of sanely sharing users into the container is solved, it is something
that merely works well right up to the point you try to do something useful.

I suppose this can be sidestepped by allowing root in all of your containers
for the applications. I am curious if that actually provides security
benefits, though.

~~~
saganus
What do you mean by sanely sharing users in the container?

~~~
taeric
An example is easiest. I have my machine set up to provide my identity to
machines I ssh to. Now, launch a container that you want to use to pull data
from a machine that you have ssh access to.

First, you'll find that the user used to set up the container was not you. So
you can't even just map in your .ssh dir.

So you'll try modifying the image to run as a specified user at startup.
Only, again, it wasn't set up that way. You will start modifying the entire
image to make it work, but will hit tons of assumptions about the user name.
(You may get lucky here. I didn't.)

So then you think to just run as root so that the user in the container will
have permission to read your .ssh files. At first you forget to specify the
user name on ssh commands, since ssh now thinks you are root. Easy enough to
fix, at least.

Only, you forgot you have proxy commands in your config and other scripts that
you now have to edit because they rely on your user name. So you can fix that.

Now you can finally do what would have been trivial for an app installed on
your machine.
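
For what it's worth, the workaround this usually converges on is docker's
`--user` flag plus a read-only bind mount. A sketch, with a hypothetical
image name, and note that the container's /etc/passwd still has no entry for
your uid, which is exactly the assumption problem described above:

```shell
# Run the container as the host uid/gid and map ~/.ssh in read-only.
# "myapp-image" is a hypothetical image name. The container's /etc/passwd
# has no entry for this uid, so anything that looks up a user name inside
# the container will still break.
run_as_me() {
    docker run --rm \
        --user "$(id -u):$(id -g)" \
        -e HOME=/home/guest \
        -v "$HOME/.ssh:/home/guest/.ssh:ro" \
        myapp-image "$@"
}
```

You still have to pass the user name to ssh explicitly (e.g. `run_as_me ssh
-l "$USER" somehost`), since the uid has no name inside the container.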

~~~
Diederich
This is well stated.

I have been making end-user apps for myself and for folks at work that require
such identity, in one case, ~/.ssh, and in another, ~/.gnupg.

My solution isn't particularly novel or clever, but it works well.

The docker image of the command-line app is the same for all users, and so
has no per-user identity built in.

The hack is to drive invocation of the docker image with a shell script that
makes a temporary directory, copies in the necessary identity files from ~,
and does a docker run that maps those identity files into the container.

After the docker image exits, the bash invocation script cleans up.

It's a hack, but it works surprisingly well. In my tests, it adds about 100ms
of invocation latency for a python program. That is, running the docker image
containing a python program that copies some files in as described is about
100ms slower than just running the same python program directly.

It would be nice to have a more elegant solution to this, but it's not too
bad.
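
A minimal sketch of such a wrapper, assuming a hypothetical `myapp-image`
and that ~/.ssh is the identity the app needs:

```shell
# Sketch of the wrapper described above. "myapp-image" is a hypothetical
# image name; swap in whatever identity files your app actually needs.
run_with_identity() {
    local tmp
    tmp=$(mktemp -d) || return 1
    # Copy the needed identity files from $HOME into a throwaway dir.
    cp -r "$HOME/.ssh" "$tmp/ssh" 2>/dev/null || mkdir -p "$tmp/ssh"
    # Map the copies into the container, read-only.
    docker run --rm -v "$tmp/ssh:/root/.ssh:ro" myapp-image "$@"
    local status=$?
    # Clean up the temporary copies after the container exits.
    rm -rf "$tmp"
    return $status
}
```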

~~~
tomjakubowski
disclaimer: I am not a security expert. Reader beware!

If you're using ssh-agent, maybe you could bind-mount your host's $(dirname
$SSH_AUTH_SOCK) into the container, and then set the SSH_AUTH_SOCK environment
variable to point at it when you run the Docker container. That way you're not
even sharing the private key with the container.

I imagine you could do the same with gpg-agent, too.
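
A sketch of that approach (the image name is hypothetical, and it assumes a
running ssh-agent on the host):

```shell
# Forward the host's ssh-agent socket instead of copying keys in; the
# private key never enters the container. "myapp-image" is a hypothetical
# image name.
run_with_agent() {
    [ -n "${SSH_AUTH_SOCK:-}" ] || { echo "no ssh-agent running" >&2; return 1; }
    local dir
    dir=$(dirname "$SSH_AUTH_SOCK")
    # Bind-mount the agent socket's directory and point SSH_AUTH_SOCK at it.
    docker run --rm \
        -v "$dir:$dir" \
        -e SSH_AUTH_SOCK="$SSH_AUTH_SOCK" \
        myapp-image "$@"
}
```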

~~~
Diederich
I didn't think about that, thanks!

I didn't mention it, but for one of the apps, I also needed ~/.gitconfig,
which I don't think has an agent. :(

------
therealmarv
Don't be confused. CS Docker Engine is not the publicly available Docker
Engine:
[https://docs.docker.com/cs-engine/install/](https://docs.docker.com/cs-engine/install/)

~~~
hackcrafter
Can anyone enumerate the differences between this and the public Docker
Engine?

Does it just have a different release process/QA process to allow for more
stable use in deploy environments bundled with a support contract?

------
darren0
Did I read that correctly? This will be delivered as a snap package?

~~~
darren0
As I read this again I'm quite confused about what this announcement means.
The mention of a snap package makes it seem like "apt-get install docker..."
would be a separate binary. A wild guess is that 99% of Ubuntu users will
never buy CS Docker Engine, so will that 99% of users be running the
Debian/Ubuntu-packaged docker.io binary that exists today?

~~~
simonkamronn
I think Docker through snap is still free, they'll just sell support.

~~~
nickstinemates
Correct.

At a not so distant future point, Docker will be delivered as a snap package.
Users installing docker via `apt-get` will ultimately be installing the snap.

We are working through the final technical details of this portion now. We'll
make sure to keep everyone updated as this transition happens, but current
best practices should continue to Just Work.

~~~
sandGorgon
I had a quick question: do you see a convergence of Flatpak and Snap at some
point? Because it seems that RedHat and Fedora are beginning another
divergence on static packaging.

~~~
dustinkirkland
Actually, there's quite a bit of cross-distro compatibility around Snaps.
Beyond Ubuntu, Snaps are known to work in Arch Linux, Debian, Gentoo, Fedora,
openSUSE, Yocto, and OpenWRT. Snaps simply require a modern systemd and the
snapd daemon. With the appropriate SELinux profiles and an updated snapd, it's
entirely feasible for the same docker.snap to run on both Ubuntu and Fedora
(as well as others). You can learn more about Snaps and Linux distributions at
snapcraft.io.

~~~
sandGorgon
And the same thing is true for Flatpak... but are we really looking forward
to another 20 years of multiple packaging formats for Linux?

The push towards static packaging is a great time to unify. The problem is
that Snap is based on deb... not sure about Flatpak. So we again have a
political split.

I really think this is the opportunity to unify Linux packaging - anything, I
don't care..but let us please have one package format.

[http://arstechnica.com/information-technology/2016/06/here-c...](http://arstechnica.com/information-technology/2016/06/here-comes-flatpak-a-competitor-to-ubuntus-cross-platform-linux-apps/)

~~~
mhall119
Snaps are descended from deb, but they are now just squashfs images with
some metadata.

------
hackcrafter
I'm worried about Docker living up to its valuation; I haven't yet seen the
business model that will meet those expectations long-term.

They have built a great open source product, but now that there has been a
collective shift to understanding the benefits of containers, the docker
runtime + container format will/should be commoditized by infrastructure
companies (Google, RedHat, MS etc).

So where is the value-add of Docker the company going to come from?

edit: s/evaluation/valuation

~~~
itomato
_E_valuation?

------
ausjke
I thought Canonical was doing its own "docker", e.g. Snap, LXD, etc., which
are not totally identical but very similar to Docker. What's going on here?

~~~
marcoceppi
There's more than one type of container. There's Docker and Docker-style
process containers (runc, rkt, etc.), and then there are machine containers.
LXD is a machine container manager: it's built on the same kernel technology
that Docker was first built on, but it works like a hypervisor for really
dense machines that are as light as process containers.

Snap is a package format that gives you cross-distro (Linux) distribution,
atomic updates, security, and isolation. It's not really like Docker, as
it's not a density story, there's no separate TCP/IP stack, etc.

~~~
nepotism2016
Like Xen? I sat through a 10-minute presentation during an OpenStack meetup
where an Ubuntu dude presented LXD... then I asked myself, Xen does all
this... then again, choice is always welcome.

~~~
markshuttle
The VM experience ("guests") without the VM overhead. A virtual machine like
Xen or KVM or ESX lets you run a guest kernel of a different OS, like Windows.
LXD avoids the overhead of hardware virtualisation and the guest OS, which
means it only supports Linux guests, but they run at native speeds.

~~~
walterbell
_> without the VM overhead_

or the VM security

~~~
tyingq
LXD does at least (unlike Docker) default to unprivileged containers.
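
For Docker, the rough equivalent is opting in to user-namespace remapping in
the daemon config; with the (real, but opt-in) `userns-remap` option set to
`"default"`, container root maps to an unprivileged host uid:

```json
{
    "userns-remap": "default"
}
```

This goes in /etc/docker/daemon.json; note that enabling it changes image
and volume ownership semantics, so it's not on by default.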

~~~
bonzini
Any local kernel vulnerability will let you attack other containers on the
same machine. This is a much bigger attack surface than Xen or KVM. It's nice
in that it gives the same experience as "traditional" hypervisors (at least
the basic features), but it's only applicable if you trust that the
application inside the container will not be compromised.

~~~
dustinkirkland
On the flip side, when there is a kernel vulnerability, there's only 1 kernel
to update! If you're running the Canonical Livepatch Service, for instance,
critical security vulnerabilities are patched in real time, without a reboot,
and _all_ running containers immediately benefit from the patch. Conversely,
if you're running 50 Linux Xen or KVM machines, you have _51_ kernels to
update. So yeah, do think about what "attack surface" actually means when
comparing LXD and full virtualization.

~~~
bonzini
If you manage that many hosts, you had better use Ansible, or Satellite, or
whatever, so that updating 1, 50, or 500 hosts is the same effort.

------
bogomipz
Does anyone know if this means anything significant in regards to LXD?

~~~
markshuttle
Docker and LXD don't compete. Docker is great for running clustered processes
- cloud-native apps - where Docker gives you hyperelasticity. CS Docker Engine
provides more coordination facilities for those cloud-native apps.

LXD is more like KVM in that it gives you "guests" that feel like a full OS. You
can run existing apps in there in exactly the same way you would run them in a
VM.

So these are two counterparts in the container continuum, and it's useful to
understand them both so you use the right thing at the right time.

~~~
bogomipz
I'm well aware of LXC; I do understand them both. And LXC is not more like
KVM: KVM is full virtualization, it emulates hardware, and is nothing like
LXC.

LXC is based on cgroups and kernel namespaces - the exact same things that
enable Docker-based containers. LXD and Docker engine are competing
"container"-based virtualization engines.

LXC can run app-based containers just like Docker. This is what lxc-execute
does. You don't have to run init as pid 1 in LXC.
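
For reference, a sketch of that invocation style (the container name is a
placeholder for an existing LXC container):

```shell
# lxc-execute runs a single command in the container under a minimal
# lxc-init rather than booting a full system init; roughly comparable
# to `docker run`. "mycontainer" is a placeholder name.
run_app() {
    lxc-execute -n mycontainer -- "$@"
}
```

Usage would look like `run_app /usr/bin/myapp --some-flag`.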

By the way "Cloud-native" is little more than a marketing term.

------
lima
This makes me feel good about going with CentOS for Docker. Red Hat has a
custom, stable Docker version which is rock-solid (albeit somewhat old, 1.10).

~~~
EtienneK
Center for Internet Security Docker Benchmark, rule 1.5: Keep Docker up to
date [1]

[1]
[https://benchmarks.cisecurity.org/tools2/docker/CIS_Docker_1...](https://benchmarks.cisecurity.org/tools2/docker/CIS_Docker_1.12.0_Benchmark_v1.0.0.pdf)

~~~
tcrews
"Keep a tab on these product updates and upgrade as frequently as when new
security vulnerabilities are fixed."

[https://www.docker.com/docker-cve-database](https://www.docker.com/docker-cve-database)

~~~
lima
Backports. I saw Docker break more than once after supposedly stable releases
with security fixes.

------
ronjouch
Can OP or an admin de-abbreviate "CS" to "Commercially-Supported"?

~~~
LeoPanthera
Oh so it's not Counter Strike running in Docker. That makes much more sense.

~~~
ASalazarMX
My first guess was closed source :/

~~~
geerlingguy
Or Computer Science, Creative Suite... so many things before I would think
of 'Commercially-Supported'.

