

App Container and Docker - thousandx
https://coreos.com/blog/app-container-and-docker/

======
nstott
This is useful for us.

We've been using containers rather heavily in our infrastructure for a few
years now (neither rocket, nor docker) and we've developed our own toolset to
handle the container images, and to manage the containers.

Even though it somewhat deprecates a lot of our work, I really see the
value in having a standard that can be used with different container runtimes,
and I'll be looking at migrating our internal format to the app container
specs. Having tools like this to handle migrations makes a lot of sense to me.
We can continue developing our tools without marrying a specific backend.

~~~
josh2600
You can take a look at the work we've done with containers if you want over at
Terminal.com. You can run it on your own metal too if you'd like.

We wrote a blog post about running docker containers on it too a while ago:
[https://blog.terminal.com/docker-without-containers-pulldocker/](https://blog.terminal.com/docker-without-containers-pulldocker/)

~~~
nstott
That looks like a useful tool.

We're running on (mostly) raw lxc, with networking via openvswitch, cgroups,
yada yada, so I don't think it's applicable to us at this point.

A containerized world makes a lot of sense, but it still seems like a really
young ecosystem. It's really the 'wild west' at this point.

To be honest, I'd rather back an accepted standard than a specific
implementation.

Don't get me wrong, tools like this are super valuable, and they generally make
my day-to-day life easier.

------
Goopplesoft
Shykes latest comment on that github thread has a point:
[https://github.com/docker/docker/pull/10776#issuecomment-743...](https://github.com/docker/docker/pull/10776#issuecomment-74346219)

Interesting move by CoreOS here to create what will likely be a false
dichotomy for docker in the public sphere (as an indicator of their openness).
If you truly believe docker is fundamentally flawed, you'd be doing your users
a disservice by writing this. If it's transitional, create your own docker
fork/binary instead of making a public scene to try to force Docker's hand. Lots
of fragmentation to come, which sucks because the ecosystem is so important.

~~~
panarky
Shykes > _Can someone explain to me how the user benefits from this?_

As a user, it would be fantastic to run my App Container images on Docker
hosts, and Docker images on Rocket hosts.

If only I could move my virtual machine images this easily and avoid high
switching costs between platforms.

Shykes > _just do it in your own project and let the best project win ... you
have to choose one or the other_

Bullshit. If it's open, let the best idea win. If this is a bad idea, then let
the community examine it and it will lose on the merits.

Don't force me into a false dichotomy.

~~~
Goopplesoft
> just do it in your own project and let the best project win

> Bullshit. If it's open, let the best idea win.

So are you suggesting docker should merge and maintain support for a container
spec they weren't involved with, which was created because docker is
"fundamentally flawed"?

I'm sure the pull-request author knew that this would do nothing more than
cause a fuss in the community. Shykes' comment seems to me like a response to
a PR that is hardly legitimate and looks much like a publicity stunt.

Docker has 722 contributors on GitHub; I'm sure the community will discuss and
decide what to do with this while I watch this battle play out and work with
both products.

~~~
burke
I can't speak for panarky, but there are two separate issues here:

1. Rocket implementing the docker image format.

2. Rocket PR'ing the rocket image format to docker.

It is 100% reasonable and likely a good business move to reject the PR, but
the only reason to be mad about (1) is if it benefits the end user in a way
that weakens Docker's market share, which it does.

------
efuquen
> At the same time as adding Docker support to Rocket, we have also opened a
> pull-request that enables Docker to run appc images (ACIs).

I really hope this lands and something constructive comes out of it. There is
a lot more to be gained by these communities working together than by
promoting divisiveness.

~~~
AndrewHampton
I agree, I think it would be much better for everyone if the two container
specs merged.

However, after reading "This is a simple functional PR..." in the blog post, I
was surprised to see the PR adds over 38k lines of code. Seems like that will
take a while to review.

~~~
shawnps
Most of those added lines are the appc code being vendored in.

------
ABS
A little bit of snark: wasn't Docker "fundamentally flawed"? If that was really
the premise for launching Rocket, why bother with this humongous PR?

Don't get me wrong, I totally see how this is good for Rocket; just be honest
and admit the "fundamentally flawed" argument was mainly smoke and mirrors to
justify a defensive-offensive move by a VC-backed, for-profit company launched
against another VC-backed, for-profit company.

Again, nothing wrong with that; it's business and in fact a good move, but in
my eyes CoreOS lost quite some trust when they tried to portray Rocket as a
selfless act of kindness towards the community that needed to be saved.

~~~
philips
All of us want containers to be successful; they solve a ton of problems. But
part of that success is getting the format and the security correct, and we
want to have that technical discussion and settle on those best practices for
all implementations.

There are things in the App Container spec that we would like to see in
Docker, this is why we put in the work to make a spec, write the code to make
it work and start a technical discussion. This has been the goal since the
beginning. The problems that exist in the current Docker Engine that we would
like to address are technical and real:

1) We believe in having a decentralized and user controlled signing mechanism.
In the appc specification and rocket we use the DNS federated namespace. See
the `rkt trust` subcommand and the signing section of the ACI spec.

2) We believe that image IDs should be backed by a cryptographic identity. It
should be possible for a user to say: "run container `sha512-abed`" and have
the result be identical on every machine because it is backed by a
cryptographic hash.

Another thing we wanted to do in rocket was enable running the download/import
steps without being the root user. For example, today you can download and
import an image from disk in the native ACI format with rkt. And in the next
release, `rkt fetch` will be runnable as a user in the same unix group as
`/var/lib/rkt/`.
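The second point above, a cryptographic identity for images, can be sketched in plain shell: the ID is derived from the image bytes themselves, so any machine computing it from the same bytes gets the same result (the file name and truncated ID format here are illustrative, not the exact ACI scheme):

```shell
#!/bin/sh
# Sketch: content-addressed image identity. The "image" here is a stand-in
# file, not a real ACI.
printf 'pretend ACI bytes' > example.aci

# Derive an ID from the sha512 of the contents (truncated for readability).
ID="sha512-$(sha512sum example.aci | cut -d' ' -f1 | cut -c1-16)"
echo "$ID"

# A byte-identical copy yields the identical ID, on this or any machine.
cp example.aci copy.aci
ID2="sha512-$(sha512sum copy.aci | cut -d' ' -f1 | cut -c1-16)"
[ "$ID" = "$ID2" ] && echo "IDs match"
```

This is what makes "run container `sha512-abed`" reproducible: the name is a verifiable property of the content, not a mutable tag in a registry.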

~~~
dmourati
"they solve a ton of problems"

Name two?

~~~
m_mueller
Deploying to a cleanly defined fresh state without paying any performance
penalty. Documenting your dependencies by writing the deployment script
(i.e., the Dockerfile) and not having to reinvent the wheel every time (image
inheritance). Sandboxing Linux applications without a performance penalty.
Creating a PaaS where your services internally always see the same standard
port while externally they're linked together through Docker, thus separating
the routing concerns from your application logic.
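The "documented dependencies plus image inheritance" point is easiest to see in a minimal Dockerfile (the base image and package here are just an illustration, not something from the thread):

```dockerfile
# Image inheritance: build on a base image instead of re-describing the OS.
FROM debian:wheezy

# Every dependency is declared in the build script, so the file doubles as
# documentation of what the service needs.
RUN apt-get update && apt-get install -y redis-server

# The service always sees the same internal port; the external mapping is
# chosen at `docker run -p` time, outside the application.
EXPOSE 6379
CMD ["redis-server"]
```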

~~~
dmourati
These are all great, but I get most of the same benefits from VMs and many
more:

fresh state / no performance penalty (AMI + autoscaling)

document dependencies, not reinvent (Packer file)

sandboxing (same)

always use the same standard port (easier with VMs as a 1:1 map)

I know most people think that containers/docker/whatever new stack does these
things better, and they may be right. The benefits, however, don't outweigh
the costs of a weaker toolset and a less mature stack.

For my use cases, the biggest problem is that containers don't solve the
"where does this run" question. Whenever I ask this, people loudly exclaim
"anywhere!" which is the same as "I don't know" to me.

AWS AMIs run in 11 regions x N AZs around the world. This solves a much bigger
technical problem for me than "it's lighter weight and easier to do
incremental releases on top of" which seem to be the only things in favor of
containers.

Many people, including Amazon, say "run containers on VMs!" This seems
unnecessarily complex for little additional gain.

I'm really curious if the containerization folks are using Packer and if not
why not.

~~~
eropple
I run containers in Amazon. Not using their service, because their service is
silly, but on Mesos.

I am not locked into a 1:1 tenancy between applications and instances (though
I could have it if I wanted). Multitenancy is trivial. I have the ability to
spin up new instances of my applications to combat spike loads or instance
failures in single-digit seconds rather than in minutes.

My developers can run every container within a single boot2docker VM instead
of incurring the overhead of running six virtual machines. It's easier to
integration-test because my test harness doesn't have to fight with Amazon,
but can rather use a portable, cross-service system in Mesos.

In addition, I don't have to autoscale with the crude primitive of an instance
in production. Multitenancy means that I can scale individual applications
within the cluster up to its headroom, and only when the entire cluster nears
capacity must I autoscale. I can better leverage economies of scale while
bringing more vCPU power to applications that need it (running two dozen
applications on a c3.8xlarge is very unlikely to bring to bear, at any given
time, _less_ computational performance to a given application than running
each application on its own m3.medium).

I could do this without containers and with only Mesos. It would be worse, but
I could do it. I could not do this at all with baked AMIs and instances
without spending more money, doing more work, and being frustrated by my
environment. I know this I've built the same system you describe (I preferred
vagrant-aws because when something broke it was easier to debug, but we moved
to Packer before I left) and I would never go back to it. It was more fragile
and harder to componentize than a containerized architecture with a smart
resource allocation layer. The running context of a container _should_ be
"anywhere", and it _should_ be "I don't know", and you caring about that is a
defect in your mental model.

~~~
dmourati
Thanks for the reply.

------
Gigablah
I think this is the first "PR PR" I've ever seen :)

------
i_have_to_speak
Wow, almost as easy as "apt-get install redis".. (ducks)

~~~
lclarkmichalek
Oh, containers don't improve on apt-get install. They mostly improve apt-get
purge.

