We've been using containers rather heavily in our infrastructure for a few years now (neither Rocket nor Docker), and we've developed our own toolset to handle container images and manage the containers.
Even though it kind of deprecates a lot of our work, I really see the value in having a standard that can be used with different container runtimes, and I'll be looking at migrating our internal format to the App Container spec. Having tools like this to handle migrations makes a lot of sense to me. We can continue developing our tools without marrying a specific backend.
We wrote a blog post about running docker containers on it too a while ago: https://blog.terminal.com/docker-without-containers-pulldock...
We're running on (mostly) raw LXC, with networking via Open vSwitch, cgroups, yada yada, so I don't think it's applicable to us at this point.
A containerized world makes a lot of sense, but it still seems like a really young ecosystem. It's really the 'wild west' at this point.
To be honest, I'd rather back an accepted standard than a specific implementation.
Don't get me wrong, tools like this are super valuable and generally make my day-to-day life easier.
Interesting move by CoreOS here to create what will likely be a false dichotomy for Docker in the public sphere (as an indicator of their openness). If you truly believe Docker is fundamentally flawed, you're doing your users a disservice by writing this. If it's transitional, create your own Docker fork/binary instead of making a public scene to try to force Docker's hand. Lots of fragmentation to come, which sucks because the ecosystem is so important.
As a user, it would be fantastic to run my App Container images on Docker hosts, and Docker images on Rocket hosts.
If only I could move my virtual machine images this easily and avoid high switching costs between platforms.
Shykes > just do it in your own project and let the best project win ... you have to choose one or the other
Bullshit. If it's open, let the best idea win. If this is a bad idea, then let the community examine it and it will lose on the merits.
Don't force me into a false dichotomy.
So are you suggesting Docker should merge and maintain support for a container spec they weren't involved with, which was created because Docker is "fundamentally flawed"?
I'm sure the pull-request author knew that this would do nothing more than cause a fuss in the community. Shykes' comment reads to me like a response to a hardly legitimate PR that looks much like a publicity stunt.
Docker has 722 contributors on GitHub; I'm sure the community will discuss and decide what to do with this while I watch this battle play out and work with both products.
1. Rocket implementing the Docker image format
2. Rocket PR'ing the Rocket image format to Docker.
It is 100% reasonable and likely a good business move to reject the PR, but the only reason to be mad about (1) is if it benefits the end user in a way that weakens Docker's market share, which it does.
Don't get me wrong, I think it's great the CoreOS guys are trying to build a bridge between the two projects, but so far I don't see a need for this.
If I were a Docker user, and found some awesome app that only came packaged in App Container format, it'd be very valuable to have this compatibility.
(Docker's beefs here feel more like a company defending their turf than an open-source project, and that troubles me.)
I really hope this lands and something constructive can come out of it. There is a lot more that can be gained by these communities working together and not promoting divisiveness.
Adding a PR with working code was simply to show that adding this feature is something that is possible. It is OK if nothing from this implementation gets merged.
Gained by whom, though? These are for-profit enterprises, and there are real-money gains involved in controlling the spec. For better or worse, CoreOS controls the App Container spec, and make no mistake: the primary reason they want it in Docker is that it benefits them. This of course does not exclude the possibility that users benefit too.
Still, I'm of the opinion that all of this fighting behind the scenes (which, if you've been paying attention, this is) is kind of bad for everyone and a waste of resources.
But I think that this VC-backed model might not be beneficial for open source as a business in the mid-to-long term. The race to $0 is greatly accelerated.
However, after reading "This is a simple functional PR..." in the blog post, I was surprised to see the PR adds over 38k lines of code. Seems like that will take a while to review.
Don't get me wrong, I totally see how this is good for Rocket, just be honest and admit the "fundamentally flawed" argument was mainly smoke and mirrors to justify a defensive-offensive move by a VC-backed, for-profit company launched against another VC-backed, for-profit company.
Again, nothing wrong with that; it's business, and in fact a good move. But in my eyes CoreOS lost quite some trust when they tried to portray Rocket as a selfless act of kindness towards a community that needed to be saved.
There are things in the App Container spec that we would like to see in Docker, this is why we put in the work to make a spec, write the code to make it work and start a technical discussion. This has been the goal since the beginning. The problems that exist in the current Docker Engine that we would like to address are technical and real:
1) We believe in having a decentralized, user-controlled signing mechanism. In the appc specification and Rocket we use a DNS-federated namespace. See the `rkt trust` subcommand and the signing section of the ACI spec.
2) We believe that image IDs should be backed by a cryptographic identity. It should be possible for a user to say: "run container `sha512-abed`" and have the result be identical on every machine because it is backed by a cryptographic hash.
Another thing we wanted to enable in Rocket was running the download/import steps without being root. For example, today you can download and import an image from disk in the native ACI format with rkt, and in the next release `rkt fetch` will be runnable as a user in the same unix group as `/var/lib/rkt/`.
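As a rough sketch of what points (1) and (2) could mean in practice, here is an illustrative Python fragment. The URL template is a hypothetical one loosely modeled on the appc "simple discovery" idea, not the exact spec rules, and the image-ID helper just shows why a hash-backed ID is the same on every machine:

```python
import hashlib


def discovery_url(name, version="1.0.0", os="linux", arch="amd64"):
    """Map a DNS-federated image name like 'example.com/worker' to a
    fetchable HTTPS URL. Illustrative template only, not the real
    appc discovery algorithm."""
    return "https://{name}-{version}-{os}-{arch}.aci".format(
        name=name, version=version, os=os, arch=arch)


def image_id(aci_bytes):
    """Derive a content-addressed image ID from the image bytes.
    Because the ID is a hash of the content, 'run sha512-...' refers
    to exactly the same image on every machine."""
    return "sha512-" + hashlib.sha512(aci_bytes).hexdigest()


url = discovery_url("example.com/worker")
iid = image_id(b"pretend this is an ACI tarball")
print(url)  # https://example.com/worker-1.0.0-linux-amd64.aci
print(iid[:19])
```

The point of the second helper is the determinism: two machines hashing the same bytes get the same ID, so the ID itself verifies the image, with no trusted registry needed.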
Not sure I want containers to be successful (unless of course the main business is building and marketing containers). I want my problems solved, but whether they're solved with containers, mocks, jails, VMs, and so on doesn't matter as much.
- fresh state / no performance penalty (AMI + autoscaling)
- document dependencies, don't reinvent them (Packer file)
- always use the same standard port (easier with VMs, as the mapping is 1:1)
I know most people think that containers/docker/whatever new stack does these things better and they may be right. The benefits however don't outweigh the costs in weaker toolset and less mature stack.
For my use cases, the biggest problem is that containers don't solve the "where does this run" question. Whenever I ask this, people loudly exclaim "anywhere!" which is the same as "I don't know" to me.
AWS AMIs run in 11 regions x N AZs around the world. This solves a much bigger technical problem for me than "it's lighter weight and easier to do incremental releases on top of" which seem to be the only things in favor of containers.
Many people, including Amazon, say "run containers on VMs!" This seems unnecessarily complex for little additional gain.
I'm really curious if the containerization folks are using Packer and if not why not.
I am not locked into a 1:1 tenancy between applications and instances (though I could have it if I wanted). Multitenancy is trivial. I can spin up new instances of my applications to combat spike loads or instance failures in single-digit seconds rather than minutes. My developers can run every container within a single boot2docker VM instead of incurring the overhead of running six virtual machines. Integration testing is easier because my test harness doesn't have to fight with Amazon; it can use a portable, cross-service system in Mesos.

In addition, I don't have to autoscale with the crude primitive of an instance in production. Multitenancy means I can scale individual applications within the cluster up to its headroom, and only when the entire cluster nears capacity must I autoscale. I can better leverage economies of scale while bringing more vCPU power to the applications that need it: running two dozen applications on a c3.8xlarge is very unlikely to give any one application less computational performance at a given moment than running each application on its own m3.medium.
I could do this without containers and with only Mesos. It would be worse, but I could do it. I could not do this at all with baked AMIs and instances without spending more money, doing more work, and being frustrated by my environment. I know this: I've built the same system you describe (I preferred vagrant-aws because when something broke it was easier to debug, but we moved to Packer before I left), and I would never go back to it. It was more fragile and harder to componentize than a containerized architecture with a smart resource-allocation layer. The running context of a container should be "anywhere", it should be "I don't know", and your caring about that is a defect in your mental model.
As container runtimes, Rocket and Docker have different design goals, though. As one example, Rocket is designed to be a standalone tool that can run without a daemon and works with existing init systems like upstart, systemd, and sysv. We needed this standalone property because on CoreOS containers are our package manager, and a daemon can get in the way of doing that correctly.
It is OK that Docker and Rocket have different design goals; both can exist for different purposes. But I think we can share and converge on an image format that can be used by multiple implementations and that includes cryptographically verifiable images, simple hosting on object stores, and a federated, DNS-based namespace for container names.
1) As you very well know, Docker is already working on cryptographic signature, federated DNS based namespace and simple hosting on object stores. If you "would like to see convergence", why didn't you join the effort to implement this along with the rest of the Docker community? The design discussion has been going on for a long time, the oldest trace I can find is at https://github.com/docker/docker/issues/2700 , and the first tech previews started appearing in 1.3. Yet I can't find a single trace of your participation, even to say that you disagree. If you would like to see convergence, why is that?
2) You decided to launch a competing format and implementation. That is your prerogative. But if you "would like to see convergence", why did you never inform me, or any other Docker maintainer, that you were working on this? It seems to me that, if your goal is convergence, it would be worth at least bringing it up and test the waters, ask us how we felt about joining the effort. But I learned about your project in the news, like everybody else - in spite of having spent the day with you, in person, literally the day before.
3) Specifically on the topic of your pull request (which we also received without any prior warning, conveniently on the same day as your blog post). So now we have 2 incompatible formats and implementations, which do essentially the same thing. Once we finish our work on cryptographic signature, federated DNS-based naming, etc., they will be functionally impossible to distinguish. How will it benefit Docker users to have to memorize a new command-line option, to choose between 2 incompatible formats which do exactly the same thing? I understand that this creates a narrative which benefits your company, CoreOS. But can you point to a concrete situation where a user's life will be made better by this? I can't. I think it's 100% vendor posturing. Maybe it's bad PR for me to say this. But it's the truth. Give me a concrete user story and I will reconsider.
User here. I couldn't care less about a new command-line option, but it would be worth a lot if I could run any image on any platform.
If you claim this is "all about the user" then talk more about what the user gains or loses.
Is the biggest downside really just another command-line option? Docker already has a metric fuckton of command-line options, what's one more?
Impugning the motives of your competitor is at best an irrelevant distraction, and at worst an indictment of your own motives.
That technology exists; it is called a VM. Any platform that supports x86, for example, will run any x86-compatible image. You can use wrappers and scripts like Vagrant on top of it.
Or if you want all hosting managed as a pool of resources (storage, CPU) try something like oVirt.
I was responding to one reason -- which is "running any image on any platform".
> why people prefer containers to VMs for some types of workloads
Sure, but there are no magic unicorns underneath; knowing what you get from a technology requires some understanding of how it works. Saying things like "I want it very lightweight but I also want it to run any image" is asking for a trade-off, or a complicated multi-host, capability-based platform.
- you can simply use Dockerfiles and build your own images,
- apparently you can host your own registry,
- you can even use a service run by CoreOS, i.e. Quay, to host your Docker images.
I'm not sure I understand what you mean by "I really don't want to run that software". Does it mean you don't want to use Docker?
It's maddening, because I love Docker-the-concept but not Docker-the-implementation nor Docker-the-ecosystem. I honestly do understand how many would find the UX of "Docker, Inc. at the center of things" to be a refreshing convenience, but to me that notion is frustrating and repellent, as much so as if Git remotes defaulted to GitHub.
Is there something I'm missing that you couldn't just use wget? If you have the URIs, I can't imagine how pulling down an image by name would be more than a quarter-page Python script, even if you include the untarring and such.
They've made it much harder than necessary.
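For the sake of argument, the "quarter-page Python script" might look something like this minimal sketch. It assumes you already know a direct URI to an image tarball (no registry protocol, no auth, no manifest resolution, which is exactly what real registries add on top):

```python
import tarfile
import urllib.request
from pathlib import Path


def fetch_image(uri, dest_dir):
    """Download an image tarball from a known URI and unpack it into a
    rootfs directory. Deliberately naive: no registry API, no auth,
    no layer handling -- just a direct URL to a tarball."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    tarball = dest / "image.tar.gz"
    with urllib.request.urlopen(uri) as resp, open(tarball, "wb") as out:
        out.write(resp.read())
    with tarfile.open(tarball) as tf:
        tf.extractall(dest / "rootfs")
    return dest / "rootfs"
```

This works with any scheme urllib can open, including file:// for local testing. The gap between this sketch and a real `docker pull` is the registry protocol itself: content negotiation, layer manifests, and authentication.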
By the way, I don't have a dog in this race and am not rooting for either side. Purely from a technical and resource-use perspective, the fragmentation is now starting to feel like something mostly driven by public relations and marketing. As someone who tries to use the best tool for the job, I now have no compelling reason to choose either format and runtime, which means I'm just going to wait it out. Both sides are going to lose contributions from independent open-source developers, because their effort is going to be wasted.
And that's where the problem is. I can very much understand why Docker sees holding onto that as a great advantage to them, but it's not an advantage to me as a user.
The federated nature of image identity that CoreOS is pushing for is a direct challenge to the special status Docker has given index.docker.io, a status they have strongly resisted attempts to change.
I don't care much if Rocket or Docker "wins", but I really hope the App Container federated approach does.