The amount of wasted development effort caused by Docker's willful intransigence on this is sort of staggering. The teams I work with using docker are still tripping over new nonsense that would have been solved by... I don't know, using systemd... after almost a year of trying to work through the kinks.
Docker's design decisions make sense if you're shipping statically linked, standalone binaries everywhere. Which I suspect docker.io and many other people are doing. But that's also sort of a boring edge case where you don't even need filesystem namespaces except for cleanliness.
My personal opinion is that the winner will be the group that successfully gets Enterprises to change their workload design. I also don't think that Rocket is necessarily a superior format to Docker, but I think they're both dealing with the recognition that any big change in Enterprise behavior represents an opportunity for value capture.
There's a real question as to where any of these abstraction layers fit in if Docker wins (and there's some possibility Docker is going to win). If that's the case, CoreOS doesn't want to look back a few years down the road and wish they'd been working on a container format.
It's 2015. One of the battlegrounds for enterprise dollars is containers. It's going to be a delightful thing to watch.
It's also worth noting that many companies have their own cgroups implementations which are neither Docker- nor Rocket-based. I rather like the position of dispassionate observer in this war (at Terminal we run all of the containers, and also apps without containers).
See for example Garden (née Warden), which is the basic building block of Cloud Foundry.
Could anyone explain, or link to, unbiased comparisons of the two, please?
One example: I was away from the office a couple of days ago and one of the devs had to push to our staging servers instead of me. He logged in, and couldn't pull the image from Docker Hub. The error message said the image was not found, which was demonstrably wrong, because he could pull it on his own machine. You just have to know that "image not found" can mean both "image not found" and "you haven't run 'docker login' yet" - the "image not found" error is what you get for an auth failure!
There's stuff like this all through Docker, and I can see where the Rocket guys are coming from - Docker is spread too thin trying to do too much. Lots of corners get cut.
404s are pretty common for that. Github, for example, returns "not found" pages for repos you can't access.
Edit - however the login process is quite awkward, though set for improvement.
A similar reasoning is behind why you get a 403, not a 404, when you try to get the index of an empty S3 bucket. Sure, it doesn't exist—but you're also not allowed to know that.
> The 404 (Not Found) status code indicates that the origin server did not find a current representation for the target resource or is not willing to disclose that one exists.
In my example above, a 403 gives the correct nature of the fault without revealing any hidden information, whereas a 404 is demonstrably misleading.
Well, the RFC says a 404 basically means "this item doesn't exist, or I can't tell you whether it does". If you ignore part of the definition, then sure, it doesn't make sense. Include that last part, and of course it makes sense for this case!
If you returned 403s, I can see people complaining that they should have access to their own images, having logged in and checked their password and so on, only to find out they'd spelled the name wrong. A 403 also does not seem, to me, to cover the case where an item does not exist, whereas a 404 definitely covers the case where it exists but the server can't disclose that fact.
Really the solution here would have been to, when seeing a 404, say to the user:
"The image iancal/thing either does not exist or you do not have access to see it. If you believe the image exists, please ensure you are logged in and have appropriate access rights"
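The suggested behavior could be sketched as a small client-side helper; the function name and exact wording here are illustrative, not anything the Docker CLI actually ships:

```python
def explain_pull_failure(status: int, image: str) -> str:
    """Turn an ambiguous registry status code into an actionable message.

    Hypothetical helper: a 404 from the registry is deliberately ambiguous
    (missing vs. hidden), so the client should say so instead of flatly
    claiming the image does not exist.
    """
    if status == 404:
        return (
            f"The image {image} either does not exist or you do not have "
            f"access to see it. If you believe the image exists, please "
            f"ensure you are logged in and have appropriate access rights."
        )
    if status == 403:
        return f"You are not allowed to access {image}."
    return f"Unexpected status {status} while pulling {image}."
```

The point is simply that the ambiguity lives in the protocol, so the place to resolve it is the error message, not the status code.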
A 403 for a resource that exists but is unauthorised leaks the information that the resource exists.
Many Github customers don't want people to be able to guess at their private repos, and the 404 is the only code that is legitimately able to express the union of "not here" and "not here because you're not allowed to know it's here".
> request "foo" => 403
> request "bar" => 200
> request "baz" => 403
> request "qux" => 403
> request "lkj3fla3kjf1ljf3jf" => 404
The "hiding the existence of resources" purpose has to be carried by something. The RFC says it's carried by 404, and that's that.
People tend to wrap it with supervisord which does a better job at tracking child processes.
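For reference, a minimal supervisord stanza looks something like this; the program name and paths are made up:

```ini
; Illustrative supervisord config: supervisord runs as the container's
; top-level process and restarts the child programs listed here.
[program:myapp]
command=/usr/local/bin/myapp --foreground
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/myapp.log
```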
I must say, however, that I don't like the pod concept too much; it locks you in a bit.
Additionally, and on a different note, the deployment systems, while they work fine when you have the time to do things right, don't work so well in practice.
All companies that I know of with 1k employees or fewer (i.e. most) basically don't update anything automatically, because the redeployment might still fail. And of course, manual labor costs a lot of human resources.
We still need a better way to separate the system update process from the deployment, settings, and app.
They didn't make it up, it's straight from Kubernetes. Presumably the Docker team will wind up incorporating something similar soonish, as well.
That said, it's basically a fancy pants way of saying "we are going to run multiple processes in the same namespace", and if you're using runit like so many people are, you're already doing it.
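For concreteness, a pod in Kubernetes terms looks roughly like the following; the names and images are invented, and the manifest uses the v1 API shape:

```yaml
# Hypothetical two-container pod: both containers are scheduled together
# and share the pod's network namespace, so they talk over localhost.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger
spec:
  containers:
    - name: web
      image: example/web:latest
    - name: log-shipper
      image: example/log-shipper:latest
```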
If you want multi-host deployment, it's similar to CloudFormation. You can still use multiple pods to compose a distributed app.