Not having to run the Docker daemon will also be pretty nice. Currently, when I upgrade Docker, all containers on the host have to be restarted because of the daemon dependency. So to maintain uptime with Docker containers today, you'd better be running your stuff clustered (e.g. via Mesos/Marathon).
Standalone containers were something I felt rkt (the alternative container runtime from the CoreOS people) got right, so it's nice to see that carrying over here through the collaboration.
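For anyone who hasn't looked at it yet: the daemonless model really is just a binary run against an on-disk bundle, i.e. a root filesystem plus a `config.json`. Here's a rough sketch of what that config might contain (field names are from the OCI runtime spec as I understand it; consult the actual spec for the full, current schema):

```json
{
  "ociVersion": "0.1.0",
  "process": {
    "terminal": true,
    "args": ["sh"],
    "cwd": "/"
  },
  "root": {
    "path": "rootfs",
    "readonly": true
  }
}
```

With a bundle like that on disk, `runc run <container-id>` starts the process directly, with no long-running daemon in the picture to restart out from under you.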
Microsoft implementing the Linux system call ABI, now that would be truly amazing. I guess it's possible (FreeBSD does some of this with its Linux binary-compatibility layer), but I guess this is not that?
If some book ever devotes a page to the history of server software development, I believe it will mention this standard.
I've thought of Docker containers for a long time as gigantic statically linked binaries. This isn't necessarily a bad thing (though it does present issues). In some ways the process of installing the different moving pieces of a service and configuring them is a bit like manually "linking" something -- sub-services like MySQL, Redis, etc. are analogous to libraries.
Now what we're seeing is a runtime for this binary being ported around to different platforms. This could get interesting.
With x86 being the primary server/workstation workhorse (let's exclude mobile platforms for the moment), is all of this abstraction necessary? Working in infrastructure and DevOps, I definitely see the benefit of containerization for build reproducibility and for decreasing the friction developers face in running applications locally the way they run in production.
Not everyone needs Borg/Mesos/Mesosphere. Not everyone needs containerization. When you have a hammer though, everything starts looking like a nail.
Insofar as libcontainer can cram in as many ways to implement OS-level virtualization across platforms as possible, that is.
Now clearly this is different from traditional full operating-system virtualization à la KVM/Xen, but it is absolutely still a form of virtualization. Just because something is different doesn't make it wrong.
The other part of the problem is that this is a rapidly evolving space, with a ton of money and attention being poured into engineering and competitive battles, and not so much into marketing and clear explanations. runC is in part a symbolic "bury the hatchet" moment for a public feud over standards with CoreOS and others that began in December 2014. If you haven't been following the inside baseball, it's all kind of confusing.
Another key bit of infrastructure moves to a safer language.
"This is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning."
Docker makes such prodigious and repeated use of "unsafe" that it makes one wonder what, if anything, Go has bought them in terms of safety or reliability. Virtually every interface in the Docker source code seems to rely on unsafe casts to pointers, and I'm not sure how well the Go garbage collector meshes with casting pointers and handing them to the kernel and other libraries.
Something like Rust's native ability to ensure a pointer provided to external code lives long enough would be very useful.
Interesting times …
Yeah, isn't it great? More pragmatism, less BS.
Sometimes I wonder how well this sort of sales behaviour actually works, but ultimately I guess they create their own ecosystem by requiring specialized techies to translate even the sales pitch. It promotes a certain response from certain managers: "it has got to be super-hyper-vigilantly efficient if I can't even understand what it's good for, plus I can wash my hands of it professionally if it fails, because anyone in their right mind can see the necessity of having someone else evaluate the usefulness of a system this incomprehensible."
I would expect native graphics (including 3d graphics) are a feature we'll see down the road.
> CoreOS remains committed to the rkt project and will continue to invest in its development. Today rkt is a leading implementation of appc, and we plan on it becoming a leading implementation of OCP. Open standards only work if there are multiple implementations of the specification, and we will develop rkt into a leading container runtime around the new shared container format. Our goals for rkt are unchanged: a focus on security and composability for the most demanding production environments.
Yes, but even ACI and rkt were being worked on for months before the announcement.
Based on the history between Docker and ACI/appc, I'd wager Docker didn't want to fully submit to a standard they had zero input on (I should note they elected to have zero input: they thumbed their nose at it and declared their own "open" standard).
It's likely this new standard was the CoreOS team compromising with Docker to include them in the circle. Ultimately it will yield portable containers and a better ecosystem. A big win for the community and users.
As has been said before, an Open Standard is far superior to an Open Implementation.
Yes they were, but they really should not have been. Docker had started with a rough draft of an open specification, but then removed it. Every time the idea was brought up, it was shot down. Docker viewed it as a strategic move to not have an open spec that anyone could implement and use container images with.
However, with such a critical piece of technology, it was unreasonable to expect no one else to want it to have a common standard format which could be interchangeable with other container runtime implementations.
So I suspect the "surprise" was more "anger" than anything.
> As far as I can tell, the Docker folks weren't consulted when CoreOS was developing Rocket and App Container.
Right, they were not consulted prior to its public release. However, ACI was an open standard and actively requested contributions to help shape it. ACI was very early when it was released and needed implementations to help shake out the bugs. At that point, Docker actively refused to participate, and instead a week later cooked up their own "open" implementation, which was nothing more than rough documentation of how Docker behaves internally (not something a person could write an implementation against).
Initially Docker staffers seemed excited to contribute, but it got shot down quickly by shykes.
See these github issues:
And there was shykes' rampage when CoreOS made the initial announcement, which began with this post:
So it's not surprising that, fast-forwarding to today, concessions had to be made for Docker to save face and not appear to cave in to public demand for a standardized open format that no single for-profit organization controlled entirely. Now both parties get good PR for working together and settling their differences, and the community gets a truly open standard from which, I expect, a great many implementations will arise.
Are layers not part of the opencontainers spec, or is the sample just missing that bit?
This is by design: it turns out layers, as implemented by Docker, are not the only way to download or assemble a container. The appc specification uses a different, incompatible layer system. Many people like to "docker import" tarballs, which bypasses layers entirely. The next version of Docker is gravitating towards pure content-addressable storage and transfer (à la git), possibly eschewing layers entirely.
runC is not concerned by how you got a particular runnable object. It's only concerned with its final, runnable form. The result is a nicely layered design where different tools can worry about different levels of abstraction.
In this way, the notion of layers is external to the container spec.
Again I'm no expert, so corrections welcome.
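If it helps to picture the content-addressable direction, here's a minimal sketch (my own illustration, not Docker's implementation) of git-style storage: a blob is named by the hash of its own bytes, so identical content dedupes automatically and any digest mismatch means corruption.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// store maps digest -> blob: the essence of content-addressable storage.
var store = map[string][]byte{}

// put saves a blob under the hash of its own contents and returns the
// digest, which is the only handle needed to retrieve it later.
func put(blob []byte) string {
	digest := fmt.Sprintf("sha256:%x", sha256.Sum256(blob))
	store[digest] = blob
	return digest
}

// get retrieves a blob and reports whether it still matches its digest.
func get(digest string) ([]byte, bool) {
	blob, ok := store[digest]
	if !ok {
		return nil, false
	}
	return blob, fmt.Sprintf("sha256:%x", sha256.Sum256(blob)) == digest
}

func main() {
	d := put([]byte("layer contents"))
	blob, ok := get(d)
	fmt.Println(ok, string(blob))
}
```

Whether Docker keeps layers on top of a scheme like this or drops them is an open question, but either way the digest, not the layer graph, becomes the unit of identity.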