* Despite Brandon Philips (CoreOS CTO) serving on the Docker governance board, Docker has aggressively expanded their scope well beyond their original container manifesto.
* CoreOS believes the Docker runtime is now too unwieldy and "fundamentally flawed"; the unwritten word that really sprang to mind was that Docker was getting "greedy."
* CoreOS reaffirms their original operating model of being capable of running their infrastructure on and with Docker.
* Rocket is CoreOS's answer to stay true to the "simple composable building block" mantra.
But crucially, they also crossed the business models of many startups (including CoreOS, Weave, Flocker, etc.) that rely on Docker maintaining an Open Platform. So this is an entirely logical response.
I'll be surprised if Docker doesn't now respond by unveiling an 'enterprise' Docker version that basically just strips away the unnecessary features and has more security by default. The enterprise market is too valuable to let slip away like this. Your move...
A number of third parties had begun work on various (sometimes proprietary) orchestration and management systems for creating a reliable/scalable/easily manageable cluster with Docker as a building block. CoreOS is one. But Docker is pushing towards an official, open-source orchestration/management system that threatens to make all of those companies irrelevant.
IME examining Docker, this is actually the hard problem.
I think Docker orchestration and CoreOS can coexist - if I had to use CoreOS to get the goodness of Docker, then systemd-nspawn would come and eat Docker's lunch.
I wish Docker would bless one of Ansible/Chef as the official orchestration base and take it forward. I really don't want to learn something Docker-specific.
Ansible/Chef orchestration IMHO solves a very different problem than container orchestration.
I agree that Chef and Ansible are different from container orchestration today - especially when you look at low-level stuff like networking, mounts, etc. But I guess what I was saying is that it would not be hard to add these features to them.
They already have a specification format that works well and check for idempotency at its core. Unless you mean something like etcd is fundamental to container orchestration, which I don't believe it is (we run a couple of containers in production using Fig)
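For reference, Fig's specification format (Fig later evolved into Docker Compose) looked roughly like this; the service names and images below are hypothetical, not from the commenter's actual setup:

```yaml
# fig.yml — a minimal two-service sketch
web:
  build: .
  ports:
    - "8000:8000"
  links:
    - db
db:
  image: postgres:9.3
```

Running `fig up` would then build and start both containers together, which is the kind of declarative-plus-idempotent workflow the comment is describing.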
This was where I first started worrying about CoreOS and Docker divergence.
It had no etcd in it and the POC was implemented as part of the Docker API/CLI, as best I recall. There were significant questions in the discussion about etcd not being there.
What seemingly gets mixed up by quite a few commentators on this topic:
Docker is an orchestration, deployment, management, etc. solution - the "container" itself is created by LXC, jails, libvirt, or other OS features, and now also libcontainer.
This discussion also shows how far away we are / how early it is with "containerization", i.e. containers that are exchangeable/movable between different (OS) environments - we are discussing the companies that are building cranes to load and unload the boxes before we even have an understanding of how the boxes will really look.
I wonder if that comes from the partnership with Microsoft.
It has indeed surprised me how quickly a normally-slow-to-accept-new-things community has adopted Docker (even well before it was considered "stable").
I think you're referring to the sysadmin community - but I think the driver for this has been the search for deployment nirvana. Deployment is a much more fragmented field, so it makes sense that a good solution would find fertile ground.
Docker seems decent, but I don't think I want them to do orchestration.
You are not "locked in" by Docker Inc if you are using Docker just like you aren't locked in by Github if you are using git.
A much more accurate analogy would be you are not "locked in" by Oracle if you are using MySQL. It may be true today, but no guarantee that will always be the case.
Docker Inc. seems to make a lot of effort to ensure Docker is a truly open project. I get the feeling that people think handing your project to Apache is the only way to prevent vendor lock-in these days.
Yes, but I don't have the skill, time, resources, or will to maintain MySQL if and when Oracle goes evil (I mean more evil than now ;-)).
This is why I choose carefully what companies / groups I depend on for my future computing needs.
It's in a business's best interest, and exceedingly common practice, to "land and expand" with something clear and compelling, and following that add features to compete with alternative solutions. I don't think there's anything inherently altruistic about CoreOS that would keep Rocket lean in the long-run, especially as they begin migrating their various tools away from Docker containers.
It became pretty clear once dotCloud became Docker Inc. that they intended to capitalize on the "Docker" brand to sell an integrated orchestration platform. CoreOS already has enterprise customers for their operating system and related components. They seem like the perfect team to take this challenge on.
What features were recently introduced that increased Docker's scope?
I hope you can understand that it's frustrating when, after hard work pitching an API to dozens of ecosystem players, spending weeks trying to wrangle a working implementation which makes as many of them as happy as possible, without compromising integrity of design - after all that, in the end, all it takes is one unhappy camper to write a blog post and that immediately tramples everything else.
It's even more discouraging in this particular case, because after this blog post, Alexis and I have discussed this topic extensively, and as a result he has since joined the effort. In fact I will be hacking with him in person on integrating Weave as a native networking plugin in 2 days in Amsterdam.
So, sorry for the insta-snark. But it can be frustrating to see so much good will and hard work be crushed in a second.
No malice, just a friendly tip :)
Since then I've been mulling writing my own standalone 'drydock' utility that would just start a single container and then get out of the way (as opposed to the Docker daemon that insists on being the parent of everything). I'm optimistic that Rocket could be that thing.
Question though: Does Rocket have any concept of the image layering that Docker does? That still seems to me like a killer feature.
What do you think of the filesets concept?
Does Rocket just 'cp' files on top of each other to implement layering? It'd be nice to not require a bunch of copies of the same files. I thought that the hard link implementation in Docker's new overlayfs support was a smart idea.
My personal wish-list down this path includes:
* options in the relevant manifests on which layer is writable ... if I'm doing development on libfoo which is used by several different apps, let me make that layer writable so I can rapidly iterate integration tests and (bad practice) live coding on testing/dev servers.
* tools to help me smash a dev layer or 3 into a single fileset (and similarly dissect a layer into a few new filesets during a refactoring)
* the ability to use filesets and overlays in a way similar to how package management works now, but with extended features similar to Python's virtualenv.
One of the things I see as a boon of the filesets as described is: I can update parts of my system without having to rebuild the whole dang app silo from the get go. Combining this with some of the above features looks like it could be useful for making "thin" images - where I can build all my code in one place, and port only the binaries to the staging and deployment images, just by doing a few fileset/overlay tricks. (no more complicated scripts)
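The "pick which layer is writable" idea in the wish-list above maps fairly directly onto a plain overlayfs mount; here is a minimal sketch (paths are hypothetical, and it requires root plus a kernel with overlay support):

```shell
# Stack two read-only filesets, making the libfoo dev tree the writable layer.
mkdir -p /mnt/app /tmp/overlay-work
mount -t overlay overlay \
  -o lowerdir=/filesets/base:/filesets/runtime,upperdir=/filesets/libfoo-dev,workdir=/tmp/overlay-work \
  /mnt/app
# Any edits made under /mnt/app now land only in /filesets/libfoo-dev,
# leaving the base and runtime filesets untouched.
```

Swapping which fileset appears as `upperdir` is the "make that layer writable" switch; everything in `lowerdir` stays pristine.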
The curly braces and brackets can get ugly when nested.
It may seem stupid, but when you're in the terminal with vim and the directory path is long in some nested object/array, things are really hard to parse with your eyes.
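One low-tech way to make deeply nested JSON readable in a terminal is to pretty-print it before eyeballing it; Python's standard library can do this (the sample document below is invented for illustration):

```shell
# Pretty-print nested JSON so the bracket structure becomes visible.
echo '{"services":{"web":{"ports":[{"host":8000,"container":8000}]}}}' \
  | python3 -m json.tool
```

Piping through a formatter like this (or `jq .`) turns a one-line wall of braces into an indented tree.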
If you still don't believe me, try implementing a conforming YAML 1.2 parser (oh, and try not to make it a source of RCE ...).
Developers were offered strace logs with no feedback, and it was finally fixed by someone from outside the project: https://github.com/docker/docker/issues/7348
Port allocation breaks every other Docker release: https://github.com/docker/docker/issues/8714
Even the simplest things, like allowing more than one Dockerfile in a folder.
Docker has its own agenda, and it is becoming clearer and clearer.
Wow. That issue has been open for a long time.
There was a similar disconnect on data volumes a while ago that took some convincing before the discussion moved forward. It has always been trivial in OpenVZ to bind mount into a container (ie: share a very large read-only mount between containers).
It gets quite interesting, and it's still going 14 months later...
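For comparison, the Docker feature that eventually covered this is volume bind mounts; a sketch of sharing one large read-only host directory between two containers (paths and names are hypothetical):

```shell
# Mount /srv/shared read-only into two containers at the same time.
docker run -d --name app1 -v /srv/shared:/data:ro busybox sleep 1000
docker run -d --name app2 -v /srv/shared:/data:ro busybox sleep 1000
# Both containers see the same files at /data without any copying.
```

The `:ro` suffix is what makes this the OpenVZ-style "share a very large read-only mount" pattern the comment describes.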
The Docker image registry and image management should really be a separate program as well - that is a huge pain point that Rocket seems more likely to get right.
I think, all in all, CoreOS has built a ton of tools to make using Docker easier, and they're all very well defined and composable. I'd even say that a lot of Docker's features could be completely replaced by using some of these tools.
Links? Nah, just use IPs/DNS + etcd for service discovery.
Networking? You need only very basic bridged networking, and flannel will handle communication, single-host or multi-host.
Deployment? Use fleet.
Not that all these are 100% perfect like I've made them out to be, but any individual component could be swapped out if you want.
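For a concrete taste of the "Deployment? Use fleet." point, a fleet unit is an ordinary systemd unit plus fleet-specific scheduling hints; the unit name, image, and metadata below are hypothetical:

```ini
# myapp.service — submitted to the cluster with `fleetctl start myapp.service`
[Unit]
Description=My app container
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --rm --name myapp -p 8080:8080 example/myapp
ExecStop=/usr/bin/docker stop myapp

[X-Fleet]
# Only schedule onto machines tagged role=web (tag is an assumption).
MachineMetadata=role=web
```

Because it is just systemd underneath, the same unit works locally with `systemctl` minus the `[X-Fleet]` section, which is part of what makes the pieces swappable.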
I like some of the ideas behind the CoreOS tools, but until they start playing well with others, they're a non-starter for me. I'm not interested in tools that try to lock me into other, inferior, tools.
I'd like a tool that makes this linking easier outside of Docker, but for now this is one of the features I like about it (although holy moly do Docker links have a lot of baggage you have to bring along for the ride, like giving everything names).
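For readers unfamiliar with the feature: links require naming the source container and then declaring the dependency at run time, which is the "baggage" being referred to (image names are hypothetical):

```shell
# The database container must be given a name before anything can link to it.
docker run -d --name db postgres:9.3
# --link db:db injects DB_PORT_5432_* environment variables and an
# /etc/hosts entry for "db" into the web container.
docker run -d --name web --link db:db example/web
```

Everything downstream of a link needs a stable name too, which is how the naming requirement spreads through a deployment.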
I don't like the sound of locking into one vendor for everything.
(I only ask because it doesn't look like it is your core business)
To clarify, I don't think there's anything inherently wrong with what Docker's doing, but it is at odds with an entirely open, pluggable system. It doesn't make any sense for their business model to truly make it easy to just use their containers and none of the revenue-generating offerings.
It's great to see this problem broken up into reusable pieces though. It totally makes sense to function without a daemon, especially out of the box.
hm.. I don't think that's a given at all! There have been many issues with setuid-root programs. And I've seen that the OpenBSD guys favor privilege separation by breaking up daemons into several parts that communicate using a very strict set of commands. For example, a dockerd that does most of the work but talks to another daemon (dockerd-root) when it needs to do anything privileged.
Initial effort, 2002:
http://www.openbsd.org/papers/openssh-measures-asiabsdcon200... - Page 16 ->
So then, I guess docker could just run the two servers, one internal as root and one public as not? That's a pretty quick fix.
Wrong. With a server, the only thing an attacker has control over is its input. With a setuid-root binary, they still have control over its input, but they also have control over the entire environment under which it executes, including many things that developers generally assume an attacker can't control. Setuid binaries are incredibly scary from a security perspective and much harder to get right than servers.
From one point of view, I'm thinking "why did coreos need to be so aggressive?", and "boy, what a gift Solomon Hykes did to coreos by mismanaging this thing so badly", and "man, all of these guys look sort of immature to me".
From the other point of view, I'm respecting Docker and CoreOS even more, as open source projects and as companies, because it feels like there are real people behind them.
If this is the new wave of enterprise companies, I really like it. These are people like us, that engage with us and sometimes screw up, without hiding it. They are doing great things, and the fact that they are a bit immature is actually great.
I'm an entrepreneur myself, I've done enterprise software my whole life, and I always thought it's a shame that companies in this space are so distant from their users and have such little humanity.
Looks like things are changing.
Also, even though it uses systemd to monitor and fork processes, a design goal is to run on all Linux distributions that have a modern kernel.
One thing that I believe Docker has failed at is in taking a purely declarative approach to image definition; rather than specifying the packages that are assembled/inserted to create the container, Docker ships around non-portable Linux binaries.
I can see where for SmartOS & Windows the Docker approach is more flexible. If someone has already settled on Linux, but they have their own ideas about how to manage containers within Linux that have nothing to do with CoreOS, the Rocket model is going to leave them much more flexibility.
Disclaimer: while I work on CF, I'm not that close to the Warden nitty-gritty.
1) Competition is always good. LXC brought competition to OpenVZ and VServer. Docker brought competition to LXC. And now tools like LXD, Rocket, and nspawn are bringing competition to Docker. In response Docker is forced to up its game and earn its right to be the dominant tool. This is a good thing.
2) "disappointed" doesn't even begin to describe how I feel about the behavior and language in this post and in the accompanying press campaign. If you're going to compete, just compete! Slinging mud accomplishes nothing and will backfire in the end.
3) if anyone's interested, here is a recent exchange where I highlight Docker's philosophy and goals. Ironically the recipient of this exchange is the same person who posted this article. Spoiler alert: it tells a very different story from the above article.
https://twitter.com/solomonstre/status/530574130819923968 (this is principle 13/13, the rest should be visible via Twitter threading)
EDIT: here is the content of the above twitter thread:
1) interface to the app and developer should be standardized, and enforced ruthlessly to prevent fragmentation
2) infrastructure should be pluggable and composable to the extreme via drivers & plugins
3) batteries included but removable. Docker should ship a default, swappable implementation good enough for the 80% case
4) toolkit model. Whenever it doesn't hurt the user experience, allow using one piece of the platform without the others.
5) Developers and Ops are equally important users. It is possible and necessary to make both happy.
6) If you buy into Docker as a platform, we'll support and help you. If you don't, we'll support and help you :)
7) Protect the integrity of the project at all cost. No design decision in the project has EVER been driven by revenue.
8) Docker inc. in a nutshell: provide basic infrastructure, sell services which make the project more successful, not less.
9) Not everyone has a toaster, and not everyone gets power from a dam. But everyone has power outlets. Docker is the outlet
10) Docker follows the same hourglass architecture as the internet or unix. It's the opposite of "all things to all people"
11) Anyone is free to try "embrace, extend extinguish" on Docker. But incentives are designed to make that a stupid decision
12) Docker's scope and direction are constant. It's people's understanding of it, and execution speed, that are changing
13) If you USE Docker I should listen to your opinion on scope and design. If you SELL Docker, you should listen to mine.
If they just quietly gave an ambiguous non-disparaging statement like "we're forking because we're unhappy with the direction Docker is taking", it would seem frivolous and ill-considered, and nobody would know on what points the fork would be aiming to distinguish itself.
This statement needs to be made, the way it was made, for the same reasons any project announcement is made: it needs to announce that it exists, and why it exists. It's the same as Docker's "debut" blog post(s).
Every schism needs its 95 Theses, and the odds favor the ones who can read them, understand them, and take them into consideration.
Disclaimer (re https://twitter.com/kenperkins/status/539528757711622145): I make edits to my comments after posting, usually posting a line or two then fleshing them out over time. If I make a change that conflicts with a statement in an earlier revision, I'll note it: otherwise I'm pretty much just composing live.
As to everything else, I manage CoreOS clusters with Docker for now, and while this came out of the blue (seemingly for the Docker folks as well), I'm happy to see what happens as a result. I'm not sure why there are hurt feelings over the announcement; I didn't find anything particularly in bad taste, and what exactly is wrong with promoting your new product?
The CoreOS team isn't under any obligation to Docker to contribute however anyone on the Docker team wants them to. Even if these issues have been discussed before, they've clearly taken a different path, and that's within their rights; I'm not sure where mud is being slung. Where this will lead, who knows, but hopefully there will still be good collaboration between different groups as they pursue their own goals.
EDIT: I haven't actually looked at the code, so if somebody wants to prove what I'm saying wrong please do. I'm basing what I know off the announcement.
Often, starting from scratch is better. This is especially true when the goals or philosophy of the two projects are fundamentally different and incompatible, even if they perform similar tasks. Again, the Linux vs. Windows example applies.
That said, you're on point: this is forking the community. A hard fork, too.
"Unfortunately, a simple re-usable component is not how things are playing out. Docker [much to our dismay] now is building tools for launching cloud servers, systems for clustering, and a wide range of functions: building images, running images, uploading, downloading, and eventually even overlay networking, all compiled into one [big and nasty] monolithic binary running primarily as root [how insecure is that?] on your server. The standard container manifesto was removed [those flip-floppers!]. We should stop talking about Docker containers, and start talking about the Docker Platform [since we can focus attention on our efforts that way]. It is not becoming the simple composable building block we had envisioned [which puts our offerings at a disadvantage]."
"We still believe in the original premise of containers that Docker introduced, so [unlike those silly Docker people] we are doing something
"the Docker process model ... is fundamentally flawed"
"We cannot in good faith continue to support Docker’s broken security model..."
Or, taking the announcement as intended, "We were interested in the direction Docker started in, they have since pivoted. We were more interested in the direction than Docker itself".
Yes, there is some mild-mannered disparagement in the announcement, but it's hard to characterise it as 'slinging mud', and it's not really fair to disparage it with the name-calling you're injecting.
Right now I'm already taking a Dockerfile, exporting it to a tar, and then running systemd-nspawn -- I love Dockerfiles, I love being able to grab a postgres server and get it up quickly from Docker Hub, but I didn't need or want the rest of docker.
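That workflow, spelled out as a sketch (image name and target path hypothetical; requires Docker and systemd-nspawn installed):

```shell
# Build the image, flatten it to a plain filesystem tree, then boot it
# with systemd-nspawn instead of the Docker daemon.
docker build -t myimg .
cid=$(docker create myimg)
mkdir -p /var/lib/machines/myimg
docker export "$cid" | tar -x -C /var/lib/machines/myimg
docker rm "$cid"
systemd-nspawn -D /var/lib/machines/myimg /bin/sh
```

Note that `docker export` flattens all the image layers into a single tree, which is exactly the "use Dockerfiles without the rest of Docker" trade-off being described.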
If both Docker and Rocket support ACI, then you have a composable image layer, and that means people aren't locked into either ecosystem just to build images of their applications.
ACI :: Docker-tar-format to me is like QCOW2 :: VMDK. Wouldn't it be cool if projects like Packer didn't have to exist, because the image format of Virtual Machines was open and documented as an independent standard?
 - https://www.packer.io/
However I think it makes more sense to do this on the actual Docker format which everyone already uses... That way you get the benefit of increased openness without the drawback of fragmentation. I have the impression I've been pretty vocal in asking for help in making this happen, and wish these guys had stepped in to help instead of forking. I pretty distinctly remember pitching this to them in person.
So, I'll re-iterate my request for help here: I would like to improve the separation between the Docker runtime and packaging system, and am asking for help from the community. Ping me on irc if you are interested.
Whether the work on a standard container format happens inside or outside of Docker, it would result in a format presumably a bit different from how Docker containers are now (e.g. not overlay-layered by default, since most build tooling wants to just output an atomic fileset.) And either way, work would then occur to make Docker support that standard format.
The only real difference is that, in this approach, the ecosystem also gets a second viable runtime for these standard containers out of the deal, which doesn't seem like a bad thing. You can't have a "standard" that is useful in a generic sense without at least two major players pulling it in different directions; otherwise you get something like Microsoft's OOXML format.
One could have just as easily said the same thing when the Docker format was introduced. The OpenVZ template format works well and is very similar to the proposed ACI format. The Docker format hasn't been without issue/problems.
 - http://en.wikipedia.org/wiki/Open_Virtualization_Format
Is what I should have quoted in my reply. Nobody was talking about using one instead of the other. Though, it'd be easy enough to run, for instance, CoreOS from an OVF image to run containers from. Though, I feel like you and I are just stating the obvious at this point. Wouldn't you agree?
This'll be the last version of Parallels that I buy (thanks Yosemite)
Before Rocket/ACI there wasn't even a contender for Containers. Now there is a published spec. Start there. Iterate.
Frankly, shykes and other Docker employees shouldn't be commenting here. It only serves to make them look petty with any attempt of a "rebuttal" and, as shykes put it, "sling mud". CoreOS made a grand announcement, and yes it competes with Docker... but just let it play out.
Frankly, there are a lot of things Rocket aims to do that are more appealing to me. Security is one of them, and a standardized container specification is another. If anything, it will make Docker compete better.
Can you give three examples of this happening?
Interesting to see you resort to calling your users "trolls" simply because they feel it's not good for you, the head of Docker, to respond off-the-cuff and angry about a PR announcement from a competitor.
> that it is better to take the high road and refrain from answering, and let the company make an official answer
Your company already released an official announcement 2+ hours ago (with much of the same rhetoric as your post here). Seems you didn't even follow your own advice.
> I'm just calling you a troll, and it's for implying that
> a cabal of Docker employees somehow manipulates and
> suppresses the public conversation about containers for
> the profit of their employers.
For the first time in Docker's short history, its future and mission are being directly challenged. This is your response? (It won't be the last time Docker is directly challenged.)
Imagine if Microsoft went around rattling the cage every time Apple released some product -- it would make them look pretty petty pretty quickly. Just get out there and compete. Produce a superior product and the market will speak.
You mean like this? https://www.youtube.com/watch?v=eywi0h_Y5_U FIVE HUNDRED DOLLARS FOR A PHONE?
In all seriousness, you made a few blaming statements early in this thread, which is the most likely reason you got the reaction you did from Solomon. I'm not opposed to people making observations, but speaking for others really has no place here!
Specifically talking about the "PR machine" comment. Say what you mean!
In the spirit of making lists of things to say, I've got two.
1) Don't use Twitter for having long conversations and public fights. Just don't. No good will come out of it. Engaging in that is feeding the trolls and slinging mud, which you accuse the other party of doing.
2) Vis-a-vis "just compete!": how do you see this "competing" happening without an announcement like this? "We have created X container thingy"? OK, isn't it smart to compare to an existing container "thingy" right off the bat?
Imagine they didn't mention Docker. I can see you writing about "stealing of ideas", "lies", "not being straightforward", "this is just a Docker clone but they don't mention Docker, so they are being shady", and so on.
I encourage you to read the Twitter exchange I linked to. It predates all of this, and is not at all a fight. On the contrary, it is a constructive exchange, and I am using it to assert Docker's philosophy in a positive way.
> Vis-a-vis "just compete!". How do you see this "competing" happening without an announcement like this. "We have created X container thingy"? Ok, isn't it smart to compare to an existing container "thingy" right of the bat?
Surely it's possible to launch a competing tool without resorting to a press campaign like this one: http://techcrunch.com/2014/12/01/coreos-calls-docker-fundame...
> Imagine they didn't mention Docker. I can see you writing about "stealing of ideas", "lies", "not being straight-forward", "this is just a Docker clone by they don't mention Docker so they are being shady" and so on.
No, I would definitely not say that.
So, clearly stating their concerns about the direction your company has taken, and why they feel the need to create a competing solution is bad?
The only way that article can be considered "negative" is if your opinion is that Docker, Inc are the gods of containerisation and should be considered the be-all and end-all of solutions to container based software deployments etc.
I think it's safe to say that while your comments here made you feel better, they didn't help your position at all, regardless of how valid your points are.
Why can't we all work something out here?
The problem for the Docker folks was that they were making things into a much bigger deal than they otherwise were. By attacking the CoreOS announcement, both here in comments and in their blog, they only amplified the issue. Consider it a corollary of the Streisand effect.
You even alluded to this earlier in the day when you told them to: Don't do PR, just build the better thing.
There isn't anything here to "work out", and if there is something, it needed to not happen in public. People from Docker just needed to stop talking for a while and take a time out. They weren't helping their situation and didn't seem to get that.
What I felt as cynical about your post were things like:
>and made it seem like you were the leader of the market
I feel this is cynical because it's advocating not for facts and technical solutions, but arguably, willful misleading of the public. Docker should be open and honest about its software and its positions, not trying to create narratives where it 'seems' like you are something that you might not be.
>you just made the PR that much stronger for the CoreOS POV
The reason I advocated for not 'doing PR' is that, in my book, PR is an exercise in charade. Tell us what you feel, what you're working on, and why these things are good. Don't try to 'manage' appearances. If you have a problem with something, let it be known.
I think there are some things that might be able to be worked out. Docker and CoreOS/Rocket may be able to coexist. Rocket doesn't seem to have the tools to easily produce ACIs. Dockerfiles are widely used and pretty decent. Docker could focus on tooling while CoreOS handles execution. Both companies have contributed useful technology, and it's not exactly clear that one company can/should own the entire solution.
PR isn't just standing in front of a microphone and saying what you're working on or how awesome you are. It's how you act in public, how you treat customers and competitors. You want to be authentic, but you don't have to share everything about how you feel to the public. Similarly, overly managed responses can be just as bad. There are good and bad ways to make an argument. Sometimes, it doesn't matter if you're right or not, if the way you make your argument turns people off, you are going to lose.
I think that the whole Docker/Rocket thing was vastly blown out of proportion, and wasn't the big deal that they made it out to be. Let's see who can make the best solution. But it is a mistake to think that this was a technical issue - it wasn't. The way that the situation was handled clouded what could have been a technical discussion of the merits or need for Rocket. At the same time, don't think that the best technical solution always wins.
Somebody highlighted concerns they have with the direction of your product. You may not agree with their opinions, but that doesn't make them FUD. They have every right to ship a product that adheres to their vision, just as you do.
> 1) interface to the app and developer should be standardized, and enforced ruthlessly to prevent fragmentation
Is one I've been pondering and asking myself about a bit - what does this mean?
Is the interface the API? The docker CLI? Interfaces to libcontainer?
Where does the line "enforced ruthlessly" fall exactly?
Does this mean wrapping the CLI or API in another convenience layer is a no-no if it doesn't expose the docker API directly?
I think the rest of the 13 make perfect sense, and I actually don't think the CoreOS guys were going against any of those in practice or philosophy; it's more that they wanted something small that did one thing very well.
Anyway, I love you guys and the coreos guys, so I'm only in it for the swag.
What is it that we end users don't know?
They've so far been approaching it from opposing corners, but CoreOS just made the first play at the opponent's territory, and it apparently rattled Docker a bit.
I am excited to have more viewpoints in play.
I'm optimistic that the ecosystem as a whole will benefit a lot from this, no matter how much or how little market share Rocket manages to capture.
Kind of weird that this line from your comment is identical to a line in this comment from another user: https://news.ycombinator.com/item?id=8682864
Cutting & pasting (hacking :)) is faster if you're not a native speaker, but believe me, in Italian it wouldn't sound so gentle and polite.
Moreover, we all hope that this kind of "plagiarism" won't become a common feeling, a meme.
So what about the other 75% of my worry? That's not a cut & paste, it is my worry. What do you think about:
> ... Kelsey Hightower ... posted on 7 Nov some worries which many Docker users and contributors have already had since last year, when you dropped LXC containers instead of working together with the https://linuxcontainers.org/ project to get better code. That seems like a strategic business decision to decouple your "product value" from its mother and generator: LXC.
Is LXC upset about Docker? I'm not sure how much room there is for strategic business decisions like that. The solution is going to be a technical one, and it's probably going to go to the first one to get it 'right'. There might not be space for many companies to compete on small parts of the solution (like how to package a container).
We are all here because of that disruptive vision and a willingness to cooperate beyond personal and corporate interests, not to compete between startups for funding and a NASDAQ listing. No Linux, no Docker, not even Google, would be here without that enlightened vision.
That's why I don't agree with @shykes's statement one either: "1) Competition is always good...". No sir, not always; it depends on what you are competing for, and on whether you follow the rules of the competition too.
I'm really astonished seeing big corps like Microsoft, VMware, and others put their eyes on a relatively small but potentially disruptive project like Docker; the pressure could be misleading, even for a hacker like @shykes.
We have already seen those traps so many times... Anyway, I think everything is going to be fine at the end of the day, and I see @shykes on the right path already: https://news.ycombinator.com/item?id=8684119
I like Docker as a project, as well as a company. So many times I thought: "that's a company I'd really like to work for".
I'm sure that @shykes (Solomon Hykes) has the strength to find the balance between external corporate pressures and the project's wellness, and to lead this open source community down the right path, as he has until now.
There is a strong desire to own and control solutions. The facts are that image registries and container execution are lightweight abstractions over already existing protocols [DNS, HTTP] and technologies [Linux Containers]. There's not much to own in the space other than through having the 'best' technical solution.
Seems to me that post-Docker 1.2, the Docker team has taken Ops concerns much less seriously and is focused almost exclusively on iterating Dev-friendly features.
Hope things change.
It feels to me like a lot of startups, and even smaller tech companies focus completely on developers. I seriously think some people think "DevOps" literally means Developers doing what Operations/Infrastructure people/teams do (or should do).
Any term coined to remove barriers will always be co-opted by middle management to mean something else so they can put them back up. Otherwise, they'd be out of jobs and they can't have that now, can they?
Security and convenience are always at odds: I don't see it as a problem that tools lean one way or the other. It does make me a bit worried if they lean one way or the other by accident, which is what you seem to imply with your comment. I take it you're trying to "defend" Docker, but I don't know against what, nor do I understand your arguments.
Perhaps you could take a deep breath, and try again? I'm sure you really do have something to say on the matter, that is worth reading.
Our net conclusion is that this is good for the industry, as competition induces everyone to work a little bit harder. We anticipate that in the end game the advancements and concepts proposed by Rocket are unlikely to stand alone or see very broad adoption, but that there will be enough interest & momentum that we eventually see some sort of alignment between Docker and Rocket. That would be in the best interests of all involved, in the end. It seems like the projects could eventually merge.
While some of the tone of the initial announcement had political overtones, further amplified by Pivotal (James Watters certainly didn't mind fanning the flames), what this could indicate is that there were deep ideological divisions within the Docker community. Instead of the parties finding common ground, the CoreOS team needed to create a new project, with PR, to draw attention to their ideas. That shows commitment and, to a degree, high certainty in their beliefs. Sometimes it takes one party taking on massive risk to fully convey the power of their position.
But there have been examples in the past of splinters that eventually get mended back into the fold. This wasn't a full fork, this was an entirely new approach. The foundations of what they are proposing are nice gap fills for Docker. So there are many more ways for alignment here than for division.
1. Competition? How can open source software be in competition with anything? It's free, its source code is there; if people want it they'll use it, if not they won't. Why would anyone care what other projects are doing or saying? Just build your tools how you want and go on with life. (Unless you're building your tools specifically to make money, in which case I guess PR and 'competition' does matter a lot)
2. On Twitter you suggested things should be 'composable to the extreme' ..... using plugins and drivers. https://www.youtube.com/watch?v=G2y8Sx4B2Sk
Market share is power. Popular open-source projects can, and do, shape the industry. If you believe your trajectory is the right one for the industry, competition matters a lot.
As an example, Mozilla's Firefox was created to compete with Internet Explorer. It succeeded, and now Mozilla is working to defend the open web, so market share is still crucial for Mozilla even today.
Mozilla Suite was not created to compete with Internet Explorer either. In fact, Internet Explorer was created to compete with Netscape, which was the dominant browser for years until IE finally knocked it out of its catbird seat. Netscape never recovered, because IE offered a simple, fast browsing experience, even if it was awful at actually rendering content.
In this vein, Phoenix was created in the model of Internet Explorer. So in a way you could say it competed, but in actual fact it was competing against its own progenitor.
Reflecting more on 'competition': the browser wars nearly destroyed the web as we know it as each browser introduced incompatible proprietary extensions which were then picked up (badly) by each other over time. The lack of standards, or good implementations of standards, severely hampered the adoption of more advanced technology. Firefox continues that tradition today by pushing more and more features that IE can't support; we're just lucky that Firefox is the dominant browser now, and that people are now used to upgrading their browser virtually every week.
I remember using it when it was called Firebird.
Docker's received a lot of funding, and so it has an interest in building a whole platform. I won't say whether that is "right" or "wrong," but it may pollute the original, simple container strategy. The goal of this seems to be to offer a pure alternative. No mud-slinging there - just a different goal than what Docker has become today.
(I don't know enough about Rocket to make a judgement either way.)
This is a design goal so that you can launch a container under the control of your init system or other process management system.
The first step of the process, stage 0, is the actual rkt binary itself. This binary is
in charge of doing a number of initial preparatory tasks:
Generating a Container UUID
Generating a Container Runtime Manifest
Creating a filesystem for the container
Setting up stage 1 and stage 2 directories in the filesystem
Copying the stage1 binary into the container filesystem
Fetching the specified ACIs
Unpacking the ACIs and copying each app into the stage2 directories
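The stage0 steps quoted above can be approximated with ordinary Unix tools. This is a hypothetical sketch, not rkt's actual layout: the directory structure, the example ACI name, and the "fetch" (here a locally built tarball standing in for a download) are all assumptions for illustration.

```shell
#!/bin/sh
set -e
workdir=$(mktemp -d)

# 1. Generate a container UUID (with a fixed fallback for non-Linux systems)
uuid=$(cat /proc/sys/kernel/random/uuid 2>/dev/null || echo "123e4567-e89b-12d3-a456-426614174000")

# 2. Create a filesystem for the container, with stage1 and stage2 directories
mkdir -p "$workdir/containers/$uuid/stage1/rootfs"
mkdir -p "$workdir/containers/$uuid/stage2/example-app/rootfs"

# 3. "Fetch" an ACI: here we build a tiny tarball locally to stand in
#    for downloading one over HTTP
mkdir -p "$workdir/aci-build/rootfs/bin"
echo '#!/bin/sh' > "$workdir/aci-build/rootfs/bin/example-app"
tar -C "$workdir/aci-build" -czf "$workdir/example-app.aci" .

# 4. Unpack the ACI and copy the app into its stage2 directory
tar -C "$workdir/containers/$uuid/stage2/example-app" -xzf "$workdir/example-app.aci"

ls "$workdir/containers/$uuid/stage2/example-app/rootfs/bin"
```

None of this is especially disk- or CPU-heavy: it amounts to a few `mkdir`s and one archive extraction per image.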
Don't all these steps seem like a lot of disk-, CPU- and system-dependency-intensive operations just to run an application?
Why is this thing written in Go when a shell script could do the same thing while being more portable and easier to hack on?
Why are they saying this thing is composable when they just keep shoving features (like compilation, bootstrapping, configuration management, deployment, service autodiscovery, etc) into a single tool?
I'm not sure I follow. At least compared to using Docker it doesn't seem much different at all in terms of overhead.
> Why is this thing written in Go when a shell script could do the same thing while being more portable and easier to hack on?
Go runs on more platforms than Linux containers, so I don't think Go is going to be a limiting factor. If you think shell script programming is going to lead to more robust and efficient software... ;-)
> Why are they saying this thing is composable when they just keep shoving features (like compilation, bootstrapping, configuration management, deployment, service autodiscovery, etc) into a single tool?
They aren't a single tool? They've architected it so that those different components are quite separable, particularly the ACI is really, really separable from the rest.
I've used Docker. And I am looking forward to Rocket. I will use both and I will compare without prejudice.
I personally like the idea of Rocket and am looking forward to more blog posts comparing the two!
My problems with docker have been the security model, for which the only recourse I've had is to use the USER keyword in my Dockerfiles. Furthermore, networking has been a pain point, which I've had to resolve by using host networking to access interfaces.
Let's see how rocket deals with these issues and others. I pay for CoreOS support, so I'm glad to see that they're addressing this.
I was also having some issues with php5-fpm in a Docker container; it doesn't seem designed for it (it gets the file paths communicated from Nginx, not the files themselves, so the containers need to sync files).
Somehow I thought CoreOS and Docker would figure this out together. I hope the knowledge I now have will remain relevant; I was planning a hosting service for sports clubs based on Drupal 8.
Ah well, we are at the beginning of an era, I should have expected this. I'm very curious, who knows, the container space is far from filled, we'll be seeing many distros. There will be Gentoo's, there will be Ubuntu's. It's going to be nice.
The volume that your site code is on needs to be linked to the php-fpm container. Typically you would host this volume on a data container and use --volumes-from $ctid when starting the php-fpm container.
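A minimal sketch of the pattern described above, using the Docker CLI of the era. The image and container names (`sitedata`, `fpm`, `someorg/php5-fpm`) are placeholders, not from the thread; the point is that nginx passes file *paths* to php-fpm over FastCGI, so both containers must see the same files at the same paths.

```shell
# Data-only container that owns the volume holding the site code
docker run -v /var/www --name sitedata busybox true

# php-fpm container sees the code through the shared volume
docker run -d --volumes-from sitedata --name fpm someorg/php5-fpm

# nginx shares the same volume, so the paths it hands to php-fpm
# resolve to the same files inside both containers
docker run -d --volumes-from sitedata --link fpm:fpm -p 80:80 nginx
```

This avoids having to "sync" files between the web server and php-fpm containers at all: there is only one copy, on the data container's volume.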
It looks like Rocket actually intends to be more conservative than Docker:
"Additionally, in the past few weeks Docker has demonstrated that it is on a path to include many facilities beyond basic container management, turning it into a complex platform. Our primary users have existing platforms that they want to integrate containers with. We need to fill the gap for companies that just want a way to securely and portably run a container."
So it's actually moving in the opposite direction, compared to Sandstorm.
(You of course know this already, but disclosure for others reading: I'm the lead dev of Sandstorm.)
I'm looking into using containers for UI applications. I need to access the GPU within the application. Is this doable with Rocket or Docker?
Also does Rocket have to be used with CoreOS?
From the looks of your other comments in this tangent it might be exactly what you need or a starting point at least.
It's a base for these BOINC and F@H containers.
I previously heard that docker has trouble loading device drivers.
Yep, this is _exactly_ one of our design goals. ACIs are trivially buildable and inspectable with standard Unix tools.
Dockerfiles/`docker build` is an implementation of a build system which uses the docker engine to make said rootfs.
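To illustrate the "standard Unix tools" claim: an ACI is essentially a tarball containing a `manifest` file and a `rootfs/` directory, so you can build and inspect one with nothing but `tar`. The manifest fields below are illustrative, not a complete spec-compliant example.

```shell
set -e
d=$(mktemp -d)

# Lay out a minimal image: a rootfs plus a manifest
mkdir -p "$d/image/rootfs/bin"
printf 'hello' > "$d/image/rootfs/bin/hello"
cat > "$d/image/manifest" <<'EOF'
{"acKind": "ImageManifest", "name": "example.com/hello"}
EOF

# Build the ACI with plain tar...
tar -C "$d/image" -czf "$d/hello.aci" manifest rootfs

# ...and inspect it with the same standard tools
tar -tzf "$d/hello.aci"
```

No daemon or image-build engine is involved; any tool that can produce a tarball in this shape can produce an image.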
Realistically, if the stack is broken into a dozen pieces then somebody will create a bundle with sensible defaults (let's call it "CoreOS") and then we'll be back in the same situation.
Reading up on it, I can't see how it is massively different to OpenVZ? Given Docker's youth, is anyone still using OpenVZ over it? And why? I'm interested.
It's already tied to systemd-nspawn (though arguably you could make this pluggable to support other process babysitters).
In fact, Rocket as it stands is just a wrapper around systemd-nspawn and little else.
They harp on about this new ACI format, but it isn't really anything new, and it fails to solve the problem currently facing the Docker format: a sufficient amount of metadata to properly solve the clustered-application and networking problems.
I am all for things that do one thing and do them well, but right now Rocket is just systemd-nspawn which is just a more platform specific LXC in my opinion.
Note: I don't necessarily agree with everything Docker is doing either, I just don't think Rocket is a productive way to fix it.
What does "timing" of the announcement mean?
On one hand it talks about the original Docker manifesto and later notes it was removed, framing the removal as a "bad" thing. At the same time, it criticizes Docker for not being simple because there are plans to add more and more features to it.
Including, "wide range of functions: building images, running images, uploading, downloading, and eventually even overlay networking, all compiled into one monolithic binary running primarily as root on your server". However, in the original manifesto (that was removed), Docker announced/claimed those features would/should exist: https://github.com/docker/docker/commit/0db56e6c519b19ec16c6....
Competition is good but this was a bit weak in its first appearance.
Can't be coincidental.
"Why Docker and CoreOS’ split was predictable" http://bit.ly/1zMLYSt
I'm not really making a value judgement, just an observation.
Tron has been around since the mid 80s I believe and Linux was first released in the early 90s.
Docker may or may not be the container engine that lasts a long time. There is a reason they raised a bunch of money. Clearly containers are going to be big, but is Docker the one that goes on to be dominant? Docker is trying through building features & biz dev, but it's far from over.
It's too early to foretell the fate of Rocket. Containers are getting lots of attention, so I'm actually pretty happy to look at this as a potentially rewarding experiment. Worst case, it fails and we keep using Docker (or whatever else springs up).
I think competition is good, this will give us an option that's not monolithic.
I didn't realize Docker's direction was to encompass orchestration until this thread. This isn't something I want to use Docker for, and I'm also glad the competition is addressing the security issues where there is a need for more security.
With a rival option available, I'm happy to choose Rocket once it's stabilized.
These aren't really containers. They're giant statically linked binaries, more or less. The actual operating system is now just a VM host for running containerized giant WIMPs (weakly interacting massive programs). Fast-forward a few years and the host can wither and die and be replaced with a proprietary or custom/fragmented management layer. Linux survives only as an internal pseudo-OS within each mega-binary "container."
Edit: what I was really getting at was that these technologies are patches for the inadequacy of the OS. The fact that we need containers at all stems from the difficulty of managing software installations, configuration, etc on the actual operating system.
Great post. However, containers are an easy concept to grasp. Even if the actual OS could be fixed, you'd still want some similar concepts even if the names were different.