CoreOS is building a container runtime, Rocket (coreos.com)
902 points by kelseyhightower on Dec 1, 2014 | 277 comments



Interesting takeaways from the post:

* Despite Brandon Philips (CoreOS CTO) serving on the Docker governance board, Docker has aggressively expanded their scope well beyond their original container manifesto.

* CoreOS believes the Docker runtime is now too unwieldy and "fundamentally flawed"; the unwritten word that really sprung to mind was that Docker was getting "greedy."

* CoreOS reaffirms their original operating model of being capable of running their infrastructure on and with Docker.

* Rocket is CoreOS's answer to stay true to the "simple composable building block" mantra.


This is great news, particularly for Enterprise customers adopting containers. IMO, Docker's 'new' direction completely ignored the tremendous amount of support they had from the sysadmin and devops communities.

But crucially, they also crossed the business models of many startups (including CoreOS, Weave, Flocker, etc.) that rely on Docker maintaining an Open Platform. So this is an entirely logical response.

I'll be surprised if Docker doesn't now respond by unveiling an 'enterprise' Docker version that basically strips away the unnecessary features and has more security by default. The enterprise market is too valuable to just let slip away like this. Your move...


What is Docker's 'new' direction? I don't see any related announcements on their blog besides adding support on new platforms.


Docker's 'new' direction is to direct its attention towards solving the orchestration and management problems involved in actually running infrastructure on Docker.

A number of third parties had begun work on various (sometimes proprietary) orchestration and management systems for creating a reliable/scalable/easily manageable cluster with Docker as a building block. CoreOS is one. But Docker is pushing towards an official, open-source orchestration/management system that threatens to make all of those companies irrelevant.


> Docker's 'new' direction is to direct its attention towards solving the orchestration and management problems involved in actually running infrastructure on Docker.

IME examining Docker, this is actually the hard problem.


I think it is a great stance for Docker to take. Very recently (IIRC in 1.3), it merged the functionality of Fig into Docker.

I think Docker orchestration and CoreOS can coexist - if I had to use CoreOS to get the goodness of Docker, then systemd-nspawn would come and eat Docker's lunch.

I wish Docker would bless one of Ansible/Chef as the official orchestration base and take it forward. I really don't want to learn something Docker-specific.


Fig functionality was not merged into Docker's 1.3 release.

Ansible/Chef orchestration IMHO solves a very different problem than container orchestration.


You're right - I was referencing this issue (https://github.com/docker/docker/issues/8637), but it seems it was not merged into mainline.

I agree that Chef and Ansible are different from container orchestration today - especially when you look at low-level stuff like networking, mounts, etc. But I guess what I was saying is that it would not be hard to add these features to them.

They already have a specification format that works well and idempotency checks at their core. Unless you mean something like etcd is fundamental to container orchestration, which I don't believe it is (we run a couple of containers in production using Fig).
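
For reference, a minimal fig.yml along these lines (the service names and images are made up, not from this thread):

  # Illustrative fig.yml: two linked containers declared in one file;
  # Fig reads this and drives the corresponding "docker run" calls.
  web:
    build: .
    ports:
      - "8000:8000"
    links:
      - db
  db:
    image: postgres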


I attended Docker Global Hack Day #2 on Oct 30 from Austin. A talk was given on an active Docker project for host clustering and container management, which was non-pluggable, and which made no reference to - and used none of the code from - CoreOS's etcd/fleet/flannel projects.

This was where I first started worrying about CoreOS and Docker divergence.


But since the hack day there has been a pretty reasonable (IMO) GitHub discussion about the tradeoffs between out-of-the-box ease of use and customizability.

https://github.com/docker/docker/pull/8859


I saw that same presentation at the same event, but came away with a very different impression: the container management they showed was implemented completely outside of docker itself, with no patches to the docker codebase needed. Also, IIRC it actually did use significant code from etcd for coordination.


Are we talking about the same thing? https://github.com/docker/docker/pull/8859

It had no etcd in it and the POC was implemented as part of the Docker API/CLI, as best I recall. There were significant questions in the discussion about etcd not being there.


I believe the two (Docker & CoreOS) might have rather similar strategies and / or product roadmaps.

What seemingly gets mixed up by quite a few commentators on this topic:

Docker is an orchestration, deployment, management, etc. solution - the "container" is created by LXC, jails, libvirt, or other OS features, and now also libcontainer.

This discussion also shows how far away / how early we are with "containerization", or containers that are exchangeable/movable between different (OS) environments - we are discussing the companies that are building cranes to load and unload the boxes before we even have an understanding of what the boxes will really look like.


CoreOS doesn't mind using systemd...


> CoreOS believes the Docker runtime is now too unwieldy and "fundamentally flawed"; the unwritten word that really sprung to mind was that Docker was getting "greedy."

I wonder if that comes from the partnership with Microsoft.


They raised $55 million [1], so you have to believe their ambitions are to extract as much rent from the container ecosystem as possible. That's not a bad thing, but it's behind a lot of their moves.

[1] http://www.crunchbase.com/organization/docker


Docker's MO is to become "that thing that is on all servers" so that when they flip the switch and start monetizing off support and tertiary services, people will be more-or-less locked in.

It has indeed surprised me how quickly a normally-slow-to-accept-new-things community has adopted Docker (even well before it was considered "stable").


> how quickly a normally-slow-to-accept-new-things community

I think you're referring to the sysadmin community - but I think the driver for this has been the search for deployment nirvana. Deployment is a much more fragmented field, so it makes sense that a good solution would find fertile ground.


Absolutely. Not only does it simplify deployment, you also get the ability to quickly spin up a new development environment. That means it's easy to dip a toe in and slowly increase how much you use it.


Like the MongoDB hype?

Docker seems decent, but I don't think I want them doing orchestration...


There is no switch to flip.

You are not "locked in" by Docker Inc if you are using Docker just like you aren't locked in by Github if you are using git.


> You are not "locked in" by Docker Inc if you are using Docker just like you aren't locked in by Github if you are using git.

A much more accurate analogy would be you are not "locked in" by Oracle if you are using MySQL. It may be true today, but no guarantee that will always be the case.


Despite the attempt of some to move goal posts, you're still guaranteed that you won't be locked in by Oracle even tomorrow. You still have the source code for the version you're running right?


Sure...as long as I don't care about security patches, bug fixes, performance improvements, or new features.


If people think the software moves in the wrong direction it will be forked (see MariaDB). Nothing world changing will happen.

Docker Inc. seems to make a lot of effort to ensure Docker is a truly open project. I get the feeling that people think that handing your project to Apache is the only way to prevent vendor lock-in these days.


" You still have the source code for the version you're running right?"

Yes but I don't have the skill, time, resources and will to maintain MySQL if and when Oracle goes evil (I mean more evil than now ;-)

This is why I choose carefully what companies / groups I depend on for my future computing needs.


Regardless of whether your comment was lightly sarcastic or not, I agree that the Docker VMWare[1] & Microsoft partnership announcements may have been conditional upon a committed Docker roadmap outlining some or all of the features that others (such as CoreOS) may feel should be broken out. Typically larger ecosystem players want to be assured that your offering will have a clearly defined role within their existing ecosystem that plays to your core brand and technical competency.

[1] http://www.forbes.com/sites/benkepes/2014/08/25/vmware-gets-...


Wow, that is a random jump of logic. Answer: no.


I have been concerned that Docker's scope was expanding too far for a while now, so I'm glad to see an alternative that might work appear on the horizon. That said, I am somewhat concerned that CoreOS has a suspiciously similar business model to where Docker would probably like to be.

It's in a business's best interest, and exceedingly common practice, to "land and expand" with something clear and compelling, and following that add features to compete with alternative solutions. I don't think there's anything inherently altruistic about CoreOS that would keep Rocket lean in the long-run, especially as they begin migrating their various tools away from Docker containers.


I had the same initial reaction, but I think there's good reason to trust the CoreOS folks to remain faithful to the project's goals. Containerization (although foundational) is one part of CoreOS's platform. It's easy to see where the boundaries fall, e.g. I expect systemd and fleetd to keep their respective functionality and not overlap with Rocket.

It became pretty clear once dotCloud became Docker Inc. that they intended to capitalize on the "Docker" brand to sell an integrated orchestration platform. CoreOS already has enterprise customers for their operating system and related components. They seem like the perfect team to take this challenge on.


I think it's also crucial users have more than one viable container option.


The difference here would be, IMO, that they have clearly made openness one of Rocket's goals: the formats should be well-specified and maintained separately so that other implementations can run them.


> I have been concerned that Docker's scope was expanding too far for a while now

What features were recently introduced that increased Docker's scope?


Talk of Docker cluster, which might include a network overlay layer a la Weave.


All of which will be fully pluggable with a "batteries included but removable" design, just like we did with sandboxing and storage.


You may need to make that clearer to some of the people that are due to be building your plugins: reading http://weaveblog.com/2014/11/13/life-and-docker-networking/, I get the feeling that they're not thrilled about it.


Maybe they're busy designing the interface and implementing a proof-of-concept with us as we speak, instead of blogging and twittering.


Welp, I'm putting "been snarked by founder of Docker" on my CV.


I guess so - sorry ;)

I hope you can understand that it's frustrating when, after hard work pitching an API to dozens of ecosystem players, spending weeks trying to wrangle a working implementation which makes as many of them as happy as possible, without compromising integrity of design - after all that, in the end, all it takes is one unhappy camper to write a blog post and that immediately tramples everything else.

It's even more discouraging in this particular case, because after this blog post, Alexis and I have discussed this topic extensively, and as a result he has since joined the effort. In fact I will be hacking with him in person on integrating Weave as a native networking plugin in 2 days in Amsterdam.

So, sorry for the insta-snark. But it can be frustrating to see so much good will and hard work be crushed in a second.


Hey, regardless of the technical sides of anything, I'm sure this is not a fun day for you. I think you're handling the situation terribly, but still, not a fun day. Anyway, docker is awesome, and thanks for building it. I know it's made my devops life a lot more enjoyable of late.


I guess I am. PR has never been my thing. I'll get back to hacking, after all it's the reason we do all this: building cool things.


Don't do PR, just build the better thing.

No malice, just a friendly tip :)


Concerning Weave, that's quite good news; the Weave approach to Docker networking is good and can be easily set up in many (though of course not all) infrastructures.


They'll probably keep Rocket lean and introduce new features as "separate projects" that will all be bundled into CoreOS.


I had just landed LXC container support in Velociraptor [1] when Docker was announced last year. It uses Supervisor to launch LXC containers and run your app inside. I thought long and hard about switching to Docker, but their decision to remove standalone mode [2] would have meant replacing all of Velociraptor's Supervisor integration with Docker integration instead. With Docker being such a moving target over that time span, it just seemed like a bad move.

Since then I've been mulling writing my own standalone 'drydock' utility that would just start a single container and then get out of the way (as opposed to the Docker daemon that insists on being the parent of everything). I'm optimistic that Rocket could be that thing.

Question though: Does Rocket have any concept of the image layering that Docker does? That still seems to me like a killer feature.

[1] https://bitbucket.org/yougov/velociraptor/ [2] https://github.com/docker/docker/issues/503


Yes, the app-container spec has the concept of dependent filesets. See:

https://github.com/coreos/rocket/blob/master/app-container/S... https://github.com/coreos/rocket/blob/master/app-container/S...

What do you think of the filesets concept?


I'm still digesting the Go-like syntax for vanity URLs and how that works here. If a fileset manifest lets you specify the URLs where the layers can be fetched from, then I like it.

Does Rocket just 'cp' files on top of each other to implement layering? It'd be nice to not require a bunch of copies of the same files. I thought that the hard link implementation in Docker's new overlayfs support was a smart idea.


Yes, all of this was designed with overlayfs in mind. I am waiting anxiously for Linux kernel 3.18 to land; this is a huge step forward for Linux and years in the making.
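
For the curious, the overlay mount that 3.18 enables looks roughly like this (the paths are made up for illustration):

  # Illustrative: one read-only fileset as lowerdir, a writable layer
  # on top. Requires the "overlay" filesystem merged in Linux 3.18.
  mount -t overlay overlay \
    -o lowerdir=/filesets/base,upperdir=/app/upper,workdir=/app/work \
    /app/rootfs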


Cool! The idea of filesets is very nice - there are some very interesting workflow ideas buried in there. I've been looking forward to a mainstream unioning filesystem for a while too - and I hope Rocket does some serious exploration (or enables it) of how to take advantage of them fully in both development and deployment. (And while I'm at it, testing too.)

My personal wish-list down this path includes:

* options in the relevant manifests on which layer is writable ... if I'm doing development on libfoo which is used by several different apps, let me make that layer writable so I can rapidly iterate integration tests and (bad practice) live coding on testing/dev servers.

* tools to help me smash a dev layer or 3 into a single fileset (and similarly dissect a layer into a few new filesets during a refactoring)

* the ability to use filesets and overlays in a way similar to how package management works now, but with extended features similar to python's virtualenv.

One of the things I see as a boon of the filesets as described is: I can update parts of my system without having to rebuild the whole dang app silo from the get go. Combining this with some of the above features looks like it could be useful for making "thin" images - where I can build all my code in one place, and port only the binaries to the staging and deployment images, just by doing a few fileset/overlay tricks. (no more complicated scripts)


Whether Velociraptor uses Rocket or not, implementing the App Container Spec seems like a no-brainer. I've filed https://bitbucket.org/yougov/velociraptor/issue/136.


Please, no JSON file - use YAML or have an option for YAML T___T.

The curly braces and brackets can get ugly when nested.

edit:

It may seem stupid, but when you're in the terminal with vim and the path to some value is deeply nested in objects/arrays, things are really hard to parse out with your eyes.


I think using JSON is a solid choice. You can easily make YAML-to-JSON translators for this purpose.


JSON is okay, but I hope TOML will gain traction soon. Because comments. And trailing commas. Rust's Cargo is already using TOML for package metadata.

https://github.com/toml-lang/toml

http://doc.crates.io/manifest.html
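
To make those two points concrete, an illustrative fragment (purely hypothetical, not any real manifest format):

  # Illustrative TOML: comments like this one are allowed; JSON has
  # no comment syntax at all.
  [app]
  name = "example-app"     # inline comments work too
  exec = [
    "/usr/bin/example",
    "--port=8080",         # ...and trailing commas are legal in arrays
  ]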


+1 for TOML. If you think simplicity matters, look at the YAML spec [1], vomit and never use it again.

If you still don't believe me, try implementing a conforming YAML 1.2 parser (oh, and try not to make it a source of RCE [2]...).

[1] http://yaml.org/spec/1.2/spec.html

[2] http://blog.codeclimate.com/blog/2013/01/10/rails-remote-cod...


+1 for TOML, started playing with Rust a couple of weeks ago.


+2 on TOML, it's a great config file language.


I've been -1 on TOML for a long time, and preferred YAML, but your point about comments is persuading me.


For running Docker under Supervisor, you may get some inspiration from https://github.com/ibuildthecloud/systemd-docker


I hope Rocket will be more stability-oriented than Docker. After running a few hundred containers per machine for almost a year now, I would not choose Docker again. Docker has stability issues all the time, and it takes months to solve them.

We offered strace logs to the developers without getting feedback, and it was finally fixed by someone from outside the project: https://github.com/docker/docker/issues/7348

Port allocation breaks now and then, with every odd Docker release: https://github.com/docker/docker/issues/8714

Even the stupidest things, like allowing more than one Dockerfile in a folder: https://github.com/docker/docker/issues/2112

Docker has its own agenda, and it is getting clearer and clearer.


> Even the stupidest things, like allowing more than one Dockerfile in a folder.

Wow. That issue has been open for a long time.


I've been involved in that ticket since the dark ages. There's a philosophical disconnect between how (some) people want to use Docker and how the maintainers want it to be used, and it comes to a head in that thread.


I saw your articulate response in that thread. Thank you for that response. I don't need multiple Dockerfiles right now, but if I invest more into Docker, I will. I think your responses might have convinced shykes that the need won't be going away.

There was a similar disconnect on data volumes a while ago that took some convincing before the discussion moved forward. It has always been trivial in OpenVZ to bind-mount into a container (i.e., to share a very large read-only mount between containers).


For anyone else trying to follow along (and still evaluating docker at a distance), start with this comment in that thread:

https://github.com/docker/docker/issues/2112#issuecomment-39...

It gets quite interesting, and it's still going 14 months later...




Thanks, Docker team, this made me 100% sure I support Rocket.


Great news. I'm not a fan of Docker's new monolithic approach to containerization. Things like orchestration and networking should not be built into Docker, but rather be pluggable.


I prefer the Unix model - many programs that work together. That might not be practical for networking (a natural plug-in, probably), but it feels like it should be the way for orchestration.

The Docker image registry and image management should really be a separate program as well - that is a huge pain point that Rocket seems more likely to get right.


Interestingly enough, with flannel, docker's advanced networking capabilities become pretty trivial, and communication across hosts is also pretty trivial.

I think all in all, CoreOS has built out a ton of tools to make using Docker easier, and they're all very well defined and composable. I'd even say that a lot of Docker's features could be completely removed by using some of these tools.

Links? Nah, just use IPs/DNS + etcd for service discovery.

Networking? Need very basic bridged networking, and flannel will handle communication on a single host, or multihost.

Deployment? Use fleet (see the sketch below).

Not that all these are 100% perfect like I've made them out to be, but any individual component could be swapped out if you want.
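
For the fleet point above, here's a minimal sketch of a fleet unit - an ordinary systemd unit plus fleet's optional [X-Fleet] scheduling section (the unit name and image are made up):

  [Unit]
  Description=Illustrative web service

  [Service]
  # Clean up any stale container, then run the app under Docker.
  ExecStartPre=-/usr/bin/docker rm -f web
  ExecStart=/usr/bin/docker run --name web -p 8080:8080 example/web
  ExecStop=/usr/bin/docker stop web

  [X-Fleet]
  # Ask fleet not to co-schedule two instances on one machine.
  Conflicts=web@*.service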


The problem with the CoreOS tools is that they're pretty tightly coupled. We looked into using fleet to manage our deployments. Unfortunately, it relies on a minor feature of etcd and cannot work with Consul, our corporate standard. Flannel? Yep, again, tightly coupled with etcd.

I like some of the ideas behind the CoreOS tools, but until they start playing well with others, they're a non-starter for me. I'm not interested in tools that try to lock me into other, inferior, tools.


I think this is probably more indicative of the issue that Future Docker would like to be a CoreOS-competing platform, and has been edging towards that state. This is CoreOS' natural bounceback from that.


The thing I like about the link model is that it hides your containers from other containers and only exposes the connections you want (I think using iptables?).

I'd like a tool that makes this linking easier outside of Docker, but for now this is one of the features I like about it (although holy moly do Docker links have a lot of baggage you have to bring along for the ride, like giving everything names).
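
If my understanding is right, the effect is roughly like inserting per-link firewall rules along these lines (addresses and port are made up; with inter-container traffic otherwise blocked, only linked pairs get ACCEPT rules):

  # Illustrative: allow only the linked web container to reach the db.
  iptables -I FORWARD -s 172.17.0.5 -d 172.17.0.8 -p tcp --dport 5432 -j ACCEPT
  iptables -I FORWARD -s 172.17.0.8 -d 172.17.0.5 -p tcp --sport 5432 -j ACCEPT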


Shameless self plug, but not sure if you saw my project that does something along these lines:

https://github.com/vishvananda/wormhole


From the docs, it looks like that has a dependency on Docker, which kind of defeats the purpose. If I'm stuck with Docker, I'm better off just sticking with links: I'm looking for something that could work with systemd-nspawn, etc.


That sounds awesome. I'm learning Docker, and I might wait for these issues to resolve first.

I don't like the sound of locking into one vendor for everything.


The Unix model works great at the network layer. Otherwise I couldn't be building a complete, multi-tenant, Docker containers-as-a-service / infrastructure-as-a-service cloud, built on top of an end-to-end SDN.


Interesting. Which SDN are you using?


We're doing something like that on top of Open vSwitch, OpenFlow, and vxlan.
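
The basic wiring on each host is roughly this (the bridge/port names and peer IP are illustrative):

  # Illustrative: create an OVS bridge and a vxlan tunnel port to the
  # other host; containers attach to br0 and cross-host traffic rides
  # the tunnel.
  ovs-vsctl add-br br0
  ovs-vsctl add-port br0 vx0 -- set interface vx0 type=vxlan options:remote_ip=10.0.0.2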


Would you consider open-sourcing / documenting / blogging how you do that (or even providing some pointers to help get me started)? I'm playing with Kubernetes and AWS, and it isn't clear what the best networking solution is (rudder, Weave, IPv6, SDN); it would be helpful to have some pointers on the Open vSwitch front.

(I only ask because it doesn't look like it is your core business)


This is exactly the model being proposed in Docker.


I think this was the original model proposed by Docker. What we have now is (as other posters have mentioned), a Docker organization reasonably bent towards creating value for their investors, which means they need to start building things that, you know, make money.

To clarify, I don't think there's anything inherently wrong with what Docker's doing, but it is at odds with an entirely open, pluggable system. It doesn't make any sense for their business model to truly make it easy to just use their containers and none of the revenue-generating offerings.


I've not been following the discussions but if it's such a critical piece of the whole puzzle and it's in everybody's interest that it remains open, wouldn't a foundation, rather than a single private company, be the best venue for leading the project forward?


Then how do you fund that foundation? Good developers cost a ton of money. Marketing, organizing events, organizing conferences etc also costs a ton of money. I think something like Docker, especially given its growth and adoption rate, never would have been possible without VC funding. VCs wouldn't invest in a non-profit foundation.


The post mentions not having a daemon running as root, but then you have to run `rkt` as root anyway. Won't this just mean that instead of having a single implementation of a Rocket daemon running as root, there is now one custom one every time it needs to be automated?

It's great to see this problem broken up into reusable pieces though. It totally makes sense to function without a daemon, especially out of the box.


There actually is a significant difference between having 'rkt' as a setuid-root process that's invoked from the command line, and having a docker server always running waiting for commands. There are more ways for a potential attacker to get at the server. So, Rocket at least looks like they're trying to shrink the attack surface.


> There are more ways for a potential attacker to get at the server. So, Rocket at least looks like they're trying to shrink the attack surface.

Hm, I don't think that's a given at all! There have been many issues with setuid-root programs. And I've seen that the OpenBSD guys favor privilege separation by breaking up daemons into several parts that communicate using a very strict set of commands. For example, a dockerd that does most of the work but talks to another daemon (dockerd-root) when it needs to do anything privileged.

OpenSMTPD example: https://www.opensmtpd.org/presentations/asiabsdcon2013-smtpd...

OpenSSH: Initial effort, 2002: http://www.citi.umich.edu/u/provos/ssh/privsep.html

http://www.openbsd.org/papers/openssh-measures-asiabsdcon200... - Page 16 ->


"Looks like" is usually about as far as people get when they start down this road. If they make it down the road, they arrive at mess. Just look at what OpenStack has been through :)

So then, I guess docker could just run the two servers, one internal as root and one public as not? That's a pretty quick fix.


> There actually is a significant difference between having 'rkt' as a setuid-root process that's invoked from the command line, and having a docker server always running waiting for commands. There are more ways for a potential attacker to get at the server.

Wrong. With a server, the only thing an attacker has control over is its input. With a setuid-root binary, they still have control over its input, but they also have control over the entire environment under which it executes, including many things that developers generally assume an attacker can't control. Setuid binaries are incredibly scary from a security perspective and much harder to get right than servers.


Yep, setuid would make sense. Hopefully that's how people end up using it. (i.e. Rocket should document or distribute it that way)


I found reading these comments very interesting.

From one point of view, I'm thinking "why did CoreOS need to be so aggressive?", and "boy, what a gift Solomon Hykes gave CoreOS by mismanaging this thing so badly", and "man, all of these guys look sort of immature to me".

From the other point of view, I'm respecting Docker and CoreOS even more, as open source projects and as companies, because it feels like there are real people behind them.

If this is the new wave of enterprise companies, I really like it. These are people like us, that engage with us and sometimes screw up, without hiding it. They are doing great things, and the fact that they are a bit immature is actually great.

I'm an entrepreneur myself, I've done enterprise software my whole life, and I always thought it's a shame that companies in this space are so distant from their users and have such little humanity.

Looks like things are changing.


Looking at the code [1], this seems to be a simple wrapper around systemd-nspawn [2].

[1]: https://github.com/coreos/rocket/blob/9ae5a199cce878f35a3be4...

[2]: http://lwn.net/Articles/572957/


Rocket is tied to systemd, that will definitely spawn some interesting discussions. https://github.com/coreos/rocket/blob/9b79880d915f63e7389108...


It isn't tied to systemd. The stage1 in the current prototype uses systemd to monitor and fork processes, but we would love to see other stage1s that configure other process runners - for example, configuring and running a qemu-kvm machine as the container.

Also, even though it is using systemd to monitor and fork processes, a design goal is to run on any Linux with a modern kernel.


What about non-Linux platforms (FreeBSD, Mac OS X with a kext)?

One thing that I believe Docker has failed at is taking a purely declarative approach to image definition; rather than specifying the packages that are assembled/inserted to create the container, Docker ships around non-portable Linux binaries.


I second that. At the beginning, Docker people were mentioning adding FreeBSD jails support, which seemed to me an awesome thing - a platform-independent containerization middleware - but recently they seem to have forgotten about it and are doing only Linux-centric things. What a shame.


Yes, but the Docker Remote API allows for a great deal of implementation freedom -- including running on a different OS substrate. We're doing this with sdc-docker[1] to run Docker on top of SmartOS and in a SmartOS container, and the Docker folks have been incredibly supportive. Despite the rhetoric, Rocket appears to be much more bound to the OS platform than Docker -- and given @philips' comment that "part of the design difference is that rocket doesn't implement an API"[2], this binding appears to be deliberate.

[1] https://github.com/joyent/sdc-docker

[2] https://news.ycombinator.com/item?id=8682798


It depends on how you look at it. The Docker Remote API provides an abstraction over the OS substrate and definitely binds you a lot more to Docker and their model. "Rocket doesn't implement an API" means that all it is really doing is kicking off the container and using the existing OS substrate to manage everything else.

I can see where for SmartOS & Windows the Docker approach is more flexible. If someone has already settled on Linux, but they have their own ideas about how to manage containers within Linux that have nothing to do with CoreOS, the Rocket model is going to leave them much more flexibility.


Have you ever considered doing a Warden backend for SmartOS? It'd make Cloud Foundry run out of the box, as I understand it.

Disclaimer: while I work on CF, I'm not that close to the Warden nitty-gritty.


The app container specification has socket activation in it. This is going to essentially tie it to systemd. Otherwise you will need another daemon running to do the socket activation, but then that would seem to be a "fundamentally flawed" execution model.


Socket activation isn't mandated and is completely optional. If an application detects it didn't get its sockets, it can certainly just start listening itself instead if it wishes.
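
A sketch of that fallback in Go (the language of both projects), following the sd_listen_fds(3) convention where the launcher sets LISTEN_FDS and passed sockets start at fd 3; the port number is arbitrary for illustration:

  package main

  import (
      "net"
      "os"
      "strconv"
  )

  // Sketch only: adopt a socket passed by an activation manager if
  // one is present, otherwise open our own listener.
  func listen() (net.Listener, error) {
      if n, _ := strconv.Atoi(os.Getenv("LISTEN_FDS")); n >= 1 {
          // Socket-activated: the first passed socket is fd 3.
          return net.FileListener(os.NewFile(uintptr(3), "activated"))
      }
      // No activation manager: just start listening ourselves.
      return net.Listen("tcp", ":5000")
  }

  func main() {
      ln, err := listen()
      if err != nil {
          panic(err)
      }
      defer ln.Close()
      // ...accept connections on ln as usual...
  }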


You may be shocked to discover that socket activation is actually a pretty old idea, and can even be found on non-Linux platforms. systemd embraces it in a pretty big way, but I can't see a problem getting the app containers to work with one of the other socket activation models.


And you may be surprised that I ran socket activation well over 15 years ago, so yes, I'm well aware of the approach. The comment is more around the fact that in CoreOS's post they seem to harp on the security of a daemon process running as root that is responsible for spawning containers. What I'm saying is that with socket activation you will essentially have that again. Rocket can only work around it today because they have systemd as PID 1 running as root doing the socket activation.


With capabilities there is no need to have privileges beyond port binding and (possibly) userid assignment, no?
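
Right - for example, file capabilities could grant just the port-binding piece without full setuid-root (the binary path here is illustrative):

  # Illustrative: grant only CAP_NET_BIND_SERVICE rather than the
  # whole root toolbox.
  setcap cap_net_bind_service=+ep /usr/local/bin/rkt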


Socket activation does not need systemd in any way. You can launch any socket-activated daemon with https://github.com/LEW21/sdlaunch - without any dependence on systemd.


The great part of having a spec separate from the runtime is that Rocket can use systemd, but other compatible tools won't have to.


This part is super important for Rocket support in Mesos and other things that run containers.


Hi, I created Docker. I have exactly 3 things to say:

1) Competition is always good. Lxc brought competition to openvz and vserver. Docker brought competition to lxc. And now tools like lxd, rocket and nspawn are bringing competition to Docker. In response Docker is forced to up its game and earn its right to be the dominant tool. This is a good thing.

2) "disappointed" doesn't even begin to describe how I feel about the behavior and language in this post and in the accompanying press campaign. If you're going to compete, just compete! Slinging mud accomplishes nothing and will backfire in the end.

3) if anyone's interested, here is a recent exchange where I highlight Docker's philosophy and goals. Ironically the recipient of this exchange is the same person who posted this article. Spoiler alert: it tells a very different story from the above article.

https://twitter.com/solomonstre/status/530574130819923968 (this is principle 13/13, the rest should be visible via Twitter threading)

EDIT: here is the content of the above twitter thread:

1) interface to the app and developer should be standardized, and enforced ruthlessly to prevent fragmentation

2) infrastructure should be pluggable and composable to the extreme via drivers & plugins

3) batteries included but removable. Docker should ship a default, swappable implementation good enough for the 80% case

4) toolkit model. Whenever it doesn't hurt the user experience, allow using one piece of the platform without the others.

5) Developers and Ops are equally important users. It is possible and necessary to make both happy.

6) If you buy into Docker as a platform, we'll support and help you. If you don't, we'll support and help you :)

7) Protect the integrity of the project at all cost. No design decision in the project has EVER been driven by revenue.

8) Docker inc. in a nutshell: provide basic infrastructure, sell services which make the project more successful, not less.

9) Not everyone has a toaster, and not everyone gets power from a dam. But everyone has power outlets. Docker is the outlet

10) Docker follows the same hourglass architecture as the internet or unix. It's the opposite of "all things to all people"

11) Anyone is free to try "embrace, extend extinguish" on Docker. But incentives are designed to make that a stupid decision

12) Docker's scope and direction are constant. It's people's understanding of it, and execution speed, that are changing

13) If you USE Docker I should listen to your opinion on scope and design. If you SELL Docker, you should listen to mine.


I think you're reading too much - or too little - into this if you think they're "slinging mud". Any fork is going to list its reasons for the fork- if they didn't have issues with how Docker is heading, why would they be making the fork in the first place?

If they just quietly gave an ambiguous non-disparaging statement like "we're forking because we're unhappy with the direction Docker is taking", it would seem frivolous and ill-considered, and nobody would know on what points the fork would be aiming to distinguish itself.

This statement needs to be made, the way it was made, for the same reasons any project announcement is made: it needs to announce that it exists, and why it exists. It's the same as Docker's "debut" blog post(s).

Every schism needs its 95 Theses, and the odds favor the ones who can read them, understand them, and take them into consideration.

---

Disclaimer (re https://twitter.com/kenperkins/status/539528757711622145): I make edits to my comments after posting, usually posting a line or two then fleshing them out over time. If I make a change that conflicts with a statement in an earlier revision, I'll note it: otherwise I'm pretty much just composing live.


It's really bugging me that people are using the word "fork". This is not a fork; it's a competing container format - there isn't any Docker code in Rocket AFAIK. Even @shykes called it a fork in a comment, but it's not somebody taking your code and doing something different with it; they are doing their own implementation. Ideas aren't "forked", code is.

As to everything else, I manage CoreOS clusters with Docker for now, and while this came out of the blue (seemingly for the Docker folks as well), I'm happy to see what happens as a result. I'm not sure why there are hurt feelings over the announcement; I didn't find anything particularly in bad taste, and what exactly is wrong with promoting your new product?

The CoreOS team isn't under any obligation to Docker to contribute however anyone on the Docker team wants them to. Even if these issues have been discussed before, they've clearly taken a different path, and that's within their rights; I'm not sure where mud is being slung. Where this will lead, who knows, but hopefully there will still be good collaboration between the different groups as they pursue their own goals.

EDIT: I haven't actually looked at the code, so if somebody wants to prove what I'm saying wrong please do. I'm basing what I know off the announcement.


IMO rewriting something from scratch is like forking but worse because it's impossible to merge later. And Rocket is definitely forking the Docker community.


By this definition, Linux is a fork of Windows and is inferior because it cannot be merged back into Windows.

Often, starting from scratch is better. This is especially true when the goals or philosophy of the two projects are fundamentally different and incompatible, even if they perform similar tasks. Again, the Linux vs. Windows example applies.


If it can't be merged, it's not a fork, that's the key part of forks (well, not entirely, but the lack of shared code means it's not a fork by my definition).

That said, you're on point: this is forking the community. A hard fork, too.


That's great, except forks can be merged later.


Yes, that's what I said.


I don't have a horse in this race, but from what I read this is the part that can be construed as "slinging mud". I've put some [read between the lines] comments in square brackets:

  "Unfortunately, a simple re-usable component is not how things are playing
   out. Docker [much to our dismay] now is building tools for launching cloud
   servers, systems for clustering, and a wide range of functions: building
   images, running images, uploading, downloading, and eventually even overlay
   networking, all compiled into one [big and nasty] monolithic binary running
   primarily as root [how insecure is that?] on your server. The standard
   container manifesto was removed [those flip-floppers!]. We should stop
   talking about Docker containers, and start talking about the Docker
   Platform [since we can focus attention on our efforts that way]. It is not
   becoming the simple composable building block we had envisioned [which puts
   our offerings at a disadvantage]."

  "We still believe in the original premise of containers that Docker
   introduced, so [unlike those silly Docker people] we are doing something
   about it."

Later on, they specifically say:

  "the Docker process model ... is fundamentally flawed"
  "We cannot in good faith continue to support Docker’s broken security model..."

All these may be valid criticisms, but even ignoring my potentially off-base annotations, it's difficult to read their announcement as anything other than "Docker is broken and can't be fixed". It's reminiscent of political attack ads which focus on the shortcomings of your opponent rather than the strengths of your own platform.


"Docker is broken and can't be fixed"

Or, taking the announcement as intended, "We were interested in the direction Docker started in, they have since pivoted. We were more interested in the direction than Docker itself".

Yes, there is some mild-mannered disparagement in the announcement, but it's hard to characterise it as 'slinging mud', and it's not really fair to disparage it with the name-calling you're injecting.


I think they spent plenty of time talking about the advantages of their approach. The comment at the bottom there was only in response to an FAQ of "why not just work from the docker you already use?"


[deleted]


Personally, I think the long-term value of Rocket is not about Rocket - it's about the ACI specification for the format of containers.

Right now I'm already taking a Dockerfile, exporting it to a tar, and then running systemd-nspawn -- I love Dockerfiles, I love being able to grab a postgres server and get it up quickly from Docker Hub, but I didn't need or want the rest of docker.
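
Concretely, that workflow looks roughly like this (the image name and paths are mine, for illustration):

  # Illustrative: build with Docker, flatten the image to a tar,
  # unpack it, and boot it with systemd-nspawn instead of the
  # Docker daemon.
  docker build -t myapp .
  docker create --name tmp myapp
  docker export tmp > myapp.tar
  mkdir rootfs && tar -xf myapp.tar -C rootfs
  sudo systemd-nspawn -D rootfs /usr/bin/myapp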

If both Docker and Rocket support ACI, then you have a composable image layer, and that means people aren't locked into either ecosystem just to build images of their applications.

ACI :: Docker-tar-format to me is like QCOW2 :: VMDK. Wouldn't it be cool if projects like Packer[1] didn't have to exist, because the image format of Virtual Machines was open and documented as an independent standard?

[1] - https://www.packer.io/


Now we're talking. Yes, I agree having a better spec for the underlying image format would be nice. In fact I also agree you should be able to use the Docker runtime without its packaging system, and vice-versa.

However I think it makes more sense to do this on the actual Docker format which everyone already uses... That way you get the benefit of increased openness without the drawback of fragmentation. I have the impression I've been pretty vocal in asking for help in making this happen, and wish these guys had stepped in to help instead of forking. I pretty distinctly remember pitching this to them in person.

So, I'll re-iterate my request for help here: I would like to improve the separation between the Docker runtime and packaging system, and am asking for help from the community. Ping me on irc if you are interested.


Looking back from the long-term future, though, what's the difference between the two approaches?

Whether the work on a standard container format happens inside or outside of Docker, it would result in a format presumably a bit different from how Docker containers are now (e.g. not overlay-layered by default, since most build tooling wants to just output an atomic fileset.) And either way, work would then occur to make Docker support that standard format.

The only real difference is that, in this approach, the ecosystem also gets a second viable runtime for these standard containers out of the deal, which doesn't seem like a bad thing. You can't have a "standard" that is useful in a generic sense without at least two major players pulling it in different directions; otherwise you get something like Microsoft's Office Open XML format.


> However I think it makes more sense to do this on the actual Docker format which everyone already uses... That way you get the benefit of increased openness without the drawback of fragmentation.

One could just as easily have said the same thing when the Docker format was introduced. The OpenVZ template format works well and is very similar to the proposed ACI format. The Docker format hasn't been without issues/problems.


I think it's called OVF[1]. It's just not as widely supported as it probably should be.

[1] - http://en.wikipedia.org/wiki/Open_Virtualization_Format


Yeah, just try using OVF with containers and you'll discover how much of a round peg/square hole fit you are describing. VM's and containers are surprisingly different.


> Wouldn't it be cool if projects like Packer[1] didn't have to exist, because the image format of Virtual Machines was open and documented as an independent standard?

Is what I should have quoted in my reply. Nobody was talking about using one instead of the other. Though it'd be easy enough to run, for instance, CoreOS from an OVF image and run containers from it. I feel like you and I are just stating the obvious at this point, though. Wouldn't you agree?


I would.


Yes, it is always massively disappointing with every release of Parallels Desktop for Mac that OVFs and OVAs are not supported; they blindly continue to support just their own disk format whilst pushing their "enterprise" solution - surely OVF and OVA deployments are essential in that environment???

Sigh!

This'll be the last version of Parallels that I buy (thanks Yosemite)


In theory, OVF is the 'answer' for Virtual Machines -- but its failure has been in adoption -- if you can't get Amazon and OpenStack to adopt it, what's the point?

Before Rocket/ACI there wasn't even a contender for Containers. Now there is a published spec. Start there. Iterate.


I don't disagree, but OVF is an ANSI[1] & ISO[2] standard. Like you said, Amazon & OpenStack have chosen not to adopt it.

[1] http://webstore.ansi.org/RecordDetail.aspx?sku=INCITS+469-20...

[2] http://www.iso.org/iso/home/store/catalogue_tc/catalogue_det...


Just goes to show that adopted+working is more important and useful than a published spec.


Might be worth mentioning that you are employed by Docker, if you're going to engage in this discussion.


The Docker team does this a lot, and it's part of their PR machine. They creep their way into and eventually try to steer every conversation regarding containers, especially when it can potentially be damaging to their "brand". (part of what has rubbed me the wrong way)

~~~

Frankly, shykes and other Docker employees shouldn't be commenting here. It only serves to make them look petty with any attempt at a "rebuttal" and, as shykes put it, "slinging mud". CoreOS made a grand announcement, and yes it competes with Docker... but just let it play out.

Frankly, there are a lot of things Rocket aims to do that are more appealing to me. Security is one of them, and a standardized container specification is another. If anything, it will make Docker compete better.


Actually, I appreciate that shykes and others take the time and try to explain their side of things and engage in a dialog. There's a lot of people confused right now about what's going on.


I think it's a little less scary than you think. The person who commented was a dev. Like a very devvy dev, who spends lots of time devving on Docker. He's free to express an opinion, but he probably should have mentioned who he was (I recognised his name because he devs a lot on Docker). But he's not part of the PR machine. He's a dev. A dev with a kinda ill-thought-out opinion, but a dev.


> The Docker team does this a lot, and it's part of their PR machine. They creep their way into and eventually try to steer every conversation regarding containers, especially when it can potentially be damaging to their "brand".

Can you give three examples of this happening?


If you must know, the opposite just happened. Someone who happens to work at Docker just voiced their individual opinion. He was then reminded by "the PR machine" that it is better to take the high road and refrain from answering, and let the company make an official answer. This is pretty standard communication practices, and a good way to avoid feeding trolls like you. I know this, because I myself will get in trouble for replying to you :)


> avoid feeding trolls like you.

Interesting to see you resort to calling your users "trolls" simply because they feel it's not good for you, the head of Docker, to respond off-the-cuff and angry about a PR announcement from a competitor.

> that it is better to take the high road and refrain from answering, and let the company make an official answer

Your company already released an official announcement 2+ hours ago (with much of the same rhetoric as your post here). Seems you didn't even follow your own advice.


I'm just calling you a troll, and it's for implying that a cabal of Docker employees somehow manipulates and suppresses the public conversation about containers for the profit of their employers.


    > I'm just calling you a troll, and it's for implying that 
    > a cabal of Docker employees somehow manipulates and 
    > suppresses the public conversation about containers for 
    > the profit of their employers.
Really? This strikes you as a good idea?


You came here with the explicit intent of disseminating your viewpoint that CoreOS is making a terrible decision and why your company and its ideals are better. Your company already made an official PR response; leave it at that. (And you call me a troll?)

For the first time in Docker's short history, its future and mission are being directly challenged. This is your response? (It won't be the last time Docker is directly challenged.)

Imagine if Microsoft went around rattling the cage every time Apple released some product -- it would make them look pretty petty pretty quickly. Just get out there and compete. Produce a superior product and the market will speak.


> Imagine if Microsoft went around rattling the cage every time Apple released some product

You mean like this? https://www.youtube.com/watch?v=eywi0h_Y5_U FIVE HUNDRED DOLLARS FOR A PHONE?

In all seriousness, you made a few blaming statements early on in this thread, which is the most likely reason you got the reaction you did from Solomon. I'm not opposed to people making observations, but speaking for others really has no place here!

Specifically talking about the "PR machine" comment. Say what you mean!


Well, for the sake of the argument, it did make them look bad.


You're just digging a hole here. Better to take your own advice and take the high road.


> Hi, I created Docker. I have exactly 3 things to say:

In the spirit of making lists of things to say, I've got 2.

1) Don't use Twitter for having long conversations and public fights. Just don't. No good will come out of it. Engaging in that is feeding the trolls and slinging mud, which you accuse the other party of doing.

2) Vis-a-vis "just compete!": how do you see this "competing" happening without an announcement like this? "We have created X container thingy"? OK, isn't it smart to compare to an existing container "thingy" right off the bat?

Imagine they didn't mention Docker. I can see you writing about "stealing of ideas", "lies", "not being straightforward", "this is just a Docker clone but they don't mention Docker, so they are being shady" and so on.


> 1) Don't use Twitter for having long conversations and public fights. Just don't. No good will come out of it.

I encourage you to read the twitter exchange I linked to. It predates all of this, and is not at all a fight. On the contrary, it is a constructive exchange, and I am using it to assert Docker's philosophy in a positive way.

> Vis-a-vis "just compete!". How do you see this "competing" happening without an announcement like this. "We have created X container thingy"? Ok, isn't it smart to compare to an existing container "thingy" right of the bat?

Surely it's possible to launch a competing tool without resorting to a press campaign like this one: http://techcrunch.com/2014/12/01/coreos-calls-docker-fundame...

> Imagine they didn't mention Docker. I can see you writing about "stealing of ideas", "lies", "not being straight-forward", "this is just a Docker clone by they don't mention Docker so they are being shady" and so on.

No, I would definitely not say that.


> without resorting to a press campaign like this one

So, clearly stating their concerns about the direction your company has taken, and why they feel the need to create a competing solution is bad?

The only way that article can be considered "negative" is if your opinion is that Docker, Inc are the gods of containerisation and should be considered the be-all and end-all of solutions to container based software deployments etc.


If you were trying to make sure as many people as possible paid attention to Rocket as a serious alternative to Docker, which is the current de facto standard Linux containerization scheme, well done.


An article spreading FUD about Docker's philosophy is at the top of HN. I added a comment describing the actual Docker philosophy.


You have to realize that commenting here, in this thread in particular, is not helping things... Instead of keeping your head down and letting the buzz blow over, you just made the PR that much stronger for the CoreOS POV. You should have thought about posting an article in a few days/weeks that, while not directly refuting the CoreOS post, put the Docker vision front and center and made it seem like you were the leader of the market, not just a company that was blindly reacting.

I think it's safe to say that while your comments here made you feel better, they didn't help your position at all, regardless of how valid your points are.


This is a pretty cynical point of view.

Why can't we all work something out here?


It's not cynical, it's PR 101. You shouldn't respond publicly to something until you've calmed down. (It's actually good advice overall). No good can come from posting in the heat of the moment. Hell, you had the founder of Docker calling some of the people in this thread "trolls". That's not a win for anyone.

The problem for the Docker folks was that they were making things into a much bigger deal than they otherwise were. By attacking the CoreOS announcement, both here in comments and in their blog, they only amplified the issue. Consider it a corollary of the Streisand effect.

You even alluded to this earlier in the day when you told them: "Don't do PR, just build the better thing."

There isn't anything here to "work out", and if there is something, it needed to not happen in public. People from Docker just needed to stop talking for a while and take a time out. They weren't helping their situation and didn't seem to get that.


If these projects are using 'open design processes', then the conversations do need to happen in the public.

What I felt as cynical about your post were things like:

>and made it seem like you were the leader of the market

I feel this is cynical because it's advocating not for facts and technical solutions, but arguably, willful misleading of the public. Docker should be open and honest about its software and its positions, not trying to create narratives where it 'seems' like you are something that you might not be.

>you just made the PR that much stronger for the CoreOS POV

The reason I advocated for not 'doing PR' is that, in my book, PR is an exercise in charade. Tell us what you feel, what you're working on, and why these things are good. Don't try to 'manage' appearances. If you have a problem with something, let it be known.

I think there are some things that might be able to be worked out. Docker and CoreOS/Rocket may be able to co-exist. Rocket doesn't seem to have the tools to easily produce the ACIs. Dockerfiles are widely used and pretty decent. Docker could focus on tooling while CoreOS handles execution. Both companies have contributed useful technology, and it's not exactly clear that one company can/should own the entire solution.


Docker wants to be the leader in the market for containerized deployments, right? This is largely a competition for mind-share and users. How you act in public matters. Messaging matters. If Docker wants to be perceived as the leader in containers, then they should act like it.

PR isn't just standing in front of a microphone and saying what you're working on or how awesome you are. It's how you act in public, how you treat customers and competitors. You want to be authentic, but you don't have to share everything about how you feel to the public. Similarly, overly managed responses can be just as bad. There are good and bad ways to make an argument. Sometimes, it doesn't matter if you're right or not, if the way you make your argument turns people off, you are going to lose.

I think that the whole Docker/Rocket thing was vastly blown out of proportion, and wasn't the big deal that they made it out to be. Let's see who can make the best solution. But it is a mistake to think that this was a technical issue - it wasn't. The way that the situation was handled clouded what could have been a technical discussion of the merits or need for Rocket. At the same time, don't think that the best technical solution always wins.


Your comments on this post have done more to damage my faith in Docker's philosophy than the Rocket announcement did.

Somebody highlighted concerns they have with the direction of your product. You may not agree with their opinions, but that doesn't make them FUD. They have every right to ship a product that adheres to their vision, just as you do.


Hey Solomon; honest question - skipping the tête-à-tête for a moment, the first tenet you outline:

> 1) interface to the app and developer should be standardized, and enforced ruthlessly to prevent fragmentation

Is one I've been pondering and asking myself about a bit - what does this mean?

Is the interface the API? The docker CLI? Interfaces to libcontainer?

Where does the line "enforced ruthlessly" fall exactly?

Does this mean wrapping the CLI or API in another convenience layer is a no-no if it doesn't expose the docker API directly?

I think the rest of the 13 make perfect sense, and I actually don't think the CoreOS guys were going against any of those in practice or philosophy; it's more that they wanted something small that did one thing very well.

Anyway, I love you guys and the coreos guys, so I'm only in it for the swag.


What's with all the drama? Did we read the same announcement?

What is it that we end users don't know?


Docker and CoreOS are in a pre-monetization land grab for a single market.

They've so far been approaching it from opposing corners, but CoreOS just made the first play at the opponent's territory, and it apparently rattled Docker a bit.

I am excited to have more viewpoints in play.


Yes and Pivotal (CloudFoundry) has posted a fairly supportive blog entry on Rocket. So it's not just CoreOS "making a play".

https://news.ycombinator.com/item?id=8683540


Cloud Foundry also quietly forked Docker with Warden/Diego (edit: I meant Garden, thanks kapilvt), although in that case they remained compatible with Docker images.


Clearing up some facts: Warden predates Docker; it's a container impl. Diego is something entirely different, more like Kubernetes or Mesosphere (scheduling & health, etc). Garden, the Go implementation of Warden containers, does add fs compatibility for Docker.


Your edit is incomplete. Warden predates Docker and is an independent container system. Diego is a new controller/staging/allocating/health system for part of Cloud Foundry, which is a complete PaaS, of which Warden is a low-level component.


Yeah, I'm not saying it's just a cheap shot; Rocket does a good job of addressing some real issues with Docker.

I'm optimistic that the ecosystem as a whole will benefit a lot from this, no matter how much or how little market share Rocket manages to capture.


My +1 goes to Kelsey Hightower. He posted on 7 Nov some worries which many Docker users and contributors have had since last year, when you dropped LXC containers instead of working together with the https://linuxcontainers.org/ project to get better code. That already seemed like a strategic business decision to decouple your "product value" from its mother and generator: LXC. IMO, Docker's 'new' direction completely ignored the tremendous amount of support they had from the sysadmin and devops communities.


> IMO, Docker's 'new' direction completely ignored the tremendous amount of support they had from the sysadmin and devops communities.

Kind of weird that this line from your comment is identical to a line in this comment from another user: https://news.ycombinator.com/item?id=8682864


In fact I just copy & pasted that ..., not least because I couldn't find better words to express that feeling, which I honestly hope will turn out to be wrong, because I've been on board with Docker since the early days and I'd like to see it more community driven than private business driven.


Hacker plagiarism.


@jsprogrammer yes,

cutting & pasting (hacking :) is faster if you're not a native speaker, but believe me, in Italian it wouldn't sound so gentle & polite.

Moreover, ... we all hope that "plagiarism" like that won't become a common feeling, a meme.

So what about the other 75% of my worry? That part is not a cut & paste, it's my own worry. What do you think about:

> ... Kelsey Hightower ... posted on 7 Nov some worries which many Docker users and contributors have had since last year, when you dropped LXC containers instead of working together with the https://linuxcontainers.org/ project to get better code. That already seemed like a strategic business decision to decouple your "product value" from its mother and generator: LXC.


I only meant it as a statement of fact. When all we really get from each other is pieces of text, it is viewed as 'suspicious' when someone copies something verbatim without acknowledging that they have done so. Of course, the context plays a big part in whether it's viewed like that. In a forum like this, where we are supposedly typing our own 'comments' in the conversation, it's a little strange to see a sentence like that copied in two places under two different user names. It made me consider whether the comment was made by a bot.

Is LXC upset about Docker? I'm not sure how much room there is for strategic business decisions like that. The solution is going to be a technical one, and it's probably going to go to the first one to get it 'right'. There might not be space for many companies to compete on small parts of the solution (like how to package a container).


Thanks for clarifying, @jsprogrammer. What I'd really like would be an Open Source model that is COOPERATIVE over COMPETITIVE, as it was at the beginning of the movement.

We are all here because of that disruptive vision, willing to cooperate beyond personal & corporate interests, not to compete between startups to get funding and be listed on the NASDAQ. No Linux, Docker, or even Google would be here without that enlightened vision.

That's why I don't agree with @shykes' statement number one either: "1) Competition is always good ...". No sir, not always; it depends on what you are competing for, and on whether you follow the rules of the competition too.

I'm really astonished seeing big corps like Microsoft, VMware, and others set their eyes on a relatively small but potentially disruptive project like Docker; the pressure could be misleading, even for a hacker like @shykes.

We've already seen those traps so many times ...; anyway, I think everything is gonna be fine at the end of the day, and I see @shykes on the right path already: https://news.ycombinator.com/item?id=8684119

I like Docker as a project, as well as a company. So many times I thought: "that's a company I'd really like to work for".

I'm sure that @shykes (Solomon Hykes) has the strength to find the balance between external corporate pressures and the project's wellness, and to lead this Open Source community down the right path like he has done until now.


That is difficult to achieve when you're embedded in a culture that often puts money above everything. Everyone competing (often corporations) is focused on the dollars and market share over the technical merits of the solutions.

There is a strong desire to own and control solutions. The facts are that image registries and container execution are lightweight abstractions over already existing protocols [DNS, HTTP] and technologies [Linux Containers]. There's not much to own in the space other than through having the 'best' technical solution.


Ops (particularly in Enterprise) doesn't want batteries included by default. Principles #3 and #5 are incompatible, IMO. Do one thing and do it well...

Seems to me that post-Docker 1.2, the Docker team has taken Ops concerns much less seriously and is focused almost exclusively on iterating Dev-friendly features.

Hope things change.


This is all too common in my opinion.

It feels to me like a lot of startups, and even smaller tech companies, focus completely on developers. I seriously think some people think "DevOps" literally means Developers doing what Operations/Infrastructure people/teams do (or should do).


It literally does mean that a lot of the time. It also means sysadmins bashing our horrible code and, of course, the two working together, which was the original concept.

Any term coined to remove barriers will always be co-opted by middle management to mean something else so they can put them back up. Otherwise, they'd be out of jobs and they can't have that now, can they?

;)


Sounds nice to work somewhere where developers actually recognise (even if it's only because they're forced to by access rights, structure, or what have you) that Ops actually have a clue what they're talking about.


I see a lot of words in your post, but for the life of me I can't figure out what part of the linked story you feel is "mud slinging", nor what "language" or "behaviour" you're so disappointed about. AFAIK docker has had a crappy security story from the start, by design. Perhaps Docker is now, or proceeding to, leverage more of LXC/namespaces for "proper" security -- but the argument against a monolithic daemon makes perfect sense.

Security and convenience are always at odds. I don't see it as a problem that tools lean one way or the other; it does make me a bit worried if they lean one way or the other by accident -- which is what you seem to imply with your comment. I take it you're trying to "defend" docker -- but I don't know against what, nor do I understand your arguments.

Perhaps you could take a deep breath, and try again? I'm sure you really do have something to say on the matter, that is worth reading.


I like how 'exactly 3 things' turned into two lists, one with 13 items :)


Don't be unfair, clearly the 2nd list is nested :)


IMHO Docker should go the Git way. I mean that the "core" binary provides the minimum commands needed to work, and any other command is an external executable discovered by running `docker-command-name`.
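A rough sketch of that dispatch model (a hypothetical wrapper, not Git's or Docker's actual code):

  #!/bin/sh
  # "docker foo args..." looks for an external "docker-foo" on $PATH and execs it.
  subcmd="docker-$1"
  if command -v "$subcmd" >/dev/null 2>&1; then
    shift
    exec "$subcmd" "$@"
  fi
  echo "docker: '$1' is not a docker command" >&2
  exit 1

Git does essentially this for external `git-foo` executables, which is what keeps its core small while letting anyone add commands.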


We compiled our (Codenvy's) thoughts on the Rocket vs. Docker perspectives and posted them in a blog post today: http://blog.codenvy.com/rocket-rocket-docker/

Our net conclusion is that this is good for the industry, as competition induces everyone to work a little bit harder. We anticipate that the advancements and concepts suggested by Rocket are unlikely to stand alone or see very broad adoption, but that there will be enough interest & momentum that we eventually see some sort of alignment between Docker and Rocket. That would be in the best interests of all involved; in the end, it seems like the projects could eventually merge.

While some of the tone of the initial announcement had political overtones, which were further amplified by Pivotal (James Watters certainly didn't mind fanning the flames), what this could indicate is that there were deep ideological divisions in thought within the Docker community. And instead of the parties finding common ground, the CoreOS team needed to create a new project, with PR, to draw attention to their ideas. That shows commitment and, to a degree, high certainty in their beliefs. Sometimes it requires one person taking on massive risk to fully convey the power of their position.

But there have been examples in the past of splinters that eventually get mended back into the fold. This wasn't a full fork, this was an entirely new approach. The foundations of what they are proposing are nice gap fills for Docker. So there are many more ways for alignment here than for division.


Two thoughts:

1. Competition? How can open source software be in competition with anything? It's free, its source code is there; if people want it they'll use it, if not they won't. Why would anyone care what other projects are doing or saying? Just build your tools how you want and go on with life. (Unless you're building your tools specifically to make money, in which case I guess PR and 'competition' does matter a lot)

2. On Twitter you suggested things should be 'composable to the extreme' ..... using plugins and drivers. https://www.youtube.com/watch?v=G2y8Sx4B2Sk


> How can open source software be in competition with anything?

Market share is power. Popular open-source projects can, and do, shape the industry. If you believe your trajectory is the right one for the industry, competition matters a lot.

As an example, Mozilla's Firefox was created to compete with Internet Explorer. It succeeded, and now Mozilla is working to defend the open web, so market share is still crucial for Mozilla even today.


I'm sorry, but you're incorrect. Mozilla's Firefox was originally called Phoenix, and it was created because Mozilla the browser was a dog-slow, encumbered monstrosity born of Netscape's attempt to create an all-in-one solution for the web. Firefox was essentially competing with the Mozilla Suite, but it wasn't so much "competing" as filling a necessary role: a browser that didn't suck.

Mozilla Suite was also not created to compete with Internet Explorer. In fact, Internet Explorer was created to compete with Netscape, which was the dominant browser for years until IE finally knocked it off its catbird seat. It never recovered, because IE offered a simple, fast browsing experience, even if it was terrible at actually rendering content.

In this vein, Phoenix was created in the model of Internet Explorer. So in a way you could say it competed, but in actual fact it was competing against its own progenitor.

Reflecting more on 'competition': the browser wars nearly destroyed the web as we know it as each browser introduced incompatible proprietary extensions which were then picked up (badly) by each other over time. The lack of standards, or good implementations of standards, severely hampered the adoption of more advanced technology. Firefox continues that tradition today by pushing more and more features that IE can't support; we're just lucky that Firefox is the dominant browser now, and that people are now used to upgrading their browser virtually every week.


It always makes me chuckle that Firefox adds more and more features and becomes more and more like the suite they replaced; I still miss the Composer for web pages!

I remember using it when it was called Firebird.


Huh. I was basing my comment on the knowledge that Mozilla feared IE would become the way to browse the web. I should have double checked.


Firefox founder here. You are correct and the reply comment is incorrect. Firefox was created to take on IE. Period.


I stand corrected, then. I definitely agree that by 2004 there was a huge effort to get as many people to the browser as possible, even comparing it as a better browser than IE. Still, it's interesting that IE was only ever mentioned two years after the initial release, and everyone who talked about the goals of the project were talking about the bloat of Mozilla and having a better user experience. I imagine it would have ended up much worse if the focus was competition alone.


You seem to narrow down (i.e. restrict) pretty heavily what competition can mean. Open source projects can compete even if no money is involved, e.g. on visibility and amount of help and traction they can get from the community. This is partly related to the concept of fragmentation (where some people argue that fragmentation dilutes efforts).


I don't read any mud-slinging in this post at all. What I read from it is someone (or a group of people) disappointed by Docker's current path and wanting to offer an alternative.

Docker's received a lot of funding, and so it has an interest in building a whole platform. I won't say whether that is "right" or "wrong," but it may pollute the original, simple container strategy. The goal of this seems to be to offer a pure alternative. No mud-slinging there - just a different goal than what Docker has become today.


Hey, shykes. Thanks for creating Docker (and your enthusiasm). I think HN is being extremely critical of you, but they have some valid points. Keep a thick skin -- it's tough not to look defensive even if you are right.

(I don't know enough about Rocket to make a judgement either way.)


And then?


Can't let the ecosystem fracture. Docker creates a set of standards that are badly needed. This creates value for everyone.


Docker's main focus is to "get people to agree on something", and they are doing great at getting traction and adoption. But if everyone starts to create their own flavor of containers, we still don't get portability across servers and clouds. It would be better IMHO if Rocket implemented the Docker API, or if they collaborated on creating a minimal standard. Then everyone would benefit. I'm really curious how Solomon will respond to this...


FWIW, part of the design difference is that rocket doesn't implement an API. When you do `rkt run` it is actually executing under that PID hierarchy; there is no rktd that forks the process.

This is a design goal so that you can launch a container under the control of your init system or other process management system.
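For instance (the unit and image names here are made up), on a systemd host you could supervise a container directly as a transient unit, with no daemon in between:

  # The container's processes land in this unit's own cgroup.
  systemd-run --unit=myapp rkt run https://example.com/myapp-1.0.0.aci
  systemctl status myapp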


In case you change your mind, I just created this awesome project: https://github.com/fsouza/go-rocketclient


That's really too bad, because the only way for me to spawn containers programmatically is shelling out.


Forking, not shelling out, no?


This was a key principle of LMCTFY, too, FWIW.



Thanks!


So here's my take on this. From the docs on github:

  The first step of the process, stage 0, is the actual rkt binary itself. This binary is
  in charge of doing a number of initial preparatory tasks:
  
    Generating a Container UUID
    Generating a Container Runtime Manifest
    Creating a filesystem for the container
    Setting up stage 1 and stage 2 directories in the filesystem
    Copying the stage1 binary into the container filesystem
    Fetching the specified ACIs
    Unpacking the ACIs and copying each app into the stage2 directories
Questions:

Don't all these steps seem like a lot of disk-, CPU- and system-dependency-intensive operations just to run an application?

Why is this thing written in Go when a shell script could do the same thing while being more portable and easier to hack on?

Why are they saying this thing is composable when they just keep shoving features (like compilation, bootstrapping, configuration management, deployment, service autodiscovery, etc) into a single tool?


> Don't all these steps seem like a lot of disk-, CPU- and system-dependency-intensive operations just to run an application?

I'm not sure I follow. At least compared to using Docker it doesn't seem much different at all in terms of overhead.

> Why is this thing written in Go when a shell script could do the same thing while being more portable and easier to hack on?

Go runs on more platforms than Linux containers, so I don't think Go is going to be a limiting factor. If you think shell script programming is going to lead to more robust and efficient software... ;-)

> Why are they saying this thing is composable when they just keep shoving features (like compilation, bootstrapping, configuration management, deployment, service autodiscovery, etc) into a single tool?

It isn't a single tool. They've architected it so that the different components are quite separable; the ACI in particular is really, really separable from the rest.


Docker has responded on their blog. https://news.ycombinator.com/item?id=8683276


I don't see any mud slinging.

I've used Docker. And I am looking forward to Rocket. I will use both and I will compare without prejudice.

I personally like the idea of Rocket and am looking forward to more blog posts comparing the two!


As a heavy user of CoreOS and docker, I'm interested to see how this plays out.

My problems with docker have been the security model, for which the only recourse I've had is to use the USER keyword in my Dockerfiles. Furthermore, networking has been a pain point, which I've had to resolve by using host networking to access interfaces.
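Concretely, those workarounds look roughly like this (image name hypothetical):

  # Drop root inside the container (same effect as USER in the Dockerfile)
  # and share the host's network namespace instead of Docker's bridge.
  docker run --user nobody --net=host example/myapp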

Let's see how rocket deals with these issues and others. I pay for CoreOS support, so I'm glad to see that they're addressing this.


Hmm, I've played around with CoreOS for the past few weeks; it was nice, and I'm getting the hang of it. What is constantly difficult, though, is that there is no cross-linking of containers (a mysql database accessible at user@172.ip.add.r while the Nginx/PHP-fpm container is looking for a specific mysql IP addr). Restarting containers from images changes both IPs. Not handy. Why not always share a common /etc/hosts, listing all current containers (given name with current IP addr), with them?
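(A partial workaround that exists today is Docker's container links, which inject the peer's current address into the linking container's /etc/hosts and environment; a hedged sketch with made-up names:

  docker run -d --name db mysql
  docker run -d --link db:db my/nginx-fpm   # "db" now resolves inside this container

Though this has to be set up again whenever the containers are recreated.)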

I was also having some issues with php5-fpm in a Docker container; it doesn't seem designed for it (it gets the file paths communicated from Nginx, not the files, so the containers need to sync files).

Somehow I thought CoreOS and Docker would be figuring this out together. I hope the knowledge I now have will remain relevant; I was planning a hosting service for sports clubs based on Drupal 8.

Ah well, we are at the beginning of an era; I should have expected this. I'm very curious. Who knows, the container space is far from filled; we'll be seeing many distros. There will be Gentoos, there will be Ubuntus. It's going to be nice.


> I was also having some issues with php5-fpm in a Docker container; it doesn't seem designed for it (it gets the file paths communicated from Nginx, not the files, so the containers need to sync files).

The volume that your site code is on needs to be linked to the php-fpm container. Typically you would host this volume on a data container and use --volumes-from $ctid when starting the php-fpm container.
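A sketch of that pattern (names and paths are illustrative):

  # A "data container" exists only to own the volume:
  docker run --name sitedata -v /srv/site busybox true
  # Both php-fpm and Nginx mount the same volume, so the paths line up:
  docker run -d --volumes-from sitedata --name fpm my/php-fpm
  docker run -d --volumes-from sitedata --link fpm:fpm my/nginx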


Has libcontainer[1] been considered as a minimal Docker alternative?

[1] https://github.com/docker/libcontainer


it's a very exciting time for Linux Containers. it's been fun to watch the evolution from BSD jails to lxc to docker, but the rate of innovation and usefulness is certainly accelerating. it sure seems like rocket's approach will be much less of a black box than docker images/registry, which should make it much more approachable to people trying to understand what linux containers are all about.


Improving the security model of docker is mentioned. Docker is known to be currently unsafe for running untrusted containers. Does anyone know yet if Rocket plans to support running untrusted containers safely, à la sandstorm.io?


Unlikely. Doing that requires a willingness to break things (disabling vast swaths of the kernel API in order to reduce attack surface). Sandstorm is fine with breaking things because Sandstorm is all about rethinking the platform and that means apps already need to be tweaked in a number of ways (see: https://blog.sandstorm.io/news/2014-08-19-why-not-run-docker...). Docker and Rocket are very much designed to provide "Standard Linux" inside their containers, and be able to run standard Linux applications.

It looks like Rocket actually intends to be more conservative than Docker:

"Additionally, in the past few weeks Docker has demonstrated that it is on a path to include many facilities beyond basic container management, turning it into a complex platform. Our primary users have existing platforms that they want to integrate containers with. We need to fill the gap for companies that just want a way to securely and portably run a container."

So it's actually moving in the opposite direction, compared to Sandstorm.

(You of course know this already, but disclosure for others reading: I'm the lead dev of Sandstorm.)


kentonv ftw


That's open source. The early implementation of an idea is broken. Someone creates an alternative which fixes the problems. The alternative often doesn't gain the same traction, and the original continues as the broken dominant implementation. But the alternative is also broken, maybe just in different ways. As design decisions pile on, the brokenness spreads. In the end, we again learn that software sucks. It will always suck. For people who don't like reinventing the wheel (or relearning the reinvention): stick with the "good enough" and focus on building cool stuff.


This may be a noob question,

I'm looking into using containers for UI applications. I need to access the GPU from within the application. Is this doable with Rocket or Docker?

Also does Rocket have to be used with CoreOS?


Have a look at this container [1] I put together for accessing GPU instances on AWS via Docker. Runs various compute tasks including multiple containers against a single GPU without issue.

From the looks of your other comments in this tangent it might be exactly what you need or a starting point at least.

It's a base for these BOINC [2] and F@H [3] containers.

1: https://registry.hub.docker.com/u/ozzyjohnson/cuda/

2: https://registry.hub.docker.com/u/ozzyjohnson/boinc-gpu/

3: https://registry.hub.docker.com/u/ozzyjohnson/cuda-fah/


Thank you very much! This is really useful information. Aside from CUDA, I also want to make EGL/OpenGL work with Docker; hopefully I can find examples for that.



Thank you very much, I will take a look. But the fact that this is tied to Gnome worries me. I actually need a console application with GPU access.


Ah, in that case Docker may be a better choice. You can (probably?) use volumes to expose /dev/drm and such into the container.


Certainly. The kernel can simply pass through the device, although you lose some of the security of containerization that way. There may be issues with multiple containers sharing the same GPU though.
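For example, Docker's --device flag can pass GPU nodes through (device paths vary by driver, and the image name is made up, so treat this as a sketch):

  docker run --device /dev/nvidia0 --device /dev/nvidiactl example/cuda-app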


I indeed need multiple containers to share the same GPU. :(


No, Rocket does not require CoreOS, just Linux. See: https://github.com/coreos/rocket#trying-out-rocket


I'm interested: why do you need a container for a UI application? It would be better for your users if it could run as a simple process.


I actually need the GPU, not the UI. I need it to do scientific computation. Video streaming is another use case; GPUs have better video encoding capabilities.

I previously heard that docker has trouble loading device drivers.


Not the parent poster, but needing GPU isn't necessarily the same as having UI. You can use GPU for a variety of general purpose math (Example: mining bitcoins, or doing stuff like Folding@Home), or for offline rendering.


Yes, I understand offline rendering. I'm looking into EGL off-screen rendering, but for historical reasons the current GPU drivers (NVIDIA) need an X server.


This looks very interesting - it'll be really useful to have something like Docker that isn't so monolithic - it should be much more composable in new ways.


How will App Container Images be built? I'm guessing that unlike Docker, the standard App Container build tool(s), if any, will be separate from Rocket.


Right now there is an `actool build` subcommand that will build an ACI given a root filesystem. That tool is used to build the validation ACIs and the etcd ACI. It is rough right now and we will make it simpler to use over time; and as rkt gets better, people will be able to run the build tool from inside a container, given source code.


Nice. It occurs to me that since an ACI is just a tarball, the build process is decoupled from the runtime engine, unlike in Docker. I've found the Docker build process to be unsuitable for creating minimal images (though I've read that nested builds plus layer squashing will fix this). It'll be interesting to watch the exploration of different build tools and processes that Rocket's decoupled approach will enable, if it catches on.


> Nice. It occurs to me that since an ACI is just a tarball, the build process is decoupled from the runtime engine, unlike in Docker.

Yep, this is _exactly_ one of our design goals. ACIs are trivially buildable and inspectable with standard Unix tools.
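A quick illustration (the file name is made up; per the spec, an ACI is a tarball carrying a manifest plus a rootfs/ tree):

  tar -tf myapp.aci | head     # list the image's contents
  tar -xf myapp.aci manifest   # pull out just the image manifest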


Docker can import any tarball as a rootfs for a container, essentially allowing you to use whatever build tool you want.

Dockerfiles/`docker build` is an implementation of a build system which uses the docker engine to make said rootfs.
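For example (the build tool and image name here are arbitrary):

  # Build a root filesystem with any tool you like, then hand Docker the tarball:
  debootstrap jessie rootfs
  tar -C rootfs -c . | docker import - example/minimal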


Yes, but the actual container image that is being distributed can only be created by Docker. The ability to import is nice, but irrelevant here.


Docker already supports alternative build systems via docker import.

Realistically, if the stack is broken into a dozen pieces then somebody will create a bundle with sensible defaults (let's call it "CoreOS") and then we'll be back in the same situation.


Every open source project starts off so well, then the "founders" decide they want to be gazillionaires, and it's all downhill from there.

Sad.


The vast majority of open source projects, even if you just take the popular ones, never end up being companies run by "founders" who want to be gazillionaires.


Out of curiosity (as I haven't been using virtualized servers or anything for a number of years, and used to use ESXi on the racks back then, for Windows + Linux), is Docker that widely used?

Reading up on it, I can't see how it is massively different from OpenVZ. Given Docker's youth, is anyone still using OpenVZ over it? And why? I'm interested.


The underlying software CoreOS relies on is a tightly coupled, implementation-defined API, so arguing that Docker isn't following the "Unix philosophy" is hilarious. I won't touch CoreOS due to this. I also won't touch Docker, due to its NIH syndrome of reinventing things, poorly.


I fail to see how Rocket is going to end any better than Docker.

It's already tied to systemd-nspawn (though arguably you could make this pluggable to support other process babysitters).

In fact, Rocket as it stands is just a wrapper around systemd-nspawn and little else.

They harp on about this new ACI format, but it isn't really anything new, and it fails to solve the problem that currently faces the Docker format: having a sufficient amount of metadata to properly solve the clustered-application and networking problems.

I am all for things that do one thing and do them well, but right now Rocket is just systemd-nspawn, which is just a more platform-specific LXC in my opinion.

Note: I don't necessarily agree with everything Docker is doing either, I just don't think Rocket is a productive way to fix it.


Forget the interpersonal back-and-forth. My suspicion is that this is largely because CoreOS (the company) does not want their product completely dependent on another for-profit company's platform (Docker). It's just smart business.


I'm all for a new container runtime if it lets me start containers as a non-root user. Allowing non-root users to start containers would open up a whole new level of applications, particularly on multi-tenant HPC-style clusters.


This would only be possible on very new Linux kernels (which provide user namespacing).
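A quick way to see the kernel feature in question (needs a recent kernel and util-linux):

  # Unprivileged user namespace: uid 0 inside, your normal uid outside.
  unshare --user --map-root-user sh -c id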


Yes, finally! I've been working around this by making users inside the containers that people launch jobs on, but it would be much easier if they could do it individually in their own namespace (not using a docker group either).


I'm not saying that Rocket will support this, I just hope it does! I really want users to be able to spawn a container themselves without requiring special privileges.


I wonder if Ubuntu LXD will participate in this?


LXD is another competitor to Docker, as I understand it, so it will participate in the fight, for sure.


Interesting what the CoreOS team is building. If the code becomes as neat as some of the main parts of CoreOS, then this alone merits attention; we cannot have too much security.


The first thing that popped into my mind when I read this is http://xkcd.com/927/


Great, now people who were supposed to be living and working together are going to be at odds with one another, the casualty being the end user. Also, Windows is taking this platforming thing under consideration too. Given their reach and funding, I think it would be smart to band together so it does not turn out like it did in July 1993.


awesome! this sounds like a great philosophical fork of docker, I'm excited to see this grow.


>>> While we disagree with some of the arguments and questionable rhetoric and timing of the Rocket announcement, we hope that we can all continue to be guided by what is best for users and developers. >>>

What does "timing" of the announcement mean?


It's a well-orchestrated PR effort just days before DockerCon. The goal is to get it into the press's short-term memory, so they mention it on their own in coverage of Docker's press announcements later in the week. It's corporate PR 101 - pretty aggressive, but effective.

