
Talk of a Split from Docker - iamthemuffinman
http://thenewstack.io/docker-fork-talk-split-now-table/
======
csears
I doubt they would ever consider it, but I think Docker Inc's best move would
be to push reset for Docker 2.0:

\- Fully embrace Kubernetes for orchestration

\- Drop Swarm

\- Roll Docker Engine back to its pre-1.12 scope

\- Get on board with standardizing the image format, now

\- Stop fighting Google and instead let them help you succeed

A Docker distro of Kubernetes would do very well in enterprise on-prem or
private cloud environments. They already have a great developer experience.
Companies will pay for support on both.

Continuing to oppose Kubernetes risks damaging the significant brand equity
they've accrued as containers in production become mainstream.

~~~
felixgallo
Unfortunately there is no way to justify a unicorn valuation if all that you
have as an asset is an open source file format.

Consider Vagrant -- like Docker, it's essentially a wrapper around other
people's facilities and libraries. Mitchell and co appear to be trying to
escape the trap by moving into nearby areas of real value, but there's no
meaningful way to monetize Vagrant no matter how popular it is.

~~~
djsumdog
Do you remember when open source meant people just working on stuff for fun
and trying to provide free as in speech alternatives to pay crap?

I feel like this is just another step into the corporate OSS world, which is a
far departure from what a lot of the original OSS architects envisioned.

Trying to monetize OSS often leads to rushed features and bad decisions like
in most traditional non-OSS/enterprise products. If you strip off the polish
and the fancy website, you should still have a usable, well documented thing
that people need. That may also be a product you can sell support for or host,
but it doesn't have to be.

Unfortunately, there aren't as many grants and not as much developer free time
to go around anymore.

~~~
bluejekyll
I'm not sure OSS ever meant that:

 _Many people believe that the spirit of the GNU Project is that you should
not charge money for distributing copies of software, or that you should
charge as little as possible—just enough to cover the cost. This is a
misunderstanding.

Actually, we encourage people who redistribute free software to charge as much
as they wish or can. If a license does not permit users to make copies and
sell them, it is a nonfree license. If this seems surprising to you, please
read on._

from:

[https://www.gnu.org/philosophy/selling.en.html](https://www.gnu.org/philosophy/selling.en.html)

~~~
dragonwriter
> Actually, we encourage people who redistribute free software to charge as
> much as they wish or can.

What they don't mention is that, when every recipient of the software is free
to redistribute it and compete against you, as much as you can rapidly becomes
zero (or, at most, just enough to cover the cost of the most-efficient
distributor), since free competition drives prices down to the marginal
production cost.

~~~
cyphar
Except of course, that the business model of free software is generally
support-license based. Yes, someone can take your software and provide support
instead of you but they usually are not as experienced as you with what you
created. If they are, you should hire them.

Companies make money charging for free software all the time, and I really
wish we would stop having to go through this discussion every time that free
software shows up as a point. Companies have been making money from free
software for more than 20-30 years at this point, and none of it required
taking away the freedom of their end-users.

~~~
dragonwriter
> Except of course, that the business model of free software is generally
> support-license based

It's _charging for support_ (not support-license, since the support contract
isn't a license when the software is actually delivered on a Free license)
because that's an alternative to _charging for software_ , because,
practically, you can't charge for Free software, for the reason discussed in
GP.

> Companies make money charging for free software all the time

No, they don't. They often make money charging for ancillary services related
to Free software, which their involvement in contributing to the Free software
may have positioned them to provide at an advantage to competitors, but not
many make money charging for Free software.

~~~
cyphar
> Its charging for support (not support-license since the support contract
> isn't a license when the software is actually delivered on a Free license)
> because that's an alternative to charging for software, because,
> practically, you can't charge for Free software, for the reason discussed in
> GP.

The industry calls them support licenses (since generally you only support X
machines running your software -- though of course you can have alternative
models), so that's what I'm going to call them.

As for not being able to charge for Free Software, it is true that unless you
have some value-add (preinstalling or burning to physical media) you're going
to have trouble selling the bits that make up a piece of software. But then
again, why is that model taken to be the "right model", with the free software
model being the "alternative"? In fact, many proprietary companies have the
same model (Oracle will charge you for support too). How is it a better model
that you buy a set of bits and that's all you get (no promise of updates, no
support if something breaks, nothing other than the 1s and 0s that make up
version X.Y of the binaries you bought)? In fact, I'm having trouble thinking
of which companies have such a model, because it's so user-hostile (even for
proprietary software).

------
brudgers
I'm not saying those looking to fork Docker are wrong. I don't think that they
are. But I think Docker's approach to Swarm is more _useful_ than the roadmap
that those organizations considering a fork wish to pursue.

Kubernetes, Mesos, etc. appear to be great orchestration tools for an
organization with a few [or many] engineers dedicated to operations. They're
not so great for a small team [or individuals] who are just trying to deploy
some software.

As I see it, Swarm seeks to solve orchestration analogously to the way Docker
seeks to solve containers. Before Docker, LXC was around and the Googles of
the world had the engineer-years on staff to make containers work. Docker came
along and improved deployment for the ordinary CRUD-on-Rails developer who
just wants to go home at night without worrying about the pager going off.

To put it another way, it looks to me like the intent of Swarm is to provide
container orchestration for people who don't run a data center. Like Docker,
it is an improvement for those scaling up toward clusters not down from the
cloud.

None of which is to say that moving fast with Swarm isn't a business strategy
at Docker. There's a whole lotta' hole in the container market, and part of
that is because the other organizations currently supporting development of
container orchestration tools have business interests at a much larger
scale...Google doesn't see a business case for pushing Kubernetes toward the
Gmail end of the ease-of-use spectrum.

The desire to fork is based on the needs of the cathedral not those in the
bazaar.

~~~
kozikow
People in the bazaar who just want to deploy the CRUD web app you mentioned
shouldn't be getting into containers/orchestration in the first place.
Just go for some PaaS, or rent a server or two and write shell or Python
scripts to set them up, and save yourself all the hassle.

~~~
knz
The benefits of containerization are not limited to production deployment.
Docker makes it trivial to run the exact same container in QA/dev as well -
it's a 10 line Dockerfile and three commands (create, build, run) vs having to
build custom shell/python scripts. Don't underestimate how useful that is,
especially in smaller offices where you don't have the staff dedicated to
automating everything. Small shops also benefit from the ephemeral nature
of containers - redeploying a container is a lot quicker and easier than
redeploying an entire VM/physical server. PaaS isn't without its own issues
either (you have to learn the AWS/Google/OpenShift/Azure/Heroku way of doing
things) and can be cost-prohibitive.
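
The "10 line Dockerfile and three commands" workflow can be sketched roughly
like this (the base image, port, and file names are illustrative, not from
the thread):

```shell
# A rough sketch of the dev/QA/prod parity workflow described above.
# Base image, port, and file names are invented for illustration.
mkdir -p /tmp/demo-app
cat > /tmp/demo-app/Dockerfile <<'EOF'
FROM python:3-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
EOF

# The same steps then work identically on a laptop, in QA, or in prod
# (they need a running Docker daemon, so they're shown commented out):
# docker build -t demo-app /tmp/demo-app       # build the image
# docker run --rm -p 8000:8000 demo-app        # run the container
```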

~~~
bryanlarsen
What does Docker bring to this? This is basically the argument "use your
production deployment mechanism to create your development environment". There
are lots of ways other than Docker to do this, Vagrant being one of the most
prominent.

~~~
brudgers
One of the differences between a containerized deployment and scripted
provisioning with tools like Vagrant is the state of a node following a
deployment failure.

Deployment of a container to a node is roughly an atomic transaction. If it
fails due to a network partition or server crash, etc., the container can just
be deployed to the node again. By comparison, a partially executed
provisioning script can leave the target node in an arbitrary state, and what
needs to happen next to bring the node online depends on when and how the
deployment script failed, the nature of any partial deployment...and
whatever state the server was in prior to deployment.
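
The difference can be illustrated without Docker at all - a toy sketch (the
paths and "packages" are made up) contrasting an in-place script that dies
midway with an all-or-nothing swap:

```shell
# Toy contrast: in-place provisioning vs. an atomic-style deploy.
# Paths and "packages" are invented for illustration.
rm -rf /tmp/node-inplace /tmp/node-atomic /tmp/stage

# In-place script: step 1 succeeds, step 2 "crashes" -> node left half-configured.
mkdir -p /tmp/node-inplace
echo "pkg-a" > /tmp/node-inplace/installed
false && echo "pkg-b" >> /tmp/node-inplace/installed  # simulated crash; never runs
echo "in-place node: $(cat /tmp/node-inplace/installed)"

# Atomic-style deploy: build the complete state aside, then one visible swap.
# A failure before the mv leaves the node exactly as it was before.
mkdir -p /tmp/stage
echo "pkg-a" >  /tmp/stage/installed
echo "pkg-b" >> /tmp/stage/installed
mv /tmp/stage /tmp/node-atomic
echo "atomic node: $(tr '\n' ' ' < /tmp/node-atomic/installed)"
```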

~~~
bryanlarsen
That's a reason why Docker is great for production deployment. If you don't
use Docker for production your production deployment scripts have to deal with
that.

But if you use your production deployment scripts for development deployments,
then that problem has been dealt with one way or another.

------
valarauca1
Makes sense.

Since the late '90s/early '00s, when Linux _won_ the data center, most people
have really come to enjoy the _we don't break userland_ motto.

Once a kernel interface went live, it stayed that way, ugly spots and all.
Containers are starting to become a fairly important part of IT/cloud
infrastructure - easily comparable to the OS itself. Logically, those involved
with maintenance would demand the same.

Yes, I'm aware Docker is more of a control program for interacting with
cgroups, setting quotas, installing packages, and isolating processes - not
the OS itself. It is an abstraction over the OS, hence for most developers it
feels like part of the OS. So logically they'd demand it be as robust as the OS.

~~~
jwr
I think this is part of a more general trend of companies trying to use
software that definitely isn't ready for prime time. Sure, it's trendy and
being hyped, but those are not sufficient reasons to use it in production.

These days some new technologies become "trendy" (on HN and elsewhere) and
start being used by developers. That in itself is great, but many of those
developers are also young and inexperienced and do not realize that production
systems have different requirements.

There are many symptoms: the docker situation, npm (need I say more),
libraries like Semantic UI that can't be built in a CI environment from the
command line (require user interaction). Even small things, like tools that
change their behavior based on files in one of the _parent_ directories (npm
again, but not only), or the proliferation of fancy progress bars and useless
drivel being spewed to the terminals, are symptoms. Those are tools designed
by developers working on their laptops, for developers working on their
laptops, and do not (at this stage) fit the requirements of a production
server environment.

Maturity and stability are valuable traits in software, especially in larger
systems.

~~~
pjmlp
That is actually what I like about being a senior developer in enterprise
consultancy.

We see these fads come and go, wait for the dust to settle, and use what is
actually mature enough to be deployed in a real production environment.

It also helps that our customers don't sell software; as such, their view of
what a technology stack is supposed to look like is quite different from what
the typical Starbucks developer thinks about software.

~~~
valarauca1
>It also helps that our customers don't sell software; as such, their view of
what a technology stack is supposed to look like is quite different from what
the typical Starbucks developer thinks about software.

Selling software to people who _use_ software vs those who _develop_ software
is a vastly different experience. As are the questions and rigor they'll put a
consultancy agency through, and what they expect out of a product.

>We see these fads come and go, wait for the dust to settle, and use what is
actually mature enough to be deployed in a real production environment.

Yup, nothing like Oracle SQL + a Java app driven by Tomcat or Spring. Hell,
you can even get an Oracle rep to appear in your sales pitch. The suits really
like that.

~~~
pjmlp
One of the customers I was dealing with recently still requires .NET 4.0 due
to XP support.

------
raesene6
I'm not really sure why the proponents of this split don't just put their
efforts into improving one of the alternatives which are already available
(e.g. rkt). A split would seem to be a bad outcome all round (confusion in the
market, divided resources, duplicated features), whereas competing products
might bring out the best in each other.

Also it does seem a little odd to me to see people suggesting that Docker
needs to be more stable and "boring" (from this article
[https://medium.com/@bob_48171/an-ode-to-boring-creating-
open...](https://medium.com/@bob_48171/an-ode-to-boring-creating-open-and-
stable-container-world-4a7a39971443#.gux8c0bx2) referenced in the main link)
to fit in with other projects in this space like Kubernetes, when it seems
that most/all of these projects including Kubernetes are moving as fast as
each other...

~~~
alauda
I don't think rkt is a valid candidate. People are already used to the basics
of the Docker commands. There is no reason to change to something else. If rkt
decides to reproduce the feature set, how is that different from a fork?

~~~
creshal
> People are already used to the basics of the Docker commands.

The tiny amount of existing early adopters can be made to adopt something else
one more time.

~~~
lmickh
Not to mention that they constantly break the Docker API and interactions.
Knowing the commands today is not helpful in the future. Once you get past
"docker run <image>", so much has changed in the last couple of years that it
is pointless to hold that up as a virtue.

------
CSDude
Docker's mistake is bundling Swarm and throwing away regular docker-compose
with services. The bigger mistake was presenting them as if they worked
perfectly: for the sake of a badly timed DockerCon (seriously, why the hell do
we need 6-month-spaced DockerCons?) they released something that was
incomplete and fundamentally different from the previous way of running
containers. I feel their urge to monetize, but events like this really leave
bad memories. By the way, does anyone remember the service command that was
introduced in ~1.8 or 1.9 and just vanished? It was a similar mess.

~~~
bogomipz
Wait did they throw away docker-compose with services in this latest release?
I'm not totally up on this release obviously.

~~~
CSDude
Not actually, but Swarm services with the Docker application bundle (dab) seem
like a replacement for docker-compose and make it redundant. All they had to
do was promote Compose from a CLI tool into the Docker daemon itself and it
would be great - not a major fundamental change. Now I cannot even get the
logs of a service because it is failing too fast and I can't catch up with it.

------
jondubois
That sounds like a bad idea to me. I do agree that Docker rushed things a bit
with Swarm (on the orchestration front) but I think that they're doing an
excellent job with the containers themselves.

I don't think a fork will help - I think a fork would make sense if there were
concerns about Docker's level of 'openness' but I think that's not the problem
here.

Forking/duplicating a technology whose main premise is being "a single
consistent environment for running apps" sounds like a contradiction to me.

~~~
sjellis
There are concerns at every level about the decisions coming from Docker, Inc.
and there have been consistent issues for a long time. Swarm is just the
latest issue, and arguably the worst, since none of the key decisions are
defensible - Why is it there in the core product? Why was it added in a point
release? Why was it included at all when it is clearly not stable? Did any
major contributor outside Docker, Inc. have any input into this?

Ultimately, forking is what happens when the contributors can't work with the
maintainer, and it seems pretty clear that Docker, Inc. can't act as
responsible maintainers.

------
Philipp__
Every time I think of Docker recently, the image of a huge container-ship
accident shows up in my head. This thing needs to be standardized, and things
are not going that way at the moment.

------
kharms
This seems rather manufactured. In the span of days, we've seen article after
article coming out and condemning docker, advising kubernetes, and now this
fork. Who benefits?

~~~
mentat
People who want a stable container orchestration engine and through that, the
users.

------
kozikow
Imagine there were a "docker engine for running in production" outside the
control of Docker Inc., with backing from the rest of the orchestration
industry. It could be seen as a tool for compelling Docker Inc. to play more
nicely, rather than a total fork.

\- Developers interested only in the Docker engine would consider using it for
higher reliability, fewer breaking and rushed changes, and no force-bundling
of things they don't want.

\- If enough developers were using it, Docker Inc. would be compelled to
maintain compatibility to avoid "mainstream" Docker ending up as a tool only
for dev/CI.

------
awinder
Beyond API flux, one thing Docker could really focus on to alleviate a lot of
pain would be some more stable / sane version management. If you need to break
the API to bring in new features, so be it, but an older API client should
still be able to interface with newer versions through backwards API support.
When Docker was new there was maybe a case to be made for stricter version
matching, but at this point it's just a sign of immaturity that new versions
of Docker upset the rest of the ecosystem so greatly.

~~~
cyphar
I'm fairly sure that old Docker clients can communicate with new Docker
servers (the docker.sock API is versioned and is backwards compatible). If
that doesn't work, you should file a bug (there's lots of code inside the API
handlers that deals with setting older API version defaults).
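
For what it's worth, the pinning can be sketched like this (the v1.21 number
and socket path are just illustrative, not a recommendation):

```shell
# Sketch of pinning an older Engine API version against a newer daemon.
# The v1.21 number and the socket path are illustrative.
API_VERSION=1.21
SOCK=/var/run/docker.sock
ENDPOINT="http://localhost/v${API_VERSION}/containers/json"

# With a live daemon (commented out here, since one may not be available):
# curl --unix-socket "$SOCK" "$ENDPOINT"            # old-style request, new daemon
# DOCKER_API_VERSION=$API_VERSION docker version    # same pin for the stock CLI

echo "would GET $ENDPOINT" | tee /tmp/api-pin-demo.log
```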

------
ThePhysicist
While Docker is probably not the last word in containerization technology
(which is a good thing), the idea behind it is quite powerful: Small,
lightweight, self-contained objects that perform a given function and that we
can plug together in many ways. I think the impact of having something like
this will not be limited to traditional DevOps but will permeate many other
areas as well, like data analysis and the delivery of end-user applications.

~~~
metamet
> Small, lightweight, self-contained objects that perform a given function and
> that we can plug together in many ways.

So Unix philosophy?

~~~
technofiend
You bring up a really good point - that is the _UNIX_ philosophy, but UNIX
didn't win the datacenter wars: _Linux_ did. And Linux doesn't entirely share
that philosophy; depending on the Linux release it can be the polar opposite;
piling on feature after feature into a monolithic block like systemd.

That's not an indictment of systemd... it's just an example of how I'm not
sure everyone has glommed on to the fact that just as GNU isn't UNIX, neither
is Linux.

~~~
felixgallo
systemd is a really recent development. Linux won the datacenter wars well
before any of the recent desktop stuff started encroaching, and it did it on
the back of being a free Unix clone.

~~~
digi_owl
Yep. Linux offered a free unix that was not touched by the AT&T lawsuit and
could run on commodity hardware.

Then we had the whole dot-com bust that freed up a whole lot of hardware to
run LAMP stacks on, and things really got rolling.

------
Mizza
I feel bad for shykes. This can't have been a very fun release. He tries
really hard, and I get the feeling he takes a lot of the feedback to heart.

~~~
mentat
He's aggressive and isn't super consistent. In the end I think the conclusion
of the Twitter conversation I had with him was good, but then you look at the
rest of it and it's just aggressively negative. Conspiracies everywhere.

------
syshum
Didn't this happen like 2 years ago when CoreOS created the Rocket project?

[https://github.com/coreos/rkt/](https://github.com/coreos/rkt/)

~~~
digi_owl
When I noticed the posting mentioning CoreOS and Red Hat, I found myself
thinking that neither of them would mind seeing Docker go belly up. That's
because both are in deep with systemd, which at this point offers a straight-up
competitor to Docker.

------
Halienja
I hope rkt, runc and similar get donated to the CNCF and get a direction
there.

~~~
cyphar
runC is part of the Open Container Initiative (which is a project by the
Linux Foundation). So there wouldn't be much sense moving runC between two
different Linux Foundation projects (especially since the same person [Chris
Aniszczyk] is currently managing both projects). Also, since Kubernetes is
part of the CNCF, I'm hoping that will mean we'll get some support for adding
OCI support to Kubernetes (something that we're currently working on).

------
bootload
_" What’s happening right now, if we are not careful, will fragment the
container ecosystem, and it make it impossible for single containers to target
multiple runtimes,"_

UNIX had this problem and look how long it took before things settled. Linux
was the result. Maybe this is a good thing?

~~~
pjmlp
Have they really settled?

In the 90's we had the UNIX wars.

Nowadays we have the GNU/Linux wars.

Each distribution does its own thing and every disagreement leads to yet
another fork.

~~~
malingo
Right, so in that context who is the FreeBSD of containers?

~~~
_delirium
Illumos (formerly OpenSolaris) containers perhaps?

~~~
Annatar
Those are the ones!

------
joostdevries
Sounds like the Docker container format should be split off into an
independent foundation, because everybody wants to use it but there's no money
to be made off it. Then companies can compete on how to run Docker in
production.

~~~
gtirloni
That is/was supposed to be what OpenContainers.org (OCI) is all about.

~~~
joostdevries
Good point. I guess there's more that should be a shared commonly funded
resource: probably the core engine.

------
leetrout
Support for legacy OSes in newer releases would be wonderful but I don't know
how hard that is... There's a lot of talk about how the older kernel makes it
really difficult.

If you're using CentOS 6 you're stuck with Docker 1.7. There are a lot of
enterprise companies out there (I'm looking at you big banking) that aren't
ready to move to CentOS / RHEL 7 and trying to get stable usage out of Docker
1.7 doesn't "Just Work" in my experience.

Anyone here use rkt with CentOS 6??

------
madmax96
> The Docker orchestration capabilities are opt-in; they must be activated by
> the user. Though not opting in may lead to backward compatibility issues
> down the road.

Whoa, citation needed! Not activating Swarm features has about the same
probability of causing problems as relying on fork() to create a process.
Maybe not quite, but I don't see Docker suddenly forcing everyone into using
Swarm. It seems unreasonable to even suggest this, and a bit of a scare
tactic.

------
falcolas
I'm not developing anything against Docker except for automation tooling, and
I would kill for a stable Docker Engine; stable disk drivers, stable CLI
arguments, stable configuration file formats, a stable daemon, and so on.

That said, it's a hard problem, and I certainly don't have the time to work on
it myself; nor can my employer spare me to work on them either.

------
geggam
I am sort of curious.

1\. Who is running Docker in production?

2\. If you are running Docker in production, what sort of money are you making
with it?

------
hosh
I think a lot of these issues were already nascent when CoreOS decided to fork
Docker. People are asking for 'boring container infrastructure' \-- and it's
called rkt. I remember the brouhaha at the time, with the Docker folks pissed
at the CoreOS folks for doing so. That was Dec 2014. It looks like the way
Docker handled the 1.12 release is shifting the sentiment.

For example: I ended up writing this to ask the Docker team for more
transparency on this issue: [https://forums.docker.com/t/file-access-in-
mounted-volumes-e...](https://forums.docker.com/t/file-access-in-mounted-
volumes-extremely-slow-cpu-bound/8076/148)

And they responded with an _awesome_ reply addressing it:
[https://forums.docker.com/t/file-access-in-mounted-
volumes-e...](https://forums.docker.com/t/file-access-in-mounted-volumes-
extremely-slow-cpu-bound/8076/158) and it went a long way towards helping the
community understand the issue and what to do about it.

However, there are also threads like these asking about the same issue:

[https://forums.docker.com/t/should-docker-run-net-host-
work/...](https://forums.docker.com/t/should-docker-run-net-host-work/14215)
[https://forums.docker.com/t/access-host-not-vm-from-
inside-c...](https://forums.docker.com/t/access-host-not-vm-from-inside-
container/11747/9) [https://forums.docker.com/t/explain-networking-known-
limitat...](https://forums.docker.com/t/explain-networking-known-limitations-
explain-host/15205)

They kinda left the community in limbo here, and quietly added a line in the
documentation saying it won't happen. But without transparency, we don't
really know what's going on here.

Back then, with the rkt split, the Docker design was geared toward users so
that there was as little friction as possible. It worked all right when it was
just Docker Engine on Linux. Over time, due to differences in distros, you can
see the container abstraction leaking here and there. Generally manageable.

In the 18 months since, it's become clearer that Docker is drunk on their own
story. It seems like more and more of the leaks from the abstraction are
getting swept under the rug while they try to make a land grab for
orchestration. Yet Docker is doing it in a way that sacrifices the goodwill of
the community. It starts with the third-party vendor relationships, but as you
can see from these forum posts, it is starting to leak into the relationship
with end-users as well -- the developers.

There's still time to turn the (ahem) ship around. But a big part of what is
driving Kubernetes' success isn't that its abstractions are brilliant, but
rather that the project is very transparent and communicates what it is doing
and intending with the community. I get that Docker is trying to do that with
Docker Swarm, yet I think they missed a critical part of why and how
Kubernetes gained so much traction so quickly.

~~~
mentat
Great summary of the issues. Though I've been only tangentially involved, I
agree with all the points, especially "drunk on their story".

------
Annatar
So instead of going to working, stable alternatives to Docker like SmartOS,
people still cling to it and try everything and anything to save it!

I'll be damned if I understand why someone would continue to cling to broken
software[1] when there is a working alternative. Can someone explain this
irrationality in terms which I can understand?

[1]
[https://news.ycombinator.com/item?id=12377457](https://news.ycombinator.com/item?id=12377457)

------
twblalock
Maybe Docker should do what Ubuntu does: periodic LTS releases with a
guaranteed support timeframe. They can experiment with the newer releases, and
the LTS releases will be there for people who need stability and don't need
bleeding-edge, unproven features.

------
dmourati
This sounds like a bluff to me. "Oh, you want fast moving changes, mobility up
the stack, and centralized control? We want none of those things. Either you
soften your stance and start listening or we'll fork."

------
jaboutboul
I think this is all a push by Docker's management to get the company acquired.

------
duncanjw
See also
[https://news.ycombinator.com/item?id=12364123](https://news.ycombinator.com/item?id=12364123)

------
coding123
The real reason everyone is upset is that I can now say... goodbye kube,
goodbye mesos, goodbye coreos, you are all complicated. I'm thrilled about
Docker 1.12 and Swarm mode.

~~~
iamthemuffinman
That's not why. Good for you on using Docker 1.12 with Swarm mode, if that's
what works for you. Have fun in production.

