Docker Enterprise Edition (docker.com)
231 points by frostmatthew 143 days ago | 201 comments



My worst fears confirmed. Docker finally realized they couldn't get a piece of the orchestration pie and are resorting to making Docker a freemium product. Very sad. I wish they had gone in any of a dozen different directions other than this. I knew things were going to get bad when they pushed out their terrible new versioning scheme, which every developer had a negative reaction to on the PR. Docker was great, but now I'm really wishing there were a fork or alternative with feature parity.


There's no need to worry. `Docker CE` is the exact same Docker as before - just renamed to clarify the relationship to Docker EE.

Docker is still open-source; it still has all the same features; it has the same maintainers and contribution roles; it has the same roadmap of features. And we still welcome pull requests.

Meanwhile we have been breaking the components of Docker into standalone upstream projects: containerd, swarmkit, libnetwork etc. So there are more and more ways to use parts of Docker without being forced to use all of it. We will continue doing that.

Is there any detailed fear that you can describe? I will do my best to reassure you.


It may be only a name change today, but relabeling Docker as 'CE' opens up a world of possibilities where 'enterprise features' get shipped in the 'EE' and the 'CE' turns into watered-down freemiumware. Honestly, I don't believe that this 'clarifies' an existing relationship. DDC was the product and the locked-down engine was just a part of that product. This makes it excruciatingly clear that DDC failed to take off, and now we're going another level down to making the engine itself a product, which comes with all sorts of questions about what will really be driving the roadmap in the future. What would have been wrong with focusing on monetizing Docker Hub and leaving the tool alone, like GitHub and Git? Docker Hub doesn't look like it's been updated in ages...

I also don't see why anyone should be thrilled that this split is being made when you can only offer a 1-year support window for 'EE'. How is anyone going to explain to their manager that the cool tool they were sold on is now just the 'community version', and that if they want support they can only get it for one year at a time? Do you know how long it takes to go through the upgrade process in a large enterprise? We'll be lucky to have even six months of support left before we have to upgrade again.

Also, I feel no urge to go and create pull requests for your community edition product. If Docker were still completely free and community driven I would feel compelled to do my part, but I'm not building software for you to sell.


Seems unlikely that there would be 'enterprise features'... the value of Docker is that you can send a Dockerfile somewhere and it will work identically on each system. If they changed the feature set of Docker EE, it'd undermine the reason people use their product.


I'd guess that if enterprise features were to appear, it would be higher up in the stack than "sending a Dockerfile somewhere". Container runtimes are becoming (or have already become) a commodity, so people are now chasing a slice of the orchestration pie before everyone standardizes on Kubernetes.


If the Enterprise Edition is a superset of Community Edition functionality, then this could definitely still happen. EE users could run all CE Dockerfiles, containers, etc., but could also (for their own uses, or from enterprise vendors) run enterprise Docker containers, Dockerfiles, etc.

The situation could turn into something reminiscent of using non-RHEL/CentOS Linux in the enterprise before Ubuntu became popular. Using Debian and need drivers from a vendor? Here's an RPM! Convert it to a .deb, extract it yourself, hope it doesn't have scripts that'll break your install and that the paths work properly. Oh wait this is for a specific patched kernel that RHEL ships, now I have to go get the source and build it myself, but it only ships as patches to a kernel source tree of theirs.

Ubuntu's widespread support helped, and then the advent of VMs made it so that you could hypervisor your hardware and then not have to worry about support in your various OSes. And then vendors started shipping VM images (e.g. AeroFS), but only if you're using a supported virtualization solution (we support both ESXi and Hyper-V!). Now we have containers, and we can ship customized environments, stripped down and devoid of anything the app doesn't need, but how long until vendors start shipping those containers with assumptions about either the host or the environment/tooling that only work for the people who pay extra?


It looks like containerd will be a better fit for you than Docker. It's the core container runtime extracted into a standalone upstream project. See https://containerd.tools and https://blog.docker.com/2017/02/containerd-summit-recap-slid...


I'll detail my fears. I'm afraid investment and corporate structure, specifically of infrastructure companies, directly affects the security, reliability and trust of the infrastructure on which all of this stuff runs. I am aware maintaining shareholder value for such a company may be difficult given infrastructure (or fractional infrastructure) may be asked to remain open, transparent and trustworthy for the user's benefit.

To use a bit of a biased argument, I've been told by several infrastructure VCs that the infrastructure market is currently difficult to invest in because of the uncertainty the technology behind it has brought us. I don't trust that continued traditional investments behind producing those infrastructure offerings are a rational choice for users. However, at the end of the day, only the users can speak to that claim. I can only speak my mind on the matter.

Unfortunately, it's difficult to trust a service or software built on closed technologies, because seeing inside the service or software becomes difficult, expensive or impossible. The combination of desired outcomes (easy infrastructure) and risk bias (implied trust) is a dangerous one, because it leads to cognitive dissonance where the market must literally believe two things at once: we have to TRUST this service or software because we NEED this service or software.

I'd prefer we all work together to solve these conflicted views with "enterprise" software offerings, especially those involved in building infrastructure, but my observations say that we are more likely to not work together because of existing investment structures. Perhaps this will change over time as new models emerge. For now, I remain sceptical at best about the way we're investing and growing the infrastructure market.


I'm sorry you don't like the new versioning.

Echoing @shykes below[1], Docker had premium paid products before this launch, but we're trying to make that clearer and to simplify the product lineup.

Note that Docker CE is _just_ as good as the Docker you were using yesterday. In addition, the version lifecycle improvements are designed to get new features into Docker users' hands faster (with monthly Edge releases) and to improve maintainability by overlapping the maintenance windows of free Docker CE quarterly releases.

[1]: https://news.ycombinator.com/item?id=13774420


> Docker CE is _just_ as good as the Docker you were using yesterday

The concern is that the Docker CE we'll be using in 2020 will be missing useful features that Docker EE has, and which vendors who ship containers/Dockerfiles for their products will rely on.


> I'm sorry you don't like the new versioning.

I would note this is a non-apology, given it's stating you are "sorry" someone doesn't like a decision that was made.


Docker has always been freemium. Not sure why you'd fear that even if it weren't the case. IMO freemium OSS is a good thing, this is how "free" software gets funded.


Looks like Docker is still Docker, they just tacked on CE at the end. The enterprise edition looks like a promising (and more obvious) way to monetize.


I don't think this is 100% fair. Docker (the infrastructure) is relied on by SO MANY people and projects that I don't see it going away, even if Docker (the company) shut their doors tomorrow.

Now, it's not clear to us in the public HOW MUCH of Docker (the software project) would fall into that category. From the outside, it appears that there is a lot of community around the infrastructure, and somewhat less community around the higher layers (e.g. SwarmKit).


> Each Docker EE release is supported and maintained for one year and receives security and critical bugfixes during that period.

I thought one of the most important enterprisey things would usually be a long support timeframe. One year isn't exactly very LTS.


It's true that 1 year support is not as long as many enterprise infrastructure products. We plan to expand it over time. But we've found that it's a good fit for our current offering, and the value enterprises are getting from it.

- Keep in mind we are releasing Docker EE quarterly, and supporting every release for a year. This is attractive for enterprises who are adopting Docker in part to make their software practice more agile. They don't want to be forced to upgrade every 3 months. But they appreciate that they can. This works for Docker because it sits relatively high on the stack. If we were providing a storage appliance, or a traditional host operating system, this wouldn't make sense.

- For a company of our size and maturity (300 people, 4 year-old free product, 18 months-old paid product), earning the trust of large conservative enterprises can be hard. We do it by being honest about our abilities, conservative in what we promise, and going above and beyond to deliver on what we promised. In this case, we simply weren't confident that we could promise more than 4 simultaneous EE releases (1 year support x 4 releases per year). That might cost us sales opportunities now with more conservative buyers, but those buyers would probably have been unhappy with us anyway. We can get them later - when our product is more mature and our release and support infrastructure is more robust.

EDIT: I see other commenters confidently stating what is and isn't enterprise-ready. Remember that enterprises are, by definition, very large. There are many departments with different goals and different priorities. For some of them, Docker EE with quarterly releases and 1-year support is a good fit. For others, it's not. And that's OK.


As a relatable data point, Pivotal Cloud Foundry does not have LTS releases yet. We also release quarterly and make very frequent updates, usually from upstream CVEs, through Pivotal Network.

Enterprises want LTSes because legacy platforms are a nightmare to install, operate and upgrade. It becomes less important as the platform itself becomes less bestial.

Ultimately, enterprises want outcomes. Any given checklist item from the buying department usually represents scar tissue that may or may not still be relevant. New platforms -- Cloud Foundry, OpenShift v3, Docker EE, whichever of the thousand blooming Kubernetes offerings will succeed -- are in a position to renegotiate from first principles.

You might want to look into BOSH. It's a large part of the operability secret sauce for Cloud Foundry.

Disclosure: I work for Pivotal, we're ostensibly competitors due to the increasing overlap between Docker EE and Cloud Foundry.


At some point it might make sense to do something like Ubuntu: LTS releases (5 year support) every two years, and short term support often.


That was my thought as well. Docker not supporting a version for longer than a year is a no-go. This is not enterprise ready. It's barely dev ready because my devs need a stable environment to work in, not something that's going to puke every few weeks.


Just because something isn't guaranteed not to puke every few weeks doesn't mean that it will. I've been using Docker for a while now and it has been very rare that I run into issues because of an update.


In a large enterprise it can take 6 months to get a version certified to work with tooling, so you need 3 years of support at a minimum before it's worth putting any effort into it.


Docker's initial release was just 4 years ago; you're asking for a support timeframe that's around the same length as the age of the project itself.

Docker still is a new product, evolving very fast (this announcement is just one more proof of that). Is it production-ready? Yes, if you want to move fast and benefit from new tech and are ready to upgrade regularly. No, if you are not able to perform those upgrades and need assurance that the project will commit significant resources to fixing issues in 3-year-old versions.

And currently I totally understand that the Docker team sets the slider more towards "new dev" than towards "support". That will probably change in the future.


> Docker still is a new product, evolving very fast

And that's a large part of the problem. We, as an Enterprise, don't want something with rapid evolving infrastructure, features. We need stability, critical and security patches. Emphasis on stability. 3 years is a minimum we would consider for that and not a 1 year only support range.


> "We, as an Enterprise, don't want something with rapid evolving infrastructure, features"

That's exactly why you do not need/want Docker now. Several years ago, hypervisors like Xen were all the rage. Enterprises could not rely on them because of the lack of LTS, while smaller startups and businesses were building their infrastructure around them.

Everyone is now using those technologies, which have undergone years of testing and have massive support from vendors. Docker will reach that stage in the future, but right now it's just too soon. You are asking the Docker team to slow down the development of the technology for everyone because you don't want to deal with the consequences of using a developing product. Let Docker reach its full potential, let the community do the testing, then ask for LTS on a mature technology. At that point, there will probably be a new trend somewhere that brings startups to the next level while you get to benefit from containerisation with full vendor support.


Years ago, the hypervisor was nailed by VMware, while Xen was a joke.

Then AWS/Google/DO/others worked hard on the hypervisor and released infra that works and is enterprisey-ready... but it's a complete offering; you cannot just get their Xen package and put it on your systems (assuming it's even powered by Xen anymore), you have to use their platforms.

Docker is unstable now.

AWS/Google/RedHat/CoreOS are all working pretty hard to pull out an offering for containers that works and is enterprisey-ready. It's gonna take years to be production-ready enough for critical systems, and it's as likely to reuse Docker as to replace it with their own more stable tech.

The point is, docker is nowhere near ready in its current state. Wait and see.


I'd argue, with the benefit of infinite bias, that an enterprise offering for containers already exists: Cloud Foundry. Running in production for years.

The commercial distributions (by Pivotal, IBM, SAP most notably) are licensed to gigabuck corporations worldwide.

That said, other companies and opensource projects will absolutely move towards the value line. There's no money or glory in building blocks.

Disclosure: I work for Pivotal, the major contributor of engineering to Cloud Foundry.


IIRC it's pretty likely that AWS runs on Xen, or at least did so at some point in the past. There was some striking correlation between Xen security advisories and mass reboots of AWS nodes.


AFAIK AWS runs patched Xen. They seem to have built quite a bit on top, though, and e.g. have rewritten lots of the network stack. This also seems to be why their networking stuff would break or be very flaky regularly as of a few years ago.


You just listed a bunch of reasons why your Enterprise should not be using Docker.

EDIT: I did not realize it was per node pricing. I take back mostly everything I said below. I hope that ultimately when Docker does provide 3 years of support, you'll be willing to pay up.

Look at the pricing, even the biggest Enterprise plan is $2000/yr. This is aimed at small consultancies, startups and small businesses, etc that are tired of sending their developers into the docker github issues to log bugs and argue features.

When Docker adds a plan that satisfies your Enterprise, you'll be paying typical Enterprise prices that will be at least an order of magnitude larger than the current biggest plan. Hopefully you are able to put your money where your mouth is.


For reference here are a few case studies of enterprises using Docker EE in production:

- https://www.docker.com/customers/docker-datacenter-delivers-...

- https://www.docker.com/customers/ge-uses-docker-enable-self-...

- https://www.docker.com/customers/ing-delivers-value-customer...

There are many more here: https://www.docker.com/customers

Also, as others have noted: the price point is per-node. Docker EE is most definitely for enterprises.


Of interest, GE is a Cloud Foundry Foundation member and a founding investor in Pivotal. They have a fairly large engineering org working on their IoT-oriented version, Predix.

Disclosure: I work for Pivotal, etc.


Note that the pricing is per node.


Red Hat OpenShift / Atomic might be a good match then.


NodeJS still is a new product, evolving very fast. Is it production-ready? Yes, if you want to move fast and benefit from new tech and are ready to upgrade regularly.

...

... and NodeJS learned that if you want to be a proper production environment, you still need to have LTS. 'Being fashionable' isn't enough.

> And currently I totally understand that the Docker team sets the slider more towards "new dev" than towards "support".

Flashy features built on an unstable base are not something you want to stake a company on - support is important, it's just not sexy. New features get you new users. Support makes them stay... and even pay.


There are 2250 open issues on GitHub. We're tracking a good handful or two as nuisances to our process, but there are no showstoppers for us in dev. We're not in production yet.

https://github.com/docker/docker/issues


We had a pretty good time with CoreOS ^W Container Linux by CoreOS in that regard. They seem to be really good at providing automatic OS updates and shipping known-to-be-working combinations of kernel and Docker.


I've been working with Docker in production for about 2 years now and I have the opposite experience - not in the sense that I run into issues very frequently between updates, but it's happened often enough for us to not push Docker updates at all until we are very confident they won't break.

It's happened at least 5 times in the past 2 years, that's VERY frequent from an enterprise viewpoint.


I'd like to +1 this - one year is a tough sell in a world where 5-10 years is the norm.

Backwards compat and LTS already come up a lot with enterprise support; requiring an annual upgrade project will only complicate it.


Sorry about that. We will expand the support window over time.


FWIW, I think an 18-month support promise would be a reasonable first step. The enterprise I worked at for several years did things on an annual cycle (as its customers' interest was quite annual also). Project planning took this into account, so a year was a reasonable planning point - but things do slip, and having an extra few months of known support would be a relief factor.


That's great feedback, thank you.


On the other hand, expecting a company to promise a 5 year support window when they're less than 5 years old seems unrealistic.


Look, CE vs EE feature gating aside I think what rubs me the wrong way the most here is the abandoning of SemVer. I was following the PR where it happened and the reasoning seemed to boil down to a bunch of hand-wavey "just because". When 1.13.1 was released I installed it being pretty confident that it was only bugfixes and that's how I perceive the rest of the world to work. When I install 17.04 CE how will I have any idea of the impact on my servers vs 17.03 CE? I mean I read CHANGELOGS and stuff when I can but there's a certain level of comfort knowing that the people who create and package the software have spent enough time to figure out it's just a bunch of non-breaking bugfixes and I'm safe to send it out pretty quickly.

EDIT: I've seen somewhere that it's supposed to mirror the Ubuntu naming scheme but that's fundamentally different. I know that the "X.04 LTS" releases are stable-ish and they only come out every 2 years (right? Going off the top of my head here), which is waaaaay different than monthly releases in terms of time spent vetting the stability IMO.


Hi, in addition to what others have said in response, note that the actual version is 17.03.0. Patch releases will be 17.03.X - just like 1.12.0, the initial 1.12 release, had 6 patch releases after it, the final one being 1.12.6.
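To illustrate the scheme described above, here's a hypothetical sketch (this parser is illustrative only, not part of any Docker tooling) showing how the YY.MM.PATCH version strings compare:

```python
# Hypothetical illustration of the YY.MM.PATCH scheme described above;
# not anything Docker ships, just a sketch of how the versions order.
def parse_version(v):
    """Split a version like '17.03.1' into a tuple of ints for comparison."""
    return tuple(int(part) for part in v.split("."))

# Patch releases (17.03.X) sort within their quarterly release...
assert parse_version("17.03.0") < parse_version("17.03.1")
# ...and before the next quarterly release.
assert parse_version("17.03.1") < parse_version("17.06.0")
# The last 1.x releases also happen to sort before the new scheme.
assert parse_version("1.13.1") < parse_version("17.03.0")
```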


Docker has never used SemVer, so no, there has been no abandoning of it. People, including you, seem to have been confused by the versioning scheme into thinking we did. Other people complained that we did not. There has always been a deprecation scheme and an API compatibility scheme that is not SemVer, and this is not changing. Now at least the version numbering scheme makes it clear that this is not SemVer.


Yes, using `major.minor.patch` versioning does give a strong impression of SemVer usage.


We take backwards compatibility seriously. If you encounter problems updating from one version of Docker to the next (whether from 1.13.1 to 17.03, or from 17.03 to the upcoming 17.04), please open an issue on docker/docker so that we can fix the incompatibility and improve our change process.

Quoting from the blog post:

    The Docker API version continues to be independent of the
    Docker platform version and the API version does not
    change from Docker 1.13.1 to Docker 17.03. Even with the
    faster release pace, Docker will continue to maintain
    careful API backwards compatibility and deprecate APIs and
    features only slowly and conservatively. And Docker
    1.13 introduced improved interoperability between clients
    and servers using different API versions, including
    dynamic feature negotiation.
- https://blog.docker.com/2017/03/docker-enterprise-edition/
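The negotiation described in that quote can be modeled roughly like this (a simplified sketch, not Docker's actual implementation - the specific API version numbers here are for illustration):

```python
# Simplified model of dynamic API version negotiation as described in
# the quoted blog post; not Docker's actual implementation.
def negotiate_api_version(client_max, server_max):
    """Pick the highest API version both client and daemon support.

    Versions are (major, minor) tuples, e.g. (1, 26).
    """
    return min(client_max, server_max)

# A newer client talking to an older daemon downgrades its requests to
# the daemon's API version rather than refusing to connect.
assert negotiate_api_version((1, 26), (1, 25)) == (1, 25)
# An older client keeps using its own maximum against a newer daemon.
assert negotiate_api_version((1, 24), (1, 26)) == (1, 24)
```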


Docker takes backwards compatibility so seriously they wholesale block the client and server from communicating with each other if they differ by a single minor version.

Docker takes backwards compatibility so seriously they've released multiple versions of a docker registry all with completely new APIs.

Sorry if I don't buy it.


> Docker takes backwards compatibility so seriously they wholesale block the client and server from communicating with each other if they differ by a single minor version.

That has been fixed. Note that this limitation (although it turned out to be annoying, which is why we removed it) did not actually break reverse compatibility in the API. It just made the client excessively paranoid about reverse compatibility. In other words, the client didn't trust the stability of the daemon enough, even though the daemon in practice almost never broke compat.

> Docker takes backwards compatibility so seriously they've released multiple versions of a docker registry all with completely new APIs.

I'm not sure what you're referring to, but I will look into it. Is this still affecting you? Or is it a past problem you are still pissed off about?


With all due respect, this is exactly the attitude that will prevent enterprises from ever taking Docker seriously.

Why should enterprises trust you on backwards compatibility when longstanding issues with backwards compatibility were just fixed and then glossed over like this ("it never broke in practice because we forcibly made you update")? Docker has repeatedly made poor decisions with really poor optics both in the open source community and with their product, this is just one example, and asking enterprises to just trust you now while not even providing the support terms most of the enterprise world demands is doing the exact opposite of inspiring trust.

Do you honestly not remember sunsetting the python docker registry just a year and a half ago and then introducing a brand new golang registry product with an entirely different API? Because that's precisely what enterprises pay to avoid, they don't shell out absurd money for LTS versions to hit a constantly moving target. And please don't patronize me with "past problem", some of us lowly end users of your product had to clean up that mess just to get day to day operations working again. Forgive me if I'm gunshy.


My intention is not to dismiss your complaint, but to gather more details so I can help.

Here is a list of known past breaking changes in the Docker API. https://docs.docker.com/engine/breaking_changes/

If you have encountered a breaking change that is not on the list, could you mention it either here or on https://github.com/docker/docker.github.io ?

Some of your claims about breaking backwards compatibility above are incorrect. I am trying my best to point that out without seeming dismissive of your overall point - which I think is that Docker can do more to improve stability and backwards-compat. I agree with that point.


pdeuchler expressed skepticism about Docker's current compatibility statements based on Docker's historical compatibility practices.

Suggesting that this could be "a past problem [he's] still pissed off about" comes across as tone-deaf when the underlying issue is Docker's credibility when it comes to backwards compatibility.


In terms of client server communications, that's no longer true: https://github.com/docker/docker/pull/27745


The quarterly ("stable channel") CE releases (17.03, 17.06 and so on) are supported for 4 months, and will not get new features during that period. EE quarterly releases have a 1 year support period, and also won't get new features.

During the support period, bug fixes will get back ported to those versions and released as "patch" releases (e.g. 17.03.1).

When installing, you can choose to install either the "stable" (quarterly) channel, or the "edge" (monthly) channel.


This seems to carefully avoid making any promises about the future. Where SemVer _does_ make promises.

Also, picking a date for versioning is weird, as it doesn't contain any information other than when the changelog was set in stone. Too bad this decision was made, and that Docker chose not to value the stability of SemVer.


We most definitely WILL respect SemVer where it matters: the API versions.

Docker is a collection of many different components, exposing many different interfaces. SemVer on the Docker version number doesn't make sense, for the same reason it doesn't make sense for Ubuntu or Windows.


Cool, thanks for stressing this! I'm fine with not choosing SemVer, but a date holds no guarantees on backwards compat, nor any other useful info. But I do get that you'd like to version components the same to stress that they're meant to be used together.


If you're going to say that, you really should build your packages appropriately. E.g., right now, you have:

    $ repoquery -q --provides docker-engine-17.03.0.ce-1.el7.centos
    docker-engine = 17.03.0.ce-1.el7.centos
    docker-engine(x86-64) = 17.03.0.ce-1.el7.centos

...compare to, say, python (I've elided a few things for clarity):

    $ rpm -q --provides python
    python = 2.7.5-48.el7
    python(abi) = 2.7

If you do something like:

    Provides: docker(api) = 1.23

...this at least means that folks can depend on/install 'docker(api) > 1.23' or whatever if they actually want to get a specific semantic version.


That's good feedback, thanks.


What I don't understand is that there are now two Docker repositories for Ubuntu: apt.dockerproject.org and download.docker.com/linux/ubuntu. Will both of these continue to exist?


Yes, for now both will continue to exist. The download.docker.com/linux/ubuntu repo is the preferred method of installing docker-ce on ubuntu. Instructions here: https://docs.docker.com/engine/installation/linux/ubuntu/#in...

In the future we will phase out the apt.dockerproject.org repo.


I don't see how a freemium model solves the fundamental problem of Kubernetes eating their lunch. If anything surely this exacerbates it?


Hi, I'm the founder of Docker.

> [...] a freemium model

Docker had already adopted an enterprise subscription + freemium model, but the offering was less clear (case in point: you weren't aware of it). This clarifies and simplifies our offering, and upgrades the enterprise offering along the way.

> [...] solves the fundamental problem of Kubernetes eating their lunch

Kubernetes is a component (like containerd or swarmkit), while Docker is a platform which integrates many components (like Cloud Foundry or Openshift).

So Docker and Kubernetes are not directly competitive - Docker just happens not to use Kubernetes as its orchestration component. It uses SwarmKit, an open-source component developed in-house (https://github.com/docker/swarmkit).

A better comparison would be Docker and Openshift (which is Kubernetes-based). Is Openshift eating Docker's lunch? It certainly doesn't feel that way to me, but of course I am biased. Docker has three major advantages over Openshift: it's modular, it has better security, and it's not locked to RHEL. The main advantage of Openshift of course is that it is highly integrated into the Red Hat platform, which is appealing if you are already heavily invested in it. Openshift also benefits from the demand for a commercially supported product based on kubernetes.

But either way the market is so early, and the demand so strong, I believe there is room for more than one major container platform. In a few years when the market starts maturing, we'll see!


> Kubernetes is a component...while Docker is a platform which integrates many components...So Docker and Kubernetes are not directly competitive

I wonder whether this is a distinction that exists in your mind, as someone intimately involved in the development of the docker tool and the Docker, Inc business model, more so than in the minds of Docker users. You have a vested interest in making Docker encompass all that Docker, Inc produces. For many of us, this is actually against our interests. A lot of us want Docker to just be the base containerization layer, with other offerings (like k8s, swarm, etc) built on top of it and branded separately. Continually adding more to the docker base layer adds confusion in the minds of people who don't follow Docker closely, makes it harder to get it approved for use in our organizations and increases the security footprint that needs to be audited.

I won't speak for others, but it would make my life much easier if you'd build (and name) your offerings on top of the base containerization tool like everyone else rather than trying to stuff everything into one tool with one name. You have no idea how hard some of us have had to fight inside our organizations to simply deploy builds inside containers. Increasing the scope of what Docker means is just giving ammunition to our internal opponents.

I understand why you're doing what you're doing... there's no money in developing that base layer unless you can parlay it into selling the other parts of your platform, but just understand that what you're doing isn't really user-friendly, and many users won't pliantly go along with whatever marketing decisions you make. Like it or not, Docker is an ecosystem, not a platform. It has a life of its own that you're only partially able to shape. You have the advantage of being able to shape the roadmap for the underlying containerization layer, and the goodwill that comes from putting in the work to create that layer initially and maintain it on an ongoing basis. That should be enough without leveraging it further.


> I won't speak for others, but it would make my life much easier if you'd build (and name) your offerings on top of the base containerization tool like everyone else

We are doing exactly that. The base containerization layer is containerd, and it is now available standalone separate from Docker.

I covered this topic in more detail in another comment: https://news.ycombinator.com/item?id=13775677

I hope this helps.


I think this distinction is important for people to internalize. Docker heard the userbase that there needed to be a distinction between the "bottom half" and the "top half" of the Docker stack.

Giving away the Docker name to the bottom half would have been dumb (IMO), so extracting it as containerd makes sense.

Will that confound some people? Sure, but they will get over it. I think it was the right thing to do, but then I am known to be a big fan of layered systems. :)


Again, that's a relatively-recent branding decision on your part. It's pretty meaningless to people that have spent more than a year trying to explain to people what Docker is and why we should use it. Trying to go back to those people now to switch terminology is, at best, a massive headache and, at worst, going to get containerization efforts canceled.


I'm not sure I understand the problem. Whatever containerization project you were building on top of Docker, it is just as viable today as yesterday. No option has been removed - if anything there are now more options available to you. All new versions of Docker remain backwards-compatible with previous versions. And components of Docker which were previously not available standalone, now are.

What exactly would you like Docker to do, that we're not currently doing?


What you're missing is the ecosystem vs platform distinction. A platform is yours to control and an ecosystem has a life of its own. You're right that everything that was possible still is. But docker, as a term, developed in ways not controlled by you. It took on a meaning that's probably more limited than you'd like. And it developed momentum that can't be stopped or diverted by a marketing strategy and a press release from Docker, Inc.

You're not the first company that's had to deal with this. To me, it's similar to Google's failed attempts to keep people from using their name as a verb. You're both companies that had a wildly successful initial product that got associated with your company's name and that hindered attempts to branch out from that initial offering. But the more you try to repurpose the term to refer to your broader offering rather than the narrower base layer that it started out as, the more you create problems for those of us that have spent a lot of time and effort selling your approach within our organizations. Not everyone is sold on containerization and we (your advocates) have people that will pounce on any confusion as a way to push back.

Decision-makers in organizations are often surprisingly non-technical. Imagine if there were a company behind email. And that company wanted to make money selling add-on services like encryption and mailing list management. So it decided to call the base email layer libsmtpd and repurpose the term email to mean the broader platform offering. Now imagine you've got to explain this change to your elderly mother who's just gotten over the hump and gotten comfortable with sending email and referring to email correctly.

That's kinda the position you're putting us in.


I understand what you're saying. And you would be 100% correct if everyone shared your definition of Docker as a tiny container runtime and nothing else. But that is not at all the case. There are conflicting narratives of what Docker is and isn't. And where you see the mystical forces of brand destiny deciding which narrative will win - I have seen the kitchen where the branding sausage is made. And believe me there is nothing magical about it. Some vendors want you to believe Docker is a component, so they spend money in marketing programs and suddenly you're hearing through various channels that Docker is a component. We in turn want you to know that Docker is a product, we invest in telling you that story, and here we are.

The difference is that we invented Docker. We understand its future potential and control its design and trademark. Our competitors don't.

So I'm sorry that you have a different definition of Docker than the people who invented Docker. But just like Google decides what Google is - Docker decides what Docker is.


I'm sorry you feel that way.

For what it's worth, my notion of what Docker is comes from having run it in production since before the transition away from LXC. I ran a small team inside a relatively large company (8000 employees, $25B mkt cap) and we largely had control over our own ops. We ran into a lot of early adopter pains, but the benefits definitely outweighed the pain.

I was also in various architecture groups that made decisions for the larger products at the company. I was always honest about the drawbacks, but I pushed for limited exploratory projects using Docker to try to slowly move ops in that direction. I had a lot of opposition from ops folks that never felt the pain of the developer experience and worried that Docker was an encroachment on their fiefdom (GoT had nothing on our internal politics :-)

Slowly, we (I wasn't the only Docker supporter at the company) made progress. We set up a Quay private registry server so that, at least, teams could begin to experiment. And when I met your team at re:invent and heard that you were developing your own offering for private registry, I convinced them to switch. The first-party argument was easy to make and the company didn't really quibble about sub-7-figure software purchases.

I ended up leaving that company last year, so your current direction isn't really making my life harder, but had I stayed, it would be. If you want to be explicit in your targeting of a different market segment from the kind of early adopter that I am/was, that's fine. But don't accuse me of having my view shaped by competitors' marketing. That's a highly revisionist view of your own history. Because when I got on the Docker bandwagon, the reality absolutely matched what I'm describing. Docker was the base layer and a prefix added to related product offerings. But it wasn't a platform like you're describing. That seems like it's changed now, which would've really screwed me in trying to push Docker at my previous employer.


None of that has changed. Docker is still targeted explicitly at your use case. I believe your project would have been just fine. Docker EE was shaped by requirements very similar to your own. But I can't seem to convince you of that... Sorry if I'm not being clear. Maybe we can discuss in person one of these days :)


If Google decided to pivot to a lunch delivery service and shut their search offerings, people would still 'google' things.


Very well said!


@curun1r

Maybe you should face and accept the fact that Docker is not going in the direction you need it to, and its future plans are not aligned with your interest.

These are perfectly reasonable reasons for a company to cancel the efforts and avoid the headache.

A comment on a news post will not put it back on the "right" track. Don't invest in risky and uncertain products.


We are going in the direction he wants us to. We're just doing it under the name "containerd" and he's unhappy that we're not calling it "Docker". If he were to simply accept our choice of naming, 100% of his issues would go away.


I think Solomon has hit the nail on the head here, but I'll call it out even more starkly.

Assuming containerd is successful, and that higher-order systems like Kubernetes and Mesos use containerd directly, we will see one of two things in 12-18 months' time:

1) There are hundreds of thousands of DOCKER users

2) There are hundreds of thousands of CONTAINERD users

The split is forcing stratification (in a good way) where there previously was none. Users have to identify whether they think "docker" means "a container runtime" or "a full stack".

Of course, the pie is growing, so maybe we get both results. The real problems over the next 12-18 months are MESSAGING and DISENTANGLING. How do you get this message across to people, and can you actually change the words in the common vernacular?

As owners of the word "Docker" Solomon can define it how he likes, but that doesn't mean he can actually stop people from using it to mean something else. That's going to be a process.

c.f. "literally". Even Webster's Dictionary has given up the fight on that.


>> accept our choice of naming,

Therefore you are not going in the direction he wants you to.

The name for that is "Docker" and it has been for many years already. You can't just rename things to fit your new marketing strategy and pretend that your users won't get hurt.

If you could simply erase the memory of all inhabitants of Earth and rename all books and articles ever written, then yes, there would be no issues in renaming.


I don't understand this level of outrage. In any case being outraged doesn't make you right, and it doesn't authorize you to twist the facts. So let's compare your claim that Docker was "just a container runtime" for years, and we are suddenly changing that to "fit our marketing strategy", whatever that means. For reference here is the changelog of Docker: https://github.com/docker/docker/blob/master/CHANGELOG.md

- Docker has included a container build system since version 0.3 in May 2013. https://github.com/docker/docker/blob/master/CHANGELOG.md#03...

- Docker has included image storage and distribution since the version 0.1 in March 2013. https://github.com/docker/docker/blob/master/CHANGELOG.md#01...

- The official website has said "Docker is a platform to build, ship and run distributed applications" since 2014, and clearly featuring the collection of multiple tools forming that platform. https://web.archive.org/web/20141216011043/https://www.docke...

- Docker has included a distributed key-value store and optional multi-host networking since version 1.7 in June 2015. https://github.com/docker/docker/blob/master/CHANGELOG.md#17...

- Docker has included orchestration since 1.12 in June 2016.

- Docker has included cryptographic content trust since version 1.8 in August 2015. https://blog.docker.com/2015/08/content-trust-docker-1-8/

- Docker has included secrets management since 1.13 in February 2017. https://blog.docker.com/2017/02/docker-secrets-management/

- Docker for Mac and Windows have included a built-in hypervisor and OS since March 2016. https://blog.docker.com/2016/03/docker-for-mac-windows-beta/

- Docker for AWS and Azure have also included a built-in OS since June 2016. https://blog.docker.com/2016/06/azure-aws-beta/

- Docker started gradually factoring out its container runtime in 2013. First with libcontainer; then with runc/OCI; and most recently with containerd. https://containerd.tools

- containerd in particular has existed since 2015; and for over a year now Docker hasn't performed any container runtime task itself - it's all handed off to containerd.

I could go on. Docker has said clearly and consistently, for several years now, that it is building a platform made of a collection of tools, including but not limited to a core container runtime. You just didn't want to hear that explanation, either because you didn't like it, or because someone other than Docker (perhaps a competitor) gave you an incorrect explanation of what Docker is.

And now that the dissonance is becoming hard to ignore, you're forced to reconcile your incorrect definition of Docker with the real definition, and it makes you angry. But, like I said, being angry doesn't make you right. And it doesn't put you above having to provide evidence of your claims. So, can you provide concrete evidence that we are "hurting our users"?


"docker" has too much brand recognition to let it go even though this whole 'rename' (that's what this is) makes things much more confusing than before.


"Kubernetes is a component (like containerd or swarmkit)...Docker is a platform...(like Cloud Foundry or Openshift)"

I think that's pushing k8s farther to the left of what it really is, and pushing Docker farther to the right of what it really is.

k8s, for example, incorporates service discovery. As far as I can tell, swarmkit does not. k8s incorporates networking, containerd does not. Similar for things like ingress load balancing.

There are certainly potential customers debating, directly, k8s vs your full Docker platform, even if there are some gaps they have to fill with other software.


Especially since OpenShift uses both Docker and Kubernetes...

So Docker is like OpenShift, except OpenShift uses Docker as a component? Very confusing...


Openshift uses a very small subset of Docker. That subset is now available standalone as containerd, in part to allow Openshift to no longer depend on what is essentially a competing platform. I expect lots of constructive open-source collaboration with Red Hat on containerd.


This is why it's important to clarify what "Docker" means in context. Kube and OpenShift DO use Docker (the lower-level infrastructure parts of it, but not the higher-level platformy parts of it).

That represents the evolution of the containerd-vs-docker split, and (to me) seems totally rational.


> k8s, for example, incorporates service discovery. As far as I can tell, swarmkit does not. k8s incorporates networking, containerd does not. Similar for things like ingress load balancing.

Swarmkit does in fact implement service discovery, networking, and ingress load-balancing. It also implements out-of-the-box node security and mutual TLS, secure secrets management, a built-in raft store, infrastructure-agnostic overlay networking, and various goodies which we needed to make Docker work great out of the box.
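
For illustration, swarm's built-in service discovery and ingress load balancing can be exercised with a few CLI commands. This is a sketch assuming a single-node swarm; the network, service, and image names are arbitrary:

```shell
# Initialize a one-node swarm (this node becomes a manager,
# with mutual TLS between nodes set up automatically)
docker swarm init

# Create an overlay network for the application
docker network create --driver overlay appnet

# Create a replicated service; the ingress routing mesh publishes
# port 8080 on every node and load-balances across the 3 replicas
docker service create --name web --network appnet --replicas 3 \
  --publish 8080:80 nginx

# Other services on appnet resolve "web" by name via swarm's
# built-in DNS, getting a virtual IP that balances the replicas
docker service create --name client --network appnet alpine sleep 1d
```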

containerd is a different type of component entirely - in fact it is very complementary to kubernetes.

> There are certainly potential customers debating, directly, k8s vs your full Docker platform

They typically debate Docker vs kubernetes-based platforms (among other possible alternatives). If they're a Red Hat shop, they typically evaluate Openshift. Sometimes there's a team building an in-house platform. Nobody ever deploys kub alone in production. There is always some form of platform on top.


> Nobody ever deploys kub alone in production

That doesn't gel with any reality I know of, sorry. Lots of people use higher level platforms, or develop bespoke tooling, to be sure, but that's going to be true forever, for every platform.

(disclosure for others: I'm a Kubernetes founder)


Replying to myself: The fact that someone builds tooling on top of your offering is not an indictment of your product. It simply means people are using it.

Be worried when people STOP trying to wrap your stuff with their own opinions.


> Nobody ever deploys kub alone in production.

We do. I work at SAP, and we run our company-wide OpenStack on pure Kubernetes on CoreOS's Container Linux. [1] We do use Docker (the container runtime) because it comes with CoreOS, but no other Docker product as far as I'm aware. I've been working with Kubernetes for quite some time now, and honestly don't know what else you would need on top of it (except for some Continuous Integration tool, of course, but that's already a staple of any well-organized agile team, no matter the platform).

[1] I mean the API and orchestrator parts, not the customer VMs themselves. These sit on traditional hypervisors.



Of interest, SAP is a Platinum member of the Cloud Foundry Foundation and has a certified CF distribution, SAP Cloud Platform.

Disclosure: I work for Pivotal, another Foundation member, which also sells a commercial CF distribution.


Ah, yeah, the CF guys are from another team. They deploy their stuff on (among other things) the VMs created by the OpenStacks running within our Kuberneteses, though. It's turtles all the way down. :)


That's more turtles than I've typically seen. How're you deploying k8s?


Bare-metal with homebrew automation. We install CoreOS via PXE boot, and during the installation it also sets up a Kubelet as an rkt container. The Kubelet then spins up the other k8s components via manifests. The pod and service networks are routed via BGP using our own https://github.com/sapcc/parrot

Later this year, we will go back and evaluate the maturing k8s administration landscape. Our current approach has a few drawbacks, e.g. it requires a CoreOS reinstall to upgrade k8s cleanly (since all the magic happens in cloud-init and Ignition).
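
As a rough illustration of that flow: the Kubelet watches a manifest directory and runs whatever pod specs it finds there, without involving the API server. A minimal static-pod manifest might look like the following (the paths, image, and flags are hypothetical examples, not SAP's actual configuration):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml
# Picked up by a Kubelet started with
#   --pod-manifest-path=/etc/kubernetes/manifests
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: quay.io/coreos/hyperkube:v1.5.3_coreos.0   # example version
    command:
    - /hyperkube
    - apiserver
    - --etcd-servers=http://127.0.0.1:2379
    - --service-cluster-ip-range=10.3.0.0/24
```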


Could you email me on my work address? I may have something of interest to share. I'm just not sure if we've made it public yet.


I beg to differ, having done Kubernetes training for nearly a year now. None of the shops I've trained at had any platforms on "top" of Kubernetes. They all ran Kubernetes directly from the source. And we're talking about some big enterprises here, not mom and pop shops.


When an enterprise runs Kubernetes directly from the source, how do they deal with bugs and support? Doesn't that need some vendor backing with tight SLAs?


> Nobody ever deploys kub alone in production.

Uhh, we do. And we're not alone as we work with several other companies that do. That's quite a big nobody tent.


What do you use for provisioning and upgrading Kub? For monitoring it? For CI/CD? For access control? Image registry? Image build? The collection of these things, and everything else required for deploying and managing containers in production, is what I call a container platform.

Is kub core not working on spinning built-in volume plugins out of the core to keep it smaller? Are the maintainers not recommending using third-party tools + 3PR for all major new features going forward?

It seems like my comment was misinterpreted as a criticism of kub, or a sign of ignorance. It was neither. I stand by my comment that nobody runs naked kub in production. That doesn't make it any less good, stable or useful. It just means it's not meant to be a complete product.


>containerd is a different type of component entirely - in fact it is very complementary to kubernetes

You made the initial comparison.

> Nobody ever deploys kub alone in production. There is always some form of platform on top.

Which is another way of saying "even if there are some gaps they have to fill with other software". Sure, some customers pick a platform where the gaps are prefilled. Not terribly different from some of your customers that pair docker pieces with pieces from other vendors.


>> containerd is a different type of component entirely - in fact it is very complementary to kubernetes

> You made the initial comparison.

Yes, sorry I wasn't clear. I meant that they were both components, as opposed to complete products.

On the other hand, SwarmKit and Kubernetes are comparable in functionality.

> Which is another way of saying "even if there are some gaps they have to fill with other software".

I agree.

> Sure, some customers pick a platform where the gaps are prefilled. Not terribly different from some of your customers that pair docker pieces with pieces from other vendors.

I also agree.

It seems that there is nothing left to disagree on :)


Sure. Said more concisely, my beef was that you compared k8s to things narrower in purpose, then compared Docker to things broader in purpose. It made the gap between the two seem larger than what it is. I agree there's a gap.

Edit:

Re: "Try developing on Docker for Mac/Windows and deploying to production with Docker for AWS/Azure"

Developing on a platform that isn't the same as your production platform is an avoidable situation. I could develop on .Net and deploy on mono as well, but...


The gap is in fact huge. Try developing on Docker for Mac/Windows and deploying to production with Docker for AWS/Azure. Then try to build the same level of functionality yourself from naked kubernetes and only open-source tools. Let me know when you're done :)


The difference is that I can put together components of Kubernetes that actually work. Sorry to be inflammatory, but I've been working on the bleeding edge of Swarm and it simply doesn't deliver what's promised (ex: flaky management of iptables, missing idioms/abstractions, etc.).

Single-host Docker in production, yeah that's not terribly complex, but the new price of admission for production is a complete orchestration layer, and there is 0% chance I consider taking a Swarm cluster to production any time soon. And that's to say nothing of the DDC disaster...


Minikube as a local env is getting great feedback filling this niche.


It's absolutely awesome in this role.

I don't know if a few hundred lines of shell scripting and Helm charts are considered building around it though, but that's enough for a Git hosting service (Gogs) and a CI (Drone 0.5) with persistent storage with vboxsf.


I deploy on minikube (Mac Desktop) and GKE.

There are 4 lines of configuration that differ between those two environments.
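
A common way to keep environments like these in sync is to share one set of manifests and only switch kubectl contexts. A sketch (the context names are examples; the GKE one follows gcloud's generated naming pattern):

```shell
# Same manifests, two clusters: only the kubectl context changes
kubectl config use-context minikube
kubectl apply -f k8s/    # deploy to the local cluster

kubectl config use-context gke_my-project_us-central1-a_prod
kubectl apply -f k8s/    # deploy the identical manifests to GKE
```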


> Nobody ever deploys kub alone in production.

We do: https://kubernetes-on-aws.readthedocs.io/en/latest/admin-gui...

There are only a few AWS integration components we added, e.g. to use the "new" AWS ALB (see the linked text for details).


> Kubernetes is a component (like containerd or swarmkit)

That's a bit of a stretch. Everything that people use to build something bigger is a "component", but that doesn't make it not a platform.

Kubernetes is absolutely a platform, in that it is the base layer on which higher-level systems are built. It's somewhat less opinionated than OpenShift (which is literally Kubernetes++) or Docker (the full stack), but that is by design. Opinions are too fickle - Kubernetes is here to service the evolving fashion of opinions, while providing durable base abstractions.


That's totally fair. Maybe I should have used the word "product" instead of "platform".


THAT is true. Kubernetes on its own is not a product, per se. There is no one company behind it. No legally binding support contract. etc.

It is the basis for many products (plural) and an ecosystem, which is really what we wanted to achieve.


> Kubernetes is a component (like containerd or swarmkit), while Docker is a platform

That's a really interesting angle to take on it. I think most users of Kubernetes would view it as the opposite; that K8s is the platform and Docker is just one component of that platform. It probably depends on which camp you've really bought into.


Yes, that is a common misunderstanding with the kubernetes community. We are correcting that misunderstanding with a few simple steps:

- What kubernetes needs from Docker is a simple and robust container runtime. We are spinning out containerd (the core container runtime that powers the Docker platform) to provide exactly that. We are actively working with the Kubernetes community to make sure containerd is a perfect fit for kubernetes to integrate - a better fit than Docker itself, in fact, since it will be much smaller and change only very slowly. See https://containerd.tools and https://blog.docker.com/2017/02/containerd-summit-recap-slid...

- This in turn will free Docker to focus on serving its userbase of developers and enterprises, which do want an integrated platform. Take a look at Docker for Mac/Windows or Docker for AWS/Azure for a sense of where we are taking the platform.

- If you ask the core kubernetes maintainers, they will tell you that kube is intended to be the "kernel" of your distributed system, and it's up to you to build a platform on top. So in that way, I think we agree that kubernetes is ultimately a component - nothing derogatory about that!

- You mention "camps". I think this evolution is very exciting because it allows us to move beyond the concept of camps. With containerd, a lot of bridges are already being built - engineers are collaborating peacefully and focusing on solving technical problems, which is a huge relief to everyone. Nobody likes drama.

- Lastly, we are making sure Docker is a very modular and loosely coupled platform. So, who knows? If enough of our customers ask us, maybe we'll eventually integrate kubernetes as an optional component ;) The point is, we have an opportunity to refocus the conversation on technical tradeoffs rather than silly pissing contests.

For all these reasons I think 2017 will be a good year for the entire container community.


> If enough of our customers ask us, maybe we'll eventually integrate kubernetes as an optional component

I have agitated for a long time that it's silly to be competing at the layer of orchestration - it's not a revenue-producing product on its own. I'd love to see more alignment here. At this point, the systems are very similar in a number of facets.

We're wasting a lot of time copying features and ideas back-and-forth, when we could be pushing the state of the art forward faster.


My understanding is Red Hat is generating more revenue than Docker selling to the enterprise market. Is this true, and if so, how does Docker plan to beat them?


In a growth market you don't have to beat anyone. You just have to deliver enough value to enough customers to capture part of that growth in the form of revenue. Which is the purpose of Docker EE :)


More importantly than any of that, to capture any revenue you need to have a business model where you actually bill something!

RedHat has OpenShift and support contracts.

Google/AWS bill the usage directly, and get returns on other products you use.

Docker didn't have much to sell.


Why does Docker Inc have to beat Red Hat? It's a big, growing market, enough cake for everyone?


Aren't unicorn valuations based on the likelihood of winning a winner-take-all race?


No, they're based on the likelihood of achieving massive growth in revenue. In consumer markets that often requires a winner-takes-all race. In the enterprise market, which is Docker's market, winner-take-all is not as central because enterprise buyers care more about interoperability and integrations of many products made by many vendors.

This is why, for example, Microsoft is seeing much more success with less tight coupling of Windows, Azure and Office.


Yet Docker is embracing "tight coupling" if not in the architecture ('batteries not included') then certainly in the marketing of their platform.

If you use Docker it's expected (by Docker) that you will use Swarm. And now it's expected that your organization will use EE (is that Java EE? no it's Docker EE.)

Docker followed the Apple II model into the enterprise, it was a fancy typewriter.


Our motto is "batteries included but removable". We make sure the platform works great out of the box, and offers a smooth integrated experience. That is a big differentiator for Docker.

At the same time, we also make sure you can pop the hood, mess with the components directly, and swap them out in various ways. You can do this within the Docker ecosystem with Docker plugins (for things like networking, storage, logging, authorization); or you can do it outside the ecosystem by hitting the low-level open-source components directly: containerd/runc, swarmkit, notary, libnetwork, infrakit.. All these components are usable standalone, and Docker preserves the loose coupling.

You're right that currently Docker does not offer swappable orchestration - not because we don't want to, but because it's hard to do that without affecting the quality of the platform. Sometimes excessive abstraction leads to bad engineering.

In early versions of Docker Swarm we experimented with pluggable support for Mesos, Kubernetes etc. It worked in demos but we didn't find it fit for production.

I hope this helps understand our approach better.


Can you explain how Docker has better security than OpenShift? (asking to learn, not a challenge).


Off the top of my head:

- End-to-end content trust with crypto signatures and verifications using Notary/TUF https://docs.docker.com/engine/security/trust/content_trust/

- Secrets management with encrypted storage and transport (https://blog.docker.com/2017/02/docker-secrets-management/)

- A vulnerability scanner that can detect vulnerabilities in arbitrary binaries without distro lock-in (i.e. even if your developer built from source on a non-Red Hat distro, it will still catch vulnerabilities) https://docs.docker.com/datacenter/dtr/2.2/guides/admin/conf...

- Secure orchestration out-of-the-box: https://docs.docker.com/engine/swarm/how-swarm-mode-works/pk...

- Somewhat counter-intuitively, the default security profile is more secure in Docker than in Openshift, because the focus on "systemd everywhere" requires loosening the sandbox to allow systemd's tentacles to get through. In the past Red Hat has actually introduced CVEs in their forked version of Docker that didn't exist in the official Docker.

- In Docker for AWS, Docker for Azure, Docker for Mac, Docker for Windows, we embed a specialized Linux distro that is trimmed down and locked down to the extreme, making OS surface area much much smaller than a traditional OS like Red Hat.
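
As an example of the first point, content trust is an opt-in client-side setting enforced per shell session. A minimal sketch (the registry and image names are placeholders):

```shell
# With content trust enabled, the client refuses unsigned images
export DOCKER_CONTENT_TRUST=1

# Push signs the image with keys managed by Notary
docker push myregistry.example.com/myorg/myapp:1.0

# Pull verifies the signature against Notary before running anything;
# an unsigned or tampered tag fails instead of being pulled
docker pull myregistry.example.com/myorg/myapp:1.0
```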


Whilst Kubernetes is really cool and has a lot of nice features I think, at the moment, it's lacking in some areas that are likely to be important for Enterprise customers who are likely to be interested in this Docker EE setup.

Specifically things like best practice guides for securing Kubernetes are currently thin on the ground compared to Docker which has a fair amount of information covering that sort of thing.

Also the Kubernetes security model is still being developed with things like locking down the kubelet API still to come in 1.6. Whilst that's less likely to be important for some companies, enterprises tend towards solutions with that sort of thing sorted out.


I would never deny that there's a lot of work to do, but let's be clear: Kubernetes' security model is evolving in concert with a large number of high-profile users' demands.

Designing security in the absence of real customers would have been a mistake.


Of course, I don't think my comment implied anything else... do you?

My point was around maturity of things that enterprises tend to focus on like hardening/security best practice guides.

The kubelet API bit was just an example, although I do think the Kubernetes docs could be a bit clearer that this is a critical change to make after install to secure the cluster, given that all the install methods I've tried so far (kube-up, kubeadm etc) leave the kubelet API available unauthenticated by default.
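
For reference, locking down the kubelet API comes down to a few flags once the cluster supports them. A sketch based on the kubelet options available around 1.5/1.6; verify the exact flags against your version's documentation:

```shell
# Disable anonymous access and delegate authorization to the API server
kubelet \
  --anonymous-auth=false \
  --authorization-mode=Webhook \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --read-only-port=0    # close the unauthenticated read-only port
```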


My point was that we have enterprises who are using it and helping to shape it. There are parts that are simply under-developed and there are parts that are downright embarrassing, no denial.

I do expect that many of the docs/articles/blogs written about 1.6, 1.7, 1.8 are going to focus on hardening, security, etc. I just hope it isn't selinux style: "how do I turn it off" :)


The kubelet API hardening looks very useful and there seems to be a lot of good (security) stuff coming with each release.

In many environments, however, other aspects of Kube's security prove inadequate. For instance, there is currently no way to protect secrets in an environment requiring an HSM for certain keys. Instead, secrets are stored in an etcd server accessible to the entire cluster (please correct me if this is out of date.)

One article discussing this:

https://medium.com/on-docker/secrets-and-lie-abilities-the-s...
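
The concern is easy to demonstrate: a Secret's value is only base64-encoded, and (at the time of writing) it sits unencrypted in etcd, so anyone who can read etcd can read the key material. A sketch with placeholder names:

```shell
kubectl create secret generic db-pass --from-literal=password=s3cr3t

# The "encoded" value is just base64, trivially reversible
kubectl get secret db-pass -o jsonpath='{.data.password}' | base64 -d

# The same bytes are stored in plain etcd under /registry/secrets/...
# so etcd access is effectively secret access
```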


no indeed, I think it's important to provide a set of options and advice about when companies might want to use them.

A setup that's appropriate to say a start-up environment may very much not be appropriate to a bank for example, so hopefully security docs will be able to lay out the pros and cons of each configuration choice.

The CIS guide for Kubernetes has started up so that will hopefully see some of these things mentioned.


Kubernetes is definitely lacking some best practice guides. We are trying to share our learnings here: https://kubernetes-on-aws.readthedocs.io/en/latest/admin-gui...

Interestingly we had many "hard" problems with Docker itself (race conditions, stuck Docker daemon) so my confidence in Docker getting the Enterprise thing right is not very high.


that's a really cool link I hadn't seen, thanks :)

It will be interesting to see how Docker gets on in more enterprise environments.


But isn't Kubernetes managing Docker containers? Or does it support anything else?


Check out CRI-O. Goal is to make anything OCI compliant a first class citizen in Kubernetes. https://github.com/kubernetes-incubator/cri-o/blob/master/RE...



When I last checked out rkt's features on k8s, it lacked many fundamental things, like log streaming and persisting data for a while after pod deletion.


I will, thanks. Just one question: how stable/good is rkt for production?


They also announced the Docker Certified program [1] but with no technical details about what that involves beyond hand waving that "containers are tested, built with Docker recommended best practices, are scanned for vulnerabilities, and are reviewed before posting on Docker Store."

Is there some set of automated tests my container has to pass? Can I run them today? More to the point, how much will it cost me?

[1] https://blog.docker.com/2017/03/announcing-docker-certified/


Here's a link to resources on how to start publishing content: https://success.docker.com/store


Thank you. I think you buried the lede at the bottom of the page though - the certification program is free currently.

On the other hand it looks like you have to purchase an EE license to test your code for certification: "Content that runs on the Docker Community Edition may be published in the Store, but will not be supported by Docker nor is it eligible for certification".

So looks like a minimum of $750 for one node EE license to play?


Docker CE, Docker EE... was docker purchased by Oracle?

The only sad part of this announcement is where they talk about "certifications": this opens the door to the next stage, "Docker developer certifications", and we will soon start seeing HR departments asking for them.


Providing certification is typically a means of establishing trust in the marketplace and opens up new revenue. We saw this with OpenStack.


Following the directions here[1] - adding Docker CE to CentOS 7.3 is broken right now:

    [root@sandbox ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker.repo
    Loaded plugins: fastestmirror
    adding repo from: https://download.docker.com/linux/centos/docker.repo
    grabbing file https://download.docker.com/linux/centos/docker.repo to /etc/yum.repos.d/docker.repo
    Could not fetch/save url https://download.docker.com/linux/centos/docker.repo to file /etc/yum.repos.d/docker.repo: [Errno 14] HTTPS Error 403 - Forbidden

[1] https://store.docker.com/editions/community/docker-ce-server...



Thanks for reporting! Looks like a typo on the store page install instructions, having this updated.

In the meantime here is the correct URL: https://download.docker.com/linux/centos/docker-ce.repo
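Putting it together, the repo setup then looks roughly like this (as root, on a stock CentOS 7 box; `yum-utils` provides `yum-config-manager`):

```shell
# Add the Docker CE repo using the corrected URL, then install and start.
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl start docker
```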


That resolved it. Thanks!


To the Docker guys/gals around here

- I'd be more comfortable using Docker if we had alternative runtimes, Docker being just one (maybe primus-inter-pares) among them; I'm aware of runC but don't know if Docker images are realistically portable (after all, Docker, with its quarterly releases and only 1 year of enterprise support, still seems relatively immature)

- I'm not 100% sure on the legal situation re: distributing Linux and GNU userland binaries along with non-F/OSS commercial software; the practice of running eg `apt-get` and fetching the base OS userland on first start (and to a lesser degree, using union'd file systems, though I like that part actually), for me, has the smell of circumventing implied GPL conditions (but IANAL)

- in that light, I'd like a characterization of Docker vs. basic built-in POSIX/Linux/FreeBSD chroot jails

- the permissions story (must start as root, effective UID in container typically not resolvable with /etc/passwd) is suboptimal


> I'm aware of runC but don't know if Docker images are realistically portable

Quay, the Docker Registry competitor from CoreOS, has automatic support for converting Docker images into rkt images (as in: you push the Docker image, and pull with rkt, and it supposedly just works). I don't know how well it works in practice, though I can't imagine (off the top of my head) why it shouldn't. A Docker image is mostly a layered tarball with a few fields of metadata. Nothing particularly obscure.
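For what it's worth, rkt can also pull Docker images directly (via its docker2aci integration), without a registry conversion step; roughly:

```shell
# Fetch a Docker image straight from a registry with rkt.
# --insecure-options=image is needed because Docker images aren't
# signed the way ACIs are.
rkt --insecure-options=image fetch docker://alpine

# Then run it like any other image:
rkt --insecure-options=image run docker://alpine --exec /bin/sh
```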


Docker is trying to alleviate this concern by spinning out and trying to foster alternative runtimes, check out containerd and runc:

* https://blog.docker.com/2016/12/introducing-containerd/ * https://runc.io/


> I'm aware of runC but don't know if Docker images are realistically portable

(It wasn't clear from your comment if you were aware of this) Since last year (docker 1.11), Docker itself is no longer a runtime, and uses runC as the default runtime (https://blog.docker.com/2016/04/docker-engine-1-11-runc/)

Additional OCI compliant runtimes can be configured on the daemon (https://docs.docker.com/engine/reference/commandline/dockerd...), and can be selected per container, using the "--runtime" option on "docker run" (https://docs.docker.com/engine/reference/commandline/run/#op...)
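For example, something like this (the runtime name and binary path here are made-up placeholders, just to show the shape of the config):

```shell
# /etc/docker/daemon.json -- register an extra OCI runtime alongside
# the default runc ("my-runtime" and its path are hypothetical):
#
#   {
#     "runtimes": {
#       "my-runtime": {
#         "path": "/usr/local/bin/my-oci-runtime"
#       }
#     }
#   }

# After restarting the daemon, select it per container:
docker run --runtime=my-runtime alpine echo hello
```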


> I'm aware of runC but don't know if Docker images are realistically portable

Until recently Cloud Foundry ran Docker images with a custom runtime backend (garden-linux). It's since switched to using runC (garden-runc). So in principle it was always possible to do this.

A major reason for the switch was to reduce duplicated effort.

Disclosure: I work for Pivotal, a major contributor to Cloud Foundry. Insofar as Docker moves towards the value line, we're competitors.


As I understand it, one of the target audiences of Docker EE is RHEL/Docker users. The new version of the documentation now says:

- Docker Community Edition (Docker CE) is not supported on Red Hat Enterprise Linux.

The previous version of the documentation had no such sentence.



Nice, no features anymore for CE. I really love the path Docker has taken in the past months...

I wish I could switch to rkt, but there are so many things such as docker-compose which don't exist as equivalent for rkt yet.


> Nice, no features anymore for CE

What do you mean by that? Docker CE will continue to ship features in exactly the same way as before. If anything the new monthly edge releases will allow us to ship features faster. A common complaint from enterprise customers was that they were tied to the same release trains as the community version. Now that the CE and EE releases are clearly distinct, CE has more flexibility to move fast.


Not sure about this; generally, I have the feeling that Docker doesn't really care anymore about making a great open source container engine. They've never listened to the people who pointed out flaws in Docker, which is what made these guys start working on rkt. I've had a ton of issues with Docker and iptables, multi-host networking, IPv6, etc. Solutions were available most of the time, but it always takes ages for them to merge anything (5+ months in some cases) or even to care about basic issues, such as not being able to run iptables alongside Docker because Docker bypasses iptables entirely. Adding an EE will definitely not make things better; in my opinion, it will make them worse. I think most "new" features will be put into EE so that they can bait enterprises into buying it. But maybe I am mistaken, we'll see. Like I said, not sure about it.


That's how it always starts. EE gets introduced, the company assures everyone they will have the same features, and then CE goes downhill over time. I want to believe you though.


What's hilarious is that Docker is frequently criticized for "moving too fast" and "adding too many features". Now it seems we're going to be suspected of not adding enough features... Which is it?


We'll see about that


I don't think this is the intention. Personally, I think the name "CE" is a bit unfortunate, as it implies what you said. However, for the time-being at least, I would expect the core Docker engine to remain the same between CE and EE.


One reason for the name change is to clarify that Docker is a product. If you're looking to use Docker as a low-level component to run containers as part of another product, then you should not use Docker: instead you should use containerd, which we have spun out of Docker for exactly that reason.

A few resources about containerd:

https://containerd.tools

https://blog.docker.com/2017/02/containerd-summit-recap-slid...


I think a thing you're suffering from here is that for most of us the central example of 'thing added CE, made enterprise version more obvious' is mysql post-Oracle-acquisition, which eventually resulted in two forks because CE became a red-headed stepchild that almost never got any features and they worked as hard as possible to hide on the website.

If you consider people to be reacting based on an expectation that you're quite possibly going to do the exact same thing, this thread makes significantly more sense - or at least it does to me.

Exactly what, if anything, can/should be done about this, I'm not sure. But I think that's probably what's going on.


That's understandable. Many of us at Docker are from a more C/Unix/ops background so we're less sensitive to that cultural reference.

In any case, in the end actions speak louder than words. If we consistently ship solid, open code that actually solves problems, and no Frankenstein crippleware materializes... then we will gradually earn the trust of more and more people.

FWIW, we took some of our inspiration from the original RHEL/Fedora fork by Red Hat in the early 2000s. And more recently from the Gitlab CE/EE product positioning.


Thanks for being inspired by us and for helping to make the CE/EE convention more popular. I have no doubt you'll continue to ship many new features as part of CE.


Do you have some sort of notification on Gitlab mentions in HN comments? :)


For the record, I'm at least opsish so it didn't bother me too much directly either. But it did explain my confusion at most of this thread, so I figured I'd offer the thought. Free idea, worth exactly what you paid ;)


Yes, I understand. I would just like the messaging to stress that the engine or containerd or low-level component x is the same between CE and EE and will continue to be.


That makes perfect sense. I will pass along the feedback. Thanks!


IMO, rkt is great since it's simple. Although I do agree with regard to how to build multi-container applications.


rkt's unit of execution is pods; most users are simply running pods with only one container, though.

rktnetes leverages the fact that rkt can natively execute a whole pod to avoid a lot of the extra code integrating with Kubernetes that docker requires.


Does anyone here have a good guide for moving from Docker and docker-compose to rkt and related tooling?

I like what I'm hearing about rkt but I'm having a lot of trouble even getting a single simple container image together, let alone run it...


http://kompose.io/ is part of the Kubernetes project (in incubation) — https://github.com/kubernetes-incubator/kompose
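I haven't verified this end-to-end, but the advertised flow is roughly a one-liner that turns a Compose file into Kubernetes manifests:

```shell
# Convert an existing Compose file into Kubernetes deployment/service
# YAML files in the current directory:
kompose convert -f docker-compose.yml

# Or deploy straight to whatever cluster kompose is pointed at:
kompose up
```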


Ugh, Docker for Mac now has two separate ads in its 9 options in the menubar :/


A possibly bad thing I see about this: there seems to be a segmentation of sorts. I've noticed there are now a few conflicting editions of Docker Engine. There's CE and EE now, then there's some version that Docker Cloud Agent bundles, and probably more. (Maybe I'm wrong, though)

Maybe that's just me, but it would be better if there'd be one core to rule them all, and extras would be managed by plugins/wrappers/companion daemons. So if you have Docker Engine installed you're good to go.


One of the goals of this release is to remove this fragmentation. For example Docker Cloud will use CE, and soon will allow you to easily upgrade to EE.

In the free / open-source side, Docker CE is the "one core to rule them all" that you describe.


Oh, then it's great!

I just didn't want to have different daemons (server, not client), and to have to replace them if I want to use Cloud, EE or whatever (or stop using that, or replace them with something else, etc).


Docker CE and EE are based on the Docker open source project and work the same way. You can also use a Docker CE client to talk to Docker EE host or swarm, and vice-versa.

The Docker Cloud team is working on improvements to reduce segmentation too - stay tuned.


Off-ish topic. But will the last piece of the puzzle (compose) be integrated into the Docker Go binary?


Yes. We have already started in 1.13. Check out the "docker stack" subcommand.

https://docs.docker.com/engine/reference/commandline/stack/
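A minimal sketch of the workflow (assuming a version 3 Compose file and a host in swarm mode):

```shell
# Enable swarm mode, then deploy a v3 docker-compose.yml
# as a stack named "myapp":
docker swarm init
docker stack deploy -c docker-compose.yml myapp

# Inspect and tear down:
docker stack services myapp
docker stack rm myapp
```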


What about swarm? you have not mentioned it in your blog post - will it continue to be part of CE?

You guys have a big PR problem when it comes to Swarm. I believe that Swarm is a nicer, simpler alternative for those trying out Kubernetes, and it's well worth running in production.

But there are absolutely no case studies, success stories, etc around Swarm. Possibly that's happening as a consequence of your Datacenter product. For example, this entire page has zero mentions of Swarm - https://www.docker.com/enterprise-edition

On Azure, it took me quite a while to figure out (after going through support) that their cluster management is the pre-1.12 cluster product... not the Swarm mode.

Then someone pointed me to Docker-for-Azure, which is way down the Google search results when you search for "azure docker swarm".

So what's the future of swarm? will you go back from being integrated to a separate product like Datacenter? why is it conspicuous by its absence in every post and press release? Including a path to your enterprise product.

The consequence of that is the comments on this very thread on HN : "Docker does not implement service discovery like kubernetes".


Oh, and to answer your question: Yes, Swarm is core part of Docker and is in Docker Community Edition too!


Thanks, this is excellent feedback.


Uuuuuugggh. Docker for Mac now won't let you uncheck "Send usage statistics"


You can uncheck it in stable releases, but not in beta releases. This is so we can track error rates, crashes etc to make sure the stable releases are actually stable :)


Ah, that's okay then!


Does OS X have an iptables equivalent? It does seem rather unfriendly though...


Not sure how configurable it is under the hood, but it's in the "Security & Privacy" preferences pane.


Yes, it has packet filter (pf).


I don't understand what anybody would be paying for here.

If I'm already paying $OS_VENDOR to support my OS, wouldn't whatever packages I'm running be covered as well? If not, why am I paying them?!


$OS_VENDOR most likely only ships Docker the container runtime, not all the rest of the Docker shebang.


These comments are crazy. @shykes and team have done a wonderful job of making Linux containers easier to use. They have every right to make money from that work, and nothing Docker has done to date has concerned me about its future. We also run Kubernetes in production, from source, using a custom provisioner.


Hoping this is mostly support


it's all in here : https://www.docker.com/pricing


If you link deeper to the offerings for specific providers you get a bit more clarity. For example, Docker EE for AWS is $0.119 per node hour, or roughly $80/month per node.

https://store.docker.com/editions/enterprise/docker-ee-aws?t...


I have to assume some really deep discounts must be available, because although I can see paying $80 per "node" in a small shop, when you start to scale this doesn't make any sense. If I had a shop with 50,000 servers in it that would be $48 million a year in docker fees. Sorry, no.
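Running the numbers on the list price mentioned upthread ($0.119 per node-hour; the $48M figure uses the rounded $80/month, so the exact rate comes out a bit higher, and no volume discounts are modeled):

```shell
# Back-of-the-envelope math on the Docker EE for AWS list price:
awk 'BEGIN {
  rate = 0.119                                              # $/node-hour
  printf "monthly per node: $%.2f\n", rate * 24 * 30
  printf "yearly per node:  $%.2f\n", rate * 24 * 365
  printf "50,000 nodes/yr:  $%.0f\n", rate * 24 * 365 * 50000
}'
```

So roughly $86/month per node at list, and about $52M/year for the 50,000-server scenario before any discount.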


That is very typical of enterprise products. Large deployments always get discounts. Extremely large deployments will often request "all-you-can-eat" unlimited deployment for a flat fee.


Can I license a single node & then rephrase all my support requests so they occur on that host? Even better, can I do it with the AWS option so I only need to pay when I find a critical bug?


Does anyone know why GCP isn't one of the supported cloud providers for EE? This is surprising to me since they had docker-related offerings a long time before AWS and Azure.


1 year of support is better than 0 years of support.


Is there a tl;dr on the differences between enterprise and community edition?



Does this mean that we can no longer have more than one private image server?



