Docker is still open-source; it still has all the same features; it has the same maintainers and contribution roles; it has the same roadmap of features. And we still welcome pull requests.
Meanwhile we have been breaking the components of Docker into standalone upstream projects: containerd, swarmkit, libnetwork etc. So there are more and more ways to use parts of Docker without being forced to use all of it. We will continue doing that.
Is there any specific fear you can describe? I will do my best to reassure you.
The situation could turn into something reminiscent of using non-RHEL/CentOS Linux in the enterprise before Ubuntu became popular. Using Debian and need drivers from a vendor? Here's an RPM! Convert it to a .deb, extract it yourself, hope it doesn't have scripts that'll break your install and that the paths work properly. Oh wait, this one is built for a specific patched kernel that RHEL ships; now I have to go get the source and build it myself, but it only ships as patches against their kernel source tree.
Ubuntu's widespread support helped, and then the advent of VMs meant you could run a hypervisor on your hardware and stop worrying about support in your various OSes. Then vendors started shipping VM images (e.g. AeroFS), but only if you're using a supported virtualization solution (we support both ESXi and Hyper-V!). Now we have containers, and we can ship customized environments, stripped down and devoid of anything the app doesn't need, but how long until vendors start shipping those containers with assumptions about the host or the environment/tooling that only hold for the people who pay extra?
To use a bit of a biased argument, I've been told by several infrastructure VCs that the infrastructure market is currently difficult to invest in because of the uncertainty the technology behind it has brought us. I don't trust that continued traditional investments behind producing those infrastructure offerings are a rational choice for users. However, at the end of the day, only the users can speak to that claim. I can only speak my mind on the matter.
Unfortunately, it's difficult to trust a service or software built on closed technologies, because seeing inside the service or software becomes difficult, expensive or impossible. The combination of desired outcomes (easy infrastructure) and risk bias (implied trust) is a dangerous one, because it leads to cognitive dissonance where the market must literally believe two things at once: we have to TRUST this service or software because we NEED this service or software.
I'd prefer we all work together to solve these conflicted views with "enterprise" software offerings, especially those involved in building infrastructure, but my observations say that we are more likely to not work together because of existing investment structures. Perhaps this will change over time as new models emerge. For now, I remain sceptical at best about the way we're investing and growing the infrastructure market.
Echoing @shykes below, Docker had premium paid products before this launch, but we're trying to make that clearer and to simplify the product lineup.
Note that Docker CE is _just_ as good as the Docker you were using yesterday. In addition, the version lifecycle improvements are designed to get new features into Docker users' hands faster (with monthly Edge releases) and to improve maintainability by overlapping the maintenance windows of free Docker CE quarterly releases.
The concern is that the Docker CE we'll be using in 2020 will be missing useful features that Docker EE has, and which vendors who ship containers/Dockerfiles for their products will rely on.
I would note this is a non-apology, given it's stating you are "sorry" someone doesn't like a decision that was made.
Now, it's not clear to us in the public HOW MUCH of Docker (the software project) would fall into that category. From the outside, it appears that there is a lot of community around the infrastructure, and somewhat less community around the higher layers (e.g. SwarmKit).
I thought one of the most important enterprisey things would usually be a long support timeframe. One year isn't exactly very LTS.
- Keep in mind we are releasing Docker EE quarterly, and supporting every release for a year. This is attractive for enterprises who are adopting Docker in part to make their software practice more agile. They don't want to be forced to upgrade every 3 months. But they appreciate that they can. This works for Docker because it sits relatively higher on the stack. If we were providing a storage appliance, or a traditional host operating system, this wouldn't make sense.
- For a company of our size and maturity (300 people, 4 year-old free product, 18 months-old paid product), earning the trust of large conservative enterprises can be hard. We do it by being honest about our abilities, conservative in what we promise, and going above and beyond to deliver on what we promised. In this case, we simply weren't confident that we could promise more than 4 simultaneous EE releases (1 year support x 4 releases per year). That might cost us sales opportunities now with more conservative buyers, but those buyers would probably have been unhappy with us anyway. We can get them later - when our product is more mature and our release and support infrastructure is more robust.
EDIT: I see other commenters confidently stating what is and isn't enterprise-ready. Remember that enterprises are, by definition, very large. There are many departments with different goals and different priorities. For some of them, Docker EE with quarterly releases and 1-year support is a good fit. For others, it's not. And that's OK.
Enterprises want LTSes because legacy platforms are a nightmare to install, operate and upgrade. It becomes less important as the platform itself becomes less bestial.
Ultimately, enterprises want outcomes. Any given checklist item from the buying department usually represents scar tissue that may or may not still be relevant. New platforms -- Cloud Foundry, OpenShift v3, Docker EE, whichever of the thousand blooming Kubernetes offerings will succeed -- are in a position to renegotiate from first principles.
You might want to look into BOSH. It's a large part of the operability secret sauce for Cloud Foundry.
Disclosure: I work for Pivotal, we're ostensibly competitors due to the increasing overlap between Docker EE and Cloud Foundry.
Docker is still a new product, evolving very fast (this announcement is just one more proof of that). Is it production-ready? Yes, if you want to move fast, benefit from new tech, and are ready to upgrade regularly. No, if you can't perform those upgrades and need assurance that the project will commit significant resources to fixing issues on 3-year-old versions.
And currently I totally understand that the Docker team sets the slider more towards "new dev" than towards "support". That will probably change in the future.
And that's a large part of the problem. We, as an Enterprise, don't want rapidly evolving infrastructure and features. We need stability plus critical and security patches. Emphasis on stability. Three years is the minimum we would consider, not a 1-year support range.
That's exactly why you do not need/want Docker now. Several years ago, hypervisors like Xen were all the rage. Enterprises could not rely on them because of the lack of LTS, while smaller startups and businesses were building their infrastructure around them. Everyone is now using those technologies, which have undergone years of testing and have massive support from vendors. Docker will reach that stage in the future, but right now it's just too soon. You are asking the Docker team to slow down the development of the technology for everyone because you don't want to deal with the consequences of using a developing product. Let Docker reach its full potential, let the community do the testing, then ask for LTS on a mature technology. At that point, there will probably be a new trend somewhere that brings startups to the next level while you get to benefit from containerisation with full vendor support.
Then AWS/Google/DO/others worked hard on the hypervisor and released infra that worked and was enterprise-ready... but it's a complete offering: you cannot just take their Xen package and put it on your systems (assuming they're even powered by Xen anymore), you have to use their platforms.
Docker is unstable now.
AWS/Google/RedHat/CoreOS are all working pretty hard to pull together an offering for containers that works and is enterprise-ready. It's gonna take years to be production-ready enough for critical systems, and it's as likely to reuse Docker as to replace it with their own more stable tech.
The point is, docker is nowhere near ready in its current state. Wait and see.
The commercial distributions (by Pivotal, IBM, SAP most notably) are licensed to gigabuck corporations worldwide.
That said, other companies and opensource projects will absolutely move towards the value line. There's no money or glory in building blocks.
Disclosure: I work for Pivotal, the major contributor of engineering to Cloud Foundry.
EDIT: I did not realize it was per node pricing. I take back mostly everything I said below. I hope that ultimately when Docker does provide 3 years of support, you'll be willing to pay up.
Look at the pricing, even the biggest Enterprise plan is $2000/yr. This is aimed at small consultancies, startups and small businesses, etc that are tired of sending their developers into the docker github issues to log bugs and argue features.
When Docker adds a plan that satisfies your Enterprise, you'll be paying typical Enterprise prices that will be at least an order of magnitude larger than the current biggest plan. Hopefully you are able to put your money where your mouth is.
There are many more here: https://www.docker.com/customers
Also, as others have noted: the price point is per-node. Docker EE is most definitely for enterprises.
Disclosure: I work for Pivotal, etc.
... and NodeJS learned that if you want to be a proper production environment, you still need to have LTS. 'Being fashionable' isn't enough.
> And currently I totally understand that the Docker team sets the slider more towards "new dev" than towards "support".
Flashy features built on an unstable base is not something you want to stake a company on - support is important, it's just not sexy. New features gets you new users. Support makes them stay... and even pay.
It's happened at least 5 times in the past 2 years, that's VERY frequent from an enterprise viewpoint.
Backwards compat and LTS already come up a lot with enterprise support; requiring an annual upgrade project will only complicate it.
EDIT: I've seen somewhere that it's supposed to mirror the Ubuntu naming scheme but that's fundamentally different. I know that the "X.04 LTS" releases are stable-ish and they only come out every 2 years (right? Going off the top of my head here), which is waaaaay different than monthly releases in terms of time spent vetting the stability IMO.
Quoting from the blog post:
    The Docker API version continues to be independent of the Docker
    platform version, and the API version does not change from Docker
    1.13.1 to Docker 17.03. Even with the faster release pace, Docker
    will continue to maintain careful API backwards compatibility and
    deprecate APIs and features only slowly and conservatively. And
    Docker 1.13 introduced improved interoperability between clients
    and servers using different API versions, including dynamic
    feature negotiation.
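The negotiation mentioned in that last sentence works roughly like this: the client advertises the highest API version it speaks, the daemon reports its own maximum, and both sides settle on the lower of the two instead of refusing to talk. A minimal sketch of that idea (the function name and shape are illustrative, not the real client code):

```python
# Sketch of client/daemon API version negotiation, as described in the
# Docker 1.13 release notes. Names are hypothetical, not the actual SDK.

def negotiate_api_version(client_max: str, daemon_max: str) -> str:
    """Settle on the highest API version BOTH sides support."""
    def parse(v: str) -> tuple:
        # Compare numerically, so "1.9" < "1.10" (lexicographic compare
        # would get this wrong).
        return tuple(int(part) for part in v.split("."))
    # Each side caps the negotiated version at its own maximum, so the
    # result is simply the smaller of the two maxima.
    return min(client_max, daemon_max, key=parse)

# A 1.13-era client (API 1.26) talking to an older daemon (API 1.24)
# falls back to 1.24 rather than failing outright.
print(negotiate_api_version("1.26", "1.24"))  # -> 1.24
```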
Docker takes backwards compatibility so seriously they've released multiple versions of a docker registry all with completely new APIs.
Sorry if I don't buy it.
That has been fixed. Note that this limitation (although it turned out to be annoying, which is why we removed it) did not actually break reverse compatibility in the API. It just made the client excessively paranoid about reverse compatibility. In other words, the client didn't trust the stability of the daemon enough, even though the daemon in practice almost never broke compat.
> Docker takes backwards compatibility so seriously they've released multiple versions of a docker registry all with completely new APIs.
I'm not sure what you're referring to, but I will look into it. Is this still affecting you? Or is it a past problem you are still pissed off about?
Why should enterprises trust you on backwards compatibility when longstanding issues with it were just fixed and then glossed over like this ("it never broke in practice, because we forcibly made you update")? Docker has repeatedly made poor decisions with really poor optics, both in the open source community and with its products; this is just one example. Asking enterprises to simply trust you now, while not even providing the support terms most of the enterprise world demands, does the exact opposite of inspiring trust.
Do you honestly not remember sunsetting the Python docker registry just a year and a half ago and then introducing a brand new Golang registry product with an entirely different API? Because that's precisely what enterprises pay to avoid; they don't shell out absurd money for LTS versions to hit a constantly moving target. And please don't patronize me with "past problem": some of us lowly end users of your product had to clean up that mess just to get day-to-day operations working again. Forgive me if I'm gun-shy.
Here is a list of known past breaking changes in the Docker API. https://docs.docker.com/engine/breaking_changes/
If you have encountered a breaking change that is not on the list, could you mention it either here or on https://github.com/docker/docker.github.io ?
Some of your claims about breaking backwards compatibility above are incorrect. I am trying my best to point that out without seeming dismissive of your overall point - which I think is that Docker can do more to improve stability and backwards-compat. I agree with that point.
Suggesting that this could be "a past problem [he's] still pissed off about" comes across as tone-deaf when the underlying issue is Docker's credibility when it comes to backwards compatibility.
During the support period, bug fixes will get back ported to those versions and released as "patch" releases (e.g. 17.03.1).
When installing, you can choose to install either the "stable" (quarterly) channel, or the "edge" (monthly) channel.
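On apt-based systems, that channel choice is just the last component of the repository definition. Something like the following (hypothetical sources file, assuming an Ubuntu 16.04 "xenial" host; check the install docs for your distro's exact line):

```
# /etc/apt/sources.list.d/docker.list
# Pick ONE channel: "stable" tracks the quarterly releases,
# "edge" the monthly ones.
deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable
# deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial edge
```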
Also, picking a date for versioning is weird, as it doesn't contain any information other than when the changelog was set in stone. Too bad this decision was made, and Docker chose not to value the stability of SemVer.
Docker is a collection of many different components exposing many different interfaces. SemVer for the Docker version number doesn't make sense, for the same reason it doesn't make sense for Ubuntu or Windows.
    $ repoquery -q --provides docker-engine-17.03.0.ce-1.el7.centos
    docker-engine = 17.03.0.ce-1.el7.centos
    docker-engine(x86-64) = 17.03.0.ce-1.el7.centos

...compare to, say, python (I've elided a few things for clarity):

    $ rpm -q --provides python
    python = 2.7.5-48.el7
    python(abi) = 2.7

If you do something like:

    Provides: docker(api) = 1.23

... this at least means that folks can depend on/install 'docker(api) > 1.23' or whatever if they actually want to get a specific semantic version.
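For instance, a vendor package that needs a given API level could then declare that dependency against the virtual provide, and rpm/yum would resolve it regardless of the date-based package version. A hypothetical .spec fragment (this assumes the `docker(api)` provide suggested above actually existed; it doesn't today):

```
# Hypothetical spec fragment for a vendor tool that targets the
# Docker remote API rather than a specific engine release.
Requires: docker(api) >= 1.23
```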
In the future we will phase out the apt.dockerproject.org repo.
> [...] a freemium model
Docker had already adopted an enterprise subscription + freemium model, but the offering was less clear (case in point: you weren't aware of it). This clarifies and simplifies our offering, and upgrades the enterprise offering along the way.
> [...] solves the fundamental problem of Kubernetes eating their lunch
Kubernetes is a component (like containerd or swarmkit), while Docker is a platform which integrates many components (like Cloud Foundry or Openshift).
So Docker and Kubernetes are not directly competitive - Docker just happens not to use Kubernetes as an orchestration component. It uses SwarmKit, an open-source component developed in-house (https://github.com/docker/swarmkit).
A better comparison would be Docker and Openshift (which is Kubernetes-based). Is Openshift eating Docker's lunch? It certainly doesn't feel that way to me, but of course I am biased. Docker has three major advantages over Openshift: it's modular, it has better security, and it's not locked to RHEL. The main advantage of Openshift of course is that it is highly integrated into the Red Hat platform, which is appealing if you are already heavily invested in it. Openshift also benefits from the demand for a commercially supported product based on kubernetes.
But either way the market is so early, and the demand so strong, I believe there is room for more than one major container platform. In a few years when the market starts maturing, we'll see!
I wonder whether this is a distinction that exists in your mind (as someone intimately involved in the development of the docker tool and the Docker, Inc business model) more so than in the minds of Docker users. You have a vested interest in making Docker encompass all that Docker, Inc produces. For many of us, this is actually against our interests. A lot of us want Docker to just be the base containerization layer, with other offerings (like k8s, swarm, etc.) built on top of it and branded separately. Continually adding more to the docker base layer adds confusion in the minds of people who don't follow Docker closely, makes it harder to get it approved for use in our organizations, and increases the security footprint that needs to be audited.
I won't speak for others, but it would make my life much easier if you'd build (and name) your offerings on top of the base containerization tool like everyone else rather than trying to stuff everything into one tool with one name. You have no idea how hard some of us have had to fight inside our organizations to simply deploy builds inside containers. Increasing the scope of what Docker means is just giving ammunition to our internal opponents.
I understand why you're doing what you're doing... there's no money in developing that base layer unless you can parlay it into selling the other parts of your platform, but just understand that what you're doing isn't really user-friendly, and many users won't pliantly go along with whatever marketing decisions you make. Like it or not, Docker is an ecosystem, not a platform. It has a life of its own that you're only partially able to shape. You have the advantage of being able to shape the roadmap for the underlying containerization layer, and the goodwill that comes from putting in the work to create that layer initially and maintain it on an ongoing basis. That should be enough without leveraging it further.
We are doing exactly that. The base containerization layer is containerd, and it is now available standalone separate from Docker.
I covered this topic in more detail in another comment: https://news.ycombinator.com/item?id=13775677
I hope this helps.
Giving away the Docker name to the bottom half would have been dumb (IMO), so extracting it as containerd makes sense.
Will that confound some people? Sure, but they will get over it. I think it was the right thing to do, but then I am known to be a big fan of layered systems. :)
What exactly would you like Docker to do, that we're not currently doing?
You're not the first company that's had to deal with this. To me, it's similar to Google's failed attempts to keep people from using their name as a verb. You're both companies with a wildly successful initial product that became associated with the company name, and that hindered attempts to branch out from that initial offering. But the more you try to repurpose the term to refer to your broader offering rather than the narrower base layer it started out as, the more you create problems for those of us who have spent a lot of time and effort selling your approach within our organizations. Not everyone is sold on containerization, and we (your advocates) have people who will pounce on any confusion as a way to push back.
Decision-makers in organizations are often surprisingly non-technical. Imagine if there were a company behind email. And that company wanted to make money selling add-on services like encryption and mailing list management. So it decided to call the base email layer libsmtpd and repurpose the term email to mean the broader platform offering. Now imagine you've got to explain this change to your elderly mother who's just gotten over the hump and gotten comfortable with sending email and referring to email correctly.
That's kinda the position you're putting us in.
The difference is that we invented Docker. We understand its future potential and control its design and trademark. Our competitors don't.
So I'm sorry that you have a different definition of Docker than the people who invented Docker. But just like Google decides what Google is - Docker decides what Docker is.
For what it's worth, my notion of what Docker is comes from having run it in production since before the transition away from LXC. I ran a small team inside a relatively large company (8000 employees, $25B mkt cap) and we largely had control over our own ops. We ran into a lot of early adopter pains, but the benefits definitely outweighed the pain.
I was also in various architecture groups that made decisions for the larger products at the company. I was always honest about the drawbacks, but I pushed for limited exploratory projects using Docker to try to slowly move ops in that direction. I had a lot of opposition from ops folks that never felt the pain of the developer experience and worried that Docker was an encroachment on their fiefdom (GoT had nothing on our internal politics :-)
Slowly, we (I wasn't the only Docker supporter at the company) made progress. We set up a Quay private registry server so that, at least, teams could begin to experiment. And when I met your team at re:invent and heard that you were developing your own offering for private registry, I convinced them to switch. The first-party argument was easy to make and the company didn't really quibble about sub-7-figure software purchases.
I ended up leaving that company last year, so your current direction isn't really making my life harder, but had I stayed, it would be. If you want to be explicit in your targeting of a different market segment from the kind of early adopter that I am/was, that's fine. But don't accuse me of having my view shaped by competitors' marketing. That's a highly revisionist view of your own history. Because when I got on the Docker bandwagon, the reality absolutely matched what I'm describing. Docker was the base layer and a prefix added to related product offerings. But it wasn't a platform like you're describing. That seems like it's changed now, which would've really screwed me in trying to push Docker at my previous employer.
Maybe you should face and accept the fact that Docker is not going in the direction you need it to, and its future plans are not aligned with your interest.
These are perfectly reasonable reasons for a company to cancel the efforts and avoid the headache.
A comment on a news post will not put it back on the "right" track. Don't invest in risky and uncertain products.
Assuming containerd is successful, and that higher-order systems like Kubernetes and Mesos use containerd directly, we will see one of two things in 12-18 months time:
1) There are hundreds of thousands of DOCKER users
2) There are hundreds of thousands of CONTAINERD users
The split is forcing stratification (in a good way) where there previously was none. Users have to identify whether they think "docker" means "a container runtime" or "a full stack".
Of course, the pie is growing, so maybe we get both results. The real problems over the next 12-18 months are MESSAGING and DISENTANGLING. How do you get this message across to people, and can you actually change the words in the common vernacular?
As owners of the word "Docker" Solomon can define it how he likes, but that doesn't mean he can actually stop people from using it to mean something else. That's going to be a process.
c.f. "literally". Even Webster's Dictionary has given up the fight on that.
Therefore you are not going in the direction he wants you to.
The name for that is "Docker" and it has been for many years already. You can't just rename things to fit your new marketing strategy and pretend that your users won't get hurt.
If you could simply erase the memory of all inhabitants of Earth and rename all books and articles ever written, then yes, there would be no issues in renaming.
- Docker has included a container build system since version 0.3 in May 2013. https://github.com/docker/docker/blob/master/CHANGELOG.md#03...
- Docker has included image storage and distribution since the version 0.1 in March 2013. https://github.com/docker/docker/blob/master/CHANGELOG.md#01...
- The official website has said "Docker is a platform to build, ship and run distributed applications" since 2014, clearly featuring the collection of multiple tools forming that platform. https://web.archive.org/web/20141216011043/https://www.docke...
- Docker has included a distributed key-value store and optional multi-host networking since version 1.7 in June 2015. https://github.com/docker/docker/blob/master/CHANGELOG.md#17...
- Docker has included orchestration since 1.12 in June 2016.
- Docker has included cryptographic content trust since version 1.8 in August 2015. https://blog.docker.com/2015/08/content-trust-docker-1-8/
- Docker has included secrets management since 1.13 in February 2017. https://blog.docker.com/2017/02/docker-secrets-management/
- Docker for Mac and Windows have included a built-in hypervisor and OS since March 2016. https://blog.docker.com/2016/03/docker-for-mac-windows-beta/
- Docker for AWS and Azure have also included a built-in OS since June 2016. https://blog.docker.com/2016/06/azure-aws-beta/
- Docker started gradually factoring out its container runtime in 2013. First with libcontainer; then with runc/OCI; and most recently with containerd. https://containerd.tools
- containerd in particular has existed since 2015, and for over a year Docker hasn't performed any container runtime task itself - it's all handed off to containerd.
I could go on. Docker has said clearly and consistently, for several years now, that it is building a platform made of a collection of tools, including but not limited to a core container runtime. You just didn't want to hear that explanation, either because you didn't like it, or because someone other than Docker (perhaps a competitor) gave you an incorrect explanation of what Docker is.
And now that the dissonance is becoming hard to ignore, you're forced to reconcile your incorrect definition of Docker with the real definition, and it makes you angry. But, like I said, being angry doesn't make you right. And it doesn't put you above having to provide evidence of your claims. So, can you provide concrete evidence that we are "hurting our users"?
I think that's pushing k8s farther to the left of what it really is, and pushing Docker farther to the right of what it really is.
k8s, for example, incorporates service discovery. As far as I can tell, swarmkit does not. k8s incorporates networking, containerd does not. Similar for things like ingress load balancing.
There are certainly potential customers debating, directly, k8s vs your full Docker platform, even if there are some gaps they have to fill with other software.
So Docker is like Openshift, except openshift uses docker as a component? Very confusing..
That represents the evolution of the containerd-vs-docker split, and (to me) seems totally rational.
Swarmkit does in fact implement service discovery, networking, and ingress load-balancing. It also implements out-of-the-box node security and mutual TLS, secure secrets management, a built-in raft store, infrastructure-agnostic overlay networking, and various goodies which we needed to make Docker work great out of the box.
containerd is a different type of component entirely - in fact it is very complementary to kubernetes.
> There are certainly potential customers debating, directly, k8s vs your full Docker platform
They typically debate Docker vs kubernetes-based platforms (among other possible alternatives). If they're a Red Hat shop, they typically evaluate Openshift. Sometimes there's a team building an in-house platform. Nobody ever deploys kub alone in production. There is always some form of platform on top.
That doesn't gel with any reality I know of, sorry. Lots of people use higher level platforms, or develop bespoke tooling, to be sure, but that's going to be true forever, for every platform.
(disclosure for others: I'm a Kubernetes founder)
Be worried when people STOP trying to wrap your stuff with their own opinions.
We do. I work at SAP, and we run our company-wide OpenStack on pure Kubernetes on CoreOS's Container Linux.  We do use Docker (the container runtime) because it comes with CoreOS, but no other Docker product as far as I'm aware. I've been working with Kubernetes for quite some time now, and honestly don't know what else you would need on top of it (except for some Continuous Integration tool, of course, but that's already a staple of any well-organized agile team, no matter the platform).
I mean the API and orchestrator parts, not the customer VMs themselves. These sit on traditional hypervisors.
Disclosure: I work for Pivotal, another Foundation member, which also sells a commercial CF distribution.
Later this year, we will go back and evaluate the maturing k8s administration landscape. Our current approach has a few drawbacks, e.g. it requires a CoreOS reinstall to upgrade k8s cleanly (since all the magic happens in cloud-init and Ignition).
Uhh, we do. And we're not alone, as we work with several other companies that do. That's quite a big "nobody" tent.
Is kub core not working on spinning built-in volume plugins out of the core to keep it smaller? Are the maintainers not recommending using third-party tools + 3PR for all major new features going forward?
It seems like my comment was misinterpreted as a criticism of kub, or a sign of ignorance. It was neither. I stand by my comment that nobody runs naked kub in production. That doesn't make it any less good, stable or useful. It just means it's not meant to be a complete product.
You made the initial comparison.
> Nobody ever deploys kub alone in production. There is always some form of platform on top.
Which is another way of saying "even if there are some gaps they have to fill with other software". Sure, some customers pick a platform where the gaps are prefilled. Not terribly different from some of your customers that pair docker pieces with pieces from other vendors.
> You made the initial comparison.
Yes, sorry I wasn't clear. I meant that they were both components, as opposed to complete products.
On the other hand, SwarmKit and Kubernetes are comparable in functionality.
> Which is another way of saying "even if there are some gaps they have to fill with other software".
> Sure, some customers pick a platform where the gaps are prefilled. Not terribly different from some of your customers that pair docker pieces with pieces from other vendors.
I also agree.
It seems that there is nothing left to disagree on :)
Re: "Try developing on Docker for Mac/Windows and deploying to production with Docker for AWS/Azure"
Developing on a platform that isn't the same as your production platform is an avoidable situation. I could develop on .NET and deploy on Mono as well, but...
Single-host Docker in production, yeah, that's not terribly complex, but the new price of admission for production is a complete orchestration layer, and there is a 0% chance I consider taking a Swarm cluster to production any time soon. And that's to say nothing of the DDC disaster...
I don't know if a few hundred lines of shell scripting and Helm charts count as "building around it", but that's enough for a Git hosting service (Gogs) and a CI (Drone 0.5) with persistent storage via vboxsf.
There are 4 lines of configuration that differ between those two environments.
We do: https://kubernetes-on-aws.readthedocs.io/en/latest/admin-gui...
There are only a few AWS integration components we added, e.g. to use the "new" AWS ALB (see the linked text for details).
That's a bit of a stretch. Everything that people use to build something bigger is a "component", but that doesn't make it not a platform.
Kubernetes is absolutely a platform, in that it is the base layer on which higher-level systems are built. It's somewhat less opinionated than OpenShift (which is literally Kubernetes++) or Docker (the full stack), but that is by design. Opinions are too fickle - Kubernetes is here to service the evolving fashion of opinions, while providing durable base abstractions.
It is the basis for many products (plural) and an ecosystem, which is really what we wanted to achieve.
That's a really interesting angle to take on it. I think most users of Kubernetes would view it as the opposite; that K8s is the platform and Docker is just one component of that platform. It probably depends on which camp you've really bought into.
- What kubernetes needs from Docker is a simple and robust container runtime. We are spinning out containerd (the core container runtime that powers the Docker platform) to provide exactly that. We are actively working with the Kubernetes community to make sure containerd is a perfect fit for kubernetes to integrate - a better fit than Docker itself, in fact, since it will be much smaller and change only very slowly. See https://containerd.tools and https://blog.docker.com/2017/02/containerd-summit-recap-slid...
- This in turn will free Docker to focus on serving its userbase of developers and enterprises, which do want an integrated platform. Take a look at Docker for Mac/Windows or Docker for AWS/Azure for a sense of where we are taking the platform.
- If you ask the core kubernetes maintainers, they will tell you that kube is intended to be the "kernel" of your distributed system, and it's up to you to build a platform on top. So in that way, I think we agree that kubernetes is ultimately a component - nothing derogatory about that!
- You mention "camps". I think this evolution is very exciting because it allows us to move beyond the concept of camps. With containerd, a lot of bridges are already being built - engineers are collaborating peacefully and focusing on solving technical problems, which is a huge relief to everyone. Nobody likes drama.
- Lastly, we are making sure Docker is a very modular and loosely coupled platform. So, who knows? If enough of our customers ask us, maybe we'll eventually integrate kubernetes as an optional component ;) The point is, we have an opportunity to refocus the conversation on technical tradeoffs rather than silly pissing contests.
For all these reasons I think 2017 will be a good year for the entire container community.
I have agitated for a long time that it's silly to be competing at the layer of orchestration - it's not a revenue-producing product on its own. I'd love to see more alignment here. At this point, the systems are very similar in a number of facets.
We're wasting a lot of time copying features and ideas back-and-forth, when we could be pushing the state of the art forward faster.
RedHat has OpenShift and support contracts.
Google/AWS bill the usage directly, and get returns on other products you use.
Docker didn't have much to sell.
This is why, for example, Microsoft is seeing much more success with less tight coupling of Windows, Azure and Office.
If you use Docker it's expected (by Docker) that you will use Swarm. And now it's expected that your organization will use EE (is that Java EE? no it's Docker EE.)
Docker followed the Apple II model into the enterprise, it was a fancy typewriter.
At the same time, we also make sure you can pop the hood, mess with the components directly, and swap them out in various ways. You can do this within the Docker ecosystem with Docker plugins (for things like networking, storage, logging, authorization); or you can do it outside the ecosystem by hitting the low-level open-source components directly: containerd/runc, swarmkit, notary, libnetwork, infrakit... All these components are usable standalone, and Docker preserves the loose coupling.
You're right that currently Docker does not offer swappable orchestration - not because we don't want to, but because it's hard to do that without affecting the quality of the platform. Sometimes excessive abstraction leads to bad engineering.
In early versions of Docker Swarm we experimented with pluggable support for Mesos, Kubernetes etc. It worked in demos but we didn't find it fit for production.
I hope this helps you understand our approach better.
- End-to-end content trust with crypto signatures and verifications using Notary/TUF https://docs.docker.com/engine/security/trust/content_trust/
- Secrets management with encrypted storage and transport (https://blog.docker.com/2017/02/docker-secrets-management/)
- A vulnerability scanner that can detect vulnerabilities in arbitrary binaries without distro lock-in (i.e. even if your developer built from source on a non-Red Hat distro, it will still catch vulnerabilities) https://docs.docker.com/datacenter/dtr/2.2/guides/admin/conf...
- Secure orchestration out-of-the-box: https://docs.docker.com/engine/swarm/how-swarm-mode-works/pk...
- Somewhat counter-intuitively, the default security profile is more secure in Docker than in OpenShift, because the focus on "systemd everywhere" requires loosening the sandbox to allow systemd's tentacles to get through. In the past, Red Hat has actually introduced CVEs in their forked version of Docker that didn't exist in the official Docker.
- In Docker for AWS, Docker for Azure, Docker for Mac, Docker for Windows, we embed a specialized Linux distro that is trimmed down and locked down to the extreme, making OS surface area much much smaller than a traditional OS like Red Hat.
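To make the secrets bullet above concrete, here is a minimal sketch of the workflow (the secret name `db_pass` and image `myorg/app` are placeholders, and an initialized Swarm-mode cluster is assumed):

```shell
# Create a secret from stdin; it is encrypted at rest in the managers'
# Raft log and in transit between nodes.
printf 'changeme' | docker secret create db_pass -

# Attach it to a service; the container sees the plaintext only at
# /run/secrets/db_pass, backed by an in-memory filesystem.
docker service create --name app --secret db_pass myorg/app
```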
Specifically, things like best-practice guides for securing Kubernetes are currently thin on the ground, compared to Docker, which has a fair amount of information covering that sort of thing.
Also, the Kubernetes security model is still being developed, with things like locking down the kubelet API still to come in 1.6. Whilst that's less likely to be important for some companies, enterprises tend towards solutions with that sort of thing sorted out.
Designing security in the absence of real customers would have been a mistake.
My point was around maturity of things that enterprises tend to focus on like hardening/security best practice guides.
The kubelet API bit was just an example, although I do think the Kubernetes docs could be a bit clearer that this is a critical change to make after install to secure the cluster, given that all the install methods I've tried so far (kube-up, kubeadm etc) leave the kubelet API available unauthenticated by default.
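For readers hitting the same default, a hedged sketch of the relevant kubelet flags (introduced around 1.5/1.6; the CA path is a placeholder for your cluster's PKI):

```shell
# Disable unauthenticated access to the kubelet API and delegate
# authorization to the API server instead of allowing everything.
kubelet \
  --anonymous-auth=false \
  --authorization-mode=Webhook \
  --client-ca-file=/etc/kubernetes/pki/ca.crt
```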
I do expect that many of the docs/articles/blogs written about 1.6, 1.7, 1.8 are going to focus on hardening, security, etc. I just hope it isn't selinux style: "how do I turn it off" :)
In many environments, however, other aspects of Kube's security prove inadequate. For instance, there is currently no way to protect secrets in an environment that requires an HSM for certain keys. Instead, secrets are stored in an etcd server accessible to the entire cluster (please correct me if this is out of date).
One article discussing this:
A setup that's appropriate to say a start-up environment may very much not be appropriate to a bank for example, so hopefully security docs will be able to lay out the pros and cons of each configuration choice.
Work on the CIS guide for Kubernetes has started, so hopefully that will see some of these things mentioned.
Interestingly, we had many "hard" problems with Docker itself (race conditions, a stuck Docker daemon), so my confidence in Docker getting the enterprise thing right is not very high.
It will be interesting to see how Docker gets on in more enterprise environments.
Is there some set of automated tests my container has to pass? Can I run them today? More to the point, how much will it cost me?
On the other hand it looks like you have to purchase an EE license to test your code for certification: "Content that runs on the Docker Community Edition may be published in the Store, but will not be supported by Docker nor is it eligible for certification".
So it looks like a minimum of $750 for a one-node EE license to play?
The only sad part of this announcement is the part where they talk about "certifications". This opens the door to the next stage, "Docker developer certifications", and soon we'll see HR departments asking for it.
[root@sandbox ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker.repo
Loaded plugins: fastestmirror
adding repo from: https://download.docker.com/linux/centos/docker.repo
grabbing file https://download.docker.com/linux/centos/docker.repo to /etc/yum.repos.d/docker.repo
Could not fetch/save url https://download.docker.com/linux/centos/docker.repo to file /etc/yum.repos.d/docker.repo: [Errno 14] HTTPS Error 403 - Forbidden
Also check out the full docs: https://docs.docker.com/engine/installation/linux/centos/
In the meantime here is the correct URL: https://download.docker.com/linux/centos/docker-ce.repo
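For anyone else hitting the 403, the working sequence looks like this (a sketch for CentOS 7, run as root; note `docker-ce.repo`, not `docker.repo`):

```shell
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl start docker
```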
- I'd be more comfortable using Docker if we had alternative runtimes, Docker being just one (maybe primus inter pares) among them; I'm aware of runC but don't know if Docker images are realistically portable (after all, Docker, with its quarterly releases and only 1 year of enterprise support, still seems relatively immature)
- I'm not 100% sure on the legal situation re: distributing Linux and GNU userland binaries along with non-F/OSS commercial software; the practice of running e.g. `apt-get` to fetch the base OS userland on first start (and to a lesser degree, using union'd file systems, though I like that part actually), for me, has the smell of circumventing implied GPL conditions (but IANAL)
- in that light, I'd like a characterization of Docker vs. basic built-in POSIX/Linux/FreeBSD chroot jails
- the permissions story (must start as root, effective UID in container typically not resolvable with /etc/passwd) is suboptimal
Quay, the Docker Registry competitor from CoreOS, has automatic support for converting Docker images into rkt images (as in: you push the Docker image and pull with rkt, and it supposedly just works). I don't know how well it works in practice, though I can't imagine (off the top of my head) why it shouldn't. A Docker image is mostly a layered tarball with a few fields of metadata. Nothing particularly obscure.
(It wasn't clear from your comment if you were aware of this)
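That "layered tarball" shape is easy to see without Docker at all; here is a self-contained sketch that builds and lists an archive in the same layout as `docker save` output (the layer id `abc123` is made up):

```shell
# Build a fake docker-save-style archive: one (empty) layer tarball
# plus a manifest.json that points at it. No Docker daemon required.
work=$(mktemp -d)
mkdir -p "$work/img/abc123"
tar -cf "$work/img/abc123/layer.tar" -T /dev/null
printf '[{"Config":"abc123.json","Layers":["abc123/layer.tar"]}]' \
  > "$work/img/manifest.json"
tar -C "$work/img" -cf "$work/image.tar" manifest.json abc123/layer.tar
tar -tf "$work/image.tar"   # lists manifest.json and abc123/layer.tar
```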
Since last year (Docker 1.11), Docker itself is no longer a runtime; it uses runC as the default runtime (https://blog.docker.com/2016/04/docker-engine-1-11-runc/)
Additional OCI compliant runtimes can be configured on the daemon (https://docs.docker.com/engine/reference/commandline/dockerd...), and can be selected per container, using the "--runtime" option on "docker run" (https://docs.docker.com/engine/reference/commandline/run/#op...)
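In case it saves someone a docs dive: registering an extra runtime is a small daemon.json fragment (the runtime name `custom-runc` and binary path here are hypothetical):

```json
{
  "runtimes": {
    "custom-runc": {
      "path": "/usr/local/bin/custom-runc"
    }
  }
}
```

After restarting the daemon, you can then select it per container with `docker run --runtime=custom-runc ...`.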
Until recently Cloud Foundry ran Docker images with a custom runtime backend (garden-linux). It's since switched to using runC (garden-runc). So in principle it was always possible to do this.
A major reason for the switch was to reduce duplicated effort.
Disclosure: I work for Pivotal, a major contributor to Cloud Foundry. Insofar as Docker move towards the value line, we're competitors.
- Docker Community Edition (Docker CE) is not supported on Red Hat Enterprise Linux.
Previous versions of the documentation had no such sentence.
I wish I could switch to rkt, but there are so many things such as docker-compose which don't exist as equivalent for rkt yet.
What do you mean by that? Docker CE will continue to ship features in exactly the same way as before. If anything the new monthly edge releases will allow us to ship features faster. A common complaint from enterprise customers was that they were tied to the same release trains as the community version. Now that the CE and EE releases are clearly distinct, CE has more flexibility to move fast.
A few resources about containerd:
If you consider people to be reacting based on an expectation that you're quite possibly going to do the exact same thing, this thread makes significantly more sense - or at least it does to me.
Exactly what, if anything, can/should be done about this, I'm not sure. But I think that's probably what's going on.
In any case, in the end actions speak louder than words. If we consistently ship solid, open code that actually solves problems, and no Frankenstein crippleware materializes... then we will gradually earn the trust of more and more people.
FWIW, we took some of our inspiration from the original RHEL/Fedora fork by Red Hat in the early 2000s. And more recently from the Gitlab CE/EE product positioning.
rktnetes leverages the fact that rkt can natively execute a whole pod, avoiding a lot of the extra Kubernetes integration code that Docker requires.
I like what I'm hearing about rkt but I'm having a lot of trouble even getting a single simple container image together, let alone run it...
Maybe that's just me, but it would be better if there were one core to rule them all, with extras managed by plugins/wrappers/companion daemons. So if you have Docker Engine installed, you're good to go.
On the free/open-source side, Docker CE is the "one core to rule them all" that you describe.
I just didn't want to have different daemons (server, not client), and to have to replace them if I want to use Cloud, EE, or whatever (or stop using them, replace them with something else, etc.).
The Docker Cloud team is working on improvements to reduce segmentation too - stay tuned.
You guys have a big PR problem when it comes to Swarm. I believe that Swarm is a nicer, simpler alternative for those trying out Kubernetes, and it's well worth running in production.
But there are absolutely no case studies, success stories, etc around Swarm. Possibly that's happening as a consequence of your Datacenter product. For example, this entire page has zero mentions of Swarm - https://www.docker.com/enterprise-edition
On Azure, it took me quite a while to figure out (after going through support) that their cluster management is the pre-1.12 cluster product... not the Swarm mode.
Then someone pointed Docker-for-Azure which is way down google search results when you search for "azure docker swarm".
So what's the future of Swarm? Will it go back from being integrated to being a separate product, like Datacenter? Why is it conspicuous by its absence in every post and press release, including the path to your enterprise product?
The consequence of that is the comments on this very thread on HN : "Docker does not implement service discovery like kubernetes".
If I'm already paying $OS_VENDOR to support my OS, wouldn't whatever packages I'm running be covered as well? If not, why am I paying them?!