
Adding Kubernetes support to the Docker platform - sz4kerto
https://www.docker.com/kubernetes
======
amouat
This really helps with the dev-to-production story for containers.

When people first started using Docker containers, we were promised things
would run identically in dev and production - no more "but it worked on my
laptop" issues. Then the rise of orchestrators meant that there again became a
significant difference between running an app locally (in compose) and in
production (on Kubernetes). Docker for Mac/Windows will now bridge that gap,
giving me a k8s node to run against in dev.

Whilst Kubernetes has provided a great production orchestration solution, it
never provided a great solution for development, meaning most users kept
developing with Docker and Compose. It's great to see these worlds now coming
together and hopefully leading to a first-class solution all the way from dev
to prod.

~~~
grabcocque
There's minikube. There's been minikube for a long while.

~~~
pmezard
Is there an easy way to build an image locally and start it in minikube
without an external registry or running a local one?

~~~
nickjackson
Yes. `eval $(minikube docker-env)` will set up the Docker CLI to use minikube's
Docker daemon.

[https://kubernetes.io/docs/getting-started-guides/minikube/#reusing-the-docker-daemon](https://kubernetes.io/docs/getting-started-guides/minikube/#reusing-the-docker-daemon)
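To round that out: after `eval $(minikube docker-env)`, a plain `docker build
-t my-image:dev .` puts the image straight into minikube's daemon, and a pod
spec can then use it without any registry by disabling the pull (the image and
pod names here are made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-image:dev
    # Never pull: use the locally built image already in minikube's daemon.
    imagePullPolicy: Never
```

Without `imagePullPolicy: Never` (or a tag other than `latest`), the kubelet
may try to pull the image from a registry and fail.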

------
stonewhite
Disregarding the swarm compatibility bit (which is irrelevant because swarm is
kind of irrelevant), I don't really like what this "support" really means. As
others mentioned minikube and k8s-cluster is already providing dev-to-prod
compatibility.

Kubectl already provides Docker-CLI-style commands like "exec", "logs", etc. So
now you can execute some of these commands on a k8s cluster with the docker
binary too? And why would you do that?
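For reference, the overlap in question looks like this (the pod and container
names are invented; these need a live cluster and daemon to run against):

```shell
# kubectl already speaks the familiar verbs:
kubectl logs my-pod
kubectl exec -it my-pod -- /bin/sh

# ...mirroring the Docker CLI equivalents:
docker logs my-container
docker exec -it my-container /bin/sh
```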

All I see is struggling for relevance, duplication of functionality and a very
unnecessary vendor lock-in vector.

~~~
raesene6
Not really sure I see the vendor lock-in here. If you use Docker EE sure
you're going to be locked in to their solutions to an extent but then that's
true of adopting any commercial supported solution that provides layers on top
of the base k8s clustering tech (e.g. Openshift).

The API is still k8s and the YAML files are identical, so migration at a
technical level off that platform should be easy enough.

I think this move is about Docker maintaining the trajectory in enterprise
where people want the management GUIs and extra features but where Kubernetes
and particularly Openshift is making progress at the expense of Docker EE.

~~~
SEJeff
Openshift is open source, Docker EE is not. Big difference.

[https://github.com/openshift/origin](https://github.com/openshift/origin)

~~~
raesene6
As InTheArena says, both Docker EE and Openshift have Open source cores and
then supply commercial support and additions to supplement them.

It's a pretty common model amongst these companies, and hey they've got to
make money somehow :)

~~~
smarterclayton
Everything in openshift is open source at
[https://github.com/openshift/origin](https://github.com/openshift/origin).
The commercial version is long term support, security response and errata, and
the stability around that. There's nothing that is withheld from the open
source project. It is not open core.

Edit: I forgot, the logo is not open source. So the logo is withheld :)

~~~
SEJeff
It's surprising how many people don't realize this!

------
InTheArena
Great news for everyone except for VMWare (this is a simple compelling
operating system for data centers that spans both windows and mac) and
Openshift (which was one of the few viable ways of actually purchasing
Kubernetes support). A lot of egos on both sides had to be suppressed to make
this happen. Docker Swarm was a key driver in making Kubernetes popular:
everyone realized that they needed something like Swarm, but the implementation
was so poor that no one could use it. That kicked K8s into hyperdrive. Parts of
the K8s community have been particularly partisan in doing everything they can
to minimize Docker.

Hopefully both sides _now_ come together and sing kumbaya, and we don't see a
continuing KDE-versus-Gnome-style war, an embrace-and-extend attitude by
Docker, or a continuing push to marginalize Docker by the Kubernetes folks.

~~~
erikb
Suppressing egos usually means one side has found a trump card to beat the
other in the fight for leadership. I don't think that battle is over yet,
though. It's a very strong move Docker is making here, but at the same time k8s
is considering choosing another container engine as its main component.
Currently, at least in the enterprise, k8s has a lot more traction than Docker
(I personally love Docker more, but because of that I need to focus 99% of my
effort on k8s every day).

And when it comes to enterprise support, Openshift is still the best solution.
They are, afaik, the only ones that offer a complete set of answers to most
questions you can have in the PaaS space. Everybody else is like "here's an
API, choose one of 3 billion plugins" (just thinking of CNI here). In the end
it doesn't matter for the customer, though. Customers just want things to run
smoothly and, if possible, to reduce their maintenance workforce. They don't
want choices, they want solutions.

~~~
InTheArena
Yes, but guess what: Docker CNI is now going to be supported, so CNI is no
longer an issue. Ingress will still be an issue, but Openshift is still doing
its own random route thing there anyway.

Being low in the stack is a power move. It's like an NFL lineman: the lower
player has considerably higher leverage than the higher player. Docker can go
in, run kubeadm legitimately, but use Docker-based CNI and volume plugins, and
displace Openshift.

Plus, Openshift is _12k_ an application node on AWS:
[https://www.openshift.com/dedicated/index.html#pricing](https://www.openshift.com/dedicated/index.html#pricing)

~~~
erikb
Interesting idea, but reality looks different. Everything underneath the PaaS
layer becomes less and less important. With container engines it may be hard
to see for most people yet, I have to admit that. But with OS and hardware you
can see it. E.g. think about what OS you run your PaaS on. It doesn't matter.
The only limit here is integration with Docker/Kubernetes. If these are
available on the OS then it doesn't matter which one you choose. That's also
why many people now start to use complete, unmodified OS images that don't
update individual packages anymore, but instead update the whole OS layer at
once or not at all.
Then hardware. Would you say anybody running k8s has an advantage when running
on a super computer compared to a cluster of hundreds of desktop computers?
Probably not.

------
grabcocque
This seems a lot to me like Docker Inc. caving in to what has been painfully
obvious for a while: K8s won and Swarm/Mesos lost the battle for hearts and
minds in container orchestration. We can argue about why it happened, but I
got the impression Docker Inc. were desperately trying to wish it away.

Now reality has intruded and I am glad, though I predict they'll continue to
maintain that Swarm is a first class platform for a while, then quietly let it
wither on the vine until one day it's forgotten about.

Also of note is Rancher 2.0's dropping support for Swarm and Mesos and
focusing solely on K8S going forward.

~~~
jadbox
Not sure why you would use Rancher if you have Mesos DC/OS? Mesos eclipses it
on every significant feature, though I'd say Rancher is easier to set up
initially.

~~~
HyperLinear
We run a DC/OS+Traefik stack here and can only praise it. Shame it doesn't get
the same amount of love the other projects enjoy, but so far it's rock-solid
and we are more than happy with it. :)

------
InTheArena
There is some real meat here, and things that should have been done long ago.
A key thing is that the Docker network drivers (libnetwork) are becoming CNI
compatible. This will vastly simplify one of the worst aspects of setting up
Kubernetes, and ensure a consistent network space across containers, even if a
given container is not in kube. That's nothing but awesome.

[https://github.com/docker/libnetwork/pull/1978/files](https://github.com/docker/libnetwork/pull/1978/files)
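For reference, CNI boils down to plugins that consume a small JSON network
config like the sketch below (using the standard bridge plugin; the network
name and subnet are made-up examples):

```json
{
  "cniVersion": "0.3.1",
  "name": "example-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [ { "dst": "0.0.0.0/0" } ]
  }
}
```

A libnetwork driver that speaks this interface can be dropped into a
Kubernetes cluster like any other CNI plugin, which is what makes the
compatibility work interesting.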

~~~
zenlikethat
Yeah setting up networking has been one of the worst parts of using Kubernetes
in my limited experience -- just using default Docker networking would be
sweet.

~~~
erikb
Networking is a topic where you really need debugging functionality. That's
why it hits you the hardest there. Same experience here. It's terrible. But
the underlying problem is that there's basically no debugging functionality in
cluster environments until you are skilled enough to set up your own (ELK
stack etc). But you can't even get there in a reasonable amount of time.

The same incomplete debugging is also in kubeadm. It often hangs at the
"waiting for control plane" stage without any additional info. Helm also has
such problems, reporting networking errors when there's no networking problem
(if it checks IPv6 first but then switches to IPv4, for instance). It's also
possible for a Helm deployment to fail, give no real reason why, and then be
impossible to uninstall without restarting the k8s master.

It's maybe even more general: a problem in the whole Go programming language
world. Every time I see a tool written in Go I immediately cringe, already
knowing there will be debugging problems. No idea why nobody inside this
community realises it, or how they debug. I suspect they don't really debug,
and live in the illusion that others know something better, when actually the
others don't know either.

~~~
zenlikethat
Yeah, I've run into that same issue with kubeadm. It was hanging because of
the network not being set up.

I'm not really sure where your original comment is going, but I don't really
feel the problem is endemic to Go. Using/debugging most software is an
exercise in frustration. Just look at Linux on the desktop (which I use btw,
I'm not criticizing). Fixing things is usually reduced to tribal knowledge,
IRC, and Googling.

~~~
erikb
I agree, a lot of software NOT written in Go also has this problem. I don't
know how it is in the C world, but in many programming languages writing good
activity reporting (i.e. logging) is considered a core skill for every
programmer.

It is sometimes hard to read the logging messages and understand how they came
to be. But just having a different status report for each different problem is
already so helpful. For instance, if kubeadm fails with "I need cheeseburgers"
when you actually forgot to configure your proxy correctly, and with "I need
more minerals" when you forgot something else, then the first debugging session
is quite frustrating. But after that you know "cheeseburger means proxy" and
you can continue. Whereas if you hit "waiting for control plane" for ALL the
problems, then your brain can't even remember what to check right now. I'm the
best example: I've already forgotten the other five things that can go wrong,
and I would need to check my work-internal wiki for them.

I think that's the main reason why logging exists: to increase the speed of
getting from hitting a symptom to discovering what's actually going wrong. And
Go in general, and k8s specifically, simply go in the other direction the
whole time. They don't report any errors, and sometimes even report errors
when there is no error. This is systematic in some way, but I would need to
study the community to tell you more specifically what's wrong.
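The point about one distinct message per failure mode can be sketched in a few
lines of shell (the checks, variables, and messages are invented for
illustration - they are not from kubeadm):

```shell
#!/bin/sh
# Hypothetical preflight check: each failure mode gets its own message,
# so "cheeseburger means proxy" becomes learnable - unlike one generic
# "waiting for control plane" line covering every possible cause.
preflight() {
  [ -n "$HTTP_PROXY" ] || { echo "error: HTTP_PROXY is not set"; return 1; }
  [ -f "$CONFIG_FILE" ] || { echo "error: config file $CONFIG_FILE missing"; return 1; }
  echo "preflight ok"
}
```

The caller sees a different string for each root cause, so the second time a
symptom appears the fix is already known.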

------
ealexhudson
Have to say, I don't see the value in being able to have Swarm and k8s in the
same cluster.

"Docker: powered by Kubernetes" seems to be more of a marketing thing to move
down the value chain, and not be seen as a basic piece of infrastructure.

~~~
shykes
You can disable Swarm or Kubernetes at will in each Docker EE cluster. Hybrid
is very useful for enterprises who already have to manage both - it was a
highly requested feature.

~~~
ealexhudson
I think I can understand "We have both and need to manage them more easily" as
a request, because it's about pain right now.

The thing I'm unsure about - and it would be really interesting to get your
perspective on - is what this means for Swarm longer-term. Is there still
going to be a reason why people will want hybrid? Is it a migration play?

In a hybrid, over time I'd want to two to behave the same, and getting k8s up
and running is a one-time cost and likely decreasing maintenance. It seems
like a point solution.

~~~
shykes
Swarm has a very special role, because it's custom-built to integrate into the
Docker platform. Because it's so specific, it has a smaller standalone
community than Kubernetes, but it makes up for it in focus and speed. You
should expect a lot of bleeding-edge features to ship in Swarm first, and a
generalized version to land in Kubernetes later. That's already been the case
in the past: Windows support, secrets, node identity & promotion - those all
shipped in Swarm first, then made their way to Kubernetes. Not because Swarm
developers are smarter, but because they can focus on a narrower, more
integrated problem set.

Longer term, I think all orchestrators will converge to look more and more the
same. Orchestration will become a commodity, and it will matter less and less
which orchestrator you use, especially to developers. But this process will
take a long time, and in the meantime enterprises (our primary customers) need
to deal with the situation on the ground, which is a lot of Swarm and
Kubernetes living side by side because of historical decisions made in
2015-17.

~~~
pacala
> You should expect a lot of bleeding edge features to ship in Swarm first

The development of those bleeding edge features is well hidden. The
contributions graphs seem to indicate that Swarm is at best a ghost town.
Perhaps the action is happening somewhere else and/or Docker will start
investing in Swarm once more.

[https://github.com/docker/swarm/graphs/contributors](https://github.com/docker/swarm/graphs/contributors)

~~~
TheDong
You may be looking for swarmkit
[https://github.com/docker/swarmkit](https://github.com/docker/swarmkit)

~~~
shykes
Correct :)

------
InTheArena
So now the last question is: how long does it take for AWS to finally abandon
ECS and formally support K8s as a service? I think this makes it kind of a
slam dunk, but it forces AWS to give up a lot of proprietary lock-in.

~~~
HatchedLake721
AWS re:Invent is a few weeks away, so hopefully they announce something
Kubernetes-related.

~~~
InTheArena
I'm going to be there front and center. I know that they had something they
chose not to announce last year... hopefully they make up for it full tilt
this year ;-)

------
bonsai80
I see them stating "...for developers using Windows and macOS" but not
mentioning Linux. I feel like I'm missing something in how I'm reading that
page. How can I make use of this on Linux?

~~~
shykes
We're going to support Linux also. But we want to be careful not to disrupt
the users of the original container engine as we transition it to Moby. In the
future there will be a cleaner separation between "Docker CE, the developer
tools" and "Moby engine, the open-source container engine". The last thing we
want is for someone to upgrade their production Linux engine and find an
unexpected and unwanted Kubernetes distribution wedged in.

That separation is already in place for Windows and Mac, so we're starting
there.

~~~
ntnn
That sounds like a shoehorned explanation. You're leaving developers on Linux
out in the cold because people aren't paying attention to their production
systems?

Don't get me wrong, the work you guys do is cool and all, but that isn't a
valid explanation from my point of view. Any company should have some sort of
staging to test updates before rolling them out - it isn't up to the
developers of the software to take care of this.

And not only that - the switch will come at some point or another either way,
so it doesn't make sense to hold that back from CE on linux so that someone
doesn't 'find an unexpected and unwanted kubernetes distribution wedged in'.
Those who would find that now would also be surprised by that later on.

To add to that - containers are tested using CI/CD tools anyhow, which are
predominantly powered by Linux machines, which again makes this decision less
convincing. The build may be fine on the developer's machine and in
production, but the CI/CD environment wouldn't reflect either of those
environments.

This looks more like a facade for selling more Docker EE licenses rather than
wanting to protect users. Which is fine, of course - but then please say that.

~~~
rsanders
There has been plenty of hue and cry in the past about the rapid rate of
change of Docker, and new features bundled in when many users would have
preferred a more deliberate and planned change in what is to them a critical
piece of infrastructure. You're assuming quite a lot.

~~~
ntnn
If they were planning this over a longer period of time for all distributions
CE is available for, I wouldn't have said anything. However, they are
specifically leaving out the platform most people are using Docker on. And not
only that - they _are_ providing k8s support with EE on Linux. That pretty
much deliberately points towards 'buy Docker EE if you want this specific
feature'.

Tbh, I wouldn't even have said anything if they were making it an EE-only
feature. Thing is - they want to make money with this move and they're not
honest about it. And in the process they're dragging through the mud the
larger demographic using Docker on the bleeding edge: the people who are
trying the new features on their own servers in their own time.

> There has been plenty of hue and cry in the past about the rapid rate of
> change of Docker

Yes, well - that is what happens when a company decides to use bleeding-edge
hipster software. With Puppet, one minor version may not work with a server
that's a few minor versions behind; with pre-5 versions of ELK, the cluster
may have keeled over if the version migration hadn't been planned
meticulously; with Consul you may get better performance (DC-local speaking)
than with etcd on one release and far worse on the next.

Crying to the devs not to produce good software so quickly shouldn't be the
solution.

------
sz4kerto
Seems that people think Docker has given in, but I am not so sure. If you can
switch between Swarm and Kubernetes transparently, then why wouldn't you start
with Swarm? (I'm talking about small companies who are just starting with
containers.)

~~~
mrmondo
Why wouldn’t you start with Kubes? Less vendor lock in and a much bigger
community.

~~~
organsnyder
If "Docker == containers" continues to hold true in many people's minds, then
it's possible that Docker Swarm could feel like the "vanilla" orchestration
platform. Of course those of us familiar with the platforms know better, but
that mindset could persist, especially with pseudo-technical decision-makers.

~~~
monsieurbanana
Could you elaborate on what alternatives to Docker are worth checking out? I'm
unfamiliar with containers and off the top of my head I can't name anything
besides Docker.

~~~
dharmit
There's CRI-O. v1.0 was announced yesterday -
[https://medium.com/cri-o/cri-o-1-0-is-here-d06b97b92a98](https://medium.com/cri-o/cri-o-1-0-is-here-d06b97b92a98)

~~~
InTheArena
The lesson of Linux is that fragmentation is bad. This is a chance to fight
fragmentation.

------
caleblloyd
This will give IT organizations the option of getting an Enterprise supported
distribution of Kubernetes from Docker.

Historically, most IT orgs requiring supported k8s have either gone cloud with
something like Google Container Engine, or gone with OpenShift and gotten
support from RedHat. OpenShift is a fork of Kubernetes, though, and lags a
year or so behind. It also adds opinionated features such as Image Streams.

Docker's announcement said they were using "real" Kubernetes, not a fork or a
wrapper. I've set up Kubernetes by hand before and it is no easy feat. I'm
looking forward to evaluating Docker's solution and its maintenance and
upgrade process.

UPDATE: My goal with this post is not to sell people one way or another, but
rather to explain where some of Docker's reasoning for this integration is
coming from.

Disclosure: I work for a Docker partner

~~~
InTheArena
This. It's hard not to see Redhat (and to a lesser degree VMware) as the big
loser in today's news. I absolutely want to see how this is implemented.
Openshift's pricing is highway robbery.

Disclosure: my company is (was?) a big Openshift consumer...

~~~
merb
VMware actually supports K8s. (you can even pay for support)

------
sandGorgon
How are they doing this? The big difference between Swarm and Kubernetes is
the ingress. If this is seamless, then it has to be a batteries-included
version of Kubernetes with ingress, overlay network choice, etc. already
mapped out.

What are the details here?
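For context, "ingress" in Kubernetes means routing objects like the sketch
below (the host and service names are made up), and vanilla Kubernetes only
defines the object - you still have to pick and run an ingress controller to
make it do anything, which is exactly the "batteries" question above:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: example-service
          servicePort: 80
```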

Docker Swarm is beyond awesome and a great path for someone to scale up at the
lower end of the scale spectrum (two containers). I really hope that this
brings more people into Swarm.

I'm also keen to see what it means for the Kompose project.

------
wyc
From The Information's article "When Docker Said No to Google":

> In 2014, Google approached a startup called Docker proposing the two
> collaborate on software each was developing to help companies manage lots of
> complex applications, according to people with knowledge of the proposal.
> But Solomon Hykes, Docker’s founder and CTO, said no. He wanted to go it
> alone.

> Three years later, the cost of Mr. Hykes’ previously unreported decision is
> becoming apparent. The software that Google was developing was Kubernetes,
> an open-source product that now dominates its segment of the cloud software
> market. Docker’s rival software, Swarm, is also open-source but isn’t
> anywhere near as popular, two former Docker employees say.

[https://www.theinformation.com/when-docker-said-no-to-google](https://www.theinformation.com/when-docker-said-no-to-google)
(Sorry... it's paywalled :/)

~~~
InTheArena
Yep, Kudos for Docker for learning from their mistakes... But it's also a
reminder of how powerful being low on the stack is.

------
alexlarsson
Is this a response to cri-o taking Docker out of Kubernetes?

~~~
iamdeedubs
That was my initial thought as well. CRI-O hits 1.0, and then this. To me, it
comes across as an attempt to stay in the news. Possibly to start changing the
narrative from Docker vs Kubernetes to Docker <3's Kubernetes.

~~~
zenlikethat
Docker's conference (Dockercon) is happening right now so announcements coming
from them are no surprise. A Kubernetes integration has probably been in the
works for a while.

It seems more likely to me that the CRI-O 1.0 announcement was a tactical move
from Red Hat to hijack the conversation during Docker's own conference. CoreOS
did the same thing 3 years ago when they announced rkt, trying to capitalize
on Dockercon as a time to make a bunch of noise for themselves. Docker
themselves have been no perfect angels in this regard (for instance with their
infamous "accept no imitations" shirt at Red Hat's conference); I'm just
calling it as I see it.

Disclaimer: I worked for Docker, Inc. for 3 years.

------
kozikow
I use minikube for dev and it's pretty good. My description of my
minikube+pycharm setup:
[https://kozikow.com/2016/09/16/using-pycharm-docker-integration-with-minikube/](https://kozikow.com/2016/09/16/using-pycharm-docker-integration-with-minikube/)

The only problem I haven't solved yet is debugging Python code running in
Kubernetes using PyCharm. If I run a container in PyCharm using the "Debug..."
dialog, it launches inside the "docker context" rather than the "kubernetes
context". For example, I can't connect to Kubernetes services via their
ClusterIP - the container launched via PyCharm does not see it. The only
solution I found is using docker compose to set up an environment similar to
Kubernetes and using docker compose from PyCharm. Hopefully this announcement
from Docker will simplify that story.

------
familyit
And Amazon making K8S a first class citizen

~~~
malaporte
Any special insight here?

~~~
hijinks
Amazon did join the CNCF

~~~
oblio
Yes, but it will be a while, I think, before they actually offer something
based on it. I don't see any service based on it at this point. AWS ECS is
based on Docker.

~~~
freeman478
AWS re:Invent is at the end of November. I'd guess there will be some
announcements there...

~~~
islanderfun
That's what they said last year. We got Blox instead.

------
andy_ppp
What does this mean for Docker Swarm, are they saying Kubernetes is "better"?

~~~
why-el
Not necessarily. It's like Apple supporting PowerPoint even though it has
Keynote.

~~~
InTheArena
Ehh. I think the writing is on the wall for Swarm. K8s is a better solution,
and at this point Docker + K8s should be the standard. Fragmentation is bad.

~~~
why-el
Yep, I agree with you.

------
joevandyk
Anyone have any recommendations for getting k8s on a 3 machine system, like
Digital Ocean or OVH?

~~~
chmielewski
[https://ronanquillevere.github.io/2017/05/16/kubernetes-ovh.html](https://ronanquillevere.github.io/2017/05/16/kubernetes-ovh.html)

[https://github.com/antoineco/kOVHernetes](https://github.com/antoineco/kOVHernetes)

Go with OVH: unlimited resource usage (including traffic), and they allow you
to create/own your own private network (vRack) of dark fiber. Look into using
the multiple points of presence they offer. If you don't need it right now,
wait for OVH to offer local US machines rather than just geolocated IPs.

You can do this well with bare-metal servers or with any of the dedicated
and/or shared cloud offerings they have.
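If you go the self-hosted route, the usual flow on three such machines is
kubeadm (a rough sketch; the pod CIDR is an example value and the join
arguments are placeholders printed by `kubeadm init`, not real values):

```shell
# On the first machine (becomes the control plane):
kubeadm init --pod-network-cidr=10.244.0.0/16

# Install a pod network (Flannel shown here; any CNI plugin works):
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# On each of the other two machines, run the join command that
# `kubeadm init` printed, e.g.:
kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```

That gives you a one-master, two-worker cluster; anything fancier (HA masters,
load balancers) is where the managed offerings start to earn their keep.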

