
Docker for Mac with Kubernetes - watoc
https://docs.docker.com/docker-for-mac/#kubernetes
======
linkmotif
So confused by all the posts from people who say they run Swarm because
kubernetes is too complicated or is only for huge deployments.

I’ve had all sorts of difficulties installing Docker. By hand it’s not trivial
to get a secure install. Docker machine is great except it’s often broken. The
Docker machine dev team is a tired, understaffed bunch that’s always playing a
sisyphean whack-a-mole against dozens of cloud providers and very needy
posters on Github, myself included.

Kubernetes on the other hand is trivial with GKE. It’s great for single node
deployments. I run a single node on GKE and it’s awesome, easy, and very
cheap. You can even run preemptible instances. The myth that kubernetes is
complicated is largely perpetuated by the same kind of people who say React is
complicated: the people who’ve not tried it.
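
For what it’s worth, a single-node preemptible cluster like that can be created with one gcloud command. This is just a sketch; the cluster name, zone, and machine type are placeholders:

```shell
# Hypothetical single-node GKE cluster on preemptible (cheaper) hardware
gcloud container clusters create my-tiny-cluster \
  --zone us-central1-a \
  --num-nodes 1 \
  --machine-type g1-small \
  --preemptible
```

Preemptible nodes can be reclaimed at any time, but Kubernetes reschedules the pods, which is exactly the declarative behavior described above.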

And like React, once you try kubernetes you never go back. Kubernetes is
actually the orchestration equivalent of React. You declare what should be
true, and Kubernetes takes care of the rest.

And the features it provides are useful for applications of any size! If you try
kubernetes you quickly discover persistent volumes and statefulsets, which
take most of the complexity out of stateful applications (i.e. most
applications). You also discover ingress resources and controllers, which make
trivial so many things that are difficult with Swarm, like TLS termination.
Swarm doesn’t have such features, which any non-trivial app (say, Django,
WordPress, etc.) benefits from tremendously.
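
To illustrate, here is a rough sketch of an ingress resource doing TLS termination, assuming an nginx ingress controller is already running in the cluster; the host, secret, and service names are made up:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: django-ingress                  # hypothetical
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - example.com
    secretName: example-com-tls         # cert/key stored in a Secret
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: django           # hypothetical Service
          servicePort: 8000
```

The controller watches these resources and reconfigures itself, so TLS termination needs no per-app proxy setup.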

~~~
tedmiston
Do you really find k8s useful for a single node deployment or were you just
making an example?

I haven't used k8s, but I have used DC/OS, Mesos, and Marathon extensively,
which is not a setup I'd choose for a small number of nodes personally.

~~~
dominotw
>I have used DC/OS, Mesos, and Marathon extensively

Would you use DC/OS/Marathon (vs k8s) if you were making that decision now?

I've heard that the Mesos stack is good for machine learning/big data stacks,
but how does Marathon compare for deploying webapps?

~~~
nemothekid
I've run 2 Mesos stacks in production and have experience setting up a k8s
stack (on prem). First off, in my experience k8s ops is way more complex than
the DC/OS stack. I recently set up a new DC/OS deployment (80% of the cluster
resources were Spark, which works natively with Mesos, and I'd rather run the
ancillary services on Marathon than spend another 80% of my time on k8s). If
I didn't have the Spark requirement I would have gone with k8s.

Despite going with mesos I really had to contend with the fact that k8s just
has way, way more developer support - there are so many rich applications in
the k8s sphere. Meanwhile I can probably name all the well supported Mesos
frameworks offhand. Next, Marathon "feels" dead. They recently killed their UI,
as I imagine they are having trouble devoting resources to Marathon. Three
years ago I wanted a reverse proxy solution that integrated with
mesos as well as non-mesos services so I hacked Caddy to make that work [1]. 3
years later, I was looking for a similar solution and found traefik. It
claimed to work with mesos/marathon, but the marathon integration was broken
and the mesos integration required polling even though mesos had an events
API, so I hacked traefik to make that work [2]. On the other side of the
fence, you have companies like Buoyant who rewrote a core piece of their tech
(Linkerd) just to support K8s (and only K8s). This has a compounding effect,
where over the years things will just become more reliant on assuming you are
running k8s.

That "cost" you pay to set up Mesos/k8s is usually a one-time cost on the order
of a month. I feel, however, that k8s is going to give you a better ROI
(unless you are managing 100s of nodes with Spark/YARN/HDFS, in which case
Mesos continues to be the clear winner).

[1]
[https://github.com/mholt/caddy/pull/40](https://github.com/mholt/caddy/pull/40)
[2]
[https://github.com/containous/traefik/pull/2617](https://github.com/containous/traefik/pull/2617)

~~~
mholt
From me, an eternal thank-you for upgrading Caddy's proxy middleware!

------
alexellisuk
I've normally used minikube for OpenFaaS development - I appreciate the efforts
of the project, it's an invaluable tool. The DfM integration works very well
for local development and I've got some screenshots below:

[https://twitter.com/alexellisuk/status/949595379326210048](https://twitter.com/alexellisuk/status/949595379326210048)

Make sure you do a context-switch for kubectl too.
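
If I remember correctly, the flow looks something like this (the context name may differ depending on your version):

```shell
# List available contexts, then switch to the Docker for Mac one
kubectl config get-contexts
kubectl config use-context docker-for-desktop
```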

I see some people talking about Swarm vs Kubernetes. Swarm has always
maintained a less modular approach, which makes it simpler to work with - it's
obviously not going anywhere since there are customers relying on it in
production.

Out of the two it's the only one that can actually fit into and run on a
Raspberry Pi Zero because K8s has such high base requirements at idle in
comparison. For a more detailed comparison see my blog post -
[https://blog.alexellis.io/you-need-to-know-kubernetes-and-swarm/](https://blog.alexellis.io/you-need-to-know-kubernetes-and-swarm/)

~~~
fapjacks
I find that Swarm is the only thing that fits my use case of just shipping.
There's so much complexity and overhead with kubernetes and often I just want
something that works so I can ship it. You just can't beat Swarm for building
an insta-cluster that works well and quickly gets you where you want to be.
I'm sure kubernetes is great if you have forty datacenters around the globe,
but I don't, and neither does anybody I'm building services for.

------
perceptronas
Docker for Mac is not usable today because of high CPU usage due to I/O [1].

[1] [https://github.com/docker/for-mac/issues/1759](https://github.com/docker/for-mac/issues/1759)

~~~
justincormack
It is usable for the vast majority of users. If it is causing an issue, please
file a detailed bug report with a diagnostic ID, try the Edge releases, and
give some information about what you are actually running, for example how
to replicate it. Most of the bug reports in that thread are totally unhelpful.
Quite likely it is not even the same cause for different people, as some
people said it was fixed on Edge while others said it was not. Even a single
well-thought-out, detailed bug report would make it much easier to investigate
the issue.

~~~
zygimantasdev
Are you sure it works for the vast majority of users? At least in my developer
circle who use macOS for web* development - all of them have issues with
Docker's high CPU usage due to I/O. Some use docker-sync to work around the
issue.

As for bug reports - zero feedback from any maintainers on that thread. If you
are one of the maintainers - it might be good to write this comment on that
thread instead of HN.

* - I understand that web developers might be a small percentage of users and my case doesn't represent everybody

~~~
justincormack
I am not a maintainer but do work on LinuxKit, which Docker for Mac uses. If
docker-sync helps, that suggests you have an issue specifically related to file
sharing. Please file a new issue, rather than adding to this one, that explains
how to reproduce your development setup. Different setups behave very
differently (e.g. polling vs notifications), and people use things very
differently; there is no one set of tooling and setup that is "web
development". But it sounds like in your company you all use similar tools and
setups, so it is not surprising you all have the same issue. We have a
performance guide here that covers some use cases:
[https://docs.docker.com/docker-for-mac/osxfs/](https://docs.docker.com/docker-for-mac/osxfs/)

------
pacavaca
Wow! Did Docker give up on Swarm? I thought there was a time when Docker didn't
like the existence of k8s all that much.

Anyways, I envision this being very useful for development, may even replace
my docker-compose based test setup.

~~~
sz4kerto
No, it didn't. Yes, k8s has 'won' in large-scale deployments, but if you're
working at a small shop, then just imitating what Google does with millions of
servers is dumb. Do what works at your scale -- and Swarm is extremely easy to
manage.

Many people ask why someone would use an orchestrator on a small cluster
(dozens of hosts). Why not? Swarm is very easy to manage and maintain, and
using Puppet or Ansible instead is no less complicated.

The future of Docker, Inc. is of course Docker EE, and the future of Docker EE
is _not_ the scheduler, it's everything around it.

~~~
xorcist
> Swarm is very easy to manage and maintain, using Puppet or Ansible is not
> less complicated

The idea that dockerized software somehow is less dependent on configuration
management seems to be a popular and completely misguided one. The two trends
are completely separate, but I would argue from experience that unless you
have absolutely nailed the configuration and integration of all your moving
parts, don't even look at containers yet.

Containers tend to lead to more moving parts, not less. And unless you know
how to configure them, and perhaps even more importantly how to test them,
that will only make matters worse.

~~~
amazingman
If you design your infrastructure and choose your tooling well, then
_containerized_ (not "dockerized") software is far less dependent upon
configuration management; indeed, using Chef/Puppet/etc can be completely
unnecessary _for the containerized workload_. To be clear, however, there is
absolutely still a need for the now-traditional configuration management layer
at the level of the hosts running your containerized workloads. What's kind of
exciting about this is that the giant spaghetti madness our configuration
management repo has become at our org (and I'm pretty sure it's not just us
;-)) is going to be reduced in complexity and LOC by probably an order of
magnitude as we transition to Kubernetes-based infrastructure.
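
As a sketch of what that can look like in practice, workload configuration moves into Kubernetes objects rather than Chef/Puppet recipes (all names here are hypothetical):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config              # hypothetical
data:
  DATABASE_HOST: db.internal
  LOG_LEVEL: info
# A pod consumes it without any CM agent inside the image, e.g.:
#   containers:
#   - name: app
#     envFrom:
#     - configMapRef:
#         name: app-config
```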

~~~
xorcist
> indeed, using Chef/Puppet/etc can be completely unnecessary for the
> containerized workload

This is more than naive. As long as your software needs any kind of
configuration, there is a need for configuration management. There will be
access tokens, certificates, backend configuration, partner integration of
various kinds, and monitoring and backup configuration and you will want
guarantees that these are consistent for your various testing and staging
environments. You will want to track and bisect changes. You can either roll
your own framework for this or use Ansible/Puppet.

Whether you distribute your software pieces as tarballs, Linux packages, or
Docker images is completely orthogonal to how you tie these pieces into a
working whole. And the need for configuration management absolutely increases
when moving towards containerized solutions, not by the change in software
packaging format but by the organizational changes most go through where more
people are empowered to deploy more software which can only increase
integration across your environment.

I see organizations that have ignored this because they believe this magic
container dust will alleviate the need of keeping a tight grip over what they
run, and find themselves with this logic spread over their whole toolchain
instead. That's when they need help cleaning up the mess.

~~~
amazingman
I never said anything about magic container dust, nor did I say anything about
having less of a grip over our operations. I was attempting to make a point
about how your workloads themselves (applications/jobs) can be free of a
direct need for Chef/Puppet/etc, which can dramatically simplify your
configuration management _layer_. I never intended to claim that somehow
magically our pods need no configuration bits at all, and honestly I’m not
sure where you got that idea.

~~~
xorcist
The statement was that containerized workloads are less dependent on
configuration management. That could easily be interpreted as if configuration
management gets less important when you containerize, which is an idea that
seems to spread easily on its own, while I have found the complete opposite to
be true. That's why the number one guideline is to get a grip on your
infrastructure and configuration before you move to containers. Otherwise you
will end up with a mess worse than before.

------
amq
Kubernetes is also coming to docker-ce:
[https://github.com/docker/docker-ce/releases/tag/v18.01.0-ce-rc1](https://github.com/docker/docker-ce/releases/tag/v18.01.0-ce-rc1)

------
Exuma
One of my engineering friends told me he didn't use Kubernetes in the past
because there was a single point of failure with it for distributed setups.

I really wish I could remember what SPOF was pertaining to, but I just can't
remember. Does anyone have any idea if this is still relevant/accurate
information?

He told me this maybe 2-3 years ago, so I was wondering how things have
changed since then, or if anyone knows what he might have been talking about.

~~~
atombender
Kubernetes supports HA masters now:
[https://kubernetes.io/docs/admin/high-availability](https://kubernetes.io/docs/admin/high-availability)

Note that even if you don't have HA, Kubernetes being a SPOF isn't necessarily
_critical_. Barring some kind of catastrophic, cascading fault that affects
multiple nodes and requires rescheduling pods to new nodes, a master going
down doesn't actually affect what's currently running. Autoscaling and
cronjobs won't work, clients using the API will fail, and failed pods won't be
replaced, but if the cluster is otherwise fine, pods will just continue
running as before. By analogy, it's a bit like turning off the engines during
spaceflight. You will continue to coast at the same speed, but you can't
change course.

~~~
Exuma
Interesting, well, that's great to know. Thanks for the ELI5 explanation.

------
eggie5
Does this obviate minikube?

~~~
InTheArena
For me, yes. I just deleted my minikube and mini shift directories.

~~~
eggie5
sounds good to me!

------
daveevad
Any way to get this feature without updating to High Sierra first?

~~~
guillaumerose
Kubernetes is enabled for all OSX versions

~~~
bartvk
Minor nitpick, it's called macOS now.

~~~
switch007
I don't understand why people refer to pre-Sierra releases as macOS. Sierra
and onwards is called macOS.

guillaumerose was referring to all versions, so both "all OS X versions" and
"all macOS versions" would be wrong, no...?

~~~
alangpierce
Apple rebranded "Mac OS X" to "OS X" and later rebranded that to "macOS". It's
not like they're different lines of operating systems; it was a rename of the
whole line, so my impression is it's fine to use the term "macOS" to refer to
any of the versions since 2001, or it's also fine to use the name that was
given at release when referring to a specific version. In other words,
probably best to not worry about any particular phrasing, and not try to put
exact technical meaning on any of these terms. :-)

For example, Wikipedia[1] has a page called "OS X Yosemite" which describes it
as "A version of the macOS operating system", and the Wikipedia article on
macOS[2] says it was first released in 2001.

[1]:
[https://en.wikipedia.org/wiki/OS_X_Yosemite](https://en.wikipedia.org/wiki/OS_X_Yosemite)

[2]:
[https://en.wikipedia.org/wiki/MacOS](https://en.wikipedia.org/wiki/MacOS)

------
stephenr
I don't understand the use-case here. You want to use Docker+Kubernetes but
can't work out the bits to run it in VM's on your own?

~~~
watoc
If you need to experiment with k8s locally, it's pretty easy to install
Kubernetes in one or two VMs on your laptop with kubeadm, but then you need to
install Docker inside that VM. Minikube also installs another Docker daemon
inside its own VM.

Because I already have Docker for Mac installed to build and test images, I
think it's useful to have this local k8s integration.
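
For reference, the kubeadm path inside a Linux VM is roughly the following (a sketch based on the kubeadm docs; the pod network CIDR shown is the one flannel expects, and Docker must already be installed in the VM):

```shell
# On the VM that will be the master:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Make kubectl work for the regular user:
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a pod network (flannel here):
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```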

~~~
eicnix
Minikube is still my default for running a local Kubernetes instance, since it
supports more customisation of the k8s settings.

------
abusaidm
Once Kubernetes becomes stable in Docker CE, can one attempt to run their own
cluster on bare metal?

------
sigjuice
The phrase "Docker for Mac" is super misleading. If we run Docker in a Linux
VM on macOS, I don't think it counts as "Docker for Mac", IMHO.

Docker is primarily for running Linux applications on Linux (yes, I know there
are things like Joyent SDC, Docker Engine on Windows etc).

------
k__
Half-OT: Is it possible to run Xcode CLI tools from inside Docker for Mac?

~~~
RantyDave
No. Docker for mac runs a Linux VM.

~~~
k__
Doesn't that defeat the purpose of Docker?

But anyway, thanks for the info :)

~~~
alangpierce
Running a Linux VM on Mac defeats some of the purpose of Docker, but it's
still valuable:

* Docker is useful for production and has various other benefits, and Docker for Mac is a nice way to develop locally with Docker even if it's not as efficient as on Linux.

* Docker for Mac uses some built-in virtualization tools in macOS to share network and filesystem more efficiently than you could do with the older VirtualBox approach. So it's maybe a little closer to native OS support than you're thinking.

* A typical configuration has a single Linux VM holding many Docker containers, which is better than the alternative of many VMs.

~~~
k__
I see. okay.

I assumed the main idea of Docker was reproducible builds on different
machines and wanted to use it for building iOS apps.

~~~
OJFord
That's certainly a use case, it's just not compatible with Apple's
restrictions.

