
Kubermesh: self-hosted/healing/provisioning, partial-mesh network K8s cluster - guifortaine
https://github.com/kubermesh/kubermesh
======
web007
This sounds like buzzword bingo.

I can't tell without reading the source: is this any different from minikube?
I see a mention of NUCs and netboot in the README, so I'm guessing it's
slightly more than that, but it's not documented.

~~~
mbryant
There's a better explanation of what's going on over here:
[http://ocadotechnology.com/blog/creating-a-distributed-data-...](http://ocadotechnology.com/blog/creating-a-distributed-data-centre-architecture-using-kubernetes-and-containers/)

There's some libvirt stuff, but that's mostly for local testing without having
a bunch of physical hardware.

The key things I'm trying to do are:

- Automatically provision new machines, just by plugging them in.
- Don't have a traditional network (i.e. switches).

~~~
corndoge
From the article, you're using Quagga -- are you aware of the active fork
FRRouting[0]? It was forked about 8 months ago and is over 3,000 commits ahead
since most of the core developers have stopped developing Quagga.

[0] [https://github.com/FRRouting/frr](https://github.com/FRRouting/frr)

~~~
mbryant
Yes, but thanks :)

FRRouting took off after I'd already moved on to other things. I've dropped an
issue in the repo so I remember it when I get a chance to work on Kubermesh
again.

------
oarsinsync
As someone who doesn't know enough about K8s, this looks like an amazingly
easy way to get stuck in.

Can anyone who has experience with real deployments advise on what pain points
may be encountered when growing something like this beyond N nodes (and say
what N might be)?

~~~
dguaraglia
I've found a few not-so-obvious pain points in my very limited k8s experience.

1) k8s makes a lot of sense for stateless applications (such as your website)
but not so much for stateful applications that require a client to connect to
the same container every time (there are ways to do it, but they're a pain in
the ass; see the first sketch after this list.)

2) Tooling is getting better with time, but it's still pretty green. Packages
for your usual orchestration tools like Puppet and Ansible are volunteer work,
so they easily get out of sync or require more work than you'd expect to get
going. Using their suggested YAML format leads to another problem: there's no
easy way to keep secrets out of the configuration files unless you build your
own process around it (see the second sketch after this list.)

3) Some pieces, like a replicated DB, might be easier to keep outside of the
k8s cluster. You can technically run them there, but they weren't designed for
that kind of environment and it sometimes shows.

4) The CI/CD story using Jenkins pipelines is far from solved. There are
several packages that provide partial solutions, but the documentation tends to
be horrible, which leads to days of debugging through trial and error.

5) Leaky abstractions everywhere. As an example, the Jenkins plugin for
building on your k8s cluster suggests using the "credentials" system, but you
need to add the credentials manually after you boot the Jenkins service. Then
your slaves stop receiving the credentials and you have to reboot Jenkins (I
had to reboot it every 3 or 4 builds on average.)
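
To make the sticky-connection point in 1) concrete, here's roughly the kind of
workaround I mean (a minimal sketch, names made up): a Service with client-IP
session affinity, which keeps each client talking to the same pod for as long
as that pod is alive.

    # Hypothetical example: pin requests from one client IP to the same pod.
    apiVersion: v1
    kind: Service
    metadata:
      name: my-stateful-app        # made-up name
    spec:
      selector:
        app: my-stateful-app
      sessionAffinity: ClientIP    # repeat requests from one client IP go to one pod
      ports:
        - port: 80
          targetPort: 8080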
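
And for the secrets problem in 2), the most k8s-native option I've found is to
keep credentials in a Secret in its own file and reference it from the app's
manifests; encrypting that file at rest (blackbox, GPG, whatever) is still your
own process. A minimal sketch, names made up:

    # Hypothetical example: credentials live in their own (encrypted-at-rest)
    # file rather than inside the Deployment YAML.
    apiVersion: v1
    kind: Secret
    metadata:
      name: db-credentials         # made-up name
    type: Opaque
    stringData:
      DB_PASSWORD: "change-me"     # decrypted / filled in by your own deploy process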

Don't let that discourage you from using k8s as a "run" cluster if your app is
a shared-nothing, stateless app. It's so much easier to set up (especially on
Azure and GCP), and it obviates the need for setting up Puppet + Sensu + load
balancing just to make sure your service keeps running when a node dies.

~~~
alexnewman
1) That's true, but why not support shared state on shared storage, if that's
not an option? (Rough sketch below.)

2) What exactly do you not like tooling-wise?

3) Totally agree. I'm actually working on replacing etcd with postgresql,
though.
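
Re: 1), by "shared state on shared storage" I mean something along these lines
(a rough sketch, names made up): a volume claim that multiple pods can mount,
assuming your storage backend supports ReadWriteMany (NFS, etc.):

    # Hypothetical example: a claim several pods can mount at once.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shared-state           # made-up name
    spec:
      accessModes:
        - ReadWriteMany            # needs a backing store that allows multi-node mounts
      resources:
        requests:
          storage: 10Gi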

~~~
dguaraglia
Regarding #2, I think it's hard to integrate k8s with the rest of your
infrastructure. If you're using, say, a combination of Terraform + Ansible or
Puppet to keep everything in shape, right now you have two options:

1) Write a bunch of bash scripts around kubectl and a bunch of YAML files.
While painful, this is the way I ended up going (plus blackbox to GPG
encrypt/decrypt secrets in the repo).

2) Try to use your usual tools (Ansible/Puppet) as a replacement for kubectl.
This is the dream, but the plugins for Ansible and Puppet only support subsets
of the latest k8s features, or require annoying stuff like setting your API
endpoint on every task.

In other words, getting reproducible deploys between different k8s clusters
(say, one for staging and one for production) is not really a thing yet. I
guess there's an argument that one should use k8s-native solutions for that
(such as namespaces; see the sketch below), but what about having clusters in
different data centers?
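
The namespace idea I'm hand-waving at looks roughly like this (a sketch, names
made up): one namespace per environment in the same cluster, with the same
manifests applied into each, e.g. kubectl apply -f app/ --namespace=staging.
That helps within one cluster, but it doesn't answer the multi-datacenter
question.

    # Hypothetical sketch: per-environment namespaces in a single cluster.
    apiVersion: v1
    kind: Namespace
    metadata:
      name: staging
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: production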

~~~
alexnewman
2) This is a very bad idea

------
dominotw
kubernetes has so much momentum and developer interest. I rarely see anything
about DC/OS, which is what they chose at my work last month; apparently DC/OS
is more 'battle tested' in the enterprise.

I don't think I even want to set up dc/os on my laptop and play with it; it
seems almost impossible to run mesos on osx. All these articles are making me
resent DC/OS :D.

~~~
justicezyx
DC/OS was developed with objectives that don't overlap with K8s'.

Namely, it targets machine management, while K8s is about container-centric
job management. It seems subtle, but if you think along these two lines, the
design decisions in the two systems fall out naturally.

~~~
dominotw
>DC/OS was developed with objectives that don't overlap with K8s'.

From their home page: 'deploys containers, distributed services, and legacy
applications into those machines'.

Containers are listed first, and that's definitely an overlap with kubernetes.
I understand that mesos was originally developed without a container-centric
focus, but dc/os is now definitely container focused. The first example in the
graphic on their site is 'Containerized Workloads'.

