
Gardener: Manage Kubernetes clusters across multiple cloud providers - rusht
https://gardener.cloud/
======
madmulita
I don't mean to hijack the thread; this is a sincere concern.

I've been trying to build a minimal kubernetes cluster in our lab to see what
it would take to host this kind of infrastructure. It's not clear if we are
allowed to use the public cloud yet. (We are a bank, yeah, I know)

I've tried, at least:

\- kubeadm

\- rancher

\- canonical kubernetes

\- canonical kubernetes core

\- some random internet recipes

And for some IaaS:

\- cloudfoundry

\- openstack

\- cloudstack

\- opennebula

\- ganeti

Not one has worked out of the box in our environment. Every single one expects
to have a direct connection to the internet. Any proxy in the middle creates
havoc.

I've been able to hammer some of these solutions until the cluster started and
had some pods or VMs running, but it feels like these are not ready for
production, or at least not for 'secure' on-premise deployment.
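
For what it's worth, most of these tools can be coaxed through a corporate proxy if the proxy settings reach both the shell running kubeadm and the container runtime, and if cluster-internal CIDRs are excluded from proxying. A rough sketch; the proxy address and CIDRs below are placeholders, not values from this thread:

```shell
# Proxy settings must be visible in the shell that runs kubeadm/kubectl.
export HTTP_PROXY=http://proxy.corp.example:3128
export HTTPS_PROXY=http://proxy.corp.example:3128
# Crucially, exclude cluster-internal traffic; otherwise kubeadm's health
# checks and pod-to-apiserver traffic get routed into the proxy.
export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,10.244.0.0/16,.svc,.cluster.local

# The container runtime pulls images itself, so it needs the same settings,
# e.g. via a systemd drop-in for containerd:
sudo mkdir -p /etc/systemd/system/containerd.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/containerd.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.corp.example:3128"
Environment="HTTPS_PROXY=http://proxy.corp.example:3128"
Environment="NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,10.244.0.0/16,.svc,.cluster.local"
EOF
sudo systemctl daemon-reload && sudo systemctl restart containerd
```

Forgetting the runtime half is the usual source of the "havoc": the shell can reach the proxy, but image pulls still time out.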

~~~
cmdkeen
Rancher has an "air gap" installation option, though it requires you to be
running your own on-premise registry (which you'll be doing anyway if you're
not sure you can use the public cloud).

I'm busy deploying 1.6 at a financial institution, and thinking about the 2.1
upgrade to Kubernetes. The Rancher team have been great so far.

[https://rancher.com/docs/rancher/v2.x/en/installation/air-ga...](https://rancher.com/docs/rancher/v2.x/en/installation/air-gap-installation/)
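
The general air-gap pattern (not specific to Rancher's docs; image names and the registry host below are illustrative placeholders) is to mirror the required images into the internal registry from a machine that does have access:

```shell
REG=registry.internal.example   # placeholder internal registry
IMAGES="rancher/rancher:v2.1.0 rancher/rancher-agent:v2.1.0"

# On a machine WITH internet access: pull each image and retag it
# for the internal registry, then bundle everything into a tarball.
for img in $IMAGES; do
    docker pull "$img"
    docker tag "$img" "$REG/$img"
done
docker save -o rancher-images.tar $(for img in $IMAGES; do echo "$REG/$img"; done)

# Carry the tarball across the air gap, then load and push from inside:
docker load -i rancher-images.tar
for img in $IMAGES; do
    docker push "$REG/$img"
done
```

The installer is then pointed at the internal registry instead of Docker Hub, so nothing on the cluster ever needs outbound internet.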

~~~
madmulita
Noted. I didn't see this option. Thanks!

------
rusht
Here’s the blog post:
[https://kubernetes.io/blog/2018/05/17/gardener/](https://kubernetes.io/blog/2018/05/17/gardener/)

------
number6
There is also rancher.com

~~~
jacques_chester
And BOSH, via Cloud Foundry Container Runtime[0].

BOSH is also used to deploy Cloud Foundry and a bunch of data services
(RabbitMQ, Redis etc). It has the advantage of having been under continuous
development for 7 years, with backends (CPIs) for most IaaSes being provided
by the IaaSes themselves.

I work for Pivotal, we sponsor most of the work done on BOSH. As it happens,
SAP released an experimental BOSH CPI for Kubernetes[1] and there is work
underway to make that experience smoother[2].

[0] [https://docs-cfcr.cfapps.io/](https://docs-cfcr.cfapps.io/)

[1] [https://github.com/SAP/bosh-kubernetes-cpi-release](https://github.com/SAP/bosh-kubernetes-cpi-release)

[2] [https://www.dropbox.com/s/6jv9su650a76qmq/BOSH%20Kube%20CPI-...](https://www.dropbox.com/s/6jv9su650a76qmq/BOSH%20Kube%20CPI-20180401.pdf?dl=0)
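
For anyone unfamiliar with the BOSH workflow being described, the lifecycle is roughly: upload a stemcell (base VM image) and a release (packaged software), then deploy a manifest; the same commands drive day-2 upgrades. A sketch, with the director address, credentials, and file names as placeholders:

```shell
# Point the CLI at a BOSH director (address and CA cert are placeholders).
bosh alias-env lab -e 10.0.0.6 --ca-cert ./director-ca.pem

# Upload a stemcell (base OS image) and the CFCR/Kubernetes release.
bosh -e lab upload-stemcell ./bosh-stemcell-ubuntu-xenial.tgz
bosh -e lab upload-release ./kubo-release.tgz

# Deploy (or later, upgrade) the cluster from a manifest; BOSH converges
# all VMs to the state the manifest describes.
bosh -e lab -d cfcr deploy ./cfcr.yml
```

Re-running `deploy` with a newer stemcell or release is the whole upgrade story, which is the property being claimed for it here.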

~~~
vasu1124
That's all nice. The issue, though, is that BOSH has been completely leapfrogged
by Kubernetes with its extensible API. BOSH will never reach the community size
and acceptance that Kubernetes has. That boat has sailed. And with
implementations of the machine specs [0,1,2,3] you get rid of the "media break"
that you deem BOSH to be filling (it is not). Maybe you could implement BOSH as
an implementation of the machine spec and integrate it into K8s, the other way
around from KuBo?

As for the bunch of data services, I guess it's only a matter of time until
you see a Cambrian explosion of _productive_ operators. I mean, this non-
comprehensive list [4] is already impressive.

SAP works on many projects where obligations with long-term commitments have
to be kept. And that is OK. The BOSH CPI is one experiment to get CF on K8s.
Have a look at the one which seems more attainable [6]. But these activities
are an indicator of the elephant in the room, namely CF & K8s: will it blend? [5]

I work for SAP at the intersection of SaaS, PaaS & IaaS and K8s.

[0] [https://github.com/kubernetes-sigs/cluster-api](https://github.com/kubernetes-sigs/cluster-api)

[1] [https://github.com/kube-node/nodeset](https://github.com/kube-node/nodeset)

[2] [https://github.com/gardener/machine-controller-manager](https://github.com/gardener/machine-controller-manager)

[3] [https://github.com/kubeup/archon](https://github.com/kubeup/archon)

[4] [https://github.com/operator-framework/awesome-operators](https://github.com/operator-framework/awesome-operators)

[5] [https://www.youtube.com/watch?v=4ow7IumxkOM](https://www.youtube.com/watch?v=4ow7IumxkOM)

[6] [https://docs.google.com/document/d/1qs6UQQDWMkfOpY19XqS3CfvI...](https://docs.google.com/document/d/1qs6UQQDWMkfOpY19XqS3CfvI00jCns876TjplJ6E95s/edit)

~~~
jacques_chester
It may surprise you to learn that I disagree with your core thesis. My
qualifications are working for Pivotal at the intersection of PaaS, CaaS and
FaaS. IaaS is just a hobby.

> _Maybe you could implement BOSH as implementation of the machine spec and
> integrate into K8s, the other way round than KuBo?_

This is being investigated too. The main difficulty (as Brendan Burns has
noted for Virtual Kubelet) has been that Kubernetes, while ostensibly providing
a smooth abstraction away from machines, actually has layer-breaking
assumptions about the existence of machines after all.

Cloud Foundry always had BOSH to insulate it from that concern. But BOSH-
on-K8s was not super pretty in the early days, because they had overlapping
concerns (mostly disks, I believe).

Kubernetes-on-BOSH is a natural fit. Standing up large distributed systems on
IaaSes is BOSH's bread and butter. More to the point, that is its _sole_
focus. Its mission is not spread amongst a Cambrian explosion of alternatives
(almost all of which, you may recall, went extinct).

But in any case, it's doable. Which has the nice property that as the cluster
API matures, BOSH will happily take workloads that run on VMs and run them on
pods. The experience we have today -- run an upgrade, everything is upgraded,
nobody bats an eye -- will stay exactly the same.

> _But these activities are an indicator of the elephant in the room, namely
> CF & K8s: Will it blend?_

If you look at the community activity, the answer is pretty clearly yes: Diego
can be placed behind an OPI (Project Eirini) and CF itself can run control-
plane components in containers instead of VMs. Personally I am all for it.

But as you pointed out, enterprise vendors need to keep their word. Adopting
Kubernetes isn't a button-press operation. We need to prove that CF-on-K8s is
_at least_ as safe and performant as CF-on-Diego has been. Your customers, and
Pivotal's customers, and IBM's customers, and SUSE's customers, expect _all_
of us to provide the roadmap and prove that it is something they can bet a
company on.

------
msohn
Video recording of the Gardener demo at the Kubernetes community meeting on
2018-05-17:
[https://www.youtube.com/watch?v=DpFTcTnBxbM&t=27m0s](https://www.youtube.com/watch?v=DpFTcTnBxbM&t=27m0s)

------
pietromenna
I used Gardener to create k8s clusters for testing. I found it really easy to
use, and it has just the configuration options needed.

------
bauerd
Letter spacing makes this painful to read …

------
donttrack
I did this last year by using a WireGuard VPN between all the nodes. It worked
quite well, I would say.

It was for a proof-of-concept project, so I don't know how it would perform in
production, but it was promising.
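
A minimal sketch of the per-node WireGuard setup this implies; keys, addresses, and the overlay subnet are placeholders, and each node needs one `peer` entry for every other node in the mesh:

```shell
# Generate this node's keypair.
wg genkey | tee /etc/wireguard/private.key | wg pubkey > /etc/wireguard/public.key

# Create the overlay interface and give it an address in the VPN subnet.
ip link add wg0 type wireguard
ip addr add 10.200.0.1/24 dev wg0
wg set wg0 listen-port 51820 private-key /etc/wireguard/private.key

# Add one peer entry per other node (repeat for each node in the mesh);
# endpoint is the peer's real address, allowed-ips its overlay address.
wg set wg0 peer <NODE2_PUBLIC_KEY> \
    endpoint 192.0.2.12:51820 \
    allowed-ips 10.200.0.2/32
ip link set wg0 up
```

The Kubernetes components are then configured to use the 10.200.0.0/24 addresses, so all node-to-node traffic rides the encrypted tunnel regardless of the underlying network.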

