
Virtual Kubernetes - alexellisuk
https://github.com/ibuildthecloud/k3v
======
fen4o
For anyone looking for K8S multitenancy - here is an excellent talk by Alibaba
/ Alicloud about their approach:

\-
[https://www.youtube.com/watch?v=ptOLV2wjQUw](https://www.youtube.com/watch?v=ptOLV2wjQUw)

\-
[https://static.sched.com/hosted_files/kccnceu19/3f/kubecon-s...](https://static.sched.com/hosted_files/kccnceu19/3f/kubecon-speak.pdf)

~~~
erulabs
I've just quit my job at Stripe and founded a startup to tackle K8S hard
multi-tenancy, particularly for companies who want to "build a cloud" for
their customers but aren't sure where to start. If anyone is interested please
feel free to drop me an email! [https://kubesail.com](https://kubesail.com)

~~~
dwild
Your service is exactly what I've wanted for a long time, at an accessible
price.

Are you going to offer bigger packages too? The hobby one is probably more
than enough for me for a long time, but I may need more later on.

The templates are nice at first, but they're hard to get back to afterward (I
found them again by clicking on my user image).

EDIT: I also love that you include the kubectl commands on each page, for a
newbie that's amazing!

~~~
erulabs
Heya! Yes, bigger and more flexible plans coming soon! Templates are a brand
new feature and we're working hard to enable Helm template support there - but
eventually the "template search" will make its way to the main deployments
page (and will get a looooot more templates!)

------
alexellisuk
It's also worth looking at the work of Pau and Angel from k8spin -
[https://k8spin.cloud](https://k8spin.cloud)

They are trying to build something like OpenShift, but self-service and for
vanilla Kubernetes.

~~~
drondin
Thanks Alex! Pau here, please do not hesitate to contact us if you are
interested in our platform.

Our mission is to let users benefit from Kubernetes within seconds. We take
care of ingress, SSL certificates, backups... so you can focus on what really
matters.

We are going to bill by resources/minute, which is really interesting for
CI/CD, where you need a real cluster instead of KinD.

We are going live in a few days, join our slack or email me for feedback :)

------
polskibus
What's the story of multitenancy in Kubernetes? Is it true that it does not
provide sufficient isolation between services by default? Is this something
that could be handled by configuration only or is it a bigger issue with
Kubernetes design?

What's the best way to make Kubernetes the layer that isolates tenants in a
multitenant platform?

~~~
smarterclayton
As some of the other comments have mentioned, kubernetes is “multitenant
capable”, but the questions come down to:

1\. Do you understand all the primitives and how they interact

2\. Do your use cases require some or all of these primitives be bypassed
(host network, root containers, custom extensions / operators), which may
invalidate most security

3\. Are the human costs to manage 1 and 2 less than the human/infra costs to
manage multiple clusters?

Many applications have a few assumptions that violate “safety on a single
machine” - databases use raw block devices, processes run as root by default.
The harder it becomes to isolate an app on a single machine, the harder it is
to isolate that workload on a cluster.
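
To make that concrete (a hypothetical pod spec, not taken from any of the projects discussed here), these are the settings a workload must tolerate to be isolable on a shared node; apps that assume root or host access force you to relax them:

```yaml
# Hypothetical tenant pod: the isolation settings a "safe" workload keeps,
# and which root / host-network workloads force you to turn off.
apiVersion: v1
kind: Pod
metadata:
  name: tenant-app
spec:
  hostNetwork: false            # a workload needing host networking bypasses network isolation
  containers:
  - name: app
    image: nginx:1.25
    securityContext:
      runAsNonRoot: true        # apps that assume they run as root can't start under this
      allowPrivilegeEscalation: false
      privileged: false         # raw block-device access typically needs privileged: true
```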

In addition, the moment you install something above kube (helm, ingress
controllers, operators that can read or write secrets across namespaces) you
start to blur the security boundaries at a cluster level and thus make it
harder to reason about who can do what. It’s a lot like the Linux kernel /
userspace separation - everything secure is in the kernel, but if a driver or
subsystem has a vulnerability that tends to compromise the whole kernel.

I would suggest most people run multiple clusters until you get to the point
where you have cluster sprawl (and are paying for a ton of redundant / idle
infra), and then start either with an opinionated distro designed for picking
all the defaults for you (OpenShift, a few of the other vendors) or hire
someone who can design a top to bottom policy (and even then you’ll want
multiple clusters for failure isolation).

~~~
drondin
I mostly agree with you: with current tooling it can feel like you're
constantly patching holes, but it's getting much better with sandboxing, and
automation is key. Of course, it's not for everyone, but it can provide lots
of benefits with "low" effort.

There is a Kubernetes Working Group where soft multi-tenancy is being defined
and implemented, and any feedback is welcome.

------
PopeDotNinja
It'd be an adventure debugging that...

~~~
MuffinFlavored
what are some common k8s debugging rabbit holes that people have to go down
day to day?

~~~
PopeDotNinja
I'm not a K8s wizard, but I have a little experience with it. I found there to
be a steep learning curve in simultaneously learning RBAC, sidecars, Helm,
YAML templating, storage, logging, operators, service discovery, etc. There's
so much hype around Kubernetes that it can be hard to sift through the noise
to figure out what you really need to know. I found many people wanted to sell
you on "all you have to do is...", but before you even get a basic service
deployed, you can easily end up with a complex problem that is unique to your
situation. And unique can mean you ain't gonna find an answer to your problem
on the Google. That becomes even more of an adventure when you're trying to do
anything a little non-standard, because K8s doesn't expose an API endpoint to
solve the problem you have. Now throw in slicing up one cluster into N virtual
clusters and... debugging that sounds like an adventure.

~~~
pojzon
Sorry for the hijack,

May I ask what sources you would propose going through for someone starting
out with Kubernetes?

I was able to (with examples) create a simple K8s cluster with a dashboard,
and I'm now learning about external DNS. What would be good to get to after
that?

~~~
PopeDotNinja
I mostly learned from the K8s docs and brute force trial & error.

My first recommendation would be to learn Kubernetes on the platform to which
you'll be deploying, if that is possible & practical. For example, you can
create a lot of work for yourself trying to debug the differences between
Docker, Minikube, and Google Kubernetes Engine. Every provider supports
Kubernetes in a slightly different way (in my experience).

My 2nd recommendation would be to put in as much time as you can learning to
read logs/errors/messages. You'll learn a lot about your cluster(s) and how it
is laid out by tracking down the errors the system is generating. They aren't
always going to be where you expect them to be.

My last recommendation would be to not allow yourself to become intimidated by
people who sound like they know more than yourself. The K8s hype machine is in
full growth mode, and while K8s is very cool in many ways, there's a pretty
darn good chance that someone who insists "all you gotta do is..." doesn't
actually know what they are talking about and/or how what they understand maps
to the nuances of your particular challenge.

Be prepared to grind on hard problems. Working with clusters of containers is
still very much like sailing to the New World. Few know it all that well, and
those who do probably don't work with you :)

------
colemickens
See also: SAP's Gardener. Their website reminds me of a spam landing page more
than a valuable, robust project, so I'll link an official Kubernetes blog post
about it:
[https://kubernetes.io/blog/2018/05/17/gardener/](https://kubernetes.io/blog/2018/05/17/gardener/)

EDIT: per the replies, it does look like a fairly different focus, my
apologies, I just got excited when I saw the virtualized control plane. (To
save you reading the blog, Gardener launches a control plane in an existing
cluster to manage workloads on a _different_, dedicated set of worker nodes.)
It will be fun to think about k3v and multi-tenancy.

~~~
p932
The Gardener project's focus is orchestrating multiple Kubernetes clusters on
IaaS cloud providers, whereas k3v focuses on running a dedicated control plane
on top of an existing Kubernetes cluster. There is some overlap, but the
projects' goals differ.

~~~
cwyers
Yeah, Gardener looks like it's meant to address one tenant on multiple k8s
clusters, and k3v looks like it's meant to address multitenancy in a single
k8s cluster.

------
unfunco
> This allows one to take one physical Kubernetes cluster and chop it up into
> smaller virtual clusters.

What is the benefit of this over, or its difference from, namespaces in
Kubernetes? Namespaces provide resource quotas, and they work with RBAC too;
the benefits listed in the README and their descriptions sound exactly like
what namespaces already provide.
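
For context, this is roughly what namespace-scoped tenancy looks like today (a hypothetical manifest with made-up names, shown here to make the comparison concrete):

```yaml
# Hypothetical namespace-scoped tenancy: a quota plus an RBAC binding,
# both confined to the "tenant-a" namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    pods: "20"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-a-admin
  namespace: tenant-a
subjects:
- kind: User
  name: alice               # hypothetical tenant user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin               # built-in aggregated role, scoped to tenant-a by this RoleBinding
  apiGroup: rbac.authorization.k8s.io
```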

~~~
p932
Currently, if you only have access to a namespace on a shared Kubernetes
cluster, without cluster-wide admin rights, you won't be able to:

\- Create cluster-wide RBAC objects (ClusterRole or ClusterRoleBinding)

\- Create or get access to cluster-scoped resources (nodes, CRDs)

\- Use custom webhooks, for example for sidecar injection

These are many of the things that complex Kubernetes deployments are doing
nowadays.
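
As a sketch (hypothetical objects, not from the README), note that cluster-scoped resources like these carry no namespace field, so a tenant holding only a namespaced Role cannot create them:

```yaml
# Hypothetical cluster-scoped objects: neither has a namespace, so creating
# them requires cluster-wide permissions a namespaced tenant lacks.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tenant-node-reader
rules:
- apiGroups: [""]
  resources: ["nodes"]      # nodes are cluster-scoped resources
  verbs: ["get", "list", "watch"]
---
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: sidecar-injector    # e.g. what a service mesh installs for sidecar injection
webhooks: []                # webhook definitions omitted in this sketch
```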

~~~
alexellisuk
The point is that it's unsafe to give tenants ClusterRole / admin rights on a
shared cluster, but this is needed for many CRDs and Operators.

The Operator pattern is getting more and more popular, and most Operators need
a ClusterRole.

As the service provider (an internal team, or a SaaS provider), this is a
liability. The aim, from reading the README.md, is to let each tenant be
ClusterRole / admin within their own cluster, hosted in a larger real cluster.

Jessie Frazelle has talked about this before, I'm not sure if this is the
exact blog link, but it's related: Kubernetes in Kubernetes -
[https://blog.jessfraz.com/post/hard-multi-tenancy-in-kuberne...](https://blog.jessfraz.com/post/hard-multi-tenancy-in-kubernetes/)

------
lpasselin
It would be useful to run multiple clusters for home/work while actually using
a single cluster.

I would like to let my clients start jobs in their own private cluster, and
append their own machines to it, with it all running inside my cluster.

Is there a way to do this with kubernetes currently?

------
gtirloni
Is it fair to call it a k8s API server proxy?

------
dahfizz
Why? Kubernetes is already a virtualization layer on top of a virtualization
layer. Do we need more virtualization layers? All it does is add complexity
and degrade performance.

------
okop
Why is Rancher allowed to shill their crap?

