We've just released a really cool feature called Snapshots that allows you to image a running Kubernetes cluster, including the state of all applications, and then create new clusters from that image. It's great for creating consistent development environments or quick starting test environments.
Happy to answer any questions people might have.
For me, this was always an ops smell - why do devs need to spin up k8s clusters? Unless you're working on low-level k8s features (your own operator, testing cluster-wide resources, or developing k8s components themselves), why not use a 'real' cluster for testing? k8s multi-tenant/process isolation is definitely good enough for semi-trusted users like developers, as long as you take sensible measures (ephemeral low-privilege namespaces, PodSecurityPolicies, NetworkPolicies, quotas, etc.).
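As a sketch of the "sensible measures" above (all names here are illustrative, not from the original post): an ephemeral per-developer namespace with a default-deny ingress NetworkPolicy applied to every pod in it.

```yaml
# Illustrative sketch only: an ephemeral low-privilege dev namespace
# plus a default-deny ingress NetworkPolicy covering all pods in it.
apiVersion: v1
kind: Namespace
metadata:
  name: dev-alice            # hypothetical per-developer namespace
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: dev-alice
spec:
  podSelector: {}            # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress                # no ingress rules listed, so all ingress is denied
```

Note that NetworkPolicies are only enforced if the cluster's CNI plugin supports them, which is one reason getting this right takes real ops work.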
In particular, good RBAC design that doesn't end up leaking information across namespaces, PSPs that are flexible enough for developers yet strict enough to prevent privilege escalation, and strong network policies all present challenges.
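A minimal sketch of namespace-scoped RBAC of the kind described above (namespace and user names are hypothetical): a Role granting workload access only within one namespace, bound to a single developer, so nothing leaks across namespaces.

```yaml
# Illustrative sketch only: a namespace-scoped Role so a developer can
# manage common workload resources without reading other namespaces.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dev-edit
  namespace: dev-alice       # hypothetical per-developer namespace
rules:
  - apiGroups: ["", "apps", "batch"]
    resources: ["pods", "services", "configmaps", "deployments", "jobs"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-edit-alice
  namespace: dev-alice
subjects:
  - kind: User
    name: alice              # hypothetical developer identity
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role                 # Role (not ClusterRole) keeps the grant namespace-local
  name: dev-edit
  apiGroup: rbac.authorization.k8s.io
```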
For those less mature organizations, a solution like this might present an easier option.
From my experience with companies that haven't done their organizational or engineering homework: half-assedly deploying Kubernetes generally ends up being an unmaintainable disaster.
One of the high-return-value aspects of k8s is having a cluster available to multiple tenants. Without this in place, k8s stops making sense, being too complex for its actual use case - so you might be much better off using something simpler like Nomad.
In fact, the goal of many solutions (GKE, AKS, EKS, etc) is meant to be "We managed the entire cluster for you, just deploy your workloads!".
In many situations, if a company is running a single application in their cluster, many of these management aspects (NetworkPolicies, quotas, etc.) are simply not necessary for their use case.
You say they shouldn't be running k8s in the first place, and I half agree with you. They don't _need_ to be running k8s. Large platforms have done a lot of work to make "Run in a Kubernetes Cluster" as approachable as "Run in Heroku".
Regarding Nomad, sure, but if someone hasn't done their engineering homework, the chance that they are even familiar with Nomad is slim (no offense to Nomad)
Edit: A bit of clarity in the first sentence
If you have to pay someone else to run k8s for you and can't do any of the management yourself, then stop. You don't need it to begin with.
Nothing against the original posters idea/company. Looks like a good idea to me.
I think it's reasonable for everyone to have their own cluster if their day-to-day work involves developing cluster-scoped resources. Admission webhooks, service meshes, CNI plugins, etc.
I agree that most people have too many clusters, using them to separate batch from interactive jobs, or staging from production. Generally, compute is expensive, and manually assigning jobs to clusters reduces utilization, so it costs a lot of money. It also gives you less operational flexibility, like being able to throttle batch jobs to scale up interactive jobs.
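One way to get the "throttle batch to scale up interactive" behaviour in a single shared cluster is scheduling priority; a sketch (class names and values are illustrative) using PriorityClasses, so the scheduler can preempt batch pods when interactive work needs capacity:

```yaml
# Illustrative sketch only: interactive work gets a higher PriorityClass,
# letting the scheduler preempt batch pods when the cluster is full.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: interactive        # hypothetical class name
value: 100000              # higher value = scheduled (and kept) first
description: "Interactive jobs; may preempt batch pods."
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: batch              # hypothetical class name
value: 1000
description: "Batch jobs; preemptible when interactive work needs room."
```

Pods then reference a class via `priorityClassName` in their spec.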
In a similar vein, if you have 10 or even 100 end-to-end test suites, with Krucible you could run them all in parallel, significantly reducing the time taken, without fear of them impacting each other. In your shared cluster scenario you would be limited by the size of your cluster.
Kubernetes supports resource requests and resources quotas to combat this. You should be protecting your production workloads this way anyway.
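A sketch of both mechanisms mentioned above (namespace, pod, and image names are hypothetical): per-container requests/limits on a pod, plus a namespace-level ResourceQuota capping total consumption.

```yaml
# Illustrative sketch only: a pod with explicit requests/limits, and a
# ResourceQuota bounding the whole namespace's aggregate usage.
apiVersion: v1
kind: Pod
metadata:
  name: e2e-runner                    # hypothetical test-runner pod
  namespace: ci                       # hypothetical namespace
spec:
  containers:
    - name: runner
      image: example.org/e2e:latest   # hypothetical image
      resources:
        requests:                     # what the scheduler reserves
          cpu: "500m"
          memory: 256Mi
        limits:                       # hard ceiling enforced at runtime
          cpu: "1"
          memory: 512Mi
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: ci-quota
  namespace: ci
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
```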
> In your shared cluster scenario you would be limited by the size of your cluster.
On the other, with a shared cluster, it makes sense to dedicate more resources to it, and share it across both developers and CI systems.
That's certainly good advice, and it would significantly reduce the likelihood of issues, but it doesn't handle all cases. For instance, it's not particularly easy to quota network bandwidth.
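To illustrate the gap: the closest built-in mechanism I'm aware of is per-pod bandwidth annotations (honoured only by CNI setups that include the bandwidth plugin), which cap individual pods rather than providing a namespace-level quota.

```yaml
# Illustrative sketch only: per-pod bandwidth caps via annotations.
# Requires a CNI configuration with the bandwidth plugin; this is a
# per-pod limit, not a namespace-wide bandwidth quota.
apiVersion: v1
kind: Pod
metadata:
  name: rate-limited       # hypothetical pod
  annotations:
    kubernetes.io/ingress-bandwidth: "10M"
    kubernetes.io/egress-bandwidth: "10M"
spec:
  containers:
    - name: app
      image: nginx
```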
Ultimately all of these problems are likely solvable—we just think that Krucible is easier, simpler and safer.
Also, can you describe the performance (CPU/memory) of the clusters? Obviously if I am running lots of pods that have:
The snapshots feature is also a big differentiator: you can set up a cluster, take a snapshot and then share that with your team so that you're all running identical Kubernetes clusters.
I like the idea of having a developer be able to play entirely with their own local infrastructure to get as close to production as possible.
A hosted service will definitely make this less painful, but once you're up to speed with k3s, I doubt there is anything faster than using it.
You could also prepare a cluster with a certain configuration in advance, snapshot it and then create new clusters from that snapshot so that all students could quickly get started with a Kubernetes cluster in a given state.
Get in touch and I'd be happy to help you get things set up. Email is firstname.lastname@example.org.