
Thanks for sharing your project (as well as Hetzner cloud, which is indeed great). Our goal was to use official (vanilla) k8s, the official installer (kubeadm), and the official helm charts for everything we deploy. While it would also work on a Raspberry Pi, k3s would indeed be more appropriate there. We wanted something with the same functionality, HA, and tooling as a cloud k8s (GKE/EKS), but for people who want to own it and run it on-prem (or on something like your cloud).


Nice project! Please correct me if I'm wrong, but it looks like Sugarkube is mainly focused on minikube or on the cloud (e.g. kops, etc.), right? We chose Ansible because our primary focus is on-prem and on setting up Linux VM parameters for all machines in a cluster, and Ansible is a good fit for that. On-prem "consumable" clusters are also what we do; that's why the project accepts any size, from a 1-VM cluster to thousands of VMs. We should look into Sugarkube for AWS cluster activities. Thanks for sharing!
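To give an idea of the kind of Linux VM parameters I mean, here is a minimal Ansible sketch (illustrative only, not the project's actual playbook) that sets the sysctl values kubeadm's preflight checks expect:

    # Illustrative Ansible tasks, not the project's actual playbook.
    - name: Make bridged traffic visible to iptables (kubeadm preflight expects this)
      sysctl:
        name: net.bridge.bridge-nf-call-iptables
        value: "1"
        state: present
        reload: yes

    - name: Enable IPv4 forwarding for pod-to-pod routing
      sysctl:
        name: net.ipv4.ip_forward
        value: "1"
        state: present
        reload: yes

Running the same tasks against every host in the inventory is exactly the kind of repetitive per-machine setup Ansible is good at.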


Thanks!

Sugarkube orchestrates other tools. In this case, where you have to actually SSH into machines to install kubeadm, Ansible may be a better choice for creating the clusters (since there's probably no single binary that does that).

However, Sugarkube can also be used as a ready-made release pipeline for applications. In general, using Sugarkube to release applications gives you a standardised way of installing apps onto different clusters (local/on-prem/cloud), keeping your options open for the future. So you could use it to promote applications through various dev/testing/staging clusters to production, some of which may be on-prem and some on AWS.

If you're looking to create a common set of applications in your project (monitoring, CI/CD, ingresses, etc.), using Sugarkube would allow users to install them into clusters regardless of how those clusters were created (by your project via kubeadm, or by Kops/EKS with or without Sugarkube, etc.), so your applications may be useful to more people.


Thanks for your question, a very good one. We are currently trying to find the right balance between which permissions should be granted out of the box and which should require custom setup. E.g. running things like Telepresence (a CNCF project) needs sshd, and running sshd needs root (and RunAsAny). We are working closely with them and just proposed a working version that should be fine with fewer permissions (currently for OpenShift), and then we can tighten such things in k8s as well.
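For context, the RunAsAny part maps to a PodSecurityPolicy along these lines (a minimal sketch; the policy name and allowed volumes are illustrative, not what the project actually ships):

    # Minimal sketch; name and allowed volumes are illustrative.
    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: sshd-permissive
    spec:
      privileged: false
      runAsUser:
        rule: RunAsAny        # lets the sshd container run as root
      seLinux:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      fsGroup:
        rule: RunAsAny
      volumes:
        - "*"

Tightening it later would mostly mean replacing RunAsAny with MustRunAsNonRoot once sshd is no longer required to run as root.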


What you are looking for is called kubeadm, which is an official CNCF project written in Go. BUT: 1. kubeadm won't change your kernel params, Docker, or anything else on your hosts; it expects someone to have taken care of that already (and according to its docs). That's how we started. 2. Your list of features is not one-size-fits-all: not everyone wants the overhead of _all_ of them, and not all of them are developed under the CNCF. Therefore one needs post-cluster-deploy steps for these things. Also, things like DNS and HA do depend on your infrastructure (that's where it gets hard). E.g. we tried to solve HA on-prem by using keepalived.
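For the HA part, one common keepalived pattern (a sketch of the general approach, not necessarily the project's exact config; the VIP address is an example) is to float a virtual IP across the masters and point kubeadm's controlPlaneEndpoint at it:

    # kubeadm ClusterConfiguration sketch; the VIP address is an example.
    apiVersion: kubeadm.k8s.io/v1beta1
    kind: ClusterConfiguration
    kubernetesVersion: stable
    controlPlaneEndpoint: "10.0.0.100:6443"   # keepalived virtual IP shared by the control-plane nodes

keepalived then keeps the VIP attached to whichever master is alive, so kubelets and kubectl always have a stable API server address.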


Hi, it's because metrics-server is not yet "stable" and fully integrated with the dashboard. We keep monitoring it closely and are ready to make the switch.


Exactly to the point: this project is mainly focused on on-prem: it configures kernel params, k8s HA is done with keepalived (or your HW LB, should you have one), etc. Should you see something important missing, I would be happy to answer/add it.


Hi, exactly to the point, these are the ones included out of the box: Prometheus (operator, with Grafana, Alertmanager, etc.), Nginx Ingress, Heapster (next will be metrics-server once its helm chart is stable), Kured, etc. Adding any service mesh on top is simply a matter of adding its helm chart and params to addons.yaml.
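For example (and this is purely illustrative; the real addons.yaml schema in the project may use different keys), an Istio entry could look roughly like:

    # Hypothetical addons.yaml entry; key names and chart reference are assumptions,
    # not the project's real schema.
    addons:
      - name: istio
        namespace: istio-system
        chart: istio/istio
        values:
          gateways:
            istio-ingressgateway:
              type: NodePort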


We also think the future directions are serverless (Knative-based) and Istio. They can be added using their helm charts, either post-install or via addons.yaml. The idea of this project is to prepare the k8s base so that either of them can be deployed quickly on top.


Getting (i.e. renting) a k8s cluster from a cloud vendor is probably the most common case, but it does not cover everyone's scenarios, hence this project.

Over the years we have actively looked for the best tools and practices and incorporated them into this project. We try to bring it closer to a common/initial generic k8s-based platform (it does not aim to compete with RH's OKD, only to cover its basic features: out-of-the-box networking, ingress controller, monitoring, high availability, etc.). Authentication, better security hardening, and logging are in future plans.

Do you find it useful now, when one can simply pay and jump to GKE/EKS/AKS/PKS?

If so, what should the next steps be to make it a successful project (measured by users and the community around it)?

