Rio is a wrapper around Knative and Istio, from what I can tell. The thing I don't see (and I haven't tried Rio, so maybe someone who is using it can say this better) is how it builds your apps. Because it wraps Knative, I assume it uses Knative's build.
I don't know whether that means I'm responsible for writing Dockerfiles, or whether I can swap in something like Buildpacks.io (buildpacks v3). But I do think it means the system can scale to zero replicas once traffic dies down - say, no requests for something like 10 minutes and no telling whether or when the next one will arrive.
I have been wondering about Rio but so far not enough to break down and try it.
It's been a while since I last used minikube, but it was a bit slow then.
There is a new alternative called kind: https://github.com/kubernetes-sigs/kind
I only tested it briefly (on linux), but it seemed faster than minikube.
In contrast to minikube, kind does not use a vm but instead implements a cluster node as a single docker container.
I've never seen microk8s work. It depends very heavily on iptables rules, and I suspect that if you have routes to anything on 172.16.0.0/12 it will work unpredictably. (I had a similar problem with a VPC that had subnets that conflicted with what Docker chose to use.) Obviously microk8s works for someone, but it's never worked for me. But I work at an ISP and our route table on the corp network is excessively large.
One of my coworkers tried to use microk8s instead of minikube and we debugged it extensively for a couple of days, but ended up baffled. We had to set up some rules to forward localhost:5000 into the cluster for docker push; instead we got a random nginx instance, and we never could figure out where it was running. Even after uninstalling microk8s, we still had a ton of random iptables rules and localhost:5000 was still nginx... It was weird.
Minikube works great, however. You will still need some infrastructure to push to its docker container registry in order to run locally-developed code. Out of the box, you can persuade your local machine to use minikube's docker for building, but it runs in a VM and unless you use non-default minikube provisioning settings, it doesn't have access to all the host machine's cores, which is kind of slow. I ended up making minikube's container registry a NodePort so that every node (all 1 of them) can get at localhost:5000 to pull things. I then added some iptables rules to make localhost:5000 port-forward to $MINIKUBE_IP:5000 so that "docker push localhost:5000/my-container" works. It's kind of a disaster.
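If you'd rather not touch iptables, the same forwarding idea can be sketched in a few lines of Go (purely illustrative - the MINIKUBE_IP env var and the registry on port 5000 are assumptions matching the setup described above):

    // relay.go: forward localhost:5000 to the registry NodePort on the minikube VM,
    // so "docker push localhost:5000/my-container" reaches the cluster registry.
    package main

    import (
        "io"
        "log"
        "net"
        "os"
    )

    func main() {
        target := os.Getenv("MINIKUBE_IP") + ":5000" // e.g. the output of `minikube ip`
        ln, err := net.Listen("tcp", "127.0.0.1:5000")
        if err != nil {
            log.Fatal(err)
        }
        for {
            local, err := ln.Accept()
            if err != nil {
                log.Fatal(err)
            }
            go func(local net.Conn) {
                defer local.Close()
                remote, err := net.Dial("tcp", target)
                if err != nil {
                    log.Printf("dial %s: %v", target, err)
                    return
                }
                defer remote.Close()
                go io.Copy(remote, local) // client -> registry
                io.Copy(local, remote)    // registry -> client
            }(local)
        }
    }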
I also had to write an HTTP proxy that produces a proxy.pac that says "direct *.kube.local at $MINIKUBE_IP" so that you can visit stuff in your k8s cluster in a web browser and test your ingress controller's routing.
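For what it's worth, the PAC half of that can be sketched like this (my illustration, not the actual proxy: the .kube.local suffix comes from the comment above, the local proxy port 3128 is an assumption, and the HTTP proxy that actually relays those requests to $MINIKUBE_IP isn't shown):

    // pacserver.go: serve a generated proxy.pac that sends *.kube.local hosts
    // through a local forwarding proxy (which relays to the minikube ingress)
    // and everything else direct. The proxy port here is an assumption.
    package main

    import (
        "log"
        "net/http"
    )

    const pac = `function FindProxyForURL(url, host) {
      if (dnsDomainIs(host, ".kube.local")) {
        return "PROXY 127.0.0.1:3128"; // local proxy that forwards to $MINIKUBE_IP
      }
      return "DIRECT";
    }`

    func main() {
        http.HandleFunc("/proxy.pac", func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Content-Type", "application/x-ns-proxy-autoconfig")
            w.Write([]byte(pac))
        })
        log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
    }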
After those two things, I quite like it.
I still don't think minikube is a good platform for developing microservices, though. The build/deploy times are too long (and things like ksync don't work reliably, even if you put in the setup work to build a docker container that can hot-reload your app). I once again wrote something that takes a service description and a list of its dependent services, allocates internal and external ports, puts them in environment variables, starts Envoy for incoming and service-to-service traffic, and then runs the apps wired up to receive requests from Envoy and make requests to other services through Envoy. It took a while, but now that I have it, it's great. I can work on a copy of our entire stack locally, it starts up in seconds, and it's basically identical to production minus the k8s machinery.
I am still surprised I had to solve all these problems myself, but now that they're solved, I'm very happy.
There are similarities and differences. The thing I wrote to run everything locally obviously doesn't call out to external services; it runs everything it needs locally. I also didn't use the xDS Envoy APIs, instead opting to statically generate a config file (though with the envoyproxy/go-control-plane library, because I do plan on implementing xDS at some point in the future).
What I have is as follows. Every app in our repository is in its own directory. Every app gets a config file that says how to run each binary that the app is composed of (we use grpc-web, so there's usually a webpack-dev-server frontend and a go backend). Each binary names what ports it wants, and what the Envoy route table would look like to get traffic from the main server to those ports. The directory config also declares dependencies on other directories.
We then find free ports for each port declared in a config file, allocating one for the service to listen on (only Envoy will talk to it on this port), and one for other services to use to talk to that service. The service listening addresses become environment variables named like $PORTNAME_PORT, only bound for that app. The Envoy listener becomes $APPNAME_PORTNAME_ADDRESS, for other services to use.
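To make the mechanism concrete, the port-allocation step might look roughly like this (a sketch only - the app and port names are made up, and the env var naming just follows the convention described above):

    // allocate.go: ask the kernel for free ports and build the env vars
    // described above. App/port names are illustrative.
    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // freePort grabs an unused TCP port by listening on :0 and reading it back.
    func freePort() int {
        ln, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            panic(err)
        }
        defer ln.Close()
        return ln.Addr().(*net.TCPAddr).Port
    }

    func main() {
        app, port := "backend", "grpc" // illustrative names
        listenPort := freePort()       // the app listens here; only Envoy dials it
        envoyPort := freePort()        // Envoy listens here; other apps dial it

        fmt.Printf("%s_PORT=%d\n", strings.ToUpper(port), listenPort) // for the app itself
        fmt.Printf("%s_%s_ADDRESS=127.0.0.1:%d\n",
            strings.ToUpper(app), strings.ToUpper(port), envoyPort) // for everyone else
    }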
Once Envoy has started up, we then start up each app. The order they start in doesn't matter anymore, because any gRPC clients the apps create can just start talking to Envoy without caring whether or not the other apps are ready yet. And, because each app can contribute routes to a global route table, you can visit the whole thing in your browser and every request goes to the right backend.
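From an app's point of view, that just means dialing the exported Envoy address, something like this (the env var name is illustrative):

    // Dial the Envoy listener exported for a dependency. Because the connection
    // is to Envoy, it can be opened before the backend app has even started.
    package main

    import (
        "log"
        "os"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    func main() {
        conn, err := grpc.Dial(
            os.Getenv("BACKEND_GRPC_ADDRESS"), // illustrative env var name
            grpc.WithTransportCredentials(insecure.NewCredentials()),
        )
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        // ...build the generated gRPC client stubs on top of conn as usual.
    }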
I used Envoy instead of just pointing the apps at each other directly with FailFast turned off because I needed the ability to send / to a webpack frontend and /api/ through a grpc-web to grpc transcoder, and I would have used Envoy for that anyway. This strategy makes it feel like you're just running a big monolith, while getting all the things you'd expect from microservices: retries via Envoy, statistics for every edge on the service mesh, etc. And it's fast, unlike rebuilding all your containers and pushing to minikube.
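The route split described above, generated statically with go-control-plane, might look roughly like this (a sketch, not the actual config: cluster and host names are made up, and the grpc-web transcoding is done by an HTTP filter configured elsewhere, not shown here):

    // routes.go: build the "/api/ -> grpc backend, everything else -> webpack"
    // split as a go-control-plane RouteConfiguration. Names are made up.
    package main

    import (
        "fmt"

        route "github.com/envoyproxy/go-control-plane/envoy/config/route/v3"
    )

    func main() {
        rc := &route.RouteConfiguration{
            Name: "local_dev",
            VirtualHosts: []*route.VirtualHost{{
                Name:    "myapp",
                Domains: []string{"myapp.kube.local"},
                Routes: []*route.Route{
                    {
                        // API traffic goes to the go backend cluster.
                        Match: &route.RouteMatch{
                            PathSpecifier: &route.RouteMatch_Prefix{Prefix: "/api/"},
                        },
                        Action: &route.Route_Route{Route: &route.RouteAction{
                            ClusterSpecifier: &route.RouteAction_Cluster{Cluster: "myapp_grpc"},
                        }},
                    },
                    {
                        // Everything else goes to webpack-dev-server.
                        Match: &route.RouteMatch{
                            PathSpecifier: &route.RouteMatch_Prefix{Prefix: "/"},
                        },
                        Action: &route.Route_Route{Route: &route.RouteAction{
                            ClusterSpecifier: &route.RouteAction_Cluster{Cluster: "myapp_web"},
                        }},
                    },
                },
            }},
        }
        fmt.Println(rc.String()) // marshal this into the static Envoy config
    }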
It kind of solves the same problems as docker-compose, but without using Docker.
I've been running k3s on Azure:
https://github.com/rcarmo/azure-k3s-cluster
...as well as on my own ARM cluster, with a private registry:
https://taoofmac.com/space/blog/2019/05/18/2034
I find it refreshingly straightforward for personal and testing setups (and more practical than microk8s for me right now), and am waiting for rio to hit a couple of stable milestones:
https://github.com/rancher/rio
(I try openFaaS now and then, but after contributing a deployment template early on, I lost my enthusiasm for it - it also ran the gateway and the admin UI in the same process, which I considered a design flaw).