Docker and Kubernetes both immediately made sense to me when I found them. They obviously addressed pain points I'd already run into. (Docker lets me share server applications with their environment, and Kubernetes makes sure I always have N replicas of that Docker image running on my hosts.) What are the short and sweet issues that service meshes and control planes each solve? I see a bunch of things listed on Kuma's page, but I thought those were what service meshes did.
If you have one service communicating with another service but you want reporting on it, it seems to me that something like Kuma sits in between the services and feeds information back to a central place, so you can see how much communication is happening without each service needing to build in its own reporting. E.g., are my website and my Redis service communicating, and how much?
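Something like this toy sketch is how I picture the data-plane side (a minimal Go sketch; the ports and the single byte counter are made up, and a real sidecar such as Envoy does far more): a process sits between the website and Redis, forwards bytes in both directions, and counts the traffic so it can be reported centrally.

    package main

    import (
        "io"
        "log"
        "net"
        "sync/atomic"
    )

    var bytesProxied int64 // crude stand-in for the metrics a control plane would collect

    func main() {
        // The website talks to localhost:16379 (hypothetical port) instead of Redis directly.
        ln, err := net.Listen("tcp", "127.0.0.1:16379")
        if err != nil {
            log.Fatal(err)
        }
        for {
            client, err := ln.Accept()
            if err != nil {
                log.Fatal(err)
            }
            go func(client net.Conn) {
                defer client.Close()
                // Forward to the real Redis instance.
                upstream, err := net.Dial("tcp", "127.0.0.1:6379")
                if err != nil {
                    log.Print(err)
                    return
                }
                defer upstream.Close()
                go func() { count(io.Copy(upstream, client)) }()
                count(io.Copy(client, upstream))
                log.Printf("bytes proxied so far: %d", atomic.LoadInt64(&bytesProxied))
            }(client)
        }
    }

    // count records how many bytes flowed through one direction of the proxy.
    func count(n int64, _ error) {
        atomic.AddInt64(&bytesProxied, n)
    }

The application just talks to localhost; the proxy in the middle is what knows, and can report, how much traffic is flowing.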
The short and sweet answer is that they improve connectivity between applications and handle all the connectivity-oriented concerns (tracing, versioning, load balancing, etc.) without us, the developers, writing any code for it.
So now I need to run a proxy on every machine, and a service mesh, and a control plane for that, which depends on its own other services... Does it ever feel like we need more infra to run our infra than to actually run our stuff?
The alternative is writing more code in our applications.
As soon as we make any request that goes over a network, we need to make sure that connectivity cannot be disrupted and that observability is in place. LinkedIn famously wrote a "smart" client in Java to do exactly that. Every team must use it and LinkedIn keeps maintaining it, but it limits the organization's ability to adopt non-JVM technology and accumulates technical debt over time.
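To make that concrete, here is a sketch (in Go rather than Java, and not LinkedIn's actual client, just the general shape) of the kind of boilerplate every service carries when this lives in the application: a timeout, a retry loop with backoff, and latency reporting wrapped around every outbound call.

    package smartclient

    import (
        "log"
        "net/http"
        "time"
    )

    // Get is a hypothetical example of the in-application "smart client" code a
    // service mesh would otherwise replace: per-request timeout, naive retries
    // with backoff, and latency logging around every outbound call.
    func Get(url string) (*http.Response, error) {
        const maxAttempts = 3
        client := &http.Client{Timeout: 2 * time.Second}

        var resp *http.Response
        var err error
        for attempt := 1; attempt <= maxAttempts; attempt++ {
            start := time.Now()
            resp, err = client.Get(url)
            log.Printf("GET %s attempt=%d latency=%s err=%v", url, attempt, time.Since(start), err)

            if err == nil && resp.StatusCode < 500 {
                return resp, nil // success, or a client error not worth retrying
            }
            if attempt == maxAttempts {
                break // give up and return whatever we got
            }
            if resp != nil {
                resp.Body.Close() // discard the failed response before retrying
            }
            time.Sleep(time.Duration(attempt) * 100 * time.Millisecond) // crude backoff
        }
        return resp, err
    }

And this still has no tracing, no mTLS, no circuit breaking; each of those is more of the same, in every language the company uses.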
Are you saying my application doesn't need to have any logic for handling network connectivity problems once I have a service mesh? That sounds hard to believe. Maybe I don't understand what code I wouldn't need any more.
You won't need code for mutual TLS, certificate rotation, routing, canary load balancing, tracing, logging, service ACLs, retries, circuit breakers, cross-region failovers, and so on. The out-of-process proxy model also makes these features portable across multiple stacks and programming languages, which is a nice benefit.
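Take mutual TLS alone: without a mesh, every service carries plumbing like the following (a minimal Go sketch with hypothetical certificate paths, and it still ignores the hard part, rotating those certificates without restarts). With a mesh, the sidecars originate and terminate mTLS and the application just speaks plain HTTP to localhost.

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "log"
        "net/http"
        "os"
    )

    // newMTLSClient shows the per-application mTLS plumbing a sidecar would
    // otherwise take care of. The certificate paths are hypothetical.
    func newMTLSClient() (*http.Client, error) {
        // Client certificate and key issued by the internal CA.
        cert, err := tls.LoadX509KeyPair("/etc/certs/service.crt", "/etc/certs/service.key")
        if err != nil {
            return nil, err
        }
        // CA bundle used to verify the server side.
        caPEM, err := os.ReadFile("/etc/certs/ca.crt")
        if err != nil {
            return nil, err
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        return &http.Client{
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{
                    Certificates: []tls.Certificate{cert},
                    RootCAs:      pool,
                },
            },
        }, nil
    }

    func main() {
        client, err := newMTLSClient()
        if err != nil {
            log.Fatal(err)
        }
        resp, err := client.Get("https://billing.internal.example:8443/invoices")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        log.Println(resp.Status)
    }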
Oh. Well, I hate managing many of those things, so maybe I am sold on the idea at this point. The key for me would be to find a case study of converting a traditional setup to one of these, so I could get a grasp on where you actually start when wiring it up.
I have Consul going and am making decent use of it for discovery, and I have a feeling that Consul Connect may be a half-assed version of what you're describing, but I don't hear of many people using it.
This is the way forward for service meshes. Consul Connect is doing something very similar.
The early service meshes were all built on top of Kubernetes, because k8s/etcd is an easy platform to build control planes on (service discovery, networking, etc. are already done), but very few large companies are ready to move everything over to Kubernetes.
However, the one thing that every service mesh promises is that the clients don't need to do anything. Except that they do. End users need to rip out functionality that is already tightly coupled to the application, such as authentication, "smart" routing, and metrics. They need to wire up tracing so the application passes the headers along (how do you know which routes in your application were bad if all you have is a black box?). And it all needs to play nicely with whatever enterprise-wide systems are already in place.
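The tracing point is the one that trips people up most: the proxies can start and report spans, but they cannot correlate an inbound request with the outbound calls it triggers unless the application copies the trace headers across. A rough sketch of that plumbing in Go, using the Zipkin/B3 header names Envoy-based meshes commonly rely on (treat the exact set as an assumption and check what your mesh expects):

    package tracing

    import "net/http"

    // Headers commonly used for trace propagation in Envoy-based meshes
    // (assumed here; check which ones your mesh actually expects).
    var traceHeaders = []string{
        "x-request-id",
        "x-b3-traceid",
        "x-b3-spanid",
        "x-b3-parentspanid",
        "x-b3-sampled",
        "x-b3-flags",
    }

    // Propagate copies trace headers from the request a service received onto
    // an outbound request it is about to make, so the mesh can stitch the
    // spans into one trace. Without this, every hop looks like a new trace.
    func Propagate(in *http.Request, out *http.Request) {
        for _, h := range traceHeaders {
            if v := in.Header.Get(h); v != "" {
                out.Header.Set(h, v)
            }
        }
    }

Even with the mesh doing everything else, that copy step stays in the application (or in shared middleware); otherwise every hop shows up as a disconnected trace.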
I'm not sure that I'm sold on the universal service mesh unless you have a high level of engineering talent. The benefits are immense but don't expect to see universal service meshes* running at enterprisey companies any time soon.
* Universal means the entire company is governed under a service mesh, not the POC running on 3 servers for a random team :)
Funny, that. As a Swahili speaker, I wouldn't be able to say this publicly: it's the word for vagina, and it reminds me of the Japanese city Kumamoto, which would roughly translate as hot vagina. But good luck, the name shouldn't change anything.
I think it's a very clever move. It looks like Istio & Envoy are becoming the standard for service mesh, and Kong needs to do something about it. Tapping into the Envoy community is definitely a nice move, and not everyone likes Istio.
If Kuma gains traction, they can later offer additional capabilities in Kuma to swap Envoy out for Kong (their API gateway), as my guess is that the Kong API gateway is their cash cow at the moment. (Of course, they can potentially make money from Kuma via enterprise support, training, etc. if it goes mainstream.)
This is purely my guess. From a user's point of view, though, it's not a bad idea to have Kuma alongside Istio and the other open-source/commercial alternatives.
Right, I understand that, but I was just commenting that it's odd they didn't build it on top of NGINX. I would have expected that rather than switching to Envoy.
Envoy has a proven track record in mesh deployments, whereas in NGINX those primitives would have to be built from scratch. On the other hand, NGINX has a proven track record as an ingress gateway and is therefore the natural technology for an APIM solution.
Unlike Istio, this is built to work natively, with minimal dependencies, on every platform rather than exclusively on Kubernetes. That means it can support both new greenfield applications and existing VM-based apps, so networking policies that would typically be available only on K8s for new apps can be applied to existing workloads today. That, and it's meant to be easier to use, which addresses another big problem with existing service mesh implementations.
I was hoping to see Docker Swarm integration, but it doesn't seem to have any. It looks like it could be done with a bunch of manual work, just not supported out of the box.