Hacker News
Nginx Service Mesh (nginx.com)
90 points by bhaavan on Oct 22, 2020 | hide | past | favorite | 22 comments



I see people writing this off already in the comments. Probably people who are knee deep in k8s and already have their favorites.

For the 20-year data center guy getting reborn in the cloud, walking down the cereal aisle of service mesh offerings, nginx is going to look like a warm blanket of familiarity.


A cloud-ready/native nginx has very little to do with the old way of configuring nginx. This solution seems to take parts of Istio and plug in nginx as the "high-performance proxy" component. The complexity of Istio seems to stay, e.g. centralized PKI managed by operators, where if something goes wrong and you don't have deep knowledge of how operators work, your nginx knowledge won't save you.

However, I'd think that this "warm blanket of familiarity" is exactly what F5 are trying to capitalize on. Basically, the name.


So this is to compete with Envoy? If Nginx can solve the documentation and the "why the hell do I need this" problem, it might be worth looking into.


I thought this competes with Istio, judging by the naming of it.


Istio is built on envoy.


Everyone is talking Istio/Envoy, but what's up with HAProxy? Last I looked (about a year or so ago) they were making huge advancements to the platform at every level, and in particular to extensibility. It all looked open source too.


The use case for Istio/Envoy, or any other service mesh, is that they provide a set of far higher-level capabilities: routing policy enforcement, mTLS, circuit breakers, certificate management, traffic tracing and visualization, and many, many more features [0][1].

I love me some HAProxy too, but it only operates at the level of a single service/backend group or lists of relatively independent services/backend groups. Service meshes aim to make deploying dozens, hundreds, or thousands of complex services cohesive and consistent, such that they form one or more applications or APIs.
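To make one of those higher-level capabilities concrete, here's a minimal circuit-breaker sketch in Python. The class name and thresholds are made up for illustration; a real mesh implements this in the sidecar proxy, at the network level, so no service has to carry the code itself:

```python
import time

class CircuitBreaker:
    """Toy circuit breaker: after `max_failures` consecutive
    failures, reject calls for `reset_timeout` seconds."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: request rejected")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

A mesh sidecar does the equivalent transparently for every service in the fleet, which is the "consistent fashion" part of the argument above.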

[0] https://docs.openshift.com/container-platform/4.5/service_me...

[1] https://istio.io/latest/docs/concepts/what-is-istio/


Why wasn't this just open sourced?


Because of F5 and because of nginx-plus. They keep trying to push the closed stuff, but that's not what people want. We don't mind paying, but we do mind not having public sources.

Ironically, this is practically stillborn unless people are already nginx-plus customers, as everyone else is using Istio (with Envoy) or bare Envoy or Traefik or Consul Connect or HAProxy or Linkerd or any of the other billion options out there. And when you are an nginx-plus customer you are unlikely to have a model where a mesh fits in. For a mesh to make sense you need to have more than just a need for service discovery; i.e. you have to have dynamic locations and counts of sources and destinations (or as people say: Kubernetes with automatic rollout waves and lots of changes and deploys all day long). The scenario where you have 'all the cool toys' but also 'need' nginx-plus (and NSM) seems unlikely to me.


Agreed, I don't see there being a big market for this. They are late to the party, cost more money, and are more closed source than the other alternatives. I don't know why anyone would choose this unless perhaps they already have Nginx Plus in place.


While built using nginx-plus as the sidecar, the mesh is still free to use. There is no requirement to retain an nginx-plus subscription or to pay anything for the mesh.


I'm pretty sure Nginx isn't interested in open-sourcing any more code than they have to at this point. Most of their development seems to be closed-source these days.


It's too bad they didn't. Otherwise, we could run istio and nginx-service-mesh side by side, and use dns to traffic-split between them. And then it would be network traffic splits all the way down, into retries and fail-overs in the app logic.


Nginx Plus isn't open-source, and this lives on top of it.


This is very much too little too late. Hopefully soon we’ll have the WASM sandbox hitting master in Envoy and that’s going to enable a lot of really cool use cases like hot deploying custom filters into your side cars.

https://istio.io/latest/blog/2020/wasm-announce/


What real world problem does this solve that couldn't be as well solved before? Is the improvement significant?


This is a good starting point for your question, from William Morgan, who is credited with coining the term service mesh: https://buoyant.io/service-mesh-manifesto/.

It's a fair question. Service meshes are a relatively recent development, and there aren't many papers on them, despite rapid adoption in industry (e.g. many large systems use a service mesh, and AWS App Mesh is in general release as of last year). This is a decent survey paper: https://ieeexplore.ieee.org/document/8705911.

Service meshes are intended to address some of the operational complexity of running microservices. To take Morgan's definition: "A service mesh is a dedicated infrastructure layer for handling service-to-service communication. It’s responsible for the reliable delivery of requests through the complex topology of services that comprise a modern, cloud native application."

To answer your question briefly: a service mesh is not a completely brand new thing; the pattern seems a natural improvement over having a set of SDKs (like Twitter's Finagle) and other components tying an SOA together. Consistency in an SOA is pretty valuable, and separating infrastructure logic (like retries and service discovery) from application logic is pretty nice too. As for whether the improvement has been significant, I'd recommend searching for "service mesh"; you'll find some talks describing use cases.
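The "separating infrastructure logic from application logic" point is easy to illustrate. Retry-with-backoff boilerplate like the sketch below (hypothetical names, heavily simplified) gets copied into every service when there's no mesh; a sidecar absorbs it so the application code never sees it:

```python
import time

def with_retries(fn, attempts=3, base_delay=0.1):
    """Call fn, retrying on failure with exponential backoff.
    This is the kind of plumbing a mesh sidecar takes over."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))
```

With a mesh, the retry policy lives in proxy configuration instead, and changing it doesn't require redeploying the application.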

What I have found tricky is finding critical analysis of service meshes beyond "do we need this" (i.e. what are service meshes missing, does the pattern give rise to other opportunities), but this should come as more research is done in the area.


Here's another perspective to complement what BillFranklin wrote above (which is still a great answer), comparing a mesh to an internal load balancer, which is what you'd traditionally use to route internal traffic:

* typically mTLS is provided and abstracted away for applications (traffic between services is now authenticated and encrypted)

* no single point of failure in the same way

* no traffic bottleneck and no need to scale the LB (i.e. you're no longer limited to the bandwidth the LB can handle)

* typically integrates natively with your existing service discovery/control plane (kubernetes/consul/nomad)
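The "no single point of failure" and "no traffic bottleneck" bullets follow from each sidecar balancing traffic itself rather than funnelling everything through one LB. A toy client-side round-robin picker (illustrative only; real sidecars also do health checking and dynamic endpoint updates) looks like:

```python
import itertools

class RoundRobinPicker:
    """Rotate through a service's endpoints locally, so no
    central load balancer sits in the request path."""

    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)

    def pick(self):
        return next(self._cycle)
```

Since every client holds its own picker, losing one proxy instance only affects its own pod, not the whole fleet's traffic.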


I won't trust Nginx until I can understand its source code. It has to be one of the largest, most complex, mostly undocumented and uncommented pieces of code in common use. There could be demons in there and no one would know.


How does this compare to Istio? On a high level it seems to claim exactly the same features:

- Envoy-like sidecar proxy based deployment model

- mutual TLS

- traffic management with rate limiting, circuit breakers

- traffic monitoring/visualization with Grafana, OpenTracing

- hybrid deployments for non-container support


I started using Linkerd and it's way too cool :D Is there any reason to choose Nginx Service Mesh over Linkerd, which is already secure and supported by the CNCF?


I ditched nginx as soon as I could after the F5 acquisition. Once you have a bad experience with a company, it sticks for a long time.



