Google and IBM announce Istio – easily secure and manage microservices (ibm.com)
443 points by ajessup 9 months ago | 116 comments



Interesting, but there's currently a lot of overlap between competing things that want to inject themselves between service consumers and service producers.

There are API gateway products (Apigee, Kong, etc.). Load balancers and proxies of various types. Caching and CDN products. More niche stuff like bot blocking, and this attempt to bundle control and statistics.

It would be nice if some sort of standard pattern emerged, where something was the main orchestrator. At the moment, you can end up with suboptimal stuff. Like a CDN that routes to a cloud API gateway that then routes to a (not geographically close) load balancer, that then hits the actual service.

I'm surprised that Cloudflare, Akamai, and the like haven't offered all of these things at the edge. Some things are service-to-service, but a fair amount is client-to-service... putting this stuff closer would help.


Istio is focused on service-to-service traffic (i.e., in your data center). It uses Lyft's Envoy L7 proxy to add security, resilience, and observability to your L7 traffic.

Imagine you've got 50 microservices and you're using RPC to communicate. You're going to want global rate limiting and circuit breaking behavior to ensure resilience, particularly as your topology gets deeper.
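
To make that concrete, here's a rough Go sketch of the kind of per-upstream circuit breaker each of those 50 services would otherwise have to hand-roll. The breaker type, thresholds, and names are purely illustrative assumptions, not Istio's or Envoy's API; the whole point of the mesh is to move this bookkeeping out of the application and into the proxy:

    package main

    import (
        "errors"
        "fmt"
        "sync"
        "time"
    )

    // breaker is a toy per-upstream circuit breaker: after too many
    // consecutive failures it rejects calls outright until a cooldown elapses.
    type breaker struct {
        mu        sync.Mutex
        failures  int
        threshold int
        cooldown  time.Duration
        openUntil time.Time
    }

    var errOpen = errors.New("circuit open: failing fast instead of calling a sick upstream")

    func (b *breaker) call(rpc func() error) error {
        b.mu.Lock()
        if time.Now().Before(b.openUntil) {
            b.mu.Unlock()
            return errOpen
        }
        b.mu.Unlock()

        err := rpc()

        b.mu.Lock()
        defer b.mu.Unlock()
        if err != nil {
            b.failures++
            if b.failures >= b.threshold {
                // Trip the breaker; callers fail fast for the cooldown period.
                b.openUntil = time.Now().Add(b.cooldown)
                b.failures = 0
            }
            return err
        }
        b.failures = 0 // a success resets the count
        return nil
    }

    func main() {
        b := &breaker{threshold: 3, cooldown: 5 * time.Second}
        for i := 0; i < 5; i++ {
            fmt.Println(b.call(func() error { return errors.New("upstream timeout") }))
        }
    }

Rate limiting is the same story: more per-upstream bookkeeping, repeated in every language your services are written in, unless the mesh does it for you.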

There is a use case for extending Istio to the Edge, which is why we wrote Ambassador (it's an API Gateway built on Envoy), and we just released an Istio integration (http://www.getambassador.io/user-guide/with-istio.html).

Full disclosure: I work on Ambassador.


Regarding API gateway products, Apigee will actually work with Istio:

https://apigee.com/about/blog/digital-business/simplifying-m...


That's an example of the breakage though. Apigee's cloud runs in only 2 specific AWS regions. So once you tie all these pieces together, you end up with a long path, with some functionality that should be closer to the end user.


Apigee's cloud runs in a lot more than two AWS regions today, not to mention GCP regions, and the whole product can be installed in your own datacenter. We also offer a "micro gateway" that lets the proxy component run anywhere and communicate with the rest of Apigee via an API. We'll be taking this hybrid mode further and the Istio integration is one of the things that will take advantage of that hybrid model.

(I work for the Apigee part of Google.)


>the whole product can be installed in your own datacenter

Could you please provide a link to documentation/articles that showcase this use case... on-premise installation of Apigee?


Here is what I found: http://docs.apigee.com/private-cloud/latest/overview

(I work at Apigee)


Sorry, yes. I should have said only two of the four US regions.


Disclaimer: I work with Fly.

We're building something a lot like that! https://fly.io

Currently, we support AWS Lambda, Heroku, and self-hosted backends. Bringing things to the edge can give some major benefits; a smart proxy, too, can give developers a lot of power with minimal effort.



I am wondering how Istio is related to OpenShift (the Red Hat blog post didn't give that much information). Will Istio be integrated into OpenShift or simply added as an application to be installed from the catalogue?


OpenShift is based on Kubernetes, and I expect Istio to integrate well as a general service-to-service mesh. Catalog exposure will be useful and something that will get targeted at some point (maybe not right away).


And isn't the entire solution just packaging a proxy developed and open-sourced by Lyft?


Envoy is a big part of the solution, yes. But it isn't the only component. A control plane layer, including an auth component and an adaptable policy enforcement and telemetry collection component, is included as well.

More info is available in the overview doc here: https://istio.io/docs/concepts/what-is-istio/overview.html


Istio adds an automation layer on top of Envoy proxy mesh that allows global cross-cutting policy enforcement. Many of us actively contribute back to Envoy, and there's a lot of exchange of ideas between the two projects on designing the next generation of the config for Envoy.


This may be the most important project in distributed computing in a long time. It solves some fundamental problems that layer 3 networking has been unable to tackle. Its initial integration with Kubernetes is great, but long term it could be the basis of all application-level communication, whether deployed in a container orchestration system, on VMs, on bare metal, or as an enabler for Lambda (function) frameworks.


What fundamental problems does it solve that layer 3 networking has been unable to tackle? Not pushing back - just ignorant and want to learn!


If you adopt microservices in earnest, a challenge you face is how to ensure that the right set of services can communicate with (and only with) the right set of other services. In a large organization, it's not unrealistic to have hundreds of services, and not all of them are fully trusted (some may be run by vendors, etc.).

What's more, these things are being constantly deployed to a wide variety of environments. Some may be on cloud VMs (or a dynamically scaled cluster of VMs), some on bare metal, some in orchestrators like Kubernetes. Some will run on networks that the organization maintains, some may be maintained by a DC or cloud provider.

Historically the answer to securing this communication has been to use L3 network segmentation with strict rules to decide who can send packets to whom. But, particularly in an increasingly heterogeneous and dynamic environment, it's very difficult to do this reliably and quickly. Networks are also a pretty crude authorization system: they imply that just because you can reach an endpoint you are authorized to use it, which isn't necessarily true in practice. Some of the other benefits of Istio, like system-wide circuit breaking and flow control, are also difficult to achieve purely at the network layer.
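
To make the "reachable is not the same as authorized" point concrete, here's a minimal Go sketch of the kind of identity check a service (or its sidecar) can layer on top of mutual TLS. The allowlist, the cert file names, and using the certificate CommonName as the caller identity are illustrative assumptions (Istio itself issues SPIFFE-style identities), but the idea is the same: verify who is calling, not just whether their packets arrived.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
    )

    // Hypothetical per-service allowlist, keyed on the caller identity taken
    // from the verified client certificate.
    var allowedCallers = map[string]bool{
        "checkout-service": true,
    }

    func handler(w http.ResponseWriter, r *http.Request) {
        // Reachability is not authorization: the packet got here, but we still
        // check *who* sent it, using the identity from the mTLS handshake.
        if r.TLS == nil || len(r.TLS.PeerCertificates) == 0 {
            http.Error(w, "client certificate required", http.StatusUnauthorized)
            return
        }
        caller := r.TLS.PeerCertificates[0].Subject.CommonName
        if !allowedCallers[caller] {
            http.Error(w, "caller not authorized", http.StatusForbidden)
            return
        }
        fmt.Fprintf(w, "hello, %s\n", caller)
    }

    func main() {
        srv := &http.Server{
            Addr:    ":8443",
            Handler: http.HandlerFunc(handler),
            TLSConfig: &tls.Config{
                // Require and verify a client certificate (mutual TLS).
                // A real setup would also set ClientCAs to the CA that
                // issues workload certificates.
                ClientAuth: tls.RequireAndVerifyClientCert,
            },
        }
        // server.crt / server.key are placeholder paths.
        if err := srv.ListenAndServeTLS("server.crt", "server.key"); err != nil {
            fmt.Println(err)
        }
    }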

If you're interested in this, I'd encourage you to check out https://spiffe.io/about which has some more detailed thoughts on the limitations of the L3 micro-segmentation approach and how it can be solved.


I agree with your point about layer 3 networking being unable to easily tackle these problems. I question, though, whether Istio is "all that".

Securing an endpoint without requiring changes within the endpoint has been done for some time: Whale Communications (which became Unified Access Gateway), F5 BIG-IP, IBM DataPower... They are called web application firewalls, and unless I'm missing something, Istio is no more than that, but targeted at microservices.


What you are missing is experience working in an environment at endpoint scale. You can't configure the O(N^2) paths between application instances, changing every hour, with those kinds of systems.




The Istio blog post is excellent, and provides a lot more detail. https://istio.io/blog/istio-service-mesh-for-microservices.h...





What happened to dumb pipes, smart endpoints? We're doing the same things again that we did before with SOA: hard-to-replace middleware / bus systems.


The REST architecture always included the possibility of gateways and proxies in the end-to-end communication path to delegate shared responsibilities out of the user agent or origin server. This balances the need for centralized admin of some things and decentralized deployment of other things. Most microservices systems, even if they've forgone HTTP in favor of something like gRPC, Kafka, or Rabbit, are taking a lot of WebArch lessons to heart in how they manage their policies, routes, etc., balancing centralized management against decentralized evolution.

The problem with SOA-in-practice was that everything flowed through a monolithic ESB as both client and origin server, that needed to have omniscient knowledge of every route, transformation, etc., and was often a single administrative bottleneck, fault domain, etc. Some SOA frameworks had service mesh patterns where you could deploy decentralized engines with your services, but without cloud IaaS/PaaS circa 2006-2007, there was no way to maintain/deploy/upgrade these policy agents without a heavy operational burden.

In sum: CORBA, COM+ or SOAP/HTTP were about mostly-centralized approaches to distributed services, REST was about extreme decentralized evolution over decades, most are looking for something akin to a dial where they can have something a bit more controlled than dozens of independent gRPC/HTTP/Rabbit/Kafka producers-consumers but not stupid like the SOAP/HTTP days.

Modern cloud native service mesh approaches like this Istio thing (NetflixOSS Zuul+Eureka+Ribbon or Linkerd are alternatives) are just decentralized gateways and proxies, possibly with a console/management appliance that makes it easy to propagate changes out across a subset of your microservices. This has the benefit of letting you default to decentralized freedom for your various microservices, but where you want administrative control over policy for a set of them, you don't have to go in and tweak 15 different configs.

NetflixOSS really pioneered this pattern. Netflix managed to use things like Cassandra and Zuul hot-deploy filters as the means to update routing/health/balancing configs across their fleet of microservices. There are alternative ways to handle this, of course: Hashicorp's Consul piggybacks on DNS and expects your client to figure things out via their REST API or DNS queries. There are also things like RabbitMQ or a REST-polling mechanism to propagate config changes, as not everyone wants Cassandra. New frameworks like Istio or Linkerd are further alternatives. We're spoiled for choice, for better or worse.


Well said.

Besides Netflix, I'd put Twitter as an early pioneer, with their work on Finagle. Both of these companies, for better or worse, took a library-centric approach (Eureka/Hystrix/etc or the Finagle lib). This limited their applicability to the JVM.

The sidecar model that Airbnb pioneered with SmartStack, later adopted by Yelp and others, was the cheapest way to get non-Java languages similar resilience/observability semantics. And now, given the popularity of polyglot architectures, it should probably be the default choice for companies adopting microservices.


Maybe a local proxy, deployed with the service, is a good answer to my objections, rather than a centralised approach. This can help in polyglot environments but removes the limitations a centralised solution would impose. Something like Istio would be an agent a service connects to locally, used for service discovery, complex routing or rate limiting. The configuration is service-specific. Load balancing is done by "dumb" proxies, like in the old days.


Maybe you don't understand how Istio works. The Envoy proxy is locally deployed as a sidecar next to each process. The centralization is entirely for the control plane. The local proxy uses the centrally managed configuration for making local decisions about routing.


Thank you for your opinion and explanation.

As long as proxies remain transparent to services, I see no problem. It becomes a problem when proxies get smarter in terms of providing cross-cutting features, like routing at the payload level (not speaking of message headers) or doing authentication and authorization at a per-resource level. That puts constraints on how services are built in this particular environment.

But I see the logic behind approaches, developed by Netflix, and now Istio. If you have a lot of services, orchestration and more central communication management is probably a good way to govern, if the constraints (described above) are accepted and services still have the ability to opt out and pursue a different strategy.

The old SOA world was driven by governance. This was a result of the general engineering methodologies and mindsets of this time. Still, API Gateways / smarter proxies / etc. could bring that back...


The commercial products in this space (Apigee, Layer 7, MuleSoft and the like) seem to have learned their lessons over the years, but we'll see. Things like Eureka require RESTful discovery protocols that aren't exactly standards, for example, and rely on well-written client libraries.


For me "SOA" doesn't imply "an ESB" in the way you seem to understand the term, and even though I've actually worked with Sonic MQ/ESB which brought the name to the scene, I still don't know what people really mean when speaking about "an ESB".

From a developer perspective, service-oriented just means that you're offering/accessing functionality via a well-defined app-specific network protocol interface with a standard taxonomy/representation of cross-cutting concerns such as auth, transactions/compensations, message synchronicity and QoS semantics (eg. request/response, at-least once delivery etc.), most of which define the shape of your service implementation code fundamentally. For example, if you're operating under the assumption that no distributed transactions are available, you'll have to fold the necessary logic for restarting and state management into your application code.


SOA doesn't have to imply an ESB, true, but it usually did in practice. I too worked with Sonic, IBM broker, Mule, but mostly BEA WebLogic and AquaLogic. What was often missing in popular SOA was service-oriented delivery, where the unit of deployment was independently evolvable from others. It was more focused on interface modularity for some future implementation decomposition of the monolith. It solved half the problem.

Microservices have been enabling a new generation to accomplish this decomposition by focusing on service-oriented delivery and deployment, and by dramatically constraining the protocols to HTTP, maybe some pub/sub or gRPC, and not much else (and thus no distributed transactions, simpler QoS levels, etc.).


Very well put. From what I see this is like moving Hystrix from the sidecar level to the network level. No need for applications to care or even know about leveraging the circuit breaking. Very excited to try it out!


I'm curious whether circuit breaking is better at the method level of the code, a la Hystrix, or at the network level. We need more in-the-wild experience reports, I think...


The thing you can do at the code level is integrate it with your exceptions and protect against software integration bugs. Imagine that your RPC against a new version of a service fails because of bad data. With a library-level circuit breaker, you can catch the exception and blacklist the new version of the service. At the network level, your resolution of failure detection is more limited.
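
A rough Go illustration of that extra in-process resolution; the version-tagged endpoint type, the sentinel error, and the blacklist are hypothetical, just to show the idea, and are not Hystrix's or Istio's API:

    package main

    import (
        "errors"
        "fmt"
    )

    // endpoint is a version-tagged upstream instance (hypothetical shape).
    type endpoint struct {
        addr    string
        version string
    }

    // errBadData is the application-level failure a network proxy can't see:
    // the call "succeeded" at the transport level but returned unusable data.
    var errBadData = errors.New("response failed schema validation")

    type client struct {
        endpoints   []endpoint
        blacklisted map[string]bool // service versions we refuse to call
    }

    func (c *client) call(do func(endpoint) error) error {
        for _, ep := range c.endpoints {
            if c.blacklisted[ep.version] {
                continue // skip the known-bad release
            }
            err := do(ep)
            if errors.Is(err, errBadData) {
                // In-process we know *why* the call failed, so we can blacklist
                // just this version instead of tripping on the whole service.
                c.blacklisted[ep.version] = true
                continue
            }
            return err // success (nil) or a failure we don't special-case
        }
        return errors.New("no healthy endpoints left")
    }

    func main() {
        c := &client{
            endpoints:   []endpoint{{"10.0.0.2:80", "v2"}, {"10.0.0.1:80", "v1"}},
            blacklisted: map[string]bool{},
        }
        err := c.call(func(ep endpoint) error {
            if ep.version == "v2" {
                return errBadData // simulate the broken new release
            }
            fmt.Println("ok from", ep.addr)
            return nil
        })
        fmt.Println("final error:", err)
    }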


It depends on how you view it. Baking everything into the application logic works if you have a homogeneous stack. When you start going down the polyglot route, with 5-6 language runtimes, multiple databases, it becomes really really hard to maintain all the communication logic (discovery, load balancing, resilience) in the app code.

Take a look at Airbnb's SmartStack, Yelp's adoption of it, and Lyft's Envoy mesh. These are all polyglot applications, where the communication aspects are abstracted out into sidecars.


Maybe it depends on the size / complexity of the services. In the 'serverless' trend, such a middleware makes more sense. But essentially, moving too much logic into a middleware may support bad, lazy design decisions that make the middleware a single point of failure - like ESBs.


I'm a big fan of buses, or perhaps more accurately, semi-intelligent pipes with semi-intelligent endpoints.

I have used RabbitMQ (and Kafka at times) for a large portion of my career, and anytime I tell people this they bring up the single point of failure.

The single point of failure argument is getting really old and is fairly baseless, particularly if your storage (e.g. RDBMS) or network (single-zone or even multi-zone load balancer) is itself a single point of failure.

So, to avoid this red herring of a problem, many who really don't need to solve it create massively complicated endpoints.

This endpoint code is proprietary, often has to be deployed at the same time, and creates fairly tight coupling. The Netflix Hystrix creator (Ben Christensen) discussed this issue at length here: https://www.microservices.com/talks/dont-build-a-distributed... (it's also ironic that he built Hystrix and various dependencies, which are sort of the antithesis of this... I guess 20/20 hindsight).

There are pros and cons, and it's not always a "lazy design decision".


> The single point of failure argument is getting really old and is fairly baseless, particularly if your storage (e.g. RDBMS) or network (single-zone or even multi-zone load balancer) is itself a single point of failure.

That sounds like multiple single points of failure, which just means you have more work to do, not that you can throw your hands in the air and say "welp".


I agree. Essentially it boils down to whether communication protocols and formats are open enough and replaceable with reasonable effort.

But by single point of failure, I don't mean outages. I mean: is it open and transparent enough towards your endpoints that you can easily replace and/or fix things? Vendors go out of business, or, an even larger issue, your business changes.


The middleware in this case doesn't work like an ESB: all connections are point-to-point, intermediated by a service discovery mechanism. So you're not putting all your L7 traffic onto a big bus.
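
A small Go sketch of what "point-to-point, intermediated by discovery" means in practice: only the lookup is centralized; the connection itself is direct. The discover function below stands in for whatever registry is in play (Kubernetes DNS, Consul, etc.) and the addresses are placeholders; it's illustrative, not an Istio API.

    package main

    import (
        "fmt"
        "math/rand"
        "net/http"
    )

    // discover stands in for a service registry lookup (Kubernetes DNS,
    // Consul, etc.). Only the lookup is centralized; traffic is not.
    func discover(service string) []string {
        registry := map[string][]string{
            "reviews": {"10.0.1.5:9080", "10.0.1.6:9080"},
        }
        return registry[service]
    }

    func main() {
        endpoints := discover("reviews")
        if len(endpoints) == 0 {
            panic("no endpoints found")
        }
        // Pick an instance and talk to it directly; no bus in the data path.
        target := endpoints[rand.Intn(len(endpoints))]
        resp, err := http.Get("http://" + target + "/reviews/1")
        if err != nil {
            fmt.Println("call failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status)
    }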


Just pointing it out, but Envoy is a key piece of Istio (as I understand it).


Smart endpoints in N different languages means you have to do the same work (auth, rate-limiting, etc.) N times.


Possibly. However, tooling and code generation can help.

If the smarts are just a state machine, and the rules are abstracted at the right level, it would not be that hard to support.

"Thick" smartpoints can save on operating cost (infrastructure, personnel, complexity, level of expertise) when your system is "simple enough"

At massive scale, from an economies-of-scale perspective (all things considered), it may make more sense to go with "thin" smart endpoints.

The migration between the two models is where the art of the design comes into play :) It's an art because it is a decision process involving many players with conflicting and competing goals, decisions, time frames, motivations, etc.

It's often not just a "technical" decision.


Actually, I didn't like the sidecar idea at first, given my experience with it ten years ago (we ended up changing architectures twice). But that was in a project where everything was in the language (C++) that was best supported by all the internal libraries.

I've since been exposed to the vagaries of dealing with multiple languages. I think code generation can only help so much, if you care about performance and latency. Things like flow control and handling memory pressure are easier in some languages vs. others.

You don't have to look far: gRPC really has three implementations: C++, Java, and Go. All other supported languages wrap the first. They have different defaults, feature sets, and performance characteristics, not to mention different behavior under load. In theory it shouldn't be like that.


app <-> agent <-> hub <-> agent <-> app, OR
app <-> agent <-> agent <-> app, OR
app|agent <-> ... <-> agent|app

Fundamentally, it is the same problem space.

Does the apple pie taste different with the pie in the apple vs. the apple in the pie? Maybe in some cases, and maybe not in others.


Great documentation and some really great tools included. I was able to get the platform running in minikube really quickly. Interested to compare this to linkerd.


My read of the documentation also indicates a large overlap with linkerd. Hopefully one of the creators is around and can do a compare/contrast.


Some thoughts:

Linkerd is great technology but it is restricted to traffic management only. Istio provides a complete mesh that incorporates authentication and policy enforcement, in addition to traffic management and telemetry.

The Istio Auth subsystem provides certificate management and we are working on extending it to support authorization primitives as well.

The telemetry model is also different. Rather than having direct integrations with different metrics backends, we normalize metrics and pipe them through a single engine that can then re-route to any metrics backends (or multiple).

In contrast to Envoy, linkerd provides a minimalist configuration language, and explicitly does not support hot reloads, relying instead on dynamic provisioning and service abstractions.

Disclaimer: I work on Istio (on Mixer).


The overlap with Linkerd is around routing, resilience, metrics/tracing, and the deployment model (at high level).

Our deployment model is a bit more transparent. Traffic gets transparently routed via Envoy, without using HTTP_PROXY or direct addressing of sidecars. This implies zero change to application code.

Secondly, Istio brings two more things to the table: policy enforcement (rate limits, ACLs, etc.), and authentication/authorization. Istio enables mutual TLS auth between services with automatic certificate management. The policy plane is extensible, where you can plug in adapters or specific policy implementations.


Not a complete comparison, but here's[1] a breakdown of Linkerd vs. Envoy (the L7 proxy that sits at the heart of Istio)

[1] https://github.com/lyft/envoy/issues/99


One important difference I see is that Istio works only with Kubernetes/Bluemix, whereas Linkerd has many more deployment options.


The initial release of Istio is targeted at Kubernetes. However, Istio is designed to be easy to adapt to other environments. With community help, we anticipate extending it to enable services across Cloud Foundry, VMs, and hybrid clouds. We hope to have major new releases every 3 months, including adding new environments.


Thanks, this is interesting. Currently, where I work, we have not started using containers/schedulers, etc.; we simply use VMs for services. For now I just want to experiment with these new technologies.


Not sure it's right to compare Linkerd with Istio. Likely you should compare Envoy (which Istio is built around) with Linkerd. Envoy can be deployed without Kubernetes.


Is this going to be a linkerd vs. istio thing? Like a Docker Swarm vs. Kubernetes?


I was just wondering the same thing, since we use linkerd in production to handle thrift traffic to, from, and within our k8s clusters. But this statement from the examples page put me off a little: "If you use GKE, please ensure your cluster has at least 4 standard GKE nodes."


That suggestion is based on the requirements of the underlying bookinfo sample application and is not related to Istio itself.

Disclaimer: I work on Istio.


Thanks for the clarification!


Spectators love a fight. Ultimately it's more about using the right tool for your situation.

Istio and Linkerd are both very young, and the adoption will be with those that really need their unique features. Most companies, including startups, in my experience are using roll-your-own routing/traffic management via Consul, or using NetflixOSS' stuff today and probably through 2018. Enterprises are buying API gateways and the like too: IBM API Connect, MuleSoft, Apigee Edge, etc.

As for K8S vs Swarm, I rarely see Swarm in the wild, really. From my vantage point, the battle has been Kubernetes vs. roll-your-own-Docker vs. Mesos in startups using the pure open source. For enterprises paying a vendor, it's been IBM Bluemix vs. Pivotal Cloud Foundry vs. OpenShift, as they've been making more money out of all these container vendors combined (I work for Pivotal) - probably around $300m+ annually across the vendors. Some just use their native cloud's platform, like AWS container engine, etc.

The main X-factor will be serverless frameworks eventually taking over, possibly rendering today's battles over runtimes somewhat moot.


It seems like it. Envoy was already competing with Linkerd, and "Istio uses an extended version of the Envoy proxy" (source: https://istio.io/docs/concepts/what-is-istio/overview.html). So it does that and more.


linkerd guys are going to need to find a new hobby.


Why does Lyft need 10,000 microservices? They probably have less than 100,000 active cabs at any point in time?


They don't have 10,000 microservices, they have

> a production system spanning 10,000+ VMs handling 100+ microservices.


"spanning 10,000+ VMs handling 100+ microservices" ?


Sorry... thanks... let me clarify the question: why on earth does Lyft have 10,000 VMs? They have fewer than 200,000 rides per day. That's a ratio of 1 VM to 20 rides per day.


I guess they're building with future growth in mind. What if they decide to expand to the rest of North America in the near future and have a sudden spike?


Then you spin up more VMs when you do the expansion.


They likely do a huge amount of data analytics.


Why aren't more people using MQ for inter-service messaging (something like RabbitMQ) instead of HTTP?


One of the goals of Istio is to move the machinery out of the app. While a message queue/broker such as RabbitMQ or NATS is brilliant at distributing requests, you do have to build the glue — both the client and the server — into every single app, in every single language. You also need to write the edge proxy that mediates between HTTP and whatever the message queue/broker solution uses.

Istio (or rather, Envoy) acts as a plain HTTP proxy, so a client can just respect the standard http_proxy environment variable (which most client libraries do) and speak plain HTTP without even needing to know about the proxy. Even "curl" will work with Istio. And it's even simpler on the server end; all a server needs to do is accept HTTP connections.
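
For what it's worth, here's how little a proxy-aware client needs (a Go sketch; the sidecar address and target URL are placeholders). Go's default HTTP transport already honors the standard proxy environment variables, so pointing http_proxy at the local proxy is essentially the whole integration:

    package main

    import (
        "fmt"
        "net/http"
        "os"
    )

    func main() {
        // Point the standard env var at the local proxy (address is a
        // placeholder). Any proxy-aware client, including curl, works the
        // same way; the app itself needs no mesh-specific code.
        os.Setenv("HTTP_PROXY", "http://127.0.0.1:15001")

        // http.DefaultTransport uses ProxyFromEnvironment, so this request
        // is sent via the proxy without any further configuration.
        resp, err := http.Get("http://reviews.default.svc/reviews/1")
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status)
    }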

Last I heard, Istio aims to go even further by supporting the proxying of any networking protocol (Layer 4).


I architected and built a system based on MQ a number of years ago. The very same design / implementation went from supporting 0 messages to a billion messages a day, quietly and silently, with full message tracing, retries, multi-language support, etc.

Maybe I should have blogged about it :)

Anyway, although the design principles are the same (1. transparent and seamless to application code; 2. control the communication endpoints), the MQ constructs and languages are not as accessible, readily available, and visible as HTTP and REST.

An MQ can be temperamental and hard to operate. On the other hand, even middle-schoolers can master nginx, haproxy, write their own, etc., with a bit of guidance :)

Popularity and rate of adoption depend on availability to the masses and the ease with which the masses can learn to use it. If it takes too long (say, 15 minutes) to bring a framework up, hordes of engineers will find a massive number of reasons to roll their own...


Just for your benefit, we've moved from AMQP-related products to nats.io. We've been satisfied with RabbitMQ and are extremely happy with NATS.


See gRPC for a common alternative to HTTP. Kafka and Kinesis are common event buses.

But message queues are usually an added complexity for realtime inter-service messaging.


MQ shines in async calls/event delivery. gRPC / HTTP/2 / WebSockets / Thrift are better for synchronous calls?


With the ISO/OASIS standard AMQP you can actually do synchronous calls without an intermediary. You also have "direct" message routing capability with components like Apache Qpid Dispatch Router.

I can't see a reason why one would use gRPC/Thrift and then have to build flow control, delivery guarantees, and other messaging features yourself.


gRPC is built on HTTP/2. I imagine it supports flow control and delivery guarantees already? It's just HTTP/2 + protobuf.


Can someone ELI5 how this relates to, and complements, Kubernetes? What does it do that Kube doesn't, and what does Kube do that Istio doesn't?


Kubernetes supports a microservices architecture through the Service construct and performs rudimentary L4 load balancing and more.

But it doesn’t help with higher-level problems, such as L7 metrics, rate limiting and circuit breaking.

This is where Istio comes in.

Disclaimer: I work on Istio.



Is the sidecar-container-within-a-pod the only deployment option on Kubernetes currently? Is a daemonset deployment option (like what Linkerd does) currently in the works?


We have been looking into a per-node deployment model from the beginning, which is what a daemonset does. Things get more complicated across the board: transparent traffic capture at the node network namespace level, an invasive installation requiring tight integration with k8s and reconciling iptables rules, and a more complicated workload identity story. We have started with the sidecar model, but are certainly interested in more deployment options.


I'm curious what benefit side-loading proxies and load balancing provides versus centralization?


Check out [1] for an overview of the trade-offs.

Disclaimer: I work on Istio

1. https://groups.google.com/forum/m/#!msg/kubernetes-sig-netwo...


I'm not seeing any benefits, though; there are a lot of assumptions and emotional comments here, not any hard evidence of it being a better solution.

There is actually a lack of it; there is only anecdotal evidence here, in one comment that says "hey look, linkerd does this at scale". That sounds nice, except it is not necessarily the point, or the case.

How is having side-loaded proxies _better_ or more beneficial than keeping them separate? It doesn't appear to be discussed or mentioned in this thread; all of the arguments made for doing so are equally applicable to separating the proxy.


Early in the design, we looked at various modes of proxy deployment and found that there are pros and cons to each. You are right that the sidecar model is not always the optimal choice; it's a trade-off (see the referenced document in the discussion). The sidecar is the least invasive approach with respect to Kubernetes and was the focus for the initial release. We'd be happy to hear arguments for a more centralized proxy model, and if needed invest effort into making it happen.


Resiliency: Centralized proxies can suffer from shared failure domain issues (especially when large number of configs, etc., are deployed).

Performance: Side-loaded proxies tend to perform better, in part because each deals with only a subset of the config.

Our choice of sidecar deployment was informed by our (Google, Lyft) experiences with both centralized and sidecar models. That being said, Istio is not predicated on per-pod deployment.


https://istio.io/docs/concepts/network-and-auth/auth.html

No option for OAuth2 or JWT? Maybe I'm not understanding the problem Istio solves vs. Envoy


Good question. The auth work for this release is mainly focused on service-to-service authentication. We are looking into adding OAuth2 and JWT support for end-user auth in a future release.

Disclaimer: I work on Istio


BTW, if you are interested in Istio auth future work, here is the list: https://github.com/istio/auth#future-work

Disclaimer: I work on Istio


thanks all!


Cool. Also - what does Istio use for persistence? I imagine it's gonna persist the data for auth'ing stuff somewhere.


If you're referring to where the key/cert is persisted, it's currently handled by Kubernetes secrets and mounted into the Envoy container.

Disclaimer: I work on Istio


Regarding API management in the context of Istio:

https://apigee.com/about/blog/digital-business/simplifying-m...


> Lyft developed the Envoy proxy to aid their microservices journey, which brought them from a monolithic app to a production system spanning 10,000+ VMs handling 100+ microservices.

Are those numbers right? Wouldn't it be the other way around realistically?


One of the benefits of microservices is the ability to trivially deploy many versions to improve performance and uptime.

You should be using service discovery and deploying multiple instances.


Nice to see that this comes with Prometheus and OpenTracing instrumentation!


There is a lot to like about this move. Microservices and service meshes are certainly an interesting area right now... and this represents a big push from some big players.


I know it's at a higher architectural level, but could anyone tell me the difference between Istio and Spring Cloud/Dubbo?


In case anyone is interested in using Azure Container Service with the built-in Kubernetes orchestrator, I wrote an easy tutorial [1] for deploying Istio.

[1]: https://readon.ly/post/2017-05-25-deploy-istio-to-azure-cont...


This is a great product! Kudos!


[flagged]


Please don't do this here.

We detached this subthread from https://news.ycombinator.com/item?id=14410714 and marked it off-topic.


> Really interested to hear user feedback from today's #istio announcement and where it will have the biggest impact.

Okay: I immediately deeply, profoundly, bitterly hate and despise your announcement and for just one, really simple, dirt simple, but totally unforgivable reason: What the heck is a "microservice"? You never said.

That word, microservice, is not in a standard English dictionary, so you are writing undefined jargon, gibberish, junk, and not English. You are insulting me and, even worse, yourself.

Instead, write English. Get rid of the undefined jargon.

Got it?

This was a difficult lesson?

> the biggest impact

Until you learn to communicate at, say, the late elementary grade school level, e.g., learn to write English, the impact promises to be minimal.


I'm not sure if you're trolling or genuinely don't understand, so I think I can help you a little bit.

Microservices are services that are generally containerized and easily distributable over a network as easily replaceable parts. These services are defined by a specification where they do a single task, expose some endpoint or API, and are composable with other microservices.

There is a need for these services to talk to each other and to external services, and to do this they use some form of meshing network. These networks right now are built on container networks, such as the Docker overlay network, the Kubernetes pod system, Linkerd, Serf, and a multitude of other systems like Istio. It is a space or area of concern because there is no singular approach to all these differing container services right now, so all of them are vying to be the one that wins.

The issue that I find here is that you're looking in an English dictionary while these are technical networking / operating system terminologies, and that dictionary will not help you in this namespace.

This page might help you a little more: https://en.wikipedia.org/wiki/Microservices

Also: a service being a backend or piece of software that performs some action based on an input/output.


Nice. Thanks. I'm perfectly serious -- obviously the OP didn't define microservices -- you did. Good for you. Bad for the OP.

Okay, microservices look like what used to be called agents. For their communications there have been various efforts at ways to define data objects, complete with a registration hierarchy (that is, a case of public naming) and an inheritance hierarchy (roughly like some of inheritance in some cases of object oriented software). So, we had object request brokers, CORBA or some such. And we had the ISO/OSI CMIS/CMIP where ISO maybe abbreviates international standards organization, where OSI may abbreviate something in French, where some international telecommunications group, maybe part of the UN, was involved, and where CMIS abbreviated common management information system, and CMIP, common management information protocol, all mostly aimed at computer and network system monitoring and management. The work was somehow close to some old Unix work with management information base and ASN.1 -- abstract syntax notation version 1. Whew!

Okay, if there are to be lots of such microservices, as you nicely described, then they will want to be able to communicate. So, maybe they will want to use JSON (as I understand it, essentially just name-value pairs -- from Google, JavaScript Object Notation and, thus, maybe more than just name-value pairs), other markup languages,

https://en.wikipedia.org/wiki/List_of_document_markup_langua...

etc. But we'd have to suspect that there needs to be some common global standards and data definitions.

Okay. Gee, in the past I went through a lot of that CMIS/CMIP, ASN.1, etc. stuff, wrote some internal papers, wrote some software, etc. so was totally torqued at microservices without definitions. Right, maybe the hidden secret is that we're supposed to retreat to Wikipedia for the undefined terms and acronyms -- bummer.

But, I still wonder: How popular, pervasive, important, practical so far are microservices? E.g., are they a lot like agents for system monitoring and management? Where are microservices getting to be important?

No, I'm not trolling. And with your description of microservices, we're making progress here.



Good. Maybe the OP should have given the reference.

My main interest here is not microservices but just to tell the HN and computing community to be much more careful with undefined terminology and acronyms.

Here we now have some good descriptions and references on microservices. Good.

My main point is that articles on computing need to have, gee, call them links, to explain jargon and acronyms, to explain stuff not in an English dictionary. I am using the OP on microservices just as an example.

To me, poor technical writing in computing and computer documentation has been one of the worst obstacles to my startup -- darned near killed my startup -- and is a sore point.


I guess my point is that microservices have been well trodden in this forum's context; I would not expect every article to include a basic primer explaining every concept.


Sounds like you've got a good grasp on learning terms and acronyms already. Glad to see that the Wikipedia article helped ;)


[Deleted]


Well, surprised or not, what you see in Istio is the result of contributions from all 3 founding parties. My team on the IBM side did much of the work on the Istio Manager and added capabilities in Envoy too, such as support for Zipkin tracing.


IBM Research continues to do a lot of interesting research in AI/ML (and also into basic sciences in general). I'm not sure about Watson; I've heard both good and bad things about it so I do understand the skepticism. But to call it a joke is probably taking it too far :).

Full Disclosure: work at IBM, but not in Watson.


Um. Yes.



