- Subroutines - Not a free lunch
- Libraries - Not a free lunch
- Client-Server - Not a free lunch
- OO - Not a free lunch
- Multiprocessing - Not a free lunch
Improved tooling can bring both costs down, but not by that much. That's why you won't see many microservices that are truly micro. You won't see (I hope) a 'printf' microservice, nor even an 'ICU' microservice. A regex service might make sense if the string it searches is large and implicit in the call (say, a service that searches Wikipedia), but by that point it starts to look like a database query. Is that still micro?
The author speaks of the complexity of managing and coordinating processes in a distributed system. By designing a microservice, you trade simplicity of software design for increased operational complexity. There's a shift in problem solving too--it migrates from the cross-functional engineers to the DevOps and distributed-systems engineers.
Don't get me wrong--I think microservices are brilliant and make reasoning about application design easier. But be prepared to bring in some heavy engineering talent to coordinate all the things. Until there are open source frameworks for handling the difficult bits, it might be unwise for a start-up to go the microservice route. Microservices are purposed for scale (since reasoning about traffic is easier with a simpler service, and there are predictable traffic patterns from a microservice's clients).
In terms of the examples you mention, I think you may have the wrong idea. Think about fulfilling a specific goal or class of functionality, and make that a microservice: email gateway microservice, user/session microservice(s), SMS gateway microservice, etc. Each is a component that responds to various request types (you really should use RPC instead of REST for deadlines, retries, cancellation, pooling, etc.), but they all sit within a single domain of purpose. The service could be stateless or not, but you try to make the requests idempotent to aid client reasoning.
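A minimal sketch of that idempotency point, assuming a hypothetical email-gateway service that deduplicates on a client-supplied request ID (all names here are illustrative, not from any real framework):

```python
import uuid

# Hypothetical sketch: an email-gateway service that deduplicates
# requests by a client-supplied request ID, so a client retrying after
# a timeout cannot cause a duplicate send.
class EmailGateway:
    def __init__(self):
        self._seen = {}  # request_id -> previously computed result

    def send(self, request_id, to, body):
        # If this request was already processed, return the cached
        # result instead of performing the side effect again.
        if request_id in self._seen:
            return self._seen[request_id]
        result = {"status": "sent", "to": to}  # stand-in for real delivery
        self._seen[request_id] = result
        return result

gateway = EmailGateway()
rid = str(uuid.uuid4())
first = gateway.send(rid, "a@example.com", "hello")
retry = gateway.send(rid, "a@example.com", "hello")  # client retry after timeout
assert first is retry  # the retry is absorbed, not re-executed
```

The client can then retry freely on deadline expiry without the server needing distributed locks, which is what makes RPC-style deadlines and retries tractable to reason about.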
To again cite your example, several services may use regex, but there might be only one authority on "normalized input" (e.g. addresses). Or maybe there is a client lib for that.
This brings up another point. Services at the periphery (dependency graph-wise) are coordinated by interior services. That can get complicated...
Microservices (as described by Amazon, Netflix, and Thoughtworks) are effectively indistinguishable from Bounded Contexts (from Domain-Driven Design).
They're all about effectively decomposing large applications into independent teams and runtimes.
Edit: Since this question is getting many replies, I'll be a bit more specific in what I am looking for. Is there a tool that would let me describe the infrastructure and deploy it on a given cloud provider but also have the ability to deploy the same infrastructure on my local machine (using VMs/docker) for development purposes.
You download a BOSH release for your infrastructure, tweak the manifest for your chosen cloud, and push the deployment out. If you make a change, you just push it out.
Cloud Foundry itself (the elastic runtime) is a BOSH application that includes an HAProxy load balancer and a dynamic router, and it lets you easily push/scale stateless microservices and declaratively wire them up with other services you've deployed (with CF or BOSH).
It can run on AWS, vSphere, OpenStack, or on your own Vagrant VM in containers (bosh-lite). Or you can use Pivotal Web Services public cloud: http://run.pivotal.io
BOSH requires some wrestling early on but I find it to be a fascinating project.
The documentation is pretty rubbish, but here's an example config file:
It uses libvirt for provisioning, and Puppet for configuration. The coupling is extensive but not tight. It's also coupled to a specific way of deploying Java apps. There's a sort of backdoor that would let you deploy other kinds of apps by writing some Puppet code.
I do think the basic ideas are good, though. Possibly more of an object of study than a tool to actually use!
Locally, you'd use http://www.fig.sh to describe and run your app. You can then import this description into Tutum and run it on e.g. DO or AWS.
Seems like the https://www.hashicorp.com/ guys would have something, perhaps Terraform/Atlas?
I have not done this yet, obviously, but I'm in the same boat as you for our company goals this year.
There's still a point where it's beneficial to split an application into separate stacks, in which case the manifest concept breaks down and you're stuck doing a lot of work yourself, but they can take you a fairly long way.
Do you mean like having a different "infrastructure manifest" for each micro-service? In that case, couldn't each micro-service specify which other micro-service it depends upon? For example, I could have a central "API gateway" service which would specify all the other micro-services it depends upon. This reminds me a bit of a Gemfile/Package.json but for processes instead of libraries. On the other hand, I can easily see how this could turn into a dependency hell.
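The dependency concern can be made concrete with a small sketch: assuming each service shipped a hypothetical manifest naming the services it depends on (like a Gemfile/package.json, but for processes), a topological sort yields a start-up order, and a cycle is exactly the "dependency hell" case:

```python
# Hypothetical per-service manifests: service -> services it depends on.
manifests = {
    "api-gateway": ["users", "email"],
    "users": ["db"],
    "email": ["db"],
    "db": [],
}

def start_order(manifests):
    order, state = [], {}  # state: absent=unvisited, "visiting", "done"
    def visit(name):
        if state.get(name) == "done":
            return
        if state.get(name) == "visiting":
            # A cycle in the service graph: no valid start order exists.
            raise ValueError("dependency cycle at " + name)
        state[name] = "visiting"
        for dep in manifests.get(name, []):
            visit(dep)
        state[name] = "done"
        order.append(name)
    for name in manifests:
        visit(name)
    return order  # dependencies appear before their dependents

print(start_order(manifests))  # ['db', 'users', 'email', 'api-gateway']
```

The same check could run at deploy time, which is roughly what dependency resolvers for libraries already do.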
Supplementing this with Hashicorp tools like Terraform and Consul can get you to the last step.
Depending on your concept of "deploy", I can point you to two options, both produced by the company I work for.
If you think in terms of controlling from the VM up, look at BOSH. It's an IaaS deployment / management / update tool which has been used in production for several years.
You create two yaml files: a release manifest (describing the software components of your system) and a deployment manifest (describing the computers and networks of your system and how to map components to them).
This separation means that a single release manifest can be applied to many different actual deployments. At my day job I deploy a large PaaS release into a single virtual machine. My colleagues deploy exactly the same release into AWS with thousands of machines.
Systems defined using BOSH can be deployed to AWS, vSphere, OpenStack or Warden (a containerisation system).
For local development, the Warden backend enables "BOSH lite" -- releasing and deploying into a virtual machine that hosts a local cloud of containers.
If you want to go the next level up to a proper PaaS, you can try Cloud Foundry. It's defined as a BOSH release, which is how at my day job my co-workers and I are able to work both locally and at scale on identical software.
In Cloud Foundry we distinguish between stateless (apps) and stateful (services) software, and we provide a simple way to connect them. The easiest way to play with it is to log into Pivotal Web Services. IBM Bluemix is the same software and is also open to the public.
So for example, with Cloud Foundry, if you want ten copies of your app, you do this:
$ cf scale my_app -i 10
Need more RAM?
$ cf scale my_app -m 2G
And so on.
These things are not new, but like so many other ideas, they're just old ideas re-appearing in a context that had forgotten about them.
Traditionally, many of the potential problems the author relates have been solved by architectural conceits. For instance, standardize on a programming language and datastore, then share all persistence-related files. (I'd strongly suggest a FP language, preferably used in a pure manner). Then you've decreased the "plumbing" issues by a couple orders of magnitude, lowered the skill bar for bringing in new programmers, and you can start talking about using some common testing paradigms to work on the other issues.
I'm a huge fan of microservices, but it's good to talk about the bad parts too, lest the hype overrun the reality.
1) They're not really simple. If you take essential functions like a key-value store, that'd be a microservice. Simple? Yeah, right. Business logic... simple? Yeah, right.
2) It's slower. You can whine all you want, but when it comes right down to it, there's a reason people avoid crossing the network, and the microservice approach essentially comes close to crossing the network every single time you cross module boundaries. You'll spend 50%+ of your CPU time marshalling, unmarshalling, and waiting for transmission, even on the same system.
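One way to get a feel for that marshalling cost (not a rigorous benchmark; absolute numbers depend entirely on the machine and serializer):

```python
import json, timeit

record = {"id": 123, "items": [{"sku": i, "qty": i % 3} for i in range(50)]}

def in_process(rec):
    # Direct function call: no serialization, just a reference passed along.
    return len(rec["items"])

def across_boundary(rec):
    # What every cross-service hop pays at minimum: marshal, (pretend to
    # send), unmarshal -- before any network latency is even counted.
    wire = json.dumps(rec)
    return len(json.loads(wire)["items"])

direct = timeit.timeit(lambda: in_process(record), number=10_000)
marshalled = timeit.timeit(lambda: across_boundary(record), number=10_000)
print(f"direct: {direct:.4f}s  marshalled: {marshalled:.4f}s")
# On typical hardware the marshalled path is one to two orders of
# magnitude slower, and that is without crossing a real socket.
```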
This also manifests in all these "no-SQL" datastores. They have all the problems of SQL datastores, plus one more: there's no way to have indexes. So in a table indexing all your sales by "id", the only way to find sales of product "X" is to go through each and every record (you can build indexes yourself, but it's not easy and it's sure as hell not flexible; plus I guarantee you'll do it wrong the first 10 times). This means that retrieving 10 sales records takes the exact same amount of time as generating the yearly sales report. In other words: you won't be doing it, because it takes too long.
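A sketch of what "building indexes yourself" over a plain key-value store looks like, and the consistency bookkeeping it drags in (the update path is the part that's easy to get wrong):

```python
# Hypothetical key-value store with sales keyed by id, plus a
# hand-rolled secondary index by product that must be kept in step
# with the primary on every write.
sales = {}            # primary: sale_id -> record
by_product = {}       # secondary index: product -> set of sale_ids

def put_sale(sale_id, product, amount):
    old = sales.get(sale_id)
    if old is not None:
        # Easy to forget: un-index the old product when a record changes.
        by_product[old["product"]].discard(sale_id)
    sales[sale_id] = {"product": product, "amount": amount}
    by_product.setdefault(product, set()).add(sale_id)

def sales_for(product):
    return [sales[i] for i in by_product.get(product, ())]

put_sale(1, "X", 10)
put_sale(2, "Y", 5)
put_sale(1, "Z", 10)  # update: sale 1 is now product Z
assert sales_for("X") == []       # stale index entry was removed
assert len(sales_for("Z")) == 1
```

In a real distributed store the two writes aren't atomic, which is where the "you'll do it wrong the first 10 times" part comes in.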
3) There's just so damn many of them if you want to achieve anything useful. Every single one of them needs to be sufficiently well designed to operate like an individual web server. So (D)DoS resistance, fairness, anti-slowloris, anti-starvation, autoscaling, resource exhaustion (like maximum number of file descriptors...), correct sharding, access control (not too tight, not too loose... correct caching of security credentials...), and load testing -- you've thought of all that for that "really small and dumb" service that basically copies information across the payment/non-payment firewall, right? If you haven't... prepare to be surprised.
A related problem here is the sheer amount of diagnostic monitoring you'll need.
4) They are inflexible and don't deal well with different data types (this is one of the things object-oriented programming solved). They work well when they deal with strings, or with company-standard datatypes. Only companies don't like to have company-wide standards. Yeah, I know Sun and, to a lesser extent, Facebook succeed at it, but does your company? Last time I consulted at a bank, they had systems interoperating using every interchange format from fixed-width fields (old COBOL code) through 5 different kinds of XML to, of course, JSON and a dozen binary formats. Microservices can't work in such an environment. It destroys the flexibility of individual actors. The argument here, of course, is that that is a good thing, and I'd say you're right, but you've just created yourself a hell of a lot of enemies, some of them powerful.
5) It's extremely hard to orchestrate integration across multiple microservices. The first place this will manifest is testing. How do I test one service? The answer is "unit tests" or "a system test". But this is the result of a fundamental misunderstanding. As a company, or even an IT department, you don't really care whether unit tests and/or system tests succeed for a microservice. You care that if you connect a -> b -> c -> d -> e -> f -> g (and usually this is a network of microservices, not a sequence), you can sell gizmos on your website.
Testing whether that works requires throwing up three dozen services. Now first, having watched the Netflix talks, they do this. Congrats. That can't be easy. They also talk about the disadvantages: they have 3 teams doing nothing other than making that work, and it eats a significant part of their AWS resources. They can't do it on developers' workstations, nor even on a number of them.
The second thing this will manifest in is the cross-microservice redesign. Say a new law comes in. With a payment we now need to have a scan of the driver's licence of the buyer (say "gizmos" are treated like alcohol). So we simply need an extra image field going through the payment system. Oh-oh. The payment system is 8 microservices, and therefore around 64 interfaces to the rest of the system. Let's be generous ... only about 30 need to be redesigned. There is nothing to help. Refactoring, or even type checking doesn't work across microservice boundaries.
This presents 2 problems.
First is that it's a hell of a lot of work, even though most services only do basic things and don't care about the new data. But they still copy the data, save it, ... Each of them needs logic to deal with missing data (historical data, for instance), and the copy from one JSON dict to another needs to be implemented. And since, as you'll find, every microservice has its own methods for dealing with said datatype, you'll be checking up to 8 libraries for marshalling problems.
Second is testing. Any error you make won't come up unless you're testing several of those services simultaneously. If you have just a bit more complexity in your system, those problems won't come out until the massive full-system integration test. You know, the one you can't do yourself, and even your whole department can't do. Oh-oh. That's an awfully long feedback loop for finding those 3 dozen places where you forgot to copy that field across.
But, as you know, many of these problems are either solved or non-existent in Big-Monolithic-App-land.
So -- take the things that worked there and use them. Like I said, a common set of shared source code that handles all persistence means all apps can talk to each other using the same code. Changes don't break the chain. Stuff can be fixed in one spot. And so on.
The orchestration and testing pieces deserve special attention. I think you're going to end up with as many folks writing test/monitor/break code as you have writing microservices. And that's probably a good thing. But you need to plan for that.
As far as the hype cycle, I've seen this over and over again. As far as I can tell, the driver is over-specialization of developers. Some new buzzword comes out, people teach and train around that, and suddenly you've got somebody called a "DBA" that can't write a web service. So then you need a "Front-end" guy, and so on. This industry is constantly labeling things, over-developing them, creating work silos that lead to poor performance, then going back and relabeling things again. There's some magic number of developers where you need specialists, but it's not 10, or even 40. The longer you put off creating silos, the better the entire effort is.
EDIT: In fact, I'll just say it: if you want to swim in the ocean of nirvana that is microservices, use pure FP and share all the source code that involves persistence.
That said, most of human history has involved specialization, modularity, and abstraction driving greater productivity.
So on one hand you have work-silos; on the other you have the power of modularity. I think the difference with software is that most of the productivity constraints happen at the interfaces. Teams need people who can integrate. They can be generalists, or specialists in a few areas.
Earlier in your post you suggest:
"But, as you know, many of these problems are either solved or non-existent in Big-Monolithic-App-land.
So -- take the things that worked there and use them. Like I said, a common set of shared source code that handles all persistence means all apps can talk to each other using the same code. Changes don't break the chain. Stuff can be fixed in one spot. And so on."
IME, multiple teams using the same library can be helpful (if voluntary) or a complete disaster (if forced). The incentives of the library maintainers are not always those of every team.
Up until now I've been operating under the assumption that "payments" separated out from the app is a microservice, and that it didn't need to be subdivided further. Your comment seems to imply this is still too monolithic for the microservice buzzword?
Could you provide an example microservice payments architecture?
I'm confused. I assume http://docs.mongodb.org/manual/indexes/ doesn't cover what you mean by indexes?
The other issue is joins across record types: sometimes joins are supported, sometimes not.
>2) ".... there's a reason people avoid crossing the network, and the microservice approach essentially comes close to crossing the network every single time you cross module boundaries."
We've been trying as an industry to cross the network efficiently for decades. Ultimately it comes down to team structure. Microservices presume it's easier to scale an organization by letting teams/modules run and evolve independently from other teams/modules, communicating over a network interface, than by "stopping the world" to integrate a monolith or "enforcing a standard across the company" beyond the subset of published language required to make communication work.
Whether that's a reasonable tradeoff generally depends on the performance needs between the modules.
>3) ".... Every single one of them needs to be sufficiently well-designed to operate like an individual web server. So (d)dos-resistance, fairness, anti-slowloris, anti-starvation, autoscaling, resource exhaustion (like maximum number of file descriptors ...) correct sharding, access control (not too tight, not too loose, ... correct caching of security credentials, ...), and load testing, you've thought of all that for that "really small and dumb" service that basically copies information across the payment/non-payment firewall right ? If you haven't ... prepare to be surprised."
I agree. But there are plenty of examples of how to bake that in to each service for reasonable cost - the Netflix OSS services, for example, the emerging platforms like CloudFoundry that give you the resource isolation + access control + scaling + etc. There's no excuse to fail to analyze what's out there and pick an appropriate set to cover these areas.
4) "Last time I consulted at a bank they had systems interoperating using every interchange format from fixed-width fields (old cobol code), 5 different kinds of xml, and of course json and a dozen binary formats. Microservices can't work in such an environment."
Okay, I don't really understand this one. Are you suggesting that every microservice should be able to parse all these different formats and semantics? If I needed a microservice that spoke several different languages, I'd probably include some kind of integration library for that (say, Spring Integration).
If I wanted to incorporate legacy systems into a new micro service app overall, I'd probably form a micro service that wraps it in a bubble, depending on the circumstances. e.g. http://domainlanguage.com/newsletter/2011-06/
If you look at how Netflix migrated from Oracle / SimpleDB to Cassandra, this latter approach is how they did it (though it was more of a synchronization service than just a bubble, since it was intended to shut off the old system eventually).
5) I agree, the tools for testing and test data replication in the enterprise aren't up to what Netflix pulls off - they rely on the broad facilities of Cassandra and S3 to be able to quickly replicate test data to a cluster.
However, do you have a link for where Netflix says they have 3 teams doing this work? I think this may be open to misinterpretation, because my understanding is that their development teams basically deploy into production - they don't do full integration tests, the canary deploys basically catch boneheaded changes - and there's certainly a lot of A/B testing.
There are testing teams, but they aren't the usual sort: they're building processes to do device testing, "failure injection testing", introducing chaos into the production system. It's just a very different approach to the usual "stop the world and test everything" we see in the enterprise.
"So we simply need an extra image field going through the payment system. Oh-oh. The payment system is 8 microservices, and therefore around 64 interfaces to the rest of the system. Let's be generous ... only about 30 need to be redesigned. There is nothing to help. Refactoring, or even type checking doesn't work across microservice boundaries."
This strikes me as hyperbole. If you take the adage of "team = microservice", you're implying 8 teams manage the payment system. That doesn't smell right.
Secondly, the biggest barrier to adding new fields in distributed systems, IME, is overly locked-down interfaces. If the interfaces allow for extensibility, adding a field, even across eight microservices, takes almost no time at all. I've seen this in a deep system with a 4-layer service chain. We had to add a field to the GUI for the next (2-week) iteration; this required prioritization and coordination across 5 teams, one of which was a COBOL/MQ system -- but it ultimately took maybe a few days, because the interfaces through the chain were designed to be extensible (and the copybook had room for an extra field).
> " Any error you make won't come up unless you're testing multiple of those services simultaneously. If you manage to have just a bit more complexity in your system, those problems won't come out until the massive full-system integration test. You know, the one you can't do yourself, and even your whole department can't do. Oh-oh. That's an awfully long feedback loop to find those 3 dozen places where you forgot to copy that field across."
If the lesson here is that "most organizations are mediocre", you'll get no argument here.
If the lesson is supposed to be "there's a better way", I'm curious what it is. Is compile-time type checking and refactoring of monolithic software that's managed by 8 different teams somehow better? I doubt it.
If anything the microservices approach makes it so that each team's service is evolving independently in production, and the system isn't going down or stopping while we all add this field independently across the teams. There is no "massive full system integration test" in the usual sense, there's continuous testing and failure injection. The automated test for adding this field will eventually pass, but there's no stopping the world to wait for it.
Taking your individual points in turn,
1) A microservice should wrap a business problem, not a technical problem. So "a key-value store" should never be a microservice. They should aim to be swan-like, making the difficult look easy; i.e. they promote encapsulation of difficult business problems.
2) It's a fundamental tenet of distributed-systems design that the parts of the system which need to be fast should be identified and their development treated accordingly. Microservices optimise for other constraints. (You can still get blindingly fast speed where you can suffer eventual consistency - which is about 90% of the time for most businesses.)
3) Continuous Delivery says, "When your deployments are difficult and slow it means you should do more of them." Most Monoliths handle all of the problems you described very badly. If your business can scale the processes for handling them (which it needs to) then it doesn't matter if you have one service providing them or many microservices - you can still handle it.
4) I don't understand what you are saying (particularly the OO bit). I think you're implying that microservices require a utopian ideal that isn't possible inside large enterprises. There is nothing wrong with evolving towards the ideal. DDD gives you "anti-corruption layers" to handle interfaces that cannot evolve fast enough for systems that depend on them.
5 i) This is a big challenge and I think that best practices are still evolving in this area. Contract testing (see Pact for one implementation) helps a lot. In general the challenges here drive businesses towards fully automatable IaaS - and this is also a good thing. Once you can spin up an entire environment on demand you can do (a limited number of) full end-to-end tests to smoke test your system.
In general this is a problem for everyone, though, regardless. It's just most multi-monolith businesses don't deal with the limited number of problems introduced across system boundaries and either fix them post-deploy or spend weeks in UAT system configuration hell.
Also it's something we're increasingly going to have to learn how to handle better as developers as we employ more SaaS in what we use. The problems are very similar.
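A hand-rolled illustration of the consumer-driven contract idea behind tools like Pact (note: this is not Pact's actual API, just the shape of the check): the consumer publishes the response shape it relies on, and the provider's test suite replays it against its own handler.

```python
# The consumer records the interaction it depends on.
contract = {
    "request": {"method": "GET", "path": "/users/42"},
    "response_shape": {"id": int, "email": str},
}

def provider_handler(method, path):
    # Stand-in for the real provider endpoint under test. Extra fields
    # are fine; the contract only pins down what the consumer reads.
    return {"id": 42, "email": "someone@example.com", "extra": "ignored"}

def verify(contract, handler):
    resp = handler(contract["request"]["method"], contract["request"]["path"])
    for field, typ in contract["response_shape"].items():
        if not isinstance(resp.get(field), typ):
            return False, field
    return True, None

ok, bad_field = verify(contract, provider_handler)
assert ok  # the provider still honours what the consumer depends on
```

Run in the provider's CI, this catches a breaking change before deployment, without spinning up the full environment.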
5 ii) If a data change requires changes to 30 microservices then something has gone very very wrong. There shouldn't be 30 microservices that need to be aware of the existence of a driving licence. It sounds like you're used to doing very explicit marshalling. Coast-to-Coast marshalling can help here. Most microservices should look at the data they are interested in and have strategies for ignoring the rest.
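The tolerant-reader / coast-to-coast style described here can be sketched in a few lines (field names are made up): the service touches only the fields it owns and forwards the payload intact, so a new upstream field like a licence scan passes straight through without a redesign.

```python
def process_payment(message):
    # The only fields this service owns:
    amount = message["amount"]
    currency = message.get("currency", "USD")
    # ... do the actual payment work with amount/currency here ...
    # Forward the original message untouched, rather than copying it
    # field by field into a new dict (which is what breaks when a
    # field is added upstream).
    return dict(message, status="processed")

old_msg = {"amount": 100, "currency": "EUR"}
new_msg = {"amount": 100, "currency": "EUR", "licence_scan": "base64..."}
assert process_payment(old_msg)["status"] == "processed"
assert process_payment(new_msg)["licence_scan"] == "base64..."  # survives
```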
With regard to new data / historical data: the elephant in the room that hasn't quite reached the broad industry acceptance of, say, Functional Programming is Event Sourcing. This is a great introduction to the concept by (naturally) Greg Young: https://www.youtube.com/watch?v=KXqrBySgX-s
This is a crucial point. 'We use microservices' does not say much unless you can describe how you design consistent and granular service interfaces. Otherwise you most probably just produce microservice spaghetti.
* who cares if you're deploying 20 services or 1 when you run your deploy script?
* who doesn't have an operationally focused dev team?
* who isn't already dealing with interfaces even if not micro services?
* since when do micro services have a monopoly on distributed architectures or asynchronicity?
I can't agree testing is harder either. Too much fluff in this article to read much value into it.
Neither does it mean that you can't have an actual Java interface exposed as a JSON endpoint.
Something along the lines of declaratively stating what a JSON endpoint should be like and having a tool automatically assert that the services conform to that interface doesn't look impossible.
So you have to type check at runtime. You can build tooling to make it easier to produce conforming software at each end, but you can't guarantee that someone won't just go ahead and write any old thing.
So in practice you still perform validation on each message.
Put another way: distributed systems turn all type errors into runtime errors. The guarantees of compile-time checking are severely weakened.
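A minimal illustration of that runtime checking (a real system would use JSON Schema or a similar spec; this just shows the shape of the problem):

```python
# Across a service boundary, "type checking" degenerates into
# validating each message as it arrives at runtime.
schema = {"user_id": int, "email": str, "active": bool}

def validate(payload, schema):
    errors = []
    for field, typ in schema.items():
        if field not in payload:
            errors.append(f"missing: {field}")
        elif not isinstance(payload[field], typ):
            errors.append(f"wrong type: {field}")
    return errors

assert validate({"user_id": 7, "email": "a@b.c", "active": True}, schema) == []
assert validate({"user_id": "7", "email": "a@b.c"}, schema) == \
       ["wrong type: user_id", "missing: active"]
```

The compiler would have caught both errors in-process; here they only surface when a bad message actually arrives.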
But the tooling is already here,no question.
The issue is,should it be "baked" into a language?
Moving towards containerized deployments helps at some levels and is more difficult at others. Having internalized Heroku-like abilities goes a long way too... it really depends.
Even looking at Lambda as a service layer seems pretty nice... the bigger issue to me seems to be the coordination layer for these services. You have upstarts like etcd, along with the likes of ZooKeeper and others. Just the same, it takes a lot of effort. On the same note, you may want to just keep some more logical service/worker boundaries in place and, so to speak, API all the things... and that doesn't even mention caching and related issues.
It depends on the environment... when you're trying to introduce half a dozen platform/technology changes in a small company, it's not easy in terms of training alone...
scroll to 20:40
This doesn't seem like an entirely fair comparison:
"It seems to me that all three of these options are sub-optimal as opposed to writing the piece of code once and making it available throughout the monolithic application."
There's a different scenario that could have played out with how to share a library between different services. You could have written the bulk of your application in the same language, like a monolithic application but split into several services. In that case you could create a library for your tax calculations and use it freely within your services.
For me, I split my application into a small number of services and, as much as possible, split things out into libraries to make reuse simple (more libraries, thinner applications).
Sometimes I use different languages, but when I do, I consider very carefully whether the (rather large) tradeoff that presents will be worth it in the long run for what I'm getting in the short term.
A question: when people talk about microservices, how small are they talking?
The final option is to share resources such as a tax calculating library between the services. This can be useful, but it won't always work in a polyglot environment and introduces coupling which may mean that services have to be released in parallel to maintain the implicit interface between them. This coupling essentially mitigates a lot of the benefits of Microservices approaches.
In the big app: 1) a single syntax error breaks everything; 2) simply loading the whole app takes a huge amount of RAM, and tests are very slow; 3) there is a gigantic dependency tree, since when you depend on a module you also depend on every one of that module's dependencies; 4) almost no one knows everything about the app; 5) it is impossible to split the company into separate services without fights over shared code and architecture decisions.
1) A single syntax error will _still_ break everything in a microservices architecture if the service in question is any essential service. In my experience, most services are essential. In a dynamically linked architecture, the failure occurs when you try to pull in the library. In a well-designed dynamically linked architecture, you can opt to not pull in optional components.
2) Only if you're an idiot and build everything monolithically. A well-designed non-services architecture has many independent and independently testable libraries. Integration testing does require the whole system, both with and without services. In this case, both resource usage and startup time are much lower without services.
3) Ditto for services, if you do integration work. Not so for independent libraries, if you do not.
4) Question of where you put the abstractions, not how you make them.
5) Again, question of where you put the abstractions, not how you make them.
The key reasons for services are isolation, failover, and independent deployability.
There are a bunch of programmers out there who were only taught one way to make abstractions (be that services, objects, or what-not), and assume everything else means abstractionless spaghetti code. That's simply not the case. Structured, functional, and other modes of programming have the same quality of abstractions as OO. Likewise for whether you link statically, dynamically, or over a services boundary.
Another important performance hit has to do with scheduling and multithreading: crossing IO boundaries (even on the same machine) coalesces all threads into one (and then multiplexes the work again on the other end).
How are microservices usually communicating that avoids this problem?
I reckon this is hardly ever done. Microservices, like many other abstractions, usually just proliferate and quickly become a maintenance nightmare.
If I'm right, then the microservice model has one major advantage over the monolithic app model: if things stop depending on a microservice, you can generally tell, because e.g. logs are empty or instance utilization is zero or whatever.
With components in a monolithic app, there are tools in most languages to find dead code, but I've not seen them used regularly in many teams (especially startups), and they are often less helpful in that they can tell if code is no longer referred to at all, but not if it's just in a code path that is no longer executed.
My gut feeling is that this leads to dead microservices being garbage collected more reliably than dead code.
The old saying "team = product" applies: if your business requires lots of parts independently evolving, you're going to find ways to make them cooperate at the boundaries, or you will be in a mess.
Companies like Amazon have made this approach work at speed and scale. Others might fail out of incompetence. Mostly I think people are just looking for ways to operate at better speed and scale than the current enterprise practices.
This is usually introduced as "Conway's Law". We think about it a lot at Pivotal, because we are frequently starting, shuffling, merging and splitting teams while we work on Cloud Foundry.
- Tests for all response codes
- Tests for all failure modes (timeout, service down)
- Load tests for both services
- Two sets of deployments to test
- Two healthchecks to test
Seems harder to test, not easier.
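To make the failure-mode items above concrete, here is one way the timeout and service-down tests might look. This is only a sketch: the names (`NewsletterClient`, `fetch_subscribers`, the injected `transport`) are hypothetical, and the "degrade to an empty list" policy is an assumption, not the only reasonable one.

```python
# Hypothetical failure-mode tests for a service-to-service call.
# NewsletterClient wraps a downstream user-service call; the transport
# is injected so tests can fake timeouts and outages.
from unittest import mock


class Timeout(Exception):
    """Raised by the transport when the downstream call times out."""


class ServiceDown(Exception):
    """Raised by the transport when the downstream service is unreachable."""


class NewsletterClient:
    def __init__(self, transport):
        self.transport = transport  # injected for testability

    def fetch_subscribers(self):
        try:
            return self.transport.get("/subscribers", timeout=2.0)
        except (Timeout, ServiceDown):
            # Degrade gracefully: better to send to no one than to crash.
            return []


def test_timeout_degrades_gracefully():
    transport = mock.Mock()
    transport.get.side_effect = Timeout()
    assert NewsletterClient(transport).fetch_subscribers() == []


def test_service_down_degrades_gracefully():
    transport = mock.Mock()
    transport.get.side_effect = ServiceDown()
    assert NewsletterClient(transport).fetch_subscribers() == []
```

And that's just the client side of one call; each service also needs its own load tests, deployment pipeline, and healthcheck, exactly as listed above.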
And yeah, I agree that it's absolutely harder to test. (Integration tests especially.) But you gain a lot for the trouble: independent deployability, scalability, dynamic traffic shifting, easy service boundaries and ownership (data stores, state, downstream deps, etc.). For a large company, this makes a lot of sense.
Edit: microservices are not the easy path.
The newsletter service doesn't need to load email libraries, handle the protocol, failures, bounces, retries. It doesn't need to maintain state on this. Nor does it necessarily need to know if an email address is opted out. It can concern itself with building newsletter templates, A/B testing, feedback/analytics, scheduling, triggers, etc.
Consider how different teams may accumulate specialized knowledge. Now the boundaries of this knowledge are finely delineated. Service responsibilities are clear.
As the company and app grows, you can pull stuff out of the newsletter app. You may want analytics and A/B testing as a service.
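The division of labor between the newsletter service and an email gateway can be sketched roughly like this. Everything here is illustrative (the `EmailGateway` class and key format are made up); the point is the idempotency-key dedupe, which lets the newsletter service retry a send without risking double delivery.

```python
# Illustrative sketch of an email gateway microservice's dedupe logic.
# The gateway owns retries, bounces, and protocol details; callers just
# supply an idempotency key so retried requests are safe to replay.
class EmailGateway:
    def __init__(self):
        self._sent = {}  # idempotency_key -> prior result

    def send(self, idempotency_key, to, body):
        if idempotency_key in self._sent:
            # Replay the earlier result instead of enqueuing a second email.
            return self._sent[idempotency_key]
        result = {"status": "queued", "to": to}
        self._sent[idempotency_key] = result
        return result


gateway = EmailGateway()
key = "newsletter-42:user@example.com"  # hypothetical key scheme
first = gateway.send(key, "user@example.com", "Hello")
retry = gateway.send(key, "user@example.com", "Hello")
assert first is retry  # the retry did not cause a duplicate send
```

In a real deployment the dedupe table would live in the gateway's own data store, but the contract to the caller is the same: idempotent requests aid client reasoning.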
Consider all of the possible service boundaries. If you can group 4-10 tables in a relational database into a cluster, that might constitute a valid service boundary. Does it model a particular business function / flow that can be wrapped up and exposed as a service? Will that grow with company and with changing needs?
I don't know how this really helps with RAM. If you're worried about that, you probably shouldn't be considering a microservice. There are operational costs that trump hardware costs.
Further, debugging problems becomes more difficult, not less. If a user complains that their newsletters aren't getting sent, you have to track the failure back to the failing service (and the network between them can fail, too), which means involving people from all of the teams in the dependency tree. Naturally, unless the failing service is obviously failing, pinning down responsibility to fix the problem becomes a problem itself, because service responsibilities are never that clear.
Microservices do offer more flexibility, while maintaining hard interfaces, and that is worth something. But I don't see how any of the advantages mentioned in the comment I replied to fit in.
Still a big proponent of SOA generally, though. I think the granularity when defining "service" is ultimately going to vary from application to application, however, leaving the sweet spot slightly larger than e.g. Amazon's microservices.
SOA == microservices, basically. Arguably one could say that Microservices = SOA minus ESBs, which is what necessitated the new term. IOW, don't hide your mess of dependencies in a black box, expose and manage them where they're needed.
The granularity of the service is generally determined by the size and responsibility of the maintaining team, and will vary from company to company.
With regards to tooling, I'm curious what you're referring to. Some selection of tools like Splunk, AppDynamics, New Relic, or Boundary certainly can handle both applications and microservice chains, no?
I maintain that it's not a panacea, and that complexity is pulled into other places (operations) where we perhaps don't have equally effective tools at this point. I'd also argue that it's difficult to understand the correct level of granularity (and potentially expensive to fix errors in this decision). But it's not the problem I made it out to be.
For example, breaking changes are breaking changes in either system. It's not an issue of architecture style; it's a matter of business needs changing, and thus protocols changing. A change in protocol breaks existing clients of that protocol.
We understand this intuitively when you are talking about a de-facto protocol like HTTP, but we seem to think our own programs are somehow different. They aren't.
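A tiny sketch of that point, using made-up JSON messages: additive changes can be absorbed by a tolerant reader, but a renamed field breaks every existing consumer, whether that consumer is another service or another module in the same binary.

```python
# Why protocol changes bite regardless of architecture style.
# A "tolerant reader" ignores unknown fields and defaults missing
# optional ones, so additive changes are compatible; renames are not.
import json

V1_MESSAGE = json.dumps({"user": "alice", "email": "a@example.com"})
V2_MESSAGE = json.dumps({"user": "alice", "email": "a@example.com",
                         "locale": "en"})  # additive change: compatible


def tolerant_reader(raw):
    msg = json.loads(raw)
    return msg["user"], msg.get("locale", "en")  # default missing optionals


# Both versions parse fine with the tolerant reader...
assert tolerant_reader(V1_MESSAGE) == ("alice", "en")
assert tolerant_reader(V2_MESSAGE) == ("alice", "en")

# ...but renaming a required field (a breaking change) fails old readers.
V3_MESSAGE = json.dumps({"username": "alice"})  # "user" was renamed
broke = False
try:
    tolerant_reader(V3_MESSAGE)
except KeyError:
    broke = True
assert broke
```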
Architecture is about taking the essential complexity of a problem and creating components and protocols to solve a problem in a way that makes the most sense for the team trying to solve it. Monolithic apps or microservices then should be a question more of what your team is going to execute well as much as it is a question of which structure more elegantly solves the problem at hand.
Don't get me wrong, I love microservices and will try to use them in all my future work, but I think where people often go wrong is that they overcommit without realising the downsides. I often see people casually using AMQP queues for pretty much everything; it's only when they need the worker service to talk back to the originating service that they realise they've made a wrong architectural decision.
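For what it's worth, the usual escape hatch when a worker must talk back is the reply-to / correlation-id RPC pattern layered on top of the queues. This sketch simulates it with in-memory queues rather than a real AMQP broker, and all the names are illustrative; it shows the shape of the pattern, not a production client.

```python
# Reply-to / correlation-id RPC over queues, simulated in memory.
# In real AMQP the reply queue and correlation id travel as message
# properties; here a plain dict stands in for the message.
import queue
import uuid

requests = queue.Queue()  # stands in for the worker's request queue


def worker_step():
    """Worker side: consume one request, publish the result to its
    reply queue, echoing the correlation id back."""
    msg = requests.get()
    msg["reply_to"].put({"correlation_id": msg["correlation_id"],
                         "result": msg["n"] * 2})


def call(n):
    """Originating service: publish a request with a private reply
    queue and a fresh correlation id, then block on the reply."""
    reply_to = queue.Queue()
    corr_id = str(uuid.uuid4())
    requests.put({"n": n, "correlation_id": corr_id, "reply_to": reply_to})
    worker_step()  # in production the worker runs in another process
    reply = reply_to.get()
    assert reply["correlation_id"] == corr_id  # match reply to request
    return reply["result"]


assert call(21) == 42
```

If you find yourself building this by hand in several places, that's often the signal that a plain RPC mechanism would have been the better starting point.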
It was a complete joke of a system.