Or am I missing something?
In my experience with microservices, if you don't put much effort into following best practices, you end up with components coupled at the interface level at worst, but you can still deploy independently and don't have to worry about service B subscribing to some internal event in A, or that kind of thing.
In a monolith, for the same amount of effort, after some time you usually naturally end up with all the kinds of coupling mentioned here, which is very hard to then get out of.
A monolithic system's architect would have to make an explicit choice at some point to reject or restrict coupling, e.g. by building the monolith on an actor-model language/framework, where components become "location neutral" between in-process and out-of-process. That makes the "monolithicness" of the system just a deployment choice (deploying all the components as one runtime process) rather than a development paradigm.
Service-oriented code, on the other hand, has low coupling by default, until coupling is introduced via explicit choices like making synchronous RPC calls between components.
Sadly, people refactoring a monolith into SOA code will often introduce such coupling, as it's seemingly the cleanest 1:1 "mapping" of the monolith's code into SOA land. But the point of refactoring into SOA isn't to do a 1:1 semantic recreation of your monolith, including its inter-component dependencies; by doing so, you just produce a "distributed monolith emulator." A proper SOA refactoring should carry over the semantics of the individual components, but not the semantics of their interdependencies.
This is because coupling is a design decision rather than an implementation decision. "Make a synchronous RPC call between components" is still the "easy" and "obvious" way to get things done.
It's sometimes easier to spot high coupling when looking at code in a microservice world, but it still requires you to know what you're looking for and know how to avoid it, and not give in to the temptation.
Last time I worked with people trying to split up a monolith, I pleaded with them to understand the coupling first, and to figure out what the system would look like without that coupling, before just moving all the code around and replacing direct method calls with ... other method calls that made synchronous RPC network calls.
Error rate went up.
It all depends on relative friction. Systems designed from the ground up to "think service-oriented" will use languages/runtimes/frameworks that make low-coupling options easier and more idiomatic than synchronous RPC calls.
For example, if your SOA is built on CQRS/ES, then adding an Event to the event store, for another Command to react to, should be easier than making an RPC call.
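To make the CQRS/ES point concrete, here's a minimal in-process sketch (all names hypothetical): component A appends an event, component B reacts to it later, and neither makes a synchronous call into the other.

```python
from collections import defaultdict

class EventStore:
    """Toy event store: append-only log plus subscribers."""
    def __init__(self):
        self.events = []
        self.handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.handlers[event_type].append(handler)

    def append(self, event_type, payload):
        self.events.append((event_type, payload))
        # In a real system delivery would be asynchronous; we dispatch
        # inline here only to show the decoupled shape.
        for handler in self.handlers[event_type]:
            handler(payload)

store = EventStore()
shipped = []

# Component B reacts to A's event without A knowing B exists.
store.subscribe("OrderPlaced", lambda order: shipped.append(order["id"]))

# Component A just records what happened -- no RPC to B.
store.append("OrderPlaced", {"id": 42})
```

Note that A's side of the code would be identical whether B, or ten other components, or nobody at all consumes the event; that is the low-coupling default the comment describes.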
You can carve out A as a microservice with a clean general interface, and write an adapter layer to translate calls from your old messy interface to your new one, and B, C, D call that. Then start refactoring B, C, D one by one depending on your priorities.
That way new service X that needs to use A, can directly start using the clean general API even if B, C, D are not refactored yet.
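A rough sketch of that adapter approach (service names and methods are hypothetical): the new clean interface goes in front, and a thin adapter translates the old messy calls so B, C, D keep working while they're refactored one by one.

```python
class CleanServiceA:
    """The carved-out microservice with a clean, general interface."""
    def render(self, document_id: str, fmt: str) -> str:
        return f"rendered {document_id} as {fmt}"

class LegacyAdapterA:
    """Translates the old messy interface that B, C, D still call
    into the new clean API, so they can migrate on their own schedule."""
    def __init__(self, service: CleanServiceA):
        self.service = service

    # Old interface: positional argument, output format hard-coded.
    def do_render_v1(self, doc):
        return self.service.render(document_id=doc, fmt="pdf")

service = CleanServiceA()
adapter = LegacyAdapterA(service)

print(adapter.do_render_v1("doc-7"))    # legacy callers (B, C, D)
print(service.render("doc-7", "html"))  # new service X uses the clean API directly
```

The adapter is deliberately disposable: each time one of B, C, D is refactored onto the clean API, the adapter shrinks, and it can be deleted when the last legacy caller is gone.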
The more the components in the pipeline know about each other, the easier it is to code and the FASTER it will be. You could say any performance gain comes from exploiting these facts, and the more "abstract/invisible" everything is, the harder that becomes.
We need a definition of what it means for two services to communicate while being “uncoupled”. I think “are versioned independently” or “don’t have to be upgraded in parallel” meets the bar.
Yes... exactly. More things are monoliths than we want to admit.
> We need a definition of what it means for two services to communicate while being “uncoupled”
Take a dead simple example: an application server that talks to an in-house video encoding micro-service
Even if that video encoding service only has a single really well designed endpoint, there's STILL coupling between the application server and the video encoding service.
Just because we've replaced a method call with an HTTP request doesn't mean things aren't coupled anymore.
Sure -- you may be able to deploy certain changes to your video encoding service without changing your application service. However, you need to be keenly aware of what changes are compatible with existing application servers and which are not and that adds complexity and cognitive load. Maybe it's worth it in some cases. In many cases, it's not.
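As a toy illustration of that compatibility burden (field names are made up): deployed application servers parse whatever the encoding service returns, so some changes are safe and others are not, and someone has to track which is which.

```python
def old_client_parse(response: dict) -> str:
    """What an already-deployed application server does with the
    video encoding service's response: reads only fields it knows."""
    return response["output_url"]

# Compatible change: the service ADDS a field; old clients ignore it.
v2_response = {"output_url": "s3://bucket/video.mp4", "duration_ms": 9000}
print(old_client_parse(v2_response))  # still works

# Incompatible change: the service RENAMES a field; old clients break.
v3_response = {"url": "s3://bucket/video.mp4"}
try:
    old_client_parse(v3_response)
except KeyError:
    print("existing application servers break")
```

A method call would have failed at compile time or in tests; the HTTP version fails in production, at deploy-skew time, which is exactly the cognitive load being described.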
Granted, this is not something to worry about with S3 or SQS.
> If we go with your definition then my app is a monolith with S3 and SQS
okay sure, I agree that calling your app a monolith with S3 and SQS is ridiculous.
The point is that we all may think that we're writing another S3 or SQS when we write our own micro-services. However, in practice maintaining a stable, backwards compatible, public API like that is quite costly and we usually end up re-implementing function calls as HTTP requests and then calling it "loosely coupled".
Your definition of uncoupled makes some sense. It's a good starting place at the very least.
Sure, you may be able to call it "looser" coupling, but you may also be able to call it a rat's nest of complexity!
If you break out a separate service from your monolith and it’s not independently useful to things other than your app then you probably should put it back.
I had the misfortune to work on a project where this exact thinking was used to carve up a relatively simple system into about a dozen microservices. Turns out referential integrity and transactions are really, really useful. For example, most tasks required calls to multiple microservices, and if something broke in one of the microservices involved, it would often leave the others in an inconsistent state, and retrying the failed task became almost impossible. The project was scrapped after 3 years of development.
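The failure mode described above can be sketched in a few lines (services and methods are hypothetical): a task spans two services, the second call fails, and there is no cross-service transaction to roll back the first.

```python
class InventoryService:
    def __init__(self):
        self.stock = {"widget": 1}
    def reserve(self, item):
        # Commits immediately in this service's own database.
        self.stock[item] -= 1

class BillingService:
    def charge(self, amount):
        raise RuntimeError("payment provider timeout")

inventory = InventoryService()
billing = BillingService()

try:
    inventory.reserve("widget")  # succeeds and commits
    billing.charge(9.99)         # fails
except RuntimeError:
    pass

# The stock is now reserved but was never billed: the inconsistent
# state the comment describes. In a single database, one transaction
# would have rolled both steps back together.
print(inventory.stock["widget"])
```

Patterns like sagas with compensating actions exist to handle this, but they are application code you must write and test, whereas a monolith's database gives you atomicity for free.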
Sometimes you gain real benefits from this approach (e.g. maybe a component needs to be scaled independently), but I find that very commonly, people write tightly coupled micro-services and end up with a distributed monolith.
However, scaling a monolith (as someone who does this for their day-job) is significantly harder than scaling independent services. This is mainly because performance issues in one function can hog resources that other functions share. This is particularly the case when you have a single, central database where one poorly optimized query can cause performance for all users to degrade severely.
Being able to scale individual components/services will be the most efficient solution, but if there doesn't happen to be a particular bottleneck that bogs down everything else (besides the database), it seems like the traditional monolithic load balancing approach may not be so bad.
Especially compared to the effort of trying to split an existing monolith into microservices. And if there are other bottlenecks, you can focus your effort on improving or isolating those parts.
(I only have intermediate experience in this area and certainly way less than you do; definitely not trying to claim anything authoritative.)
It only makes sense to write services (note, "services", not "micro-services") when it is inevitable and obvious that your software is hitting scaling issues related to some subsystem, or when you have systems that need to be independently reliable (i.e. ATMs should work even when the bank's website is down).
I would've had to do all the same work to extract it to microservices, but also would've had to do a lot more on top. Had it not been a legacy system, probably worth it - but as it was, leaving it as a monolith querying a now-split-up set of DBs was cheaper and faster.
This is basically how everything should work. There's no need to rush things if you don't need it now, especially since some who do convert are disappointed with the results (it's not the microservices themselves, it's the specific migration planning, design, and maintenance).
> So maybe there’s no point in keep discussing microservices vs monoliths.
Wikimedia has monoliths for text-based systems and microservices for (some) media-based systems, and it makes sense: transcoding is an unpredictable workload (for them; I imagine this is even more common for YouTube), while database operations (at Wikipedia's scale) are very predictable and don't warrant the additional complexity. Even in their planning for multi-datacenter operations (https://www.mediawiki.org/wiki/Wikimedia_Performance_Team/Ac...), they are more concerned with disaster recovery and slowness for logged-in users than with quick scalability.
I'm really not sure why this is so difficult to grasp for so many people in IT.
I think it stems from a desire to not have to think too hard about how to solve a problem since "someone else has already solved this."
... or maybe an inability to do so.
yeah, there's a place for monoliths, and developers who are willing to ignore the industry's ludicrous fads and FAANG-chasing have the advantage.
Sure, it probably gets easier when you're booting your third or fourth microservice, but there's a lot of overhead.
It dramatically simplifies so many aspects of development, which means you can have developers working on more impactful features. You also don't have to worry about creating a meta-structure for keeping things consistent across all your microservices, which is one of the biggest benefits I didn't see mentioned in this article.
And in many systems with interdependent logic (which ours is), avoiding duplicated data and avoiding microservices calling other microservices in loops become incredibly difficult.
I highly recommend the monolith for anyone who is not trying to cope with epic scaling issues.
All hail the monolith!
Then, as it grows, decoupling at the edges starts to make sense.
And these micro services only make sense when there is a team responsible for each one.
Otherwise the whole thing is like a game of chess - you will forget to make a move at some point.
Its alignment with the company structure is.
The same way Conway's law tells us that company structure and communication should be in sync, the architecture should match the teams.
As a rule of thumb, a good architecture is one that minimizes communication between the different teams.
First, "microservice" is a terrible name (we should have just stuck with SOA); it leads to all kinds of agile consultant snake-oil salesmen claiming some ideal size for a service. That's bullshit. Services should be defined by their interfaces, full stop. If you can't come up with a stable interface between two services such that the interface changes orders of magnitude less often than the code within each service, or if you find yourselves always having to deploy the services in pairs due to a leaky abstraction on the interface, then they probably shouldn't be two services.
Second, SOA is primarily a tool for scaling teams. Yes there are some tangential benefits in terms of code base size, CI build time, etc, but those are false economies if you have a small team that has to deal with the overhead of many services.
Modern web-scale architectures are really about factoring things to leverage specialists to scale very large tech platforms.
Third, and perhaps most importantly. In any rapidly growing business you need to evolve quickly. You should not expect to design a perfect architecture day one, you should plan to evolve the architecture continuously, so that every two orders of magnitude growth you look up and realize you've replaced most of what you've previously written one way or another. Small startups that focus on "microservices" before they are anywhere near 100 engineers tend to die before traction.
I have the inverse experience: management wants to brag about tech and therefore force ill-suited tech onto the developers.
I've had managers who wanted a CDN because that's what all professionals do, without taking a moment to think about the risk added by putting more components into the system (or whether it adds any benefit at all).
You may now have separate asset domains, interactions of cache expiry headers across different servers, custom header forwarding through your front-ends, new separate asset packaging and deployment steps during shipping, and a slew of other "new stuff" to think about during every deploy, that can all break, and that you ought to have multiple people on the team really understand to use properly, or to debug if it's not working.
If you have <100 users, growing to ~500 by the year's end, you maybe don't need to spend time on any of that stuff yet.
I'm hoping that the purported CDN is not for an internal, company-only application.
Please ELI5 how I get better code reuseability in microservices than in my monolith with separated concerns into libraries and helper modules.
If you have an authenticate() function in an auth helper library, you can move that around to new web apps to help users authenticate with different parts of your site(s).
Your authenticate() function for a microservice is probably going to be part of a broader "microservice api library" that gets reused, but I can't imagine reusing the microservice code itself - you're supposed to just spin up a new one.
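To spell out the library-reuse point (everything here is a hypothetical sketch, including the hashing scheme): the same `authenticate()` helper is imported verbatim by every app in the monolith, with no network, no client library, and no deployment of anything new.

```python
import hashlib

def authenticate(username: str, password: str, user_db: dict) -> bool:
    """Shared auth helper that any web app in the codebase can import."""
    stored = user_db.get(username)
    candidate = hashlib.sha256(password.encode()).hexdigest()
    return stored is not None and stored == candidate

# A toy user store; in practice this would be the shared database.
user_db = {"alice": hashlib.sha256(b"s3cret").hexdigest()}

# App 1 and App 2 reuse the exact same code, in-process.
print(authenticate("alice", "s3cret", user_db))  # True
print(authenticate("alice", "wrong", user_db))   # False
```

With an auth microservice, the reusable artifact is instead an HTTP client wrapper around these semantics, and the service itself is something you operate rather than something you import.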
> Change cycles are tied together - a change made to a small part of the application, requires the entire monolith to be rebuilt and deployed.
But this is not true. I worked on a C# "monolith" around 2010, and we used the MEF framework to build pluggable .dll files that you could just "drop" into the deployment folder. As long as you stuck to the same interfaces (through a shared interfaces project), you could build and ship individual parts of the application in separate teams. Even exceptions couldn't harm the full system (when they were properly caught in the host), and segfaults shouldn't happen in a managed runtime (but yes, they did, mostly through P/Invokes).
I always liked the good old "Winamp model", where you'd just drop in some DLLs and get new visualization plugins enabled, each one different from the next.
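That drop-a-file-in-a-folder model translates to most runtimes; here's a minimal Python sketch of the same idea (the `render()` contract and plugin name are made up): the host scans a directory and loads any module that implements the agreed-on interface.

```python
import importlib.util
import pathlib
import tempfile

def load_plugins(plugin_dir):
    """Load every .py file in plugin_dir that exposes the shared
    interface contract (here: a render() function)."""
    plugins = []
    for path in sorted(pathlib.Path(plugin_dir).glob("*.py")):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        if hasattr(module, "render"):
            plugins.append(module)
    return plugins

# Simulate "dropping" a new plugin into the folder.
plugin_dir = tempfile.mkdtemp()
(pathlib.Path(plugin_dir) / "spiral.py").write_text(
    "def render():\n    return 'spiral visualization'\n"
)

for plugin in load_plugins(plugin_dir):
    print(plugin.render())
```

As with MEF, the only coupling is the shared interface: the host never links against a specific plugin, so plugins can be built and shipped independently.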
> Is there any place for monoliths in 2021?
maybe one year we'll get our answer!
You can just as easily build a highly modular and decoupled monolith as you can a tightly coupled and fragile microservice. The same point holds true for many of the other pro/cons the author brought up.
I think it ultimately depends on the tools being used and the team building them, because there are tradeoffs to going each way.
Way easier to debug