I really do not understand the debate on monoliths and microservices anymore.
Context matters so much. Should you have absolutely everything in one system? No. And I think no one believes that anymore.
Should you split your system into as many pieces as possible?
No, of course not.
You are probably storing files in one place and data in a database in another place. And most likely neither is on the web server receiving the requests. Or maybe you are, because context matters and I'm wrong in your particular case. Could be.
My advice would be to split your application up into enough pieces so that:
1. Your engineers feel like they have control over the code and are not afraid of making changes.
2. If you get uptime issues, you can move the heavy/unreliable services over to a new node. Don't let one service/endpoint take down other endpoints.
Think and act according to the problems you have in front of you. Don't go extreme in any direction. Context, context, context.
Like any other well-reasoned, balanced, and pragmatic POV, the problem with your boring approach is that you can’t wrap it in a clickbaitable blog post, and no flame war can emerge from it.
In no public debate do you get the participation of people who think everyone involved is a lunatic. We presume there are two extremes in any decision making process and ignore the third option, no matter how reasonable.
I agree completely. The current FAA outage is a good example of this. What if the system responsible for NOTAMs were the same system responsible for sending outage messages? They weren’t, so status could be communicated while mitigation was done on a system with unrelated concerns.
At the same time, a few-person startup should probably focus on whatever allows them to deliver the fastest. I’ve seen that work with relatively monolithic systems and with SOA; tooling choice makes a huge impact.
We had a major cockup at work a few years ago caused by bad organizational choices and Conway’s Law.
We had a disk array go sideways, which is when we learned that some dumb motherfucker had put our wiki on the same SAN as production traffic. You know, the wiki where you keep all your runbooks for solving production issues? Everyone was furious, and that team lost some prestige that day. How dumb do you have to be?
Seems a little harsh. We all overlook things like this. Things like storage are so reliable we expect them to always be available. When you lay it out like you did, it does sound silly.
Two is one and one is none. This is not hard. But it's costly. The main problem with security and reliability is that they are expensive. Those operating in high-margin, highly specced spaces need to do it right or they lose. Everyone else is cargo-culting, box-checking for stakeholders, or selling snake oil. There is a fundamental tension between optimizing for cost and doing things right, and short-term gains will always come at the cost of mounting operational risks.
“mounting operational risks” is my main complaint with the “there is no maintenance” thread that was here the other day.
Maintenance is looking at all of the probability < 10^-4 issues that are just waiting for you to roll the dice enough times to eventually lose to the birthday problem. Every day you’re lowering the odds that tomorrow will be the day everything burns, because doing nothing is just a waiting game.
Yeah, that was such a 'social media hot take' kind of thing I didn't even feel like getting into that conversation. Misaligned interests make it a good idea to get into a project, slash things to the bone, cut a fat bonus check due to savings achieved, and cut and run. That's how we got the supply chain mess of 2021...
Our operations team didn’t have visibility because some other operations team told us not to worry our pretty little heads about it. You know how you can tell when someone is so mad that they stop talking? Two of our people hit that level. That was not a comfortable room to be in.
This experience ended up being the beginning of the end for the anti-cloud element at the company. Which is too bad because I like having people who understand the physics of our architecture. Saves me from doing all sorts of stupid things myself.
Yeah. For our ops documentation I also take more of a bottom-up approach to keeping it stable, which has resulted in the company having two documentation standards.
Pretty much all of our docs, and everything concerning more than one team, is stored in an instance of our own software running on the software platform. And it works well.
However, the core operational teams document their core operational knowledge in different git-based systems, all of which are backed up into the archive as well. This way, if we really lose access to all documentation, we have probably lost all of a team's workstations to some incident, as well as a repository host, as well as two archive hosts in different parts of Europe. At that point, the disaster recovery plan is a bar, to be honest.
You're right, and I think most engineers know it. The debate persists as a proxy for a separate debate:
E > I want to reorganize this because it's ugly and hard to work with as it is
M > I want you focusing on this list of features/bugs and not introducing risk by making changes for other reasons
Something doesn't feel right about:
E > I'm the expert, you have to take my word that this change is necessary
So, depending on whether E wants to join or separate, they put on their I-hate/heart-microservices hat and rehash whichever side of the tired old debate serves them.
People who agree or disagree dust off their I-hate/heart-microservices hats, and we go to town for a while.
We're not actually arguing for micro vs macro, we're talking about the context specific details, we just dress it up in those terms because M is in this meeting and M isn't close enough to the code to keep up with a conversation about the details.
If not enough energy gets spent in this process, whoever lost the debate goes and writes a blog post about why they were right. Except nobody will read a blog post about their specific codebase, so it ends up being about whether they heart/hate microservices.
When discussing turning our monolith into microservices at a previous job:
My Boss: "I'm convinced this is the right architecture."
Also my boss: "Now how do we break this app up?"
He was certain we needed to shatter our monolith into lots of little pieces, but had no clear vision as to what those pieces would be. From my point of view, the "architecture" he was certain of wasn't an architecture at all. It was just a general notion of doing what he thought everyone else was doing without considering anything about our app. Total cargo cult mentality.
Isn't the problem that nobody has ever developed a complex system using modern programming languages where
"engineers feel like they have control over the code and are not afraid of making changes."
And that trying to have it as a goal only results in the project fragmenting into a messy ball of micro-components (services/classes/libraries) with such complex interdependencies that nobody feels they have control over the code or is unafraid to change it?
It's almost as if there is no real solution to complexity that avoids having to deal directly with the complexity inherent to a problem domain, and that means developing documentation and testing that allows people to touch/change scary systems.
Agreed. I very much like my workplace's approach at the moment, which I jokingly call "medium-sized services, mostly".
Due to acquisitions, we have 6 or 7 big messy monoliths and, actually, 3-5 infrastructure stacks. This in turn means we have something like 12 different user management systems (because each monolith contains a few layers of legacy user management; it'd be boring otherwise), 4-5 different file and image stores (some products just don't have one), and 4-5 different search implementations. It's a bit of a messy zoo.
Our path out of that mess is to either extract or re-implement functionality common to the products as smaller-scale, standalone services - for example, so you end up with centralized user management, or a centralized attachment system. And these things certainly aren't micro: user management does a bit of search, a bit of SAML, a lot of OIDC, some CRUD for users, groups and such. If you wanted to be silly, this could be 5-6 "micro-services".
But realistically, why? We do gain advantages by having this in a smaller service - we can place a bit of extremely critical functionality in a small service and manage that service very, very gently. And we can reduce redundant effort, even if migrating to this service takes some work. But what would some dedicated SAML-integration Spring app improve, besides moving 1-2 tables into another database/schema?
What if the effort were spent automating the disparate systems to stay in sync?
Granted, the data model reflects multiple systems, but each user could use their preferred "set of apps," and changes get reflected by some headless Windows container or browser.
For your core service, I think you are right most of the time.
But you are probably not talking about the website marketing wants to build.
And you are maybe using S3 or similar for document storage?
And your logging system is probably not on the same machine?
But if by "system" you mean the main API + SaaS website or similar, then yeah, sounds reasonable.
In reality we do so much more now that I think some part of what was the monolith years ago is now just SaaS the company uses, which the development department does not even need to know about.
I agree that I took it "a bit" far to make a point… that just ruined my case.
I do think that in a monolith you try to do most things yourself.
Take auth, for example. It's huge. And I think we are more and more accepting that even for a monolith we do not want to do that ourselves.
If you start from scratch and your spec says you need OIDC and SAML, then most people will look for a SaaS service to help.
Same with storing files: we would make use of S3 or similar.
But if we go back to pre-2010, I think most people would try to do both of these themselves.
I'm not saying that this is microservices at all, but I am saying we are moving stuff that is a commodity out of the monolith compared to what we did 10-15 years ago, because there are services that do a much better job for us.
And that was part of my point with the original comment: trying to keep everything in one place is probably not the right choice.
Sorry for not being more precise on my previous reply.
> I really do not understand the debate on monoliths and microservices anymore.
> Don't go extreme in any direction. Context, context, context.
That is exactly why the debate occurs. All religions, be they microservices or TDD, take some good ideas to an extreme and remove context from the decision-making process.
Otherwise the term "microservices" would never have been invented: people have been doing "services" since the dawn of time. But you can't create hype out of common sense; you have to go to extremes.
At this point I think microservices architectures don't exist. As if a "micro"service is a goal on its own. I think any sane organization strives for a healthy trade-off between manageable code and separation of concerns.
> My advice would be to split your application up into enough pieces so that:
> 1. Your engineers feel like they have control over the code and are not afraid of making changes.
I'd add: split your team up into a similar number of pieces. Your application will come to resemble the shape of your teams, over time. Design your teams based on where you want the boundaries of your application to fall.
Agreed. I would add an extra thought around "try to go monolith until things really break down", but you get at that insofar as the point where things break down is where either:
a) engineers are afraid of making changes, or find them difficult
b) uptime issues result in heavy endpoints/services taking down others
To your point, microservices should be introduced expressly to solve those two problems, and not before.
I tend to split things up based on the resources required.
We had a decent-sized data pipeline that was entirely microservices and serverless, and it was a real joy to work with.
- Our NLP code lived in a service that ran on GPUs
- Our ingest service used a high-RAM but relatively simple CPU service to do in-memory joins cheaply and efficiently
- We had a bunch of specialized query services that were just direct requests to AWS services, or a light Lambda wrapper around a call to a service.
We coordinated things with Airflow, and it was very easy to maintain; scaling was pretty efficient since we just scaled the pieces that needed it without wasting money on unneeded compute.
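For illustration, here is a rough sketch of that kind of coordination, not the original pipeline: an Airflow 2.x TaskFlow DAG where queue labels route tasks to differently provisioned workers (task names, queues, and the Celery-style executor setup are all assumptions on my part).

    # Hypothetical sketch: names and queues are illustrative only.
    from pendulum import datetime
    from airflow.decorators import dag, task

    @dag(schedule="@daily", start_date=datetime(2023, 1, 1), catchup=False)
    def doc_pipeline():
        @task(queue="high-mem")        # routed to the big-RAM CPU workers
        def ingest_and_join():
            ...                        # cheap in-memory joins happen here

        @task(queue="gpu")             # routed to the GPU nodes running NLP
        def extract_entities(batch):
            ...

        @task                          # cheap glue; default workers are fine
        def publish(results):
            ...

        publish(extract_entities(ingest_and_join()))

    doc_pipeline()

Each queue can be scaled independently, which is the part that keeps you from paying GPU prices for glue work.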
> enough pieces so that: 1. Your engineers … are not afraid of making changes.
I don’t really understand that argument, and I don’t really feel safer making a change to a microservice inside a large system, as opposed to making a change to a monolith — the consequences of a mistake are equal in both cases (although harder to observe/debug in a microservice architecture) — am I missing something?
If you read Kent Beck's book Test-Driven Development: By Example, you will find that a major theme is this idea of developer fear or reticence.
From the Amazon book description:
Quite simply, test-driven development is meant to eliminate fear in application development. While some fear is healthy (often viewed as a conscience that tells programmers to "be careful!"), the author believes that byproducts of fear include tentative, grumpy, and uncommunicative programmers who are unable to absorb constructive criticism.
Thinking through the inter-service interface is costlier and more transparent than calling the function in the same monolith. That said, interfaces between services are inherently better than function calls.
Also, a mistake in a microservice is more easily isolated, fixed, and redeployed.
Other than that, microservices are business entities rather than just tech entities. They can be scoped, evaluated, and managed at the business level. After all, you can ask another team to rewrite a specific microservice. You can't do that with a monolith.
And the problem wouldn't be just buried in tech details and opinions.
Yeah, "it depends" is almost always the right answer but not a very useful one. The devil is in the details, and I don't see any issue in discussing the nuances so people can apply them to their own situation to make their own decision.
One decent rule of thumb is to have one service per 1-3 closely grouped SWEs, in line with Conway's Law, and then likely split further to ensure no two services share a database.
Maybe that is not "architecture" or "technique", but it is definitely not trivial and is important in almost any context -- but also probably not what you were going for.
This sounds interesting, how do you mean? To me it seems like context is perhaps the key element that we should concern ourselves with to further the engineering or scientific aspect of programming.
Let's say we're deciding between floating-point arithmetic, fixed decimal numbers, or rational numbers, all three of which are very well-established and useful solutions to the problem of subdividing integers. We have some context already here, and we'll need to provide a bit more context to know which solution to choose. Would you agree with that?
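To make that concrete, here is a tiny, standard-library-only Python illustration (my example, not the commenter's) of why the choice needs context:

    from decimal import Decimal
    from fractions import Fraction

    print(0.1 + 0.2)                        # 0.30000000000000004 (binary float)
    print(Decimal("0.1") + Decimal("0.2"))  # 0.3 (fixed decimal, suits money)
    print(Fraction(1, 3) * 3)               # 1 (exact rational, but slower and
                                            #    still can't represent sqrt(2))

    # Floats are fast and fine for measurements; Decimal avoids surprises with
    # currency; Fraction keeps exactness at a cost. Which is "right" is context.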
If that’s indeed the most useful general thing that can be said about this question, then it means the microservices crowd has won, at least insofar as it deserved to win. The point is not that everything needs to be a micro-est possible service, the point is that we all now have applications built out of loosely-coupled services as a primitive in our reasoning toolbox.
(In a world where microservices haven’t yet won, we think RPC is obviously always a good idea and build our web apps as ISAPI modules and AOLserver extensions.)
I think very similar things when I hear Casey Muratori’s opinions on OOP[1], which he titled “getting rid of the OOP mindset” but I’d summarize as “absolutely do make an object or two if that fits your problem domain, but remember that it doesn’t have to and use your judgment”: that’s what OOP winning feels like, from the inside, once you’ve internalized it. Once people can look at a thing and think “you know what, that looks like an object”, OOP has won, even though people won’t always be right and not literally all the things are best modelled as objects.
(In a world where OOP—or lexical scope—hasn’t yet won, we read papers about Actors, struggle to understand what the authors could possibly mean, and implement a small language in order to have a version of those ideas we can play with. The language ends up being called Scheme.)
The same probably applies to structured programming, though it seems that structured programming deserved to win quite a bit more than the others, and though I don’t have easy access to pre-structured programming lore the same way I do to pre-OOP or pre-microservices lore.
In hindsight, old, good, now-accepted reasoning frameworks always seem to consist entirely of trivially right stuff on one hand and outrageously wrong stuff on the other. That’s entirely by design and doesn’t mean the ancients were stupid—this is culture working correctly and well. It means that you, who have only ever been in contact with the consensus that emerged out of their bitter struggles, have such a deep and implicit knowledge of that consensus that the good points that were once revolutionary sound obvious to you. You’re only seeing the bad points because those are the ones that didn’t find their way into the culture.
Scott Alexander’s metaphor of “philosophy in the water supply”[2,3,4] is the best explanation of this that I know.
No, they aren't. The entire point of the big ball of mud is that there are no meaningful divisions in the code. Everything uses everything willy-nilly, at the smallest possible level of abstraction. There is, metaphorically if not always entirely literally, not a single line of code in the system that you can change without fear of bringing down something else that you may not have even known existed.
Microservices are not a miracle cure or the solution to every problem, but they do force divisions within the code base. Every microservice defines an interface for its input and its output. It may be the sloppiest, crappiest definition ever, with dynamic types and ill-defined APIs and bizarre side effects, but it is some sort of definition, and that means that, if necessary, the entire microservice could be replaced with some new chunk of code without affecting the rest of the system, cleanly cut along the API lines. This microservice may sloppily call dozens of others, but that can be seen and replicated. It may be called by a sloppy combination of other services, but the incoming API could be replicated.
However bad the architecture of the microservice may be, however bad the overall architecture of the microservice-based system as a whole may be, this will be true by structural necessity. The network layer defines some sort of module.
They can create big balls of spaghetti, certainly. They can in total create a big mess; they are not magical architectural magic by any means. While a full replacement of a given microservice is practical and possible, if the boundaries are not drawn correctly to start with, fixing that can be much harder in a microservice architecture (with corresponding separation of teams) than a monolith.
But they fail to create the situation that is what I would consider the distinguishing characteristic of a "Big Ball of Mud", where there are no partitions between anything at all. Big Balls of Mud have no equivalent of "replace this microservice". Microservices by necessity, to be "microservices", have partitions.
> It may be the sloppiest, crappiest definition ever, with dynamic types
Funny you said that, since a microservice API is always dynamically typed and its usage cannot be checked by the compiler. And the more microservices you use, the more dynamically typed the whole project gets overall.
While you can opt into a free-for-all of everything importing everything, all languages also support creating modules which define APIs for consumers and maintain compile-time type checking.
> Funny you said that, since a microservice API is always dynamically typed and its usage cannot be checked by the compiler.
Not true at all. I have a build system in place where, when changes are made to the TypeScript, the JSON Schema on the endpoints is updated, and client libraries are updated.
Types are validated at both compile time and runtime.
This is just one of many solutions to the problem, there are a lot of ways to get type safety for service endpoints, at both runtime and compile time.
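As one hedged illustration of the general idea (a sketch in Python with pydantic and FastAPI, not the commenter's TypeScript/JSON Schema pipeline; all names are invented): the model is the single source of truth, checked statically, validated at runtime, and exported as OpenAPI/JSON Schema for client generation.

    # Sketch only: a hypothetical service, not the commenter's build system.
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class CreateUser(BaseModel):   # single source of truth for the payload shape
        email: str
        display_name: str

    class UserOut(BaseModel):
        id: int
        email: str

    @app.post("/users", response_model=UserOut)
    def create_user(body: CreateUser) -> UserOut:
        # the request body is validated against CreateUser before this runs
        return UserOut(id=1, email=body.email)

    # FastAPI serves the generated schema at /openapi.json; a CI step can
    # regenerate client libraries from it whenever the models change.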
Except each service scales independently, can be updated independently (e.g. pri0 security bugs that require a breaking change can be applied to public IP facing services first), can be written in a different programming language, and can be rolled back/deployed independently.
Working in a microservice environment is nice, newly created services get the latest version of whatever tools are needed, older services can be upgraded as needed. Avoids the problem of being stuck on some ancient version of the JVM or Node.
Although true to some extent, this is also a tooling issue. We use OpenAPI and generate clients. Strictly speaking the interoperability layer doesn't know the truth on the other side, but when generated correctly, the developer doesn't need to worry about it.
When you change a function signature, it doesn't compile until you fix all the call sites. Maybe after you see all the broken call sites you think, "that's too much work; maybe there is some other way to do what I want."
When you change anything about an endpoint, it compiles just fine no matter what clients there are or what they are doing. You proceed with the change without worry. Yes, you can have the self-discipline to also regenerate the OpenAPI documents, regenerate the clients, and then check all the clients, but self-discipline like that is not reliable.
So while tooling can help it's still not the same thing.
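A toy contrast of the two situations (my hypothetical names, with Python type hints and a checker such as mypy standing in for "the compiler"):

    import requests

    def price_of(sku: str, currency: str) -> float:   # signature just grew an argument
        return 0.0

    # total = price_of("ABC-123")
    # ^ every stale call site like this is flagged by the type checker before it ships

    def fetch_price(sku: str) -> float:
        # The same change hidden behind an endpoint sails through type checking;
        # whatever the server now expects, this only fails when a client hits it
        # at runtime. (The URL is made up.)
        resp = requests.get("https://billing.internal/price", params={"sku": sku})
        return resp.json()["price"]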
> When you change a function signature, it doesn't compile until you fix all the call sites. Maybe after you see all the broken call sites you think, "that's too much work; maybe there is some other way to do what I want."
Improperly versioned dynamic libraries want a word with you!
Perfectly possible to foot gun yourself with compiled code and broken API boundaries.
> When you change anything about an endpoint, it compiles just fine no matter what clients there are or what they are doing.
If I am exporting a DLL, same thing happens. Working on a true monolith, the entire app is compiled as one giant executable, sure, then I get compile errors.
If you use semantic versioning correctly, then you shouldn't need to worry about api<->implementation inconsistency. Or rather, when you need to, you know which parts you have to deal with.
> Funny you said that, since a microservice API is always dynamically typed and its usage cannot be checked by the compiler. And the more microservices you use, the more dynamically typed the whole project gets overall.
I mean, this is just not true in the general sense. I've set up plenty of microservices with various typed APIs. Protobufs are an example of an extremely easy-to-implement, strongly typed API tool.
I don't think you have to define a microservice by HTTP requests with JSON.
Until you want to update a protobuf definition and then tear your hair out because you cannot atomically ship changes to clients and servers. Even if the code in your source tree says everything is a-okay, that one service that hasn't been updated shits the bed when handed a proto with mismatched types.
Thus, proto definitions become an "only add, never modify or remove" thing. Not ideal.
> No, they aren't. The entire point of the big ball of mud is that there are no meaningful divisions in the code. Everything uses everything willy-nilly, at the smallest possible level of abstraction. There is, metaphorically if not always entirely literally, not a single line of code in the system that you can change without fear of bringing down something else that you may not have even known existed.
This is a bit of a strawman. The equivalent strawman criticism of microservices is that every function runs in its own networked service. Is that truly representative of reality? Of course not, and neither is your breakdown of the big ball of mud.
Funny, I was thinking of linking to that page myself in support and decided against it. I particularly was thinking of:
"What does this muddy code look like to the programmers in the trenches who must confront it? Data structures may be haphazardly constructed, or even next to non-existent. Everything talks to everything else. Every shred of important state data may be global. There are those who might construe this as a sort of blackboard approach [Buschmann 1996], but it more closely resembles a grab bag of undifferentiated state. Where state information is compartmentalized, it may be passed promiscuously about though Byzantine back channels that circumvent the system's original structure."
and
"Such code can become a personal fiefdom, since the author care barely understand it anymore, and no one else can come close. Once simple repairs become all day affairs, as the code turns to mud. It becomes increasingly difficult for management to tell how long such repairs ought to take. Simple objectives turn into trench warfare. Everyone becomes resigned to a turgid pace. Some even come to prefer it, hiding in their cozy foxholes, and making their two line-per-day repairs."
Superficially it may seem like these criticisms apply, but they don't.
Microservice architectures can't have big global variables tying things together, structurally. Even if "everything talks to everything" (which is actually unlikely, sarcasm aside, even the absolute worst systems have more structure than that in a microservice architecture), it does it through a defined mechanism of RPC. State can't be passed through "back channels" because no back channels exist; you must go over the network, which is the "front" channel, not a back channel.
By structural necessity, a microservice confines the scope of changes in the microservice code itself. There are then of course still scopes of changes that are even harder with microservices at the global level, but for refactorings that aren't behavior changes a microservice necessarily confines the scope of changes to the microservice itself, a small fraction of the whole. As long as the API accepts the same input and returns the same output, it can not blow up another service three layers away because it has no access. But that's a distinguishing characteristic of a Big Ball of Mud.
Big Ball of Mud isn't just a slur; it's a distinct pattern as described in that paper. At best a microservice architecture can be a lot of smaller "balls of mud" hooked together, and that's still a problem, but it is its own problem. This is proved by the fact that the solution to a Big Ball of Mud won't work for a microservices architecture disaster... indeed, it isn't even sensible. If two problems require separate solutions and the solution to one is not even conceivably applicable to the other (that is, not even a "bad" solution but simply no solution at all), they are clearly not the same problem.
You can have comparable problems with microservices. Those BBOM problems you highlight occur when good development practices are ignored. So if good development practices are ignored under microservices, what can happen?
Multiple, different services sharing the same database seems comparable to global state under BBOM. This shared database is a back channel.
Furthermore, as the article describes, if you don't define your domain boundaries correctly then changes are not necessarily confined to a single microservice.
As for blowing up a service three layers away, of course this can still happen. Just because you inserted the network between those layers doesn't mean that layer 1 can't produce outputs that trigger an edge case in layer 3 that wasn't properly tested. Similar failure modes are mostly all still there; it's just easier to violate certain good practices in a monolithic system. Maybe that means it happens more often in the BBOM, but that doesn't mean it doesn't happen at all with microservices.
I think the article broke this all down exactly right. Microservices pushes complexity into infrastructure, and sometimes that's good, but often you want that complexity in code and encapsulated in well-designed abstractions that are enforced by the language (like a good module system).
> which is actually unlikely, sarcasm aside, even the absolute worst systems have more structure than that in a microservice architecture
Nope, the absolute worst systems have the same complex circular dependencies as monolithic BBOMs; the microservices just call into each other over the network rather than as function calls (imagine, for example, an API request into service A that calls into service B that calls back into service A). You may argue that makes it not a true microservices architecture, but one could make the same claim about a modular monolith.
> Microservice architectures can't have big global variables tying things together, structurally.
Oh sure they can, just call it “the DB”. Shared state between services/components is the most debilitating mistake I’ve had to deal with over the years. It doesn’t matter if it’s a monolithic, shared memory desktop app or a highly distributed architecture with central storage.
The main reason is to stop crap developers using globals all over the place, or passing around hash maps stuffed full of config that isn't clearly defined in one place but just continually mutated (and similar bad issues).
Still, crap devs will just find their own ways to mess up with microservices, but at least they limit the blast radius.
They limit the blast radius to the scope of systems that either directly or indirectly call the microservice, which is commonly the same blast radius as a monolith that was replaced by the microservices.
"The entire point of the big ball of mud is that there are no meaningful divisions in the code.... Everything uses everything willy-nilly....not a single line of code in the system that you can change without fear of bringing something else down that you may not have even known they existed"
...then OK, perhaps the developers who caused the state you describe above would not cause the exact same problems with microservices -- but will they really move fast any not cause a mess given ANY kind of environment?
The state you describe is not normal of monoliths by any stretch.
It may be normal for old legacy systems with 5 generations of programmers on them. Also, I believe microservices will have other kinds of problems, but still deep problems, after 5 generations of programmers.
If preventing people from running in the completely wrong direction of the goal is your main concern -- why even be in the race at that point. Find new people to work with.
If you personally had success rewriting a Ball of Mud into microservices, consider whether perhaps "rewrite" is the important word (as well as the quality of the developers involved), not whether the refactor was to a new monolith or new microservices.
Microservices with boundaries drawn wrong can require 20 programmers to do the job of 1 programmer. Perhaps the mud looks different from a Big Ball, but it is still mud.
> if the boundaries are not drawn correctly to start with, fixing that can be much harder in a microservice architecture (with corresponding separation of teams) than a monolith.
Therein lies the problem. Nobody draws these boundaries correctly on the first try, and the correct boundaries can shift rapidly over time as new features are added or requirements change.
> Microservices are not a miracle cure or the solution to every problem, but they do force divisions within the code base.
Do they? The code itself may be in entirely separate repos but still be tightly coupled. Monoliths can have cleanly separated libraries/modules, with those modules built from separate repos or, at the very least, different namespaces.
The "macroservices" I've been seeing are many separate containers all sharing at least one data store. So they have all of the disadvantages of the "ball of mud" monolith combined with all of the disadvantages of much more complicated infrastructure. Yet the people working on them think they're "doing microservices" because k8s!
The microservice separation is not just code in separate repos. It's also everything else behind the kimono - keep that kimono clasped tightly!
Microservices is a team organization technique, whereby disparate teams only communicate by well defined APIs. Any technology choices that come out of that are merely the result of Conway's Law.
Any time you lean on code in a random GitHub repository, where you never speak to the author and just use the API you're given, you're doing microservices. This works well enough so long as the product does what you need of it.
The problem is when the product doesn't do what you need. If the microservices teams are under the same organizational umbrella, there is a strong inclination to start talking to other teams instead of building what's needed in-house, which violates "only communicate by well-defined APIs". This is where the ball of mud enters.
If your organization is such that you can call up someone on another team, you don't need microservices. They're for places so big that it is impossible to keep track of who is who and your coworkers may as well be some random GitHub repository.
> Any time you lean on code in a random GitHub repository, where you never speak to the author and just use the API you're given, you're doing microservices. This works well enough so long as the product does what you need of it.
No. That's not what a microservice is.
I understand you are trying to draw analogies, but a library is not considered a microservice.
Quite right. As before, people provide service. A library is something that people can produce as part of their service, perhaps, but a library is not a person itself.
> Any time you lean on code in a random GitHub repository, where you never speak to the author and just use the API you're given, you're doing microservices.
> Everything is a microservice, even Linux kernel modules are... apparently.
A microservice is comprised of people. Not a whole lot different than a service, but narrower in what is offered such that the service doesn't provide something useful on its own and is meant to be integrated with other services to achieve its full utility, hence the 'micro' moniker. In the world of physical products we often call these people suppliers.
It is possible that a microservice may produce Linux kernel modules.
Sure, I guess if you completely redefine the word "microservice" to something completely different from the common understanding, it makes more sense.
If someone is writing about "microservices," they are generally talking about the situation where those APIs and team boundaries are exclusively (or at least primarily) composed of separate applications communicating over the network. Not what you're talking about.
> the situation where those APIs and team boundaries are exclusively (or at least primarily) composed of separate applications communicating over the network
It is always curious when someone writes that they are bowing out of a discussion, as if they don't realize that no longer replying conveys the exact same information. Was there an additional takeaway here that I missed?
Conway's law doesn't apply to microservice architecture.
Microservice architecture splits the functionality further than Conway's Law talks about. When a single team owns 4-5 microservices, that's beyond Conway's Law.
It is people who provide service. If the (micro)service produces 4-5 products as part of the service they provide, that's not beyond Conway's Law at all.
Get with the program. The world of software consists of one domain ("web apps") and two species: "microservices" and "monoliths". Older, cranky and ultimately useless, software zoologists insist this is all wrong and that there are all sorts of taxonomical layers and creatures yet unseen by the avid readers of blogs. But that's not what the internet says and hey, white hair? "Hmm. That's a red flag right there."
So many people have very little respect for the effects of the fact that this field is so ridiculously young, including the fact that all of these definitions are incredibly wishy washy.
Being able to take a statement like “an external dependency is a microservice” and not completely discounting the point being made is incredibly valuable.
No, sorry, the field is not ridiculously young. The issue is the technical decision making process is now delegated to "ridiculously" inexperienced developers.
> an external dependency is a microservice
Regret to disagree again. Flip that and you have a leg to stand on at least: A microservice is an external dependency. Service defines the architectural semantics of the dependency. Micro scopes the services provided.
The industry of which we speak is not in the sciences. Maybe engineering. The earliest recognition of software engineering seems to only go back to the 1980s, but there is a lot of debate as to whether it is even that. It's probably closer to gardening. Unfortunately, the industry is too new to have yet fully recognized what it is.
Software engineering is software engineering, but the process of creating software may often be more gardening than engineering.
I mean, what is it about haphazardly trying out an approach you read about on a random blog, written by someone who has a different problem to solve than you, that says "engineering"? In my world engineering implies rigour, adherence to the data, etc.
There can be engineering in software. There can be science in software. But I'm not sure that means software work is categorically either engineering or science. I expect you will find that there is a lot of fad chasing and leaps of faith in hopes of stumbling upon something that will yield.
To the specific topic of conversation, where do you find the science or engineering, and not just people throwing shit in their plot and seeing what grows?
In some sense it is. It has an interface that your program communicates through. It might be loaded separately in memory (or even shared between several programs). That’s kind of a local microservice.
Microservices is a collection of independent teams who only communicate using well defined APIs. Those teams may produce libraries, perhaps. That's up to Conway.
In practice, likely. The trouble with only communicating by API is that if you want to change the API you can't just tell those who use it that they'd better be prepared for breakage, so you have to juggle both the legacy API and your new API in your work to maintain compatibility.
If you use a linked library, interface compatibility can be a real challenge. Web APIs, as you call them, offer more flexibility in adapting to different callers. They also come with the added benefit of defining explicitly clear boundaries between teams, something that sharing memory doesn't necessarily guarantee (e.g. monkey patching). This oftentimes makes them a good practical choice.
> That's not what microservices means. You're intentionally equating all services to microservices, which is not the case.
Eventually everyone provides a service to someone, I guess, but there is a clear division between those who serve end users and those who serve the servers. The service that places your food down on the table at the restaurant is not thought of in the same way as those who get the food to the restaurant. The latter camp is often known as a supplier, but we in tech call them a microservice instead. There is no suggestion that all services are microservices.
Microservices are defined by extreme decomposition, which is not the same as a service. And the biggest pitfall of microservice architecture is the creep of loose coupling.
Right now I work in an environment, where user account information is copied to every single microservice's internal DB... to minimize access time to that information...
A restaurant provides you with a service: one single interface.
Imagine if a restaurant used a microservice architecture. You'd spend your time going from the fridge to the cooks, moving your food from one to another (because no cook will do the whole recipe), then going to the dish storage, having a separate cook arrange your food on your plate, etc.
> where user account information is copied to every single microservice's internal DB... to minimize access time to that information...
In terms of access time, what does that gain you over a bog standard shared database? With careful planning, you can likely achieve the lowest access times using a shared database. There is so much more room for optimization at every level.
The reason actual microservices (teams only communicating via API contracts) must carry out this practice of duplicating data is because a shared database requires non-API communication for various things, like enabling schema changes. If you can call up someone on another team to talk about how to upgrade the database and reach a shared understanding, what do you need API-only communication for?
Did your organization become confused about why they are doing that and then invented a performance argument to justify it?
You're not allowed to have a shared DB with microservice architecture. Every microservice is completely self-contained...
The architecture of my organization was created by ardent microservice advocates who took the definition of a microservice and treated it like a bible. Based on the conversations I have had here, there are still people who treat microservices as sacrosanct.
The more I work with microservices, the more I'm convinced that microservice first architecture is garbage.
The copying of the data is the solution to data access latency.
> You're not allowed to have a shared DB with microservice architecture.
The law won't prevent you, but indeed it is impractical. As I said before, things like schema upgrades become impossible if you don't have the ear of other teams. When you can't call up another team you also lack trust, so security becomes an issue as well. This is why teams who operate in independent silos must ultimately have their own databases.
The practice has nothing to do with performance, though. Microservices has little to do with technology at all. It is about people.
> The more I work with microservices, the more I'm convinced that microservice first architecture is garbage.
I personally steer clear of organizations so large that they need to silo their teams internally, but I have had a positive experience using services provided by other companies with the only communication between us being API contracts. The concept works well enough. At least so long as the product does what you need of it.
If you try to force big business practices into a small organization, you're no doubt in for a bad time. That said, I don't get the impression you are talking about microservices at all, just a bad case of over-engineering.
A service is something someone provides. A microservice is much the same, but serves those who offer services to the end user – roughly analogous to a supplier in the physical goods space. A module may be the product of the service rendered.
Developers all provide the same service (more or less), giving no clear delineation in which to organize. More likely, you will organize teams of developers around products.
There is a certain kind of "freedom" that is really slavery, but people feel so free when they hear about it that they often squee and hurt themselves with uncontrolled movements.
Microservices can be that way. Now that you have 25 different services in 25 different address spaces you can write them in 11 different languages and even use 4 versions of Python and 3 versions of Java. (I got driven nuts years ago in a system that had some Java 6 processes and some Java 7 processes and it turned out the XML serialization worked very differently in those versions.)
If you want to be productive with microservices you have to do the opposite: you have to standardize build, deployment, configuration, serialization, logging, and many "little" things that are essential but secondary to the application. If a coder working on service #17 has to learn a large number of details to write correct code they are always going to be complaining they are dealing with a "large ball of mud". If those little things are standardized you can jump to service #3 or #7 and not have it be a research project to figure out "how do i log a message?"
The problem with standardizing these things is that it makes them hard to change. A breaking change in the way you deploy must also work for all the other solutions, otherwise you immediately lose your standardization. And this will happen eventually. For all these different kinds of problems it is near impossible to avoid inconsistency and to force rules on them.
So, imo, you either have a (very) large organization with independent teams that work on independent services, and you give them freedom for everything - or you develop a properly modularized monolith and extract services only as a last resort.
I would avoid using a microservice architecture with a small team of developers.
No, standardizing these things makes them hard to change for devops. How many breaking deployment changes have you seen in reality?
You're right that microservices shift burden onto infra. But that does not make it a big ball of mud -- infra has gotten progressively easier over the past two decades. If you want me to create the 'ball of infra mud' mentioned in your article, I can do it -- and make it repeatable -- in a few hours. It will come with dashboarding out of the box.
This is why microservices have become more appealing to more businesses. The technology allowing you to provision this infrastructure and deploy your code has changed immensely, allowing you to shift some of that burden over to infra.
People don't need to be given freedom for everything. Like the parent mentioned, with this standardization, people writing application code are able to move quickly and understand how the pieces work under the hood without shifting their mental model.
Who is this devops of which you speak? Did you just mean that standardising these things makes them hard to change? Or, by 'devops', do you mean a particular group of people?
> you have to standardize build, deployment, configuration, serialization, logging, and many "little" things that are essential but secondary to the application
...so then you can go reimplementing those standards in 25 different services written in 11 different languages. Sounds like fun!
I don't think it's possible to just write a library/framework that would encapsulate all of those standards and re-use it in different services because, again: different languages.
I worked for a company that did microservices well, and this was the norm, too. The term in my head is "golden path" or "sandbox". The languages were Go and Python, speaking protobufs over gRPC.
Developing software outside that sandbox was not disallowed, but you were "on your own" in terms of infra support if you chose to do so.
FWIW, the main "tricks" we found were (1) use a good build tool, (2) use a good CI/CD tool, (3) use a monorepo.
There were some minor downsides, and it took a long time for tooling capabilities to line up with our ambitions. But when I left, we had 200-300 engineers happily deploying to `main` across 100ish services every day.
Just curious, why is using a monorepo a useful trick? I would think it'd be better to have internal libraries that provide common functionality across services, and have a repo for each service. Otherwise, you're deploying code changes for one service that could, in theory, mess with another service that you don't maintain.
In a phrase, it's having a single "bleeding edge" for the entire company, vs 1 bleeding edge per service. Some benefits of this include:
You have one commit hash in one repo that tells you what version service/consumer X is expecting / providing.
If you want to understand why a service isn't working as expected, it's trivial to grep its implementation and contribute solutions.
As a service owner, you can grep for all places where your service's client is initialized, and update them in one PR (vs 12 PRs in 12 repos, that you have to manage independently).
Basically, it reduces the coordination cost of breaking changes.
---
You can get these benefits in a multi-repo setup only if you have adequate tooling around multi-repo PRs, code search, etc. It's not impossible, but the barrier to highly effective work is higher, imo.
A standardized build and deployment process can be done in a language-agnostic way with containers. Every service defines its own Dockerfile. The CI/CD process just runs docker build and whatever process is needed to pull an image and run it in whatever environment.
Configuration is either done with ENVs or with some standard configuration service (which is an API call that can be made on service start).
Logging is just standard out in the container. Each team uses whatever logging library is appropriate for their language. The only requirement is that it supports logging to the console. There are a plethora of options for shipping those logs somewhere centralized (fluentd/fluentbit, logstash, etc.).
Some of this stuff can also be done with sidecars (see something like Dapr)
So, no, you can't write a library but you can standardize.
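A minimal sketch of that per-service contract in Python (my illustrative names and env vars, not any particular team's setup): configuration comes from the environment, logs go to stdout as JSON-ish lines, and shipping them somewhere central happens outside the process.

    # Sketch only: env var names and the "payments" service are hypothetical.
    import logging
    import os
    import sys

    def configure() -> dict:
        handler = logging.StreamHandler(sys.stdout)   # logs go to the container's stdout
        handler.setFormatter(logging.Formatter(
            '{"ts":"%(asctime)s","level":"%(levelname)s","logger":"%(name)s","msg":"%(message)s"}'
        ))
        logging.basicConfig(level=os.environ.get("LOG_LEVEL", "INFO"), handlers=[handler])
        return {
            "db_url": os.environ.get("DATABASE_URL", "postgres://localhost/app"),
            "port": int(os.environ.get("PORT", "8080")),
        }

    config = configure()
    logging.getLogger("payments").info("service starting on port %s", config["port"])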
You don’t have to standardize build or deployment or configuration. For serialization, you can use something like GraphQL which will enforce types. For logging, you need distributed tracing which services like Sentry have good support of.
Don’t you find it weird that everybody else is either over or underengineering, but you, you engineer things exactly the right amount?
I bet when you’re driving, you also tend to notice that everyone else is either an idiot driving way too slow in the middle lane or a maniac speeding past you. Nobody else drives as well as you do.
It must be exhausting for these people to live in a world surrounded by strawmen, while they alone have achieved perfection. Which they will detail how they do in a later blogpost.
What the author and many people miss is that this is still an unsolved problem. We still don’t know how to do this right. Doing it either way results in unmanageable complexity. The same still goes for the front end. It’s so complex that someone periodically decides to start another JS framework to fix the status quo.
What many people miss is that these attempts are the solution to the problem. People will keep trying different ways until they find the “optimal” (or close) solution and the problem space is solved. React, in some sense, made front-end development at this level of complexity accessible to so many people with little engineering know-how.
At some level it both is and it isn't. The problems behind scaling a distributed application are quite well understood at this point; the problem that hasn't been solved is that too many won't accept the answer in terms of development time and discipline. The mess that could be created with microservices, simply recreating the mess within the monoliths, was both predictable and predicted.
At some level it's the Forth Bridge all over again. The Forth bridge was constructed within 10 years of the Tay Bridge disaster. It wasn't that civil engineers of that time didn't know how to build bridges that wouldn't fall down, they just hadn't learned to not go with the lowest bidder.
Hello there, long time no see! Hope you are doing well! Yes, you are spot on, it is an unsolved problem, especially for a business that starts off small and then grows. No matter what stage you observe them at, some things will be undersized because they were built in the past, some oversized in anticipation of growth that may never materialize, and hardly any of it will hit the sweet spot. Growth is the killer for any architecture.
We are closer than that. Services are just aggregation layers for functions. The functions are the things of interest, and the aggregation/abstraction is meaningless from a runtime perspective. By creating a function-first architecture (ala Lambda) using a runtime with a common compilation target (e.g. WASM), you can have a single code-base with hints/IDE navigation while also allowing for polyglot execution.
I think many (including me) would still consider an application written in Lambda functions a monolith.
It's how databases are used etc. that matters for service boundaries.
There really isn't that much difference between a stateless monolith deployed in k8s, a Lambda function, and a microservices with one function in each service ... it's all just stateless code anyway.
Yeah, it's crazy the hate microservices get. I happen to work on a product that needs to scale to handle millions of transactions distributed all over the world and the cloud microservice system is just flawless for this purpose. Is it harder to debug than a monolith? Sure. You need a team that has a good understanding about distributed systems (most coworkers have degrees and the ones without are really good at self-teaching, kudos to them). Could you easily develop our system in a monolith? No, it would be a nightmare, if even feasible given the same availability, security and performance constraints. I totally agree not everyone works on stuff I love to work on (huge scalable systems), but Google and co didn't engineer this tech because they felt like it, but because there was a clear need for it.
And all the post is saying is that it doesn't apply to everyone. It applies to Google and many other companies, including yours, but not all. As a matter of fact, it applies to a very small subset of use cases. If your software handles millions of transactions distributed all over the world, there's a good chance it's within that sweet spot. But most software written in the world doesn't fit that category.
Often the microservices hate comes from sole developers ranting about what a waste they are, and how everything would be way better if only everyone were very careful and had good practices as second nature.
And it's similar in nature as the hate around high level languages: "If only everyone had decades of experience with C, "toy languages" wouldn't exist and programs would be lighter and faster today".
It is perhaps unfair to compare with Google, since the question of "how will we store the web" clearly calls for a distributed system, whereas the microservice haters are often speaking of systems with an ultimate user base in the low single digits. It _is_ true that with modern hardware you can monolith your way to serving thousands of users at 1 query per second. It is not true that you can incrementally modify that architecture to handle more users and traffic. One must carefully judge such things before committing to either the costs of a distributed system or the limitations of a monolithic system.
A reason this succeeds at Google is that many of the microservice counter-arguments boil down to "yes, but what if I am a jackass?" If you are afraid of having to reimplement your structured logging framework for 11 different languages, I suggest you simply do not do that. Initially everything at Google was in C++ alone, then python, Java, and Go[†] were slowly added. Even after 25+ years of being Google they are not out there trying to diversify languages. You don't need to be Google to exercise good taste and restraint.
†: You needn't correct me regarding the niche languages that popped up here and there because I am aware.
> It is not true that you can incrementally modify that architecture to handle more users and traffic. One must carefully judge such things before committing to either the costs of a distributed system or the limitations of a monolithic system.
I'm a little confused by this, microservices or monoliths can both be distributed systems. A monolithic architecture (eg a Rails app) still usually has a separate database. You can even deploy that monoloth in different configurations (Web vs Worker).
You can then deploy many nodes of that same monolith, pointed at many different shards of a database, to handle huge scaling needs. This is especially easy to do if the problem domain lends itself to horizontal scaling, like B2B SaaS.
I feel like the definition of microservices has gotten a bit murky here.
Well, if you use a scalable SaaS like Cloud Spanner, do you then comfort yourself with the idea that you avoided microservices? Because that seems a little hard to defend on the facts.
> A microservice architecture – a variant of the service-oriented architecture structural style – is an architectural pattern that arranges an application as a collection of loosely-coupled, fine-grained services, communicating through lightweight protocols
> A microservice is not a layer within a monolithic application (example, the web controller, or the backend-for-frontend).[8] Rather, it is a self-contained piece of business functionality with clear interfaces, and may, through its own internal components, implement a layered architecture. From a strategical perspective, microservice architecture essentially follows the Unix philosophy of "Do one thing and do it well".[9] Martin Fowler describes a microservices-based architecture as having the following properties:[2]
A typical monolith would use Cloud Spanner as a layer, not as a microservice.
Most people's criticism of microservices isn't about never having independently deployable services, but rather about the service boundary being based on business-domain decomposition rather than essential technical characteristics. In other words, different types of storage being served by different services for technical reasons (i.e. blob storage for photos vs. a relational database for transactional entities) is eminently sensible, is not something most people would take issue with, and has been around for much longer than microservices. The problem is more around people deciding that, for a hypothetical financial website backend, "bonds" and "stocks" should be two separate microservices rather than just two endpoints served by the same monolith, even though they differ merely in terms of business logic and do not require any special handling from a technical perspective.
>"Could you easily develop our system in a monolith?"
You have specific requirements and constraints that call for your system to be distributed. However, solutions for the majority of regular businesses do not require high scalability and can easily get away with a monolith.
I've personally built both types of systems: distributed and monoliths. My take is to always stick with the monolith until you can't.
The "hate" is directly proportional to the failure of microservice zealots to deliver on the ultimate, everlasting, universal prosperity and harmony, that they promised.
Wow. Is there a way to favorite a comment on HN, because this is so spot on.
I've worked on _a lot_ of different applications, and all of them have their ups and downs. Knowing when to implement a micro service, and when not to, depends on experience and knowledge of the application(s) at hand.
Same goes for _everything else_ in the software world. How many of us haven't used something the wrong way at some point in our dev lifetime?
I don't know... perhaps a bit harsh, but the article seems reasonable to me. I'm also yet to see a "wow, what engineering!" application that heavily depends on microservices. Mostly it feels like an unnecessarily complex pile of mud.
I’m honestly mainly reacting to the part where they judge interview candidates on the basis that they worked on systems that implemented microservices.
It is hubristic in the extreme to sit in judgement over the architectural choices a team has made on a system you don’t know or understand, and just downright rude to conjecture all the worst failure modes of that architecture and then assume that they constitute what your candidate values.
The only thing you can extract from talking to a candidate about their past projects is their impression of that project. What did they learn? What architectural choices did they like and why? What choices do they regret? Just because they made a choice you wouldn’t have does not mean they are irredeemably broken.
‘Oh, you worked on microservices. Bet it was a big ball of mud.’ then you ask a bunch of questions to confirm that suspicion - you’re just dumping your prejudice against an approach onto someone who might have learned something useful from their experience of working on that thing!
And if they did work on a badly architected micro service system, maybe they learned ‘micro services often turn into a big ball of mud, I’ve seen it happen’. Maybe through that experience they learned something about how to avoid that fate? Or maybe they now share your opinion that microservices are a terrible idea; and if you like hiring people who agree with all your ideas then they would be a great add to your team.
We have all worked on systems that had architectural flaws. We have all built systems with architectural flaws. What matters is how we took those lessons and incorporated them into our understanding and taste for what makes good architecture.
> It is hubristic in the extreme to sit in judgement over the architectural choices a team has made on a system you don’t know or understand, and just downright rude to conjecture all the worst failure modes of that architecture and then assume that they constitute what your candidate values.
It's more like a cultural fit. Microservice-oriented people bring too much friction into the process.
Nailed it. This kind of post belongs to a genre where "micro" is tautologically defined to mean "too small". The rest of the post follows logically and could have been written by an automaton.
This happens a lot. Otoh, sometimes you come across cases where you wonder if you accidentally landed on a different planet. Mid sized project, 500 microservices, 800 repositories...
And sometimes when you’re driving you’ll pass a car upside down in a ditch and think ‘glad I’m not that guy’.
But you know that can still happen to you, right? Even if you’re careful?
The sales team promised a massive contract; the system design has to be able to hit x TPS to make it; we need to pull out all the stops and build to scale for that and to allow for all this future expandability.
Six months later, that sales director has left, the product has pivoted, and you’re ‘over engineered’.
Being able to scale to any specific amount of usage is never a reason to use microservices. Plenty of planet-scale services and apps are powered by monoliths. And you don't run into any kind of ridiculous overengineering when you refactor things into services for genuine scaling reasons; that's not a microservices architecture in the first place anyway.
The problem has always been people deciding that different parts of the app that differ mostly in terms of business logic somehow need to be independently deployable. It's like deciding that instead of having a single database cluster with multiple databases and tables, you need a separate, independently scalable cluster for each database table, because of bogus reasons like some tables being queried more than others, or different teams working on different tables, or not letting people join tables because it breaks boundaries or something.
This post reads more like "I created an overengineered microservice architecture once and I'm now jaded because of my bad experiences." See "10 years ago I wanted modules, but I as well found microservice architectures."
It's in the same genre as "I nearly wrapped myself around a tree and now I drive slowly when it's wet", not "I am perfect and everyone else is wrong."
You could say this about any criticism of mainstream practice. This is not a rebuttal and contains no new information.
Actually, I lied: this post is itself a strawman. It contains the information that you didn't understand the point of the article. He is referring to people who have deployed microservices when they needed modules, not people who deployed microservices when they needed microservices.
Common definition of a microservice is basically a module.
This is not the first thread on the topic.
And considering the number of people actively defending microservices, like it's the iPhone vs. Android debates of over a decade ago, I'm going to say that too many people can't look at this rationally at this point. Too many emotions flying around.
I thought he was referring to typical in-process encapsulation. If you do that well, you can run those separately “for free” optionally, but I think his definition of module is nothing unique.
From the article: "Modules which can be easily be deployed independently - when the need arises."
I read this to mean that they would always be deployed independently, but now I think I misread it.
He actually meant that they should be really easy to split. In practice, I think people with this view underestimate the difficulty of splitting even seemingly unrelated services out of a monolith. It's never completely trivial.
It's better not to start that way if it's foreseeable as an undesirable end state.
The most dangerous thing about microservices is that many people squee when they hear "You can write 45 different microservices in 35 different languages". Writing a new microservice? What a great opportunity to learn a new language!
Now it is true that by decoupling the address spaces, microservices do let you take advantage of different runtime systems: you might really want access to scikit-learn in Python, for instance, or certain libraries available in Java, but also like the speed of Go. That's alright, to a point.
If a microservice system is going to be maintainable, you need to minimize the excessive complexity of using different libraries and frameworks for build, configuration, serialization, logging, database access, and other cross-cutting concerns. To the extent that that stuff is standardized, the programmer who works on service Q can do some work on service B and focus on the application instead of having to do a research project on where to put a configuration variable or how to log a message.
It's hard to do because in the microservice environment people seem to get a lot of joy out of not being disciplined, will make endless excuses why they can't update the version of the language they are using, etc. It's the kind of freedom that Orwell warned you about.
Not having to upgrade in lockstep is great though - the downfall of many monoliths is the ‘forced death march to port to the new version of the platform because we put this off so long it’s about to leave LTS’.
Being able to use different stacks where appropriate is also great (but needs care). E.g. if you are a Java shop but you want to deploy an ML pipeline you should at least consider carving out a pathway for deploying python microservices.
I'm not against having more than one runtime or some variation in component versions. But you've got to be deliberate about it, not blunder into it the way most people do.
I would argue the inverse. If you reject both extremes, and are somewhere in the middle, you probably are much closer to something reasonable. (both in engineering and when driving)
When you drive down a freeway, no matter what speed you are driving at, you will see far more cars traveling at a different speed than you than at the same speed.
The cars going at the same speed as you never pass you and you never pass them. The only cars you notice are the ones who are driving differently.
It is easy to convince yourself that you are taking the sensible middle course, no matter what speed you are driving at. Because you can always point to an extreme outlier and say ‘I’m not that guy’.
Doing the average of everything is a great way to be pretty wrong. If you have to guess, guessing average seems fine, but if you know better, going with the herd can be foolish.
You may not like it, but my nodejs server with nearly all the code in a single file, and most of the business logic in inlined SQL, is what peak swe looks like.
Don't you find it strange that YOU live in a world where everyone else but YOU thinks that everyone else around them is over engineering or under engineering things?
You're not the guy who thinks everyone else is a maniac and is driving too fast or too slow. You're the guy who thinks everyone but you is under this delusion of thinking that everyone else is a maniac and you think you are an exception to the delusion because you can see it in other people.
I am here to tell you that there is a higher plane of understanding. This higher plane of understanding is this:
Even when you are aware that others are prone to biases and fallacies, this awareness does not make you immune to those same biases and fallacies. Because what you understand is also cliche. You are voted up because your awareness is also shared by others. Others observe the biases and fallacies of other people and think that they themselves are above it, when in actuality everyone is just looking at each other. We are all mirrors.
There is an even higher plane of understanding. I am not on this level and neither are you. This plane of understanding is the realization that one of us is NOT delusional. Someone is actually right. Someone sees the reality of how something should work and he is right and he is ACTUALLY surrounded by straw men.
The tragedy of the design of computer programs is that we have no theory of efficiency, no theory of what is most optimal. So if someone is right, we have no way of knowing. We are doomed to forever live in a world of blog posts where one of those blog posts is right but we can't know for sure.
You obviously wrote this because you think that the author of the blog post is NOT that man. But because we have no theory to prove otherwise, you simply state common tropes of human fallacies and biases and you use that analogy to discredit his post. It's pointless. Analogies aren't proof, they are themselves delusions but weaponized and used to seemingly prove a point or discredit one. They are convincing but manipulative. Your post can actually discredit every blogpost in the universe and that is why it is useless, pointless and manipulative.
Rather than write sweeping analogies of human biases which can actually "disprove" every single blog post on the face of the earth... offer evidence and example scenarios about why you disagree. Because personally I think the blog poster is right. His thoughts on microservices are correct. Convince me about why he isn't that one guy who is actually right, but know that the argument is endless because none of us can actually verify anything.
That's... a bit more metacognition than maybe my snarky reply warrants.
I didn't claim to be the only person who is smart enough to spot that a lot of online architectural opinion-writers structure their opinion-pieces as strawman-takedowns of 'popular wisdom', and that this is another in that genre which appears to add nothing particularly further to the discussion.
I never claimed that there is no such thing as objective truth, or that there might actually be better or worse ways of solving some problems.
All I'm saying is that 'microservices are a bad choice when applied badly to the wrong problem' is tautologically true, and gets us no closer to understanding the objective criteria by which we can determine what 'bad' or 'good' mean.
Well that may be what you intended to say, but much of what you put, well pretty much all of what you put, is simply an attack on the poster's character. What I read from your post boils down to "the poster is clearly an elitist who thinks he is right and everyone else is wrong, so his opinion isn't worthy of recognition and is wrong".
Character attacks on the internet are self defeating and foolish in many ways. There are little to no stakes for the attacker if wrong (undermines the value of the attack). The attacker is being hostile to someone completely, or mostly, unknown/unfamiliar (foolish). The attack does nothing to address the actual argument/points being made (waste of time for readers).
Rather than oppose the messenger, perhaps lead with opposing the message itself. It is far less hostile and more productive. You will get better at countering the argument/points if you actively practice doing it, and it is better for everyone involved (more focus on the issue rather than on winning a battle against someone "bad"). You also might spare yourself counter-takedowns like the one you got here, this one being particularly spicy.
The article linked doesn't even seem to say that microservices are always bad. It simply says they are difficult to do, and the author sees them as over engineering. Overall a pretty common opinion.
I think the parent comment was a response to an article that lacks nuance "It’s time that we put an end to this over-engineering".
Finding the right abstraction is often an incremental process made of tradeoffs on a case by case basis. It doesn't really help to make absolute statements.
"better" is a projection of various personal features onto a single axis. If you ask those 100 engineers to predict their strengths and weaknesses relative to the room, they'd probably do an OK job of it. Asking who is "better" is asking for that, plus their subjective weights of those skills. You should expect that to be a pointless exercise.
It’s probably true too! The peers are likely the less astute engineers who are stuck at the office rather than traveling to a cushy engineer conference.
Exactly. My post is the realization of two things:
1. All of us reading this post are more likely to be part of the 80. We are all likely delusional and we are all likely to be wrong.
2. The second thing to realize is that one person out of those 100 is actually better than everyone else. Who is he and how do we find him? I'm curious whether this person said yes or no.
> The tragedy of the design of computer programs is that we have no theory of efficiency, no theory of what is most optimal
Yeah, that's just a fundamental law we have to accept. I believe the reason we don't have a theory of efficiency is not because it's hard to measure. Efficiency is inherently a human concept, not a mathematical one. We will simply never agree on its definition, so we'll never have a theory.
The good thing, however, is while we can't agree on how to measure efficiency, we can agree how to measure the outcomes of it. Efficiency leads to success, and our definitions of success are way more aligned (and more mathematical).
If we accept that theory is impossible, empirical evidence is as good as it gets.
For example, I'm not a fan of Ruby, but it contributes (or at least used to) to more than 50% of YC startups value, while not being a very popular language overall. Whether I personally like it or not, this is hard evidence that there's something about Ruby that correlates with success. Something I probably don't understand. Something even Ruby developers probably don't understand. Unless I have a plausible theory explaining why there's absolutely no causation, it's undeniably there.
Of course, we don't have empirical evidence on bleeding edge tech. And that's why "choose boring technology" is a thing.
But he's replying to an article that says "don't use microservices, use independently deployable modules". Your comment has more depth than the article itself.
Or as I like to say "Oh, you have a big ball of mud in your monolith because of poor design and want to move to micro-services?"..."now you have n^n big balls of mud" Poor design is poor design, adding more complexity just makes it a more complicated poor design.
I think most places rearchitect to microservices because it’s the new shiny. They don’t do the engineering necessary to create a detailed cost/benefit analysis, they just feel it will be better and so they jump in.
For the same reasons the companies don’t do the cost/benefit analysis they don’t spend much time thinking about how they could benefit from rearchitecting their monolith into various libraries, modules, packages and interfaces.
Because they don’t think much about these code boundaries, they end up turning their monolith into a distributed monolith. In doing so they don’t get the major benefits microservices are meant to provide, such as independent code deployment. They also lose the benefits of a monolith, such as less ancillary complexity. This situation is the norm and is evidenced by “deployment parties” where you can’t just deploy one microservice because 11 of them need to go to prod together.
What I have seen a lot of over the past few years is a push to get off mainframes and into the cloud. This is a valid driver for rearchitecting, but microservices are just one of a number of solutions, as the cloud is very flexible these days.
I assert that a lot of rearchitecting to microservices can be attributed to the fact that our industry, as Alan Kay has said, is a Cargo Cult.
In my experience, most places rearchitect to microservices because they have a shitty monolith that they are dealing with.
The monolith has been built over a span of 10+ years. It is fragile, brittle, no one understands how the whole thing works, and large scale refactoring without widespread breakage is near impossible. No one wants to touch it because no one understands how the whole thing works. Because of this the codebase is also falling behind and isn't staying current with language updates (I know everyone here likes working with 10 year old tech but it does affect hiring when you have to say that you're using Java 8)
But it is true that distributed monoliths are very common. One problem is that it's hard to get everyone on board with going "all the way". The question "Do we really need multiple databases" is one of the main culprits that spawn distributed monoliths.
> The question "Do we really need multiple databases" is one of the main culprits that spawn distributed monoliths.
It's true the lack of bounded contexts is a big problem. But I also see a lot of orchestration services with multiple dependencies on other services, some of which are themselves orchestration services. This quickly cascades into dependency hell only you don't get the benefit of a compiler to alert you to the problems.
I'm always sceptical on big claims for or against specific architectural and infrastructural choices.
Micro-services make sense in specific cases and don't in others, the same way a monolith is an absolute no-go in some cases but a really good fit for others.
The correct choice is always the simplest for what you need, the tricky bit is understanding what you actually need. The right choice might be a complex solution because your needs require some complexity.
Going for micro-services just for the sake of it, without the need for it is a bad choice, but it doesn't mean that micro-services are bad.
Microservices - let’s replace as many interfaces as possible with the slowest, flakiest, most complex mechanism - the network layer. Why call a function when you can wrap that function in an entire application and call it via API? Why have a single database when we can silo our data across 200 mini databases? Why have a single repo when we can have 200 tiny repos?
The move to microservices is often more about scaling change management when an organization grows from tens to hundreds of engineers.
Change management becomes the bottleneck when organizations exceed Dunbar's Number, and a "reverse Conway maneuver" is needed to counter excessive cost of coordination.
The Linux kernel has a huge number of contributors with their own goals and they seem to coordinate just fine. Monoliths can scale, but they can't be "owned"/leveraged by leaders/sociopaths into a bigger budget/team/etc, IMO.
Linux development is absolutely hampered by being a monolith. Getting involved in Linux development is notoriously difficult. The average developer won't consider it even though there are many parts of the kernel which very much should be approachable. It is also hard to get some patches upstream if the goals of said patches don't align with everyone else's interests. There are even people exploring userspace schedulers to get around these challenges.
There are also huge maintenance burdens that come from the lack of stable interfaces for modules. There is a reason there is a lot of excitement around ebpf. It is providing the sort of scaling that people have been needing for a while.
The Linux kernel is more siloed than one might expect; a lot of code is architecture-specific or in drivers, which facilitates having a wide group of maintainers.
I don't agree. Almost every candidate during an interview likes to talk about how their team of 3-5 people manages multiple microservices. It's pretty much a hype-driven approach to developing a backend application.
"when an organization grows from tens to hundreds of engineers" - point taken.
My question then is: is this type of growth the norm, or is it exceptional?
If I am, say, Tesla... do I need 1 order of magnitude more sw engineers when I start selling 10 times the cars I was selling 2 years ago?
When going from tens to hundreds, do the hundreds manage to produce anywhere near a many-fold increase in performance though?
My impression is that microservices is a solution to keep a hundred people HAPPY (solving auxiliary problems you create for yourself) while producing about the output for the company that the "tens" would provide.
If management INSISTS on scaling from tens to hundreds, you can't very well keep the tens occupied on irrelevant side projects. So instead you can do microservices. Either way, you get the speed of the "tens" for the company though.
Interestingly, open source has some change management processes that deal with orders of magnitude more people and pieces of software than you will find in a corporation, and it mostly only imposes infrastructure policies when that's the development's goal.
It tends to fail less often than corporations too. And my impression is that it's more efficient for developer time.
I've seen quite a few people use the argument that their system wouldn't cope otherwise when they reach the size of Google.
If you are one of those, then I have news for you.
1. You are not going to reach the size of Google. Certainly not if you are engineering for it in your 2 person startup. But in case you do, see number 2.
2. Reaching Google scale means you'll have funding to hire good people to scale your software.
Reaching Google scale means you'll be sat on a beach wondering how to spend your multiple billions - scale is someone else's problem, might as well leave something in it for the next person.
If you plan your architecture carefully and modularize a lot, there is very little that will prevent you from branching out modules of a monolith into microservices when the time comes.
If you produce spaghetti code with tightly coupled components, well, then you are going to have a problem.
Microservices solve many problems for those who understand their virtues and deficiencies - everyone else is just cargo-culting and committing too much to something they don't understand (see Kubernetes).
So you embed S3 into your app? Do you embed your ERP system? How about your marketing e-mail system? Your CRM? Do you use an in-process database?
Everyone already has a service based architecture whether they want to admit it or not. The question is the efficiency and granularity of it.
If you interface with any external systems that have data records (ERP, CRM, etc), your database is already spread out. You need to deal with it and be sure you understand the data efficiency and reliability of it.
I don't know what a monolith is that people talk about. Please show me one that doesn't talk to anything else.
Microservices are about fine granularity, not just "talking to other things." There's an enormous difference between "talking to a database" and having to figure out how to roll back transactions because your write to $lastNameService failed after you already wrote to $firstNameService when updating a username. The kinds of problems you run into with microservice based architectures aren't a result of "having services."
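To make the difference concrete, here's a rough sketch (in Java, with made-up client stubs standing in for the real RPC clients) of the kind of compensation dance the parent is alluding to. With a single database this whole thing would just be one transaction:

  // Hypothetical illustration only: a username update split across two services.
  interface FirstNameClient { String get(String userId); void set(String userId, String first); }
  interface LastNameClient { void set(String userId, String last); }

  public class UsernameUpdater {
    private final FirstNameClient firstNames;
    private final LastNameClient lastNames;

    public UsernameUpdater(FirstNameClient firstNames, LastNameClient lastNames) {
      this.firstNames = firstNames;
      this.lastNames = lastNames;
    }

    public void updateName(String userId, String first, String last) {
      String previousFirst = firstNames.get(userId);
      firstNames.set(userId, first);          // write #1 succeeds
      try {
        lastNames.set(userId, last);          // write #2 may fail...
      } catch (RuntimeException e) {
        // ...so the "rollback" is a hand-rolled compensating write, which is itself
        // a network call that can fail and leave the two services disagreeing.
        firstNames.set(userId, previousFirst);
        throw e;
      }
    }
  }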
I don't see that distinction being made. The counter argument is things like libraries vs microservices. Those articles ignore the common case that probably is true for 99% of the apps - we live in a service based architecture. The Internet is service based. Everything an app talks to is service based. So quit arguing about it.
Microservices don't address any of the issues I ever faced or that people on Hacker News talk about, like mixing languages, building code around organizational structure, isolating faults, etc. Microservices are way too granular to address those issues well. The only time I ever wrote a microservice is when I had to wrap some stupid SiteMinder binary Apache mod blob.
Also, almost nobody writes a monolith; if they say they have a monolith they are probably lying. Do they have a database? S3? ERP, CRM, blah blah blah. And nobody in their right mind would build microservices; that is nuts unless they have some really special architecture. So can we quit the endless debates and instead focus on the real issue: how to build robust and maintainable service architectures? How to partition systems properly so you can avoid conflicting data? Eventually consistent systems, etc.
This is as dumb as the CISC vs RISC debate I lived through in the 90s.
Lucky you, that you haven't been hit by the purists.
I am currently getting a hard intro to a microservice-first architecture, where customers are managed in one service, users are managed in a different one, and user operations in a third. All have their own databases and much of the data is copied between them. There are microservices that are used by only one other microservice.
That's where we are now - arguing about the validity of "microservices are the best/first" approach.
Microservices are defined as self contained, highly granular, separately developed and deployed horizontally scalable systems.
Think about the debates about Linux vs Hurd, that's where we are.
The idea that no true monolith exists because you didn't write your own kernel is really strange and peculiar. No one has ever claimed that a monolith has 0 external dependencies.
Yes, but the point they were making is that you don't have to abstract across a network boundary. A proper module system and package management can divide work and organization without introducing IPC overheads.
Latency is additive. You get a small window within which people don’t notice and network calls quickly consume all of it.
Microservices have a few advantages, but unless you desperately need them (i.e. they're your only option) it's a huge hit to performance, productivity, and reliability. Netflix needed them because of their complicated network architecture, with servers distributed inside various ISPs; why do you?
public class MyClass {
  private Dep myDep;

  // The class looks up (or constructs) its own dependency.
  MyClass() {
    this.myDep = getDep();
  }
}

into this:

public class MyClass {
  private Dep myDep;

  // The dependency is handed in by whoever constructs the object.
  MyClass(Dep dep) {
    this.myDep = dep;
  }
}
Why this needs an entire framework is beyond me. It feels like all they do is convert explicit code into boilerplate which then becomes much harder to reason about.
I think the theory is when you want to change the dep you can do it in a config file instead of in the code. But I totally agree, I don't think the value is worth the cost.
> Why have a single database when we can silo our data across 200 mini databases?
First, many orgs using, say, Kubernetes still run their database separately in a traditional way with replicas etc. Second, those who do run their DB inside Kubernetes, probably using an operator, gain scalability and additional resiliency (it also depends on the DB, but these days even PostgreSQL operator works quite reliably).
Scalability and stability.
You need to process more requests? Launch more virtual machines, easy.
Your microservice is shit and crashes every second? Don't care, it will be relaunched automatically.
Companies that repeatedly fail to detect and solve this type of problem using automated testing and QA are exactly the companies that lack the sophistication to do distributed microservice architecture.
Learn to do proper CI/CD, end-to-end testing, and logging/metrics on your monolith before you decide to transition to microservices.
Disagree. At well-run companies with highly available services that you have certainly heard of I have routinely discovered microservice backends that are just crashing all over the place with no consequences to the user whatsoever. You can never say that about crashy monoliths.
Every single endpoint in a monolith can be deployed and scaled independently. In such a deployment the rest of the endpoints are just dead code, costing nothing except a larger image.
Also, in many cases work should not be performed synchronously in response to HTTP requests but in a background queue, to keep your service robust and responsive. When taking a new order you should just place it in some queue and respond ASAP, so there is zero chance you miss a customer order because of some error. In that case there is no difference in scaling a monolith or a microservice, as you can independently deploy 25 consumers for module X events or 10 consumers for module Y events and so on.
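A minimal sketch of that shape, using an in-memory queue purely to show the pattern (in real life this would be SQS/RabbitMQ/Kafka/etc., and the consumer would run in its own process):

  import java.util.concurrent.BlockingQueue;
  import java.util.concurrent.LinkedBlockingQueue;

  public class OrderIntake {
    // Stand-in for a real broker queue (SQS, RabbitMQ, Kafka, ...).
    static final BlockingQueue<String> ORDER_QUEUE = new LinkedBlockingQueue<>();

    // Request handler: enqueue and acknowledge immediately, do the heavy work elsewhere.
    static String handleNewOrder(String orderJson) throws InterruptedException {
      ORDER_QUEUE.put(orderJson);
      return "202 Accepted";
    }

    // Consumer loop: deploy as many of these as module X needs, independently of the web tier.
    public static void main(String[] args) throws InterruptedException {
      while (true) {
        String order = ORDER_QUEUE.take();
        System.out.println("processing order: " + order); // payment, stock check, email, ...
      }
    }
  }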
Could just as well be at no additional cost. If your monolith separates services somewhat cleanly then you can run them on every node and not even touch the part of the codebase that's not related to the current task.
I wasn't talking about the cost of touching the codebase, but rather running a monolith horizontally with an overhead in terms of CPU-, disk- and RAM-usage.
For big companies it might be a minuscule difference, but for smaller companies it can be a deal-breaker.
Running costs or development costs? If you have a low-request / high complexity persistent application you may want to optimize for maintainability. Having all the code in one place _can_ make things easier to figure out in the long term.
> Having all the code in one place _can_ make things easier to figure out in the long term.
My experience is the exact opposite. A huge monolith makes it harder for developers - especially new ones - to get a grasp of how everything is connected. A separation of concern into micro services is sometimes a good solution.
You mean the few extra megabytes of RAM for some extra compiled code on each node?
If you refer to databases: splitting one service into two services can at most give you a 2x scale-up potential (usually a lot less), and the effect diminishes for the 3rd, 4th service. Mathematically and logically. Splitting services = vertical scaling.
If you want 100x, 1000x scaling you need to invest in true horizontal scaling anyway, and that works pretty much the same way for monoliths as microservices.
> You mean the few extra megabytes of RAM for some extra compiled code on each node?
Few extra?
Have you not considered applications where you need to spin up/down _extremely_ resource intensive instances?
A year++ ago I worked on a health-related application which used a system for processing X-ray images. During typical work hours (6 am-6 pm) it required a _huge_ amount of CPU and RAM. By shifting those specific services out to specialised instances, we saw a $50K/month saving.
The rest of the application is a lot more lightweight, but of course has to run 24/7, but being able to spin up/down required instances, and just pay for what you use, is a huge benefit for a lot of companies and organisations.
A monolith is not a giant blob that has to run fully on one instance or else nothing works. Although I'm sure plenty of horrible enterprise software works that way, it is not a property of a monolith.
I maintain a SaaS written as a monolith and I can absolutely spin instances that only load one part of the code and do a single thing, for example some instances only handle MQTT messages while others only serve HTTP endpoints. That does not make it microservices, it is all one code base sharing one database.
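For what it's worth, that kind of setup can be as simple as a startup switch in a single codebase. The names below are invented for illustration, not how the SaaS above necessarily does it:

  public class Main {
    public static void main(String[] args) {
      // One artifact, one codebase; each instance only starts the role it is told to.
      String role = System.getenv().getOrDefault("APP_ROLE", "web");
      switch (role) {
        case "web":  startHttpServer();   break;  // only serves HTTP endpoints
        case "mqtt": startMqttConsumer(); break;  // only handles MQTT messages
        case "all":  startHttpServer(); startMqttConsumer(); break;  // small/dev deployments
        default: throw new IllegalArgumentException("unknown APP_ROLE: " + role);
      }
    }

    static void startHttpServer()   { /* bind whatever web framework the app uses */ }
    static void startMqttConsumer() { /* subscribe to the broker and process messages */ }
  }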
Overhead is way worse with microservices once you talk about network latency, database cost, infra for logging and monitoring, labor to manage that, and developer cost. Way, way, way, more than spinning up ten more identical webapps.
We learned in the 90s not to distribute the objects and instead just spin up more copies of the whole application. And if scalability is the only point, this is still valid.
It's not so much that we learned it in the 90s; it's that it became true around the 90s. It used to be false, but computers became faster, memory more plentiful, and the fundamental physical limitations of networks didn't change a bit (well, fundamental physical limitations never do).
It has become more and more true each passing year.
What I've learned since the 90s is that software engineers don't learn from previous years... let alone from previous decades.
This "microservices are better" argument was made in the 90s, and the top dog is still the Linux kernel (the one that was on the monolith side of that debate).
So yeah - microservice advocates haven't learned anything since the 90s, or from the SOA era, etc... (but I suspect they were in secondary/high school then)
An anecdote: when I joined, my current company had more code to orchestrate services than actual business-value-generating code in the services. We still produce an inordinate amount of code that exists just to facilitate the microservice architecture. (It's also untested code, because we just don't have the money to spend time on testing it.)
While I agree with you, the good part is that each generation is (slightly or significantly) better than the previous one. I remember using static generation in the 90s, but the tools we have now are more powerful. Kubernetes is a great piece of software: it is well-designed, easy to work with (at least the managed versions that most companies use; administering the cluster yourself is another thing), and it applies a set of simple concepts in a consistent way that makes scaling a breeze. So in the long run we benefit from the hype, although I can hardly understand folks running k8s for a 3-container setup.
K8s isn't so bad if you use a managed service like Azure's. I wouldn't run my own cluster. But k8s does make your life easier; it's basically supervisord on steroids with regards to features. Tho networking and storage are a nasty thing.
Where did I say this? All I said is that it's usually effective to spin up more instances of the system - a conclusion we arrived at long ago.
Regarding your point, most other disciplines, such as civil engineering, chemistry or biology, gain knowledge over time, and we don't discard results from the past with this kind of strawman argument that there has supposedly been nothing to learn since. If things were like that, any progress would be impossible.
The extremely oversimplified reasoning backing your apparent hate for microservices makes you about as bad as the people you’re arguing against. The OG proponents of microservices acknowledge that there’s a world of nuance and trade-offs. You’re really just arguing against the outermost few circles of cargo-culting magpie architects.
Someone unfamiliar with the concept of cost and benefit shouldn’t be making architectural decisions in the first place.
I've found the nuance is in the middle somewhere. We've all seen the madness with web scale infrastructure for a personal blog, but one gigantic compilation unit will eventually bite you in the ass too (i.e. rebuilds get very slow).
What you probably want is something where everything lives in the same repository, but as separate modules/dlls which can be included in some common execution framework the team previously agreed upon.
If you have something approximating microservices-as-dlls, then you are essentially eating cake while having cake when you really think about it. Function calls are still direct (sometimes even inlined), but you could quickly take that same DLL and wrap it with a web server and put it on its own box if needed.
Establishing clear compilation unit boundaries without involving network calls is the best path for us, and I suspect it's the best path for anyone to start with. We take this "don't involve the network" philosophy into our persistence layer too. SQLite is much easier to manage compared to the alternatives.
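A sketch of what that looks like in practice (names invented for illustration): the module boundary is an ordinary interface, the default implementation is a direct in-process call, and the remote flavour is just another implementation you only write if the module ever needs its own box.

  // Callers depend on the interface and never know whether the implementation
  // lives in-process or behind a web server.
  public interface InvoiceModule {
    String renderInvoice(String orderId);
  }

  // Default: plain in-process implementation, direct (even inlinable) calls.
  class LocalInvoiceModule implements InvoiceModule {
    public String renderInvoice(String orderId) {
      return "<invoice for " + orderId + ">";
    }
  }

  // Later, if needed: the same interface backed by an HTTP call to the module
  // now running behind its own web server. Nothing upstream has to change.
  class RemoteInvoiceModule implements InvoiceModule {
    private final java.net.http.HttpClient http = java.net.http.HttpClient.newHttpClient();
    private final String baseUrl;

    RemoteInvoiceModule(String baseUrl) { this.baseUrl = baseUrl; }

    public String renderInvoice(String orderId) {
      try {
        var request = java.net.http.HttpRequest.newBuilder(
            java.net.URI.create(baseUrl + "/invoices/" + orderId)).GET().build();
        return http.send(request, java.net.http.HttpResponse.BodyHandlers.ofString()).body();
      } catch (Exception e) {
        throw new RuntimeException(e);
      }
    }
  }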
Start with a monolith, that will take you VERY far.
When the organization gets big enough (AND ONLY THEN), add an additional domain oriented service. FULLY implement deployment and infra. Only once you do that can you think about adding another (using the pattern you just built out).
Micro-monoliths.
Organizations explode the number of services, half ass the infrastructure (the hard part of microservices), and then crumble under the organizational complexity.
My org did this, and it ended in disaster because that monolith was shoving everything in the world into a single database the whole time. This created horrible APIs: one team writing into tables that another team is expected to read. They tried to split this later on and created an even bigger mess.
There are reasonable places before then to stop and say, we need to keep X new features in a separate service.
That’s still not a microservices vs monolith problem. That’s a bad data stewardship problem.
If your data is a mess before you decided to break out functionality, then it’s gonna be hard regardless. You should have good schemas and db organization always.
This issue is about microservices because the one alternative to this DB-level data sharing is having proper RPCs (or more specialized things like pubsub in some cases) between cleanly separated services. If logically separate things are sharing data through the DB itself, you will get a mess even if you're very careful, which they were.
You should have a well curated DB regardless of whether one or many domain specific services sit in front of it. That is what would enable you to break a monolith up. Good code and architecture is the result of discipline. Your data is your most valuable asset. What you're describing is a mess.
You mentioned RPC's. Your service interface has nothing to do with the data organization.
But if you think microservices is what would work for you, then you should pursue that next time. My original post was a path to a multiple service architecture that approached service expansion with technical rhyme and reason. It really wasn't aimed at your organizations messy database.
> You mentioned RPC's. Your service interface has nothing to do with the data organization.
I find data organization to be a direct consequence of service interface. If two things aren't talking over RPCs or pubsub, they're talking through the database. It's not just my org. Pretty common for monoliths to end up with an obscene reliance on a single DB and start looking for a huge machine to support it.
There isn't a clear definition of what a separate "service" is, but I think it's fair to say that separate services won't have identical consistent views of the same DB. They'll be more independent than that, each with authority only over its own data, and use each other's data in an eventually-consistent manner through a well-abstracted API. And that does bring some overhead.
> I find data organization to be a direct consequence of service interface.
Then, you will not like microservices. Microservices make it harder, not easier, to organize data.
You have to worry about problems such as eventual consistency, and figure out how to join data across multiple data sources.
It compounds the problem significantly, and the only thing it gives you is that it forces you to silo data. That can be a good thing, but it doesn't solve the data organization problem.
Usually the service structure will mirror the team structure, e.g. one team of 2-6 SWEs per service. I've enjoyed doing things this way now that we're out of the monolith.
BTW, monolith DB has its own form of eventual consistency. Process A puts data into DB, process B picks it up later and affects the DB. There's no reasonable way that everything in such a multi-use DB is always logically in agreement. You're just guaranteeing that B sees what A writes immediately, which comes at a cost.
> the only thing it gives you is that it forces you to silo data
It gives you a smaller blast radius when something goes wrong, avoids over-stressing a single DB, alleviates single points of human dependency like the DB curator, lets you scale separate pieces independently, and yes forces you to silo the data. There are several good reasons larger orgs have been doing things this way for a long time.
Sure, but likely that team is not working on a single service when people say microservices. That could be SOA, which is more my preference, but definitely not a microservice going by the most popular definition which is "small enough to rewrite."
> BTW, monolith DB has its own form of eventual consistency.
Sure, on very large systems. Microservices always have this - even tiny systems.
> It gives you a smaller blast radius when something goes wrong,
At the cost of often having a much harder time fixing things when they do go wrong :)
> avoids over-stressing a single DB,
Monoliths can use as many databases as they want! And you can use a single DB with a microservice architecture across many microservices (tho I think this is an anti-pattern, teams often do it).
> alleviates single points of human dependency like the DB curator,
I haven't been at a job that has a DB curator, but it compounds the "bob wrote those ten services in rust, who... who knows rust? anyone?" issues :)
> lets you scale separate pieces independently,
You can do this with a monolith with many technologies, though seems easier to do for a microservice as a general rule. But, you have to need that scale first!
> and yes forces you to silo the data. There are several good reasons larger orgs have been doing things this way for a long time.
> BTW, monolith DB has its own form of eventual consistency. Process A puts data into DB, process B picks it up later and affects the DB. There's no reasonable way that everything in such a multi-use DB is always logically in agreement. You're just guaranteeing that B sees what A writes immediately, which comes at a cost
Transactions have existed for years, and DB execution engines have been able to concurrently handle non-dependent transactions since the mid 90's.
> It gives you a smaller blast radius when something goes wrong, avoids over-stressing a single DB, alleviates single points of human dependency like the DB curator, lets you scale separate pieces independently, and yes forces you to silo the data. There are several good reasons larger orgs have been doing things this way for a long time.
Vertical scaling of DBs is a non-issue. If your DB is complex, get a DBA...
Have you actually done any of this in production before? It doesn't seem like it honestly.
We went from "services should be no more than 100 lines of code" to "testing and maintaining thousands of interconnected microservices is TERRIBLE IT TURNS OUT".
The secret here is that all simple answers are wrong. Your services are too small and too big at the same time. Finding balance is hard. Zen Buddhists call it the "Middle Way".
The best design is always an uneasy intersection of many approaches and concerns, and also it's the concept of what you decide NOT to do, so you have more resource to focus on what TO DO.
Also we keep overanalyzing how we do services in isolation, when the complexity comes not from each of them alone, but how they interact. To solve this complexity you need clear, aligned flows. More like laminar flow. Less like turbulence.
Too many microservice advocates argue about the simplicity of their precious tiny codebase, meanwhile there is always a gigantic ball of mud that does all of the orchestration... and most often very badly.
Trying to keep it simple (to a degree). First of all, I would say that it's OK to still have a monolith, and even to build a whole product as a monolith and break it down as the need for microservices arises.
My understanding of whether or not you should take a monolith and cut it into pieces is that it depends on what you want to achieve.
Every monolith is specific, or are they?
Without knowing what your product does, I bet you have an API, a UI layer or two, some business logic, and maybe throw in emailing or a payment service. Well, guess what? We all have those!
How to decide.
For myself, I’ve tried to boil it down to 3 questions:
1. Will I need to scale this part of the monolith more (often) than others?
2. Does this part of the monolith handle an entire process on its own from start to finish?
3. Does this part of the monolith require a lot of different code or resources than the other parts?
The questions are simple. They aren't philosophical. They don’t have a hidden meaning. Rather, a series of simple booleans. If something needs to be a microservice it'll most likely hit 3 out of 3 of those.
The discussion of Microservices vs. Monolith feels a lot like NoSQL vs. Relational one. That is to say, Microservices are a bad idea right up to the point where monoliths won't work.
Most services can be successfully implemented with monoliths (and relational DBs for that matter). Only when that solution doesn't scale anymore, that's when microservices come in handy. Particularly when a large service has core functionality that must always run and secondary functionality that can tolerate higher rates of failure.
I think the problem with microservices is the same as the problem with OO programming (and I say this as a pure OO Rubyist) ... what you are doing is shifting the complexity out of your code, where at least it's under source control and (hopefully) readable. And moving it into the order and timing of the interactions between your services/objects - which isn't readable unless you start hunting through log files.
I've been on both sides of this debate. I've seen codebases from teams that want monorepos and macroservices that fit all of the following criteria:
- The codebase had three primary responsibilities
- None of those functions overlapped in functionality or shared any significant code
- They were all written by different people with subtly different styles
- They used infrastructure code for talking to third party services in subtly different ways that made upgrading dependencies difficult
In theory, they were all within the same business domain, so the types that think one business domain equals one service clumped them together. This made little sense.
On the opposite side, I've seen microservices where all the little services depended on one another in complicated ways that made them all a brittle mess.
Finding the right solution to each problem is always the real challenge.
Use the best tool for the job. It's stupid to think of monoliths vs microservices. You can use both if the problem requires it.
For example I'm currently working on an audio hosting service. The main app is a monolith where 90% of the code resides but there are a couple of ancillary services.
Audio encoding (which is heavily CPU bound) is a serverless microservice that can scale up and down as needed. Users don't upload content constantly, but when they do, you want to be able to encode stuff concurrently without blocking the main app. Audio streaming is also a serverless microservice because hey for every user uploading content you can have 1000x consumers (or more).
I think your position is basically the same position that most people who advocate for monoliths (which, btw, is a bit of a strawman framing created by microservice architecture advocates in the first place) hold. People are objecting to service/network boundaries as a default domain-decomposition tool and arguing that they should instead be used when there are clear and immediate technical advantages to doing so.
This argument is brought forth a lot, but it misses the point. Often people think microservices are the right tool for the job, but they vastly underestimate the complexity that it entails.
> Often people think microservices are the right tool for the job, but they vastly underestimate the complexity that it entails.
And that goes 10x if deploying, monitoring, or debugging said services is Someone Else's Problem ™ I'm also cognizant that such a situation is just as much an organizational/people problem as the rest of this debate, but I have a sore spot around people making decisions where they don't have to suffer the consequences from them
What a waste of time. Anyone who's not an architect or developer, but regularly works with architects and developers, intuitively reaches these conclusions. And this has been going on and on for almost ten years.
When you watch this from the outside (e.g., let's say, as a consultant called to advise on a very specific aspect of software architectures) it feels like they all follow a secret agreement that instructs them to go full microservices route. Questioning this, even as a consultant paid to do exactly that, is considered unacceptable. It's like questioning your client's religious or political beliefs.
I observe a similar trend with CIOs hired to help institutions digitally transform themselves. Many operate on "innovative" reasoning that consists of going full cloud and laying off IT personnel, which inevitably leads to increased operating and maintenance costs without exploiting the actual benefits of cloud computing. But by then it's already too late: Mr/Mrs CIO has already left the org when this happens. And self-congratulating words are already published in their LinkedIn profile.
I often have two thoughts when I attend a pre-sales meeting with a prospect customer that shows us a beautiful microservices architecture:
1. Oh my...
2. Shut up and just take the money they are throwing at your face.
Having run k8s and _classical_ microservices before, I am now in a much happier place just using the AWS serverless suite (lambda, API GW, CF, SNS, SQS, eventbridge, dynamodb, etc).
Is my setup "microservices"? Well, maybe, depending on your definition, but, in truth I don't really care - it works pretty well.
We also do "DDD" with it and have multiple AWS accounts with these marking the domain borders. Comms between the accounts is via eventbridge or (very rarely) inter-account API invocation.
This allows many of the benefits of microservices, without the pain of dealing with k8s. Clean separation of domains, reduced cognitive loads for teams, each of which looks after all the stuff in a single account (so-called feature teams, where each team designs/manages and runs everything in that account/domain).
The hard bit was defining the domain borders, and the inter-domain protocols/interactions, but, once this is well defined, things work pretty well.
Having come from a k8s world, this setup feels so much nicer, and lighter, and easier to get stuff built in a both fast and performant way.
It would be interesting to see trends away from _classical_ "cloud-native" (k8s) setups with microservices, to true serverless setups. I wonder how much of k8s's lunch serverless has managed to eat so far.
We were really happy with Google App Engine for the same reason. It's a product that feels like it was "10 years too early".
Like you I feel the whole microservice debate turns into a "depending on definition" kind of thing and feels odd. Write stateless code that can infinitely scale -- is it 1 or 100 services? Just depends on the perspective really...
it's also worth noting that, initially, we built everything in one account, and had 1 team doing all of it -- the cognitive load became too high at one point, and, only then, did we break out the "DDD" and split the concerns across many teams.
If you can't get your monolith right you probably won't get your microservices right. Microservices do come with additional overhead in terms of infrastructure, integrations, and so on. There's a concept of microservices readiness (e.g. https://learn.microsoft.com/en-us/azure/architecture/guide/t...). Many organisations aren't ready to embrace microservices, and if they get into microservices before they're ready, then it's a lot of pain. There's also this misconception that microservices must be nano-services. But that's not a problem with microservices architecture, it's a problem of using microservices anti-patterns. As with everything in technology, there's no universally unique solution to all problems - everything's context-specific.
I think the key point in the article is the conclusion: try building things more as modules that can be easily split off into separately running server applications as needed later.
So in practice that could look like: the interface to your "module" should use parameter objects [1] for function calls, and build in the assumption that most information retrieval or processing requests are async. Then, when the time comes, swap out the local version of the module for the client-stub version.
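A minimal sketch of that shape, assuming TypeScript and invented names: the module boundary is an async interface taking parameter objects, so the in-process implementation and a later client stub are interchangeable.

```typescript
// Hypothetical module boundary: async methods that take parameter objects,
// so callers never care whether the work happens in-process or over the network.
interface GetInvoiceRequest { invoiceId: string; includeLineItems?: boolean; }
interface Invoice { invoiceId: string; total: number; lineItems?: string[]; }

interface BillingModule {
  getInvoice(req: GetInvoiceRequest): Promise<Invoice>;
}

// Today: the "module" lives in the same process.
class LocalBillingModule implements BillingModule {
  async getInvoice(req: GetInvoiceRequest): Promise<Invoice> {
    return { invoiceId: req.invoiceId, total: 42 }; // pretend DB lookup
  }
}

// Later: swap in a stub that calls the extracted service; callers don't change.
class RemoteBillingStub implements BillingModule {
  constructor(private baseUrl: string) {}
  async getInvoice(req: GetInvoiceRequest): Promise<Invoice> {
    const res = await fetch(`${this.baseUrl}/invoices/${req.invoiceId}`);
    return (await res.json()) as Invoice;
  }
}
```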
To avoid manually writing the serialization, those parameter objects should have been code generated already. Although the goals of something like tRPC [2] are admirable, having to manually check inputs with typeof [3] honestly shouldn't be necessary. And allowing non-nullable fields may make field deprecation extremely difficult or impossible. Guess what? If you have any native mobile clients they may not upgrade for years! OpenAPI also exists, but seems a bit too verbose to read, or author manually.
So what would I suggest? Just use proto3/gRPC [4]. It defines a JSON encoding in case you have clients that can't talk native Protobuf, or you don't want bikeshedding over what type-casing to use for JSON serialization. If your cloud provider doesn't do it for you already, just drop an Envoy with the gRPC-JSON transcoder in front [5]. If you can't prove you'd have an actual performance problem due to gRPC/Protobuf, then it probably isn't worth the effort to use a less battle-tested messaging library.
So back to the main topic... it doesn't have to be hard to do microservices when the time comes, just make sure your codebase is easier to convert.
...and if you've had a team of developers eagerly using microservices for pretty much everything in production code for any significant period of time, microservices become like a big ball of mud held together by a lot of hair. I shudder in horror when I think about how impossible it will be to maintain all that code in the future.
Microservices can be useful when a single team can't handle a service anymore. You should have as few services as possible, not as many as possible (e.g. see Team Topologies).
Otherwise, use modules, or package by feature, to prevent the big ball of mud problem. That’s useful within a service as well.
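For what it's worth, a small sketch of what package-by-feature can look like inside one deployable (directory names and exports are made up): each feature keeps its routes, logic, and storage together and exposes only a narrow index.

```typescript
// Hypothetical package-by-feature layout inside a single service:
//
//   src/billing/   -> routes.ts, service.ts, repository.ts, index.ts
//   src/orders/    -> routes.ts, service.ts, repository.ts, index.ts
//
// Other features import only what a feature's index.ts re-exports, never its internals.

// src/billing/index.ts (the feature's public surface)
export interface Invoice {
  id: string;
  totalCents: number;
}

export async function getInvoice(id: string): Promise<Invoice> {
  // Internally this would call billing's own repository; kept trivial here.
  return { id, totalCents: 0 };
}
```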
Microservices are analogous to classes from OOP: an attempt at modularization by bundling function, internal state and side-effects together. So it suffers from the same challenges. It’s worse in fact, as the message passing now involves unreliable I/O, and the internal state is also shared global state.
The main reason it exists is the availability of cheap commodity hardware for servers (and later the cloud), which breaks the model of programming for vertical scaling afforded by the mainframe model. It's out of necessity to scale cheaply that this architecture is followed - the rationalizations that it's a superior way for teams to work together, or that it improves reliability, are debatable.
This article doesn't provide any insight into when monoliths might be a better choice than microservices. It just says that if you do microservices you'll make awful mistakes and end up with a bad result.
The word "macroservices" sort of sums up the whole conversation to me. People are so convinced that microservices can't be done right that when they start to do microservices right, they think they need to invent a new name for it.
Everything that can be done successfully can also be done poorly and unsuccessfully. What's the word for walking successfully? Walking. Can you imagine what babies would say about walking if they could blog?
"Walking is an extremely popular and hyped activity that has achieved an impressive amount of mindshare in the past several months. Caregivers appear to be highly invested in walking as a key to unlock unprecedented mobility. However, if you look past the hype to the reality, walking is mostly about falling down, hitting your head, running into things, and crying because you suddenly realized you can't see your caregiver. At DoodooHeadCorp, we have developed a new approach that delivers on the promise of walking, without miring you in all of its failures. We call it realwalking. Realwalking consists of moving from place to place while propelling yourself forward in a dynamically balanced bipedal fashion. Note that by definition realwalking involves moving from place to place, a crucial distinction that guides you away from one of the biggest pitfalls (no pun intended) of walking. Traditional approaches to walking have often resulted in babies standing up and immediately falling backwards into the same place where they started. Horizontal displacement is crucial and too often ignored by babies who have gone down the rabbit hole of walking. This is where realwalking innovates, by leveraging the power of dynamically balanced bipedalism."
Just freakin' say it's hard, you should be prepared to learn along the way, and at every moment your ambitions should be scoped to your capabilities. There's no alternative that can claim differently.
The factors you should consider when considering whether you will be successful with microservices seem to be outside the scope of this blog post, so I won't address them either.
> At the point in time when you slice the domains you might not know all the product requirements. Probably a feature will arise which forces you to tangle two services together - now you have one domain. But distributed. Urgs.
Are there any architectures which would allow you to monolithize services like this? Changing product requirements sometimes mean you've just arrived at a better abstraction, which may need you to combine services directly.
Of course they are hard. Instead of a well-modularized application where applicable, let's create multiple distributed programs that need to communicate across an unreliable network.
Why? Because it's fashionable, others are doing it, it's a safe answer in an interview, it's a buzzword (like "agile") that management loves, and it seems that the only way many can introduce modularity is by making each module a separate program.
> Large companies like Uber learned this the hard way:
Uber is really not a good example. Many Uber engineers created services for the sake of claiming their territories or out of sheer stupidity. On the other hand, managing services at Netflix was a non-event, and people really exercised their judgement carefully when it came to creating a new service. Yeah, I'm saying Uber's problem was cultural and organizational, not technical.
I'd rather work on a good monolith than a bad microservice and vice versa.
The problem is, working on a bad monolith is almost impossible. I'd rather work on a bad microservice than a bad monolith. Now... tbh this could be survivorship bias where I've never had to help with a well-running monolith, fair... but at the end of the day, I find microservices much easier to manage scope, change and scale.
My Ingress yaml uses just 50 lines to slice all the domains I need, not hard at all. Even if I don't know what to build or do, I can spin up ping-pong micro-services in minutes just to let them play and have fun. All this complexity is abstracted into neat things now, like Ingresses, Services, Deployments and so on.
There is nothing about microservices that require them to be distributed over the network, it's just something that they enable.
Most of the pain points I hear about microservices are to do with versioning and deploying them independently. But again, this is just enhanced flexibility, you don't have to use it.
Microservices are excellent when you have a self-contained set of APIs that need to be updated independently of other code. You must adhere to a contract, publish the contract and provide backwards compatibility for all existing clients.
Perfect example is a Payment Service. You have API tiers, client tiers, backend service tiers and likely customer service tiers hitting it and getting payment histories, issuing refunds, and hopefully requesting payment transactions. This code will likely change constantly and you want to deploy it on your own schedule versus having to match the schedules of all of the clients.
Other candidates might be an image upload service that crops, resizes and creates copies for CDN origin calls or a fraud scanning API that scores text submissions.
You definitely want to keep the number of microservices SMALL. At some threshold the number of services becomes unmanageable because you have to support all the old interface versions.
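To make the contract/backwards-compatibility point concrete, here's a hedged sketch of a published payment-service surface (the types and fields are invented): additive changes stay optional within a version, and breaking changes get a new version that old clients can keep ignoring.

```typescript
// Hypothetical published contract for a payment service. Clients code against
// this interface; the service promises not to break it within v1.
export interface Payment {
  paymentId: string;
  amountCents: number;
  capturedAt: string;        // ISO-8601 timestamp
}

export interface RefundRequest {
  paymentId: string;
  amountCents: number;
  reason?: string;           // added later; optional, so existing clients still conform
}

export interface RefundResponse {
  refundId: string;
  status: "pending" | "completed" | "failed";
}

export interface PaymentHistoryQuery {
  customerId: string;
  limit?: number;
}

// The v1 surface used by API tiers, client tiers, backend services and CS tooling.
export interface PaymentServiceV1 {
  issueRefund(req: RefundRequest): Promise<RefundResponse>;
  getPaymentHistory(query: PaymentHistoryQuery): Promise<Payment[]>;
}
```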
But then you may end up asking: is your whole system testable, and how do you do good integration tests?
Right now I work in a pure microservice architecture, where there are no proper system or integration tests. And we had minor bug fixes break things spectacularly.
Sounds like your CTO and engineering leadership aren't owning the architectural issues that are preventing integration tests. I'm not saying it's easy, but there's a point where there are no excuses allowed from the owners.
The idea that an org that has a hard time maintaining a properly structured monolith will solve its problems by shifting to building a distributed system was always strange to me.
The distributed system has benefits that the monolith doesn't, like scalability and resilience to failures, though it opens the door to new kinds of failures.
All of these articles against microservices are so annoying because I rarely see a good argument against microservices. Often the arguments are purely anecdotal and without substance.
There are only two good arguments I can make against microservices. First, it's not the right architecture choice for all projects. Second, microservices don't magically solve the problem of complexity.
But I can name countless benefits of a microservices architecture. Individual microservices are far less complex than monoliths, which allows even a single developer to work on a microservice and run and test it on their local machine, whereas monoliths may require dedicated test servers because they're too large and complex to run on a developer's machine. Microservices simplify dependencies because each microservice has exactly the dependencies it needs, as opposed to a monolith that becomes a huge tangled mess of dependencies locked to specific versions, with incredibly complex environments to ensure all these dependencies don't interfere with one another, where any little change or misconfiguration will bring down the entire service. Microservices allow for more rapid development, since you're only rebuilding and testing a small part of the entire architecture without interfering with others' work, allowing many independent teams and individual developers to work in parallel. Without elaborating too much, I'll reiterate what is said many times over: microservices are generally easier to scale horizontally with less downtime.
A common misconception I see is microservices have to be simple and tiny. But your database system can be considered a microservice, and that's hardly simple or tiny. The point is the database serves a single purpose in the overall architecture of the application, and it can be managed independently of all the other microservices. What you shouldn't do is, say, combine your database and message broker into a single microservice.
Yes, microservices are hard, and there's so many ways to do it wrong. But you know what? Software Engineering is a hard problem in general, and it's unlikely there's ever going to be a great solution that magically solves all our problems, only incremental improvements that allow us to manage ever greater complexity.
"Individual microservices are less complex than monoliths" ... but the sum of microservices is still as complex as the monolith. (But better hope you got your division of domains and services correct!)
"A single developer and work and test it on their local machine" ... OK I've never encountered a monolith that couldn't be run and developed locally. I don't say it was never a problem, but that there must have been other solutions to it than microservices. Surely today's laptops can easily compile and run a million lines of code...
And as the counterpoint, I've encountered cases of having to orchestrate 15 different micro-services locally to do anything USEFUL and non-contrived. Something to help you gain some understanding of how the WHOLE system works, instead of relying on other people with the full overview to tell you how you can evolve your small piece of it.
"rapid development" -- fix your build and test caching
"only ... testing a small part of the entire architecture" -- again, with a monolith surely you can focus on a single test function / sub-component and figure out how that works. With micro-services it gets a lot harder to do integration testing across the whole system, and people tend to not do it or argue why it "isn't needed". But keep in mind that also with monoliths it is entirely possible to simple delete/not write the system integration tests. It is just that people tend to want to have them (for good reasons), but with microservices there's a much higher investment needed to get them (and they don't execute any faster, but slower, if you invest in them).
This comes down to an argument of "we make such perfect code we don't need tests"... just because it is network calls and APIs doesn't mean that no one ever messes up and makes a backwards-incompatible change.
Simply dropping system integration tests and discovering bugs like that in production is an option for monoliths too.
The argument against microservices is simple - there should be a clear use case for a microservice. Don't go - "we do everything as a microservice". Almost 20 years ago we had "we do everything using SOA"... Same thing.
And your own comment shows that microservice advocacy is inconsistent:
> Individual microservices are far less complex than monoliths, which allows even a single developer to work on a microservice, run and test it on their local machine
> database system can be considered a microservice, and that's hardly simple or tiny
A DBMS is as much a microservice as many other microservices are. You cannot just replace a DBMS with another transparently, just like in many cases you cannot just deploy a different version of a microservice without coordination.
Linux distributions are essentially a bunch of microservices running together (managed by e.g. systemd). It works well most of the time, of course until it doesn't. Not sure what a better approach would be, though.
In my experience, the "Citadel pattern" is a good alternative to microservices for small to medium teams. I have seen it emerge as a natural evolution of a monolith in several places where I worked, where it served us well.
I keep reading these microservice essays where the author is lost and I really feel their pain. In that spirit, let me try to make things as simple as possible.
(True) Microservices have no dependencies on anything but unstructured text data. They do not couple to a database, the business's understanding of what they're doing, a domain model, or anything else. They perform a simple, idempotent business task that can never fail, although it can create various error chains.
Programming at scale is tough. There's no free ride here. All you've done is turn the traditional model of coding "inside-out" and now you've got a ton of work doing all of the wiring.
But if you keep your microservice doing one simple, useful business thing, then all of that inside-out work becomes business decisions. What do we do if the sign-up fails? How do we move IMPORTANT-THING to those other guys to use? You still have business coupling: things change and you have to adapt. But you're not coupled at the _coding_ level. If there's any magic, that's it. Your business should be able to wander all over the place and your microservices hold up just fine. The old way, where we may have coupled every little business need or want with every piece of code in the system, was not only a pain; more importantly, it was impossible to keep organized in any one person's head and aligned with everyone else at scale.
What I see is a lot of drift. Folks start coupling things up, perhaps by trying to create one domain model to rule them all. They start creating microservices to do _system_ activities, like flushing a cache. There should be a one-to-one correspondence between your microservices and interesting business conversations. That's a hella discipline to maintain. It may force a lot of conversations you thought you could avoid by hiding them in a class hierarchy somewhere. Once you start drifting, pretty soon you're writing essays like this. And then here we are.
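If I'm reading this right, a sketch of such a service might look like the following (the message format, dedup mechanism, and task are my own illustration, not the commenter's design): one plain-text input, one business task, and idempotent handling of replays.

```typescript
// Hypothetical single-purpose handler in the spirit of the comment above:
// one business task ("send the welcome email"), driven by a plain text message,
// idempotent because replays of the same message are no-ops.
const alreadyHandled = new Set<string>(); // stand-in for a durable dedup store

export async function handleSignupCompleted(message: string): Promise<void> {
  // message is plain text, e.g. "signup-1234 alice@example.com"
  const [signupId, email] = message.trim().split(/\s+/);
  if (!signupId || !email) return;          // malformed input: report it, don't crash

  if (alreadyHandled.has(signupId)) return; // idempotent: replaying changes nothing

  await sendWelcomeEmail(email);            // the one business task this service owns
  alreadyHandled.add(signupId);
}

async function sendWelcomeEmail(email: string): Promise<void> {
  console.log(`welcome email queued for ${email}`); // placeholder mail call
}
```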
This doesn't make much sense. "Interesting business conversations" should map 1-to-1 with microservices but also each microservice must perform a "simple, idempotent business task" but also microservices can't be coupled to a database? Okay, enjoy developing your business with no data persistence whatsoever and where your architectural principles forbid you from ever so much as sending an email.