So many companies shoot themselves in the foot chasing Google/Meta/Netflix backend architectures. If your developer count is in the 10s, you are committing professional negligence chasing a microservices architecture.
I count myself among those who are almost irrationally opposed to microservice anything.
However, in my quest for the holy monolith I came across small teams who, against all odds, were adequately functional despite this heretical paradigm. They were definitely building a distributed monolith, which was a monster to run locally. But it worked. They were shipping, and most importantly it matched their culture of small, isolated islands of functional specialists with as little communication between them as possible.
I will refrain from commenting on the virtue of communicating as little as possible, but sometimes you've just got to make the best of a situation.
It can be the cause or the consequence. If your team peels a microservice out of a monolith and assigns a couple of developers to the newly-created service, those developers will be heads down, focused on their own project.
If management does not go out of its way to rotate people around projects, you'll end up with an ad-hoc organization formed around the service architecture.
I don't think you can, actually, from the level of a team. If you're designing a bridge, you can work with gravity or against it. But if you're a car on the bridge, you can't work with or against it; the bridge will either hold you up or it won't, and your car will either go or it won't.
Conway's Law is about how your organization's communication structures work, and how that inevitably leads to specific outcomes. That outcome is an eventuality for the team, because the team can't change the communication structure. They can design the best damn microservice in the world, and the resulting use of it could still be shit, because the other teams aren't necessarily up to snuff. And you still have all the baggage of dealing with those inter-team, inter-service issues.
Many companies implement microservices for the wrong reasons. The most common reason is because other companies are doing it. The second most common bad reason is because business units don't want to talk to one another. AKA they are shipping their org chart.
It's great for cloud providers too. It's often cheaper to just run entire copies of monoliths than it is to run microservices. All that synchronization and all those API calls have a huge cost, even more so when you add support infrastructure (RMQ? Kubernetes?)
Most companies would work just fine with a monolith, with the occasional service split off from that when it makes sense.
I don't read it as your intention, but that comes off as putting it on non-technical management.
It sure seems like a lot of the microservices plays come from technologists wanting to use them as on-the-job training. There are lots of neat things to experiment with, and there are lots of ways to glue things together. There's a whole resume to build with all of the ancillary tech required just to run a process.
Your point about cloud is right on, too. That goes both ways. Some CEOs chase the deals. But some CTOs chase the greener-grass dragon. One thing leads to another, and suddenly you're migrating from k8s on GCP to nomad on DO.
> with the occasional service split off from that when it makes sense.
Exactly what meta does, according to the article. There it is.
> The second most common bad reason is because business units don't want to talk to one another.
This sounds like a terribly naive take.
It doesn't matter one bit whether units talk to one another. Not one bit. What matters is ownership and accountability. What happens if the service managed by team A goes down and takes the whole org with it? Is team B going to take the blame because one of their developers posted a PR to tweak the project's README?
They can’t get their internal silos to work together on a single project, so they have each silo build services for the others to use, in service of the multi-team project they’re trying to deliver.
But… all the same problems come up. Teams building to their requirements and not what the consumers need, refusal to cooperate in design/requirements/etc, scheduling issues or project priority issues.
“Microservices are cool and what Google does and will help us avoid our organizational issues” is very loosely paraphrased from what I heard as the exact pitch for why it was being used.
Indeed, but I think we are too late for that. It's not like the usage of GOTO: someone who mattered wrote that its usage should be considered harmful, and hence GOTO nowadays sees practically only niche usage.
If only someone who mattered had written, in time, an essay titled "Microservices Architecture Considered Harmful".
The thing is, they are not harmful; they are valuable above a certain scale. To caricature, monoliths scale with O(n) and microservices with O(n log n). At some point the lines cross and microservices start to get better. The problem is that there's no clear answer to where that point is, as it also depends on the product being built and other company specifics. But I wouldn't usually see it being worthwhile below a hundred devs.
This comment was grayed out at the time I upvoted and commented. Either readers don't appreciate pedantry or don't realize what the point was, but for any n > 2, n log n is a larger number than n.
I believe what the original poster meant to say was that microservices grow like O(k log n) and monoliths like O(n), i.e. microservices have some upfront constant cost that is large enough that, for small projects, monoliths will still perform adequately and be cheaper to develop and deploy. Once n grows large enough to overcome that upfront constant cost, microservices become the superior option.
I'm not agreeing or disagreeing with this, by the way.
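To make the crossover concrete, here is one possible reading of that model as a worked example (the cost functions and the constant are invented for illustration, not taken from anywhere):

    c_{\text{mono}}(n) = n, \qquad c_{\text{micro}}(n) = k + \log_2 n

With k = 100, the lines cross where n = 100 + \log_2 n, i.e. a bit over n = 106: below that the monolith is cheaper, above it microservices win. That at least lands in the same ballpark as the grandparent's "below a hundred devs" cutoff.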
Neither agreeing nor disagreeing either, but the thing is that both monoliths and µ-services are loose concepts, and they mean different things to different people and/or in differing circumstances.
A monolith itself can be monolithic either at the design level (only the whole thing can build and work) or at the deployment level (multiple components build, test and, to a certain extent, work independently of each other, but are assembled into a single deployment unit).
As far as differing viewpoints are concerned, let's consider a hypothetical design where GET, POST, PUT, PATCH and DELETE each constitute a «µ-service» and deploy as 5x distinct deployment units, but only as a whole and all at once, i.e. leaving DELETE out of the deployment renders the solution inoperable from the business POV. So far so good: we have a conventional µ-service.
Let's now consider a hypothetical yet real scenario where a single POST invocation triggers a chain of 27x service invocations to 10x different systems to produce a result. The implementation is stateless and idempotent, but the processing logic is complex, it has to be executed in a particular sequence, and intermediate results have to be collated or fed into the next processing step – either sequentially, or in parallel, or both. And the processing can't be refactored out into smaller components due to the complexities of the business logic or due to technical constraints that the 10x external systems impose. Is the implementation of POST a µ-service, or is POST a monolith, or a monolith within a µ-service? It can be either, neither or both – depending on the point of view. Such hypothetical designs are, in fact, real and fairly common in large environments.
On the scalability point, there is no single answer, either. A design can have scalability constraints either at the design level or at the deployment level. An example of the design level constraint is the strict processing order requirement that imposes substantial restrictions on what and how it can be scaled. It can typically only scale up but not out with not much room to wiggle around in. If the processing order is not important (i.e. the eventuality – not the linearity – is the only requirement), then scalability (up and out) becomes a deployment level constraint which is easy to fix (i.e. a configuration time or an auto-scaling policy change).
Scalability can also be impaired in both monoliths and µ-services if at least one external system is slow to respond, or its response time varies. In such a case, both designs suffer equally.
> […] and be cheaper to […] deploy.
I have deliberately omitted the «develop» part to emphasise the «deploy» part. The deployment cost has gone down significantly over the last decade alone. What used to be an arduous task requiring coordination of multiple people has become, in many cases, a few-line change to a deployment file and a one-person job. Even complex deployments are much easier now than they have ever been. Therefore, I would posit that the deployment costs of monoliths and µ-services are roughly the same today, whereas 10+ or so years ago monoliths were cheaper to deploy.
They're saying that microservices scale much worse than monoliths at the beginning, and need to reach a certain scale before they're worth the effort. The debate is usually around where that inflection point is, and to a lesser degree whether there are other, non-scale advantages to microservices that might make them worth adopting sooner.
> hence nowadays GOTO has practically a niche usage.
Except where it was rebranded as throw/catch. There, goto remains quite popular.
All while bridled (C-style) gotos were considered acceptable by Dijkstra, but are now looked upon with disdain. Amazing what a little marketing can do.
Throw is much worse than GOTO. GOTO is explicit: you always know where it goes. Throw has no idea where catch is, and catch has no idea where throw is. It's hidden control flow.
Go/Rust/Zig-style errors as values are a much better system, forcing you to explicitly deal with the error: handle it, crash, or pass it on, rather than hoping you handled all the correct exceptions or that someone else will handle them.
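A minimal Go sketch of the contrast (the function and file names here are invented for illustration):

    package main

    import (
        "errors"
        "fmt"
        "os"
    )

    // readConfig returns the error as an ordinary value; the failure
    // path is visible right in the signature.
    func readConfig(path string) ([]byte, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            // Add context and pass it up explicitly.
            return nil, fmt.Errorf("reading config %q: %w", path, err)
        }
        return data, nil
    }

    func main() {
        data, err := readConfig("app.conf")
        if err != nil {
            // Explicit choice at the call site: handle, crash, or pass on.
            // Nothing jumps here invisibly the way a throw/catch pair would.
            if errors.Is(err, os.ErrNotExist) {
                fmt.Println("no config found, using defaults")
                return
            }
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("loaded %d bytes of config\n", len(data))
    }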
Even Dijkstra followed up the "Go To Statement Considered Harmful" letter with the "On a Somewhat Disappointing Correspondence" letter because "20" people disagreed with him.
There will always be naysayers. The above is starting to become generally accepted, though, and modern languages are moving away from the practice just as languages moved away from unbridled gotos.
Almost everything is a "rebranded goto". Functions, conditions, iteration, break/continue.
Doesn't mean that Dijkstra was wrong, or that they are the same as goto.
Also, throw/catch does quite a bit more than a goto. It's not only about stack unwinding; it's about the ergonomics of code that uses throw not needing any information about where catch is located.
Sure you can simulate some use cases of try/catch with goto, but implementing the general case is much harder.
throw/catch is exactly the kind of good goto-replacement provided by "higher level" programming languages that Dijkstra is talking about in his paper.
> Almost everything is a "rebranded goto". Functions, conditions, iteration, break/continue.
Not within the context of discussion, where goto refers to the harmful kind. Throw/catch exhibits the very problem Dijkstra was talking about, with execution jumping all over the place haphazardly. Functions, conditions, iteration, break/continue, even (C-style) goto does not exhibit the same problem. They are strictly bridled in the execution scope.
There is good reason why modern languages are moving away from throw/catch. However, there is little question that it is still widely used at this time.
break, continue, and even goto in any language created in the last several decades cannot leave the current function. These are not what Dijkstra was talking about. Dijkstra wrote the piece when structured programming was just starting to become a thing and was urging people towards it. Think more like goto in BASIC, where there is no bridling of the operator.
Yes, however I am not sure if Dijkstra meant goto in the sense of a jump outside of a function. I don't know enough about Algol 60 and the languages of the time, but if you allow usage of goto between different functions you will have a corrupt stack in no time, so I'd be surprised if it was implemented.
Going back to the original discussion, my point was that the only real gotos still in usage (except for C cleanup gotos) are break and continue, which even support labels.
Functions do not exist in the context of the "Go to statement considered harmful" paper, so there is no good analog in there. Further, he is specific that it is only about unbridled gotos. continue and break are decidedly bridled. In reality, the paper just doesn't apply to any language created in the last several decades, even those which use the goto keyword. We bought in to structured programming.
Except I posit that throw/catch still suffers the same problem he speaks of. It becomes difficult to follow when and where the code will jump to an arbitrary spot. The harmful parts of goto live on. Granted, we are learning. Modern languages are abandoning the throw/catch concept.
I agree about the somewhat unexpectedness of exceptions, although they are much more structured than gotos (you just go up the stack).
Virtual polymorphism, for example, is completely unexpected, as are function pointers and other constructs that are, in my opinion, harder to track than exceptions.
For us at my present and former workplaces, the decision to use microservices didn't depend solely on the number of developers. We needed something very scalable, something that different teams could work on without stepping on each other's feet, something that survives even if part of it fails temporarily, something that auto-heals.
We did it with developers in the tens and we didn't have many issues with this approach. In fact, at one of my workplaces we had far more issues with a monolithic app than with the microservice-based app we replaced it with.
That also works with monoliths ... Usually you would make your monolith stateless and distribute the incoming requests / events across many instances that can be spawned / killed depending on volume of requests and health status of instances.
When you kill a monolith you kill a random selection of inflight tasks from every part of your application.
So a rare bug in your mailing list signup workflow that hangs the process and causes it to be killed causes a random selection of inflight webpage requests, payment transactions, message handlers and business processes to fail. And if those failures aren’t all cleanly handled, your mailing list signup bug could propagate into a much wider issue.
Whereas if you have a ‘mailing list service’ with its own processes that can be killed and respawned, that bug only takes out mailing list processing. Which is good, because the bug was probably made by the team who owns mailing list processing. And they can roll back their code and be on their way, with nobody else needing to know or care.
Generally, when people argue in favor of a monolith over microservices, it's not for completely or mostly isolated business functions (i.e. BI pipelines vs. CMS CRUD). It's more for when responding to a single request or group of requests already implicates many services that must all work together. In that case, you're still smoked if any one of the services handling a part of the request chokes; in fact, you're multiplying your opportunities for failure by using microservices.
Monoliths should be stateless (if achievable) and have no concept of partial success in cases where you would like atomicity unless everything is truly idempotent (easier said than achieved). If those criteria are met then callers just need to retry in the event of failure which can be set up for basically free in most frameworks.
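As a sketch of the "retry for basically free" part — hand-rolled here rather than tied to any particular framework, and assuming the operation really is idempotent:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs op up to attempts times with exponential backoff plus
    // jitter. Only safe when op is idempotent, as noted above.
    func retry(attempts int, base time.Duration, op func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            time.Sleep(base<<uint(i) + time.Duration(rand.Int63n(int64(base))))
        }
        return fmt.Errorf("after %d attempts: %w", attempts, err)
    }

    func main() {
        err := retry(3, 100*time.Millisecond, func() error {
            // Imagine a call to the stateless monolith here.
            return fmt.Errorf("connection reset")
        })
        fmt.Println(err) // after 3 attempts: connection reset
    }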
If you're pushing fatal recurring bugs into production, then that is a separate problem wider than the scope of a monolith vs. micro.
If you have a guaranteed way to avoid pushing fatal bugs to production it doesn’t MATTER what your architecture is. You’re in some fantasy land of rainbows and kittens where everything works fine first time every time.
For those of us in the real world who can’t afford perfection, the ability to isolate the impact of the inevitable bugs that do sneak through has some appeal.
As does the fact that exhaustively testing a microservice in a realistic timeframe is a much more tractable problem than exhaustively testing a monolith, which reduces the risk that such bugs will ship in the first place.
Bugs are less likely to ship. And when they do they will have a more limited blast radius. And when they’re detected they can be mitigated more quickly.
> If you have a guaranteed way to avoid pushing fatal bugs to production it doesn’t MATTER what your architecture is.
Bugs aside, the architecture does matter, and it matters a lot.
Whether it is a single coarse-grained deployment (i.e. a monolith) or a fine-grained deployment (modular services or microservices), a solution has a number of technical interfaces. The technical interfaces broadly fall into low and high data volume (or transaction rate) categories. The high data volume interfaces might have a sustained high data flow rate, or they can have spikes in the processing load.
A coarse-grained architecture that deploys all of the technical interfaces into a single process address space has the disadvantage of being difficult or costly (usually both) to scale. It does not make sense to scale the whole thing out when only a subset of the interfaces requires extra processing capacity, especially when the demand for it is irregular but intense when it happens. Most of the time, a sudden data volume increase comes at the expense of the low volume interfaces being suffocated, by virtue of the high volume interfaces devouring all of the CPU time allotted to the solution as a whole. Low data volume interfaces might have lower processing rates, yet they might perform a critical business function nevertheless, an interruption to which causes either cascading or catastrophic failures that severely impair the business mission.
The hardware (physical or virtual) resource utilisation is much more efficient (cost-wise as well) when the architecture is more fine-grained, and scaling becomes a configuration time activity, which is even more true for stateless system designs. Auto-«healing» is a bonus (a service instance has died, got killed off and a new instance has spun up – no-one cares and no-one should care).
The original statement was "service is not answering for a certain amount of time". If the instance of your monolith is not responding you're probably already in a bad state and can reasonably kill it.
What are you monitoring your monolith for? For microservices you can monitor specific metrics related to the exact function, and perform health checks and scaling events accordingly.
For monoliths you can't be as specific. “Is the response a 500?” doesn't really cut it. “Average request latency” for scaling doesn't cut it when some of your queries are reads and some are completely unrelated mass joins.
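For instance, a per-service health endpoint can check exactly the one dependency that matters for that function — a hypothetical Go sketch (the driver choice and DSN are stand-ins):

    package main

    import (
        "context"
        "database/sql"
        "log"
        "net/http"
        "time"

        _ "github.com/lib/pq" // hypothetical driver choice
    )

    // healthHandler reports healthy only if this service's own database
    // answers quickly. An orchestrator can then kill and replace just this
    // service, rather than reasoning about a whole monolith's health.
    func healthHandler(db *sql.DB) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            ctx, cancel := context.WithTimeout(r.Context(), 2*time.Second)
            defer cancel()
            if err := db.PingContext(ctx); err != nil {
                http.Error(w, "db unreachable", http.StatusServiceUnavailable)
                return
            }
            w.WriteHeader(http.StatusOK)
        }
    }

    func main() {
        db, err := sql.Open("postgres", "postgres://localhost/mailinglist?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        http.HandleFunc("/healthz", healthHandler(db))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }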
Sure, but "if the instance of your monolith is not responding" probably means the app is down. That's only going to be true for a small subset of the microservices.
In a past job, the benefit of microservices was that some of the operations performed by the system were far more CPU intensive than others. Having them in their own service that could be scaled independently led to lower overall hardware requirements, and made keeping the latency of the other services sensible much easier.
You can scale monoliths independently too. Depending on the language that means paying some additional memory overhead for unused code but practically it's small compared to the typical amount of ram on a server these days.
This post reminds me of exactly the balance I've been toying with. One particular service I work with has ~6 main jobs it does that are all related in some way but still distinct from each other. That could've been designed as 6 microservices, but there are services that do other things as well - it's not all contained in one giant monolith, so it's somewhere in the middle.
The software is going to be deployed at different locations with different scaling concerns. In some places, it's fine to just run 1 instance where it does all 6 jobs continuously. At other places, I anticipate adding parameters or something so it can run multiple instances of a subset of the jobs, but not necessarily all the jobs on every instance.
You do spawn a new monolith. You make one group the CPU intensive one and route that traffic there. Same concept as a microservice except that it comes with a bunch of dead code. But the dead code is not that resource expensive these days.
You don't have to write an API layer, and you keep type checking, among some other benefits. Is it a ton of savings? No, but I'd describe it as significantly less effort and lower complexity.
I see how it works, and I completely agree that to start out, so going from PoC to first business implementation, a monolith is the way to go (unless the goal from the start is 100 million concurrent users I guess).
But after that initial phase, does it really matter if you use one or the other? You can overengineer both and make them a timesink, or you can keep both simple. I do agree on things like network latency adding up, but being able to completely isolate business logic seems like a nice gain. But I'm also not talking about the real micro level (i.e. auth login and registration being different services), but more macro (i.e. security is one service, printing another (pdf, csv, word, etc.), BI another).
Which is perfectly fine - 100 million concurrent users aren't the same for app x and app y, as the business logic the backends run isn't the same either.
Not saying it can't handle everything as well. Just saying the modularity of microservices makes it, in my view, easier to handle large, complex real-time systems.
Maybe that's also something that comes with experience - as a rather "newish" guy (professional SE, so one level above Jr), it makes it easier to work on our project.
> The routing of just load balancing is much simpler than the routing of execution jumping between many microservices
Not necessarily at all; e.g. with gRPC it's all self-discovered.
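For instance, grpc-go clients can do the discovery and balancing themselves via the built-in dns resolver (the service name below is made up):

    package main

    import (
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    func main() {
        // The dns:/// scheme makes the client resolve every A record behind
        // the name and round-robin across them -- no external router needed.
        conn, err := grpc.Dial(
            "dns:///orders.internal:50051", // hypothetical service name
            grpc.WithTransportCredentials(insecure.NewCredentials()),
            grpc.WithDefaultServiceConfig(`{"loadBalancingConfig": [{"round_robin":{}}]}`),
        )
        if err != nil {
            log.Fatalf("dial: %v", err)
        }
        defer conn.Close()
        // ... wrap conn in a generated stub and call it like a local object.
    }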
> I agree, but a microservice architecture starts you out at a higher complexity.
Definitely
> That can also be done by having that business logic live in its own library.
That's true, having it in its own library is certainly a possibility -> but then it's also not that far off micro/macro services anyway, except you deploy it as one piece. And basically this is my argument: if you're having it all as libraries, and you all work in a monorepo anyway, the only real difference between micro and mono is the deployment, and that with micro you _could_ independently scale up whatever the current bottleneck is, which we've used plenty of times.
Doing microservices from the start is fine if you know what to expect. Having worked with massive monoliths, there are long-term cons that people don't consider, and the deeper you dig yourself in, the harder it is to pull yourself out.
Honestly I think the realistic advice should be to go monolith if you or part of your team aren't experienced with microservices or if your app is simple / you'd be overengineering it otherwise.
If you're starting a SaaS company, can envision the moving pieces, and will be growing your team quickly, doing microservices properly from the beginning can have a lot of benefits.
Just feels like another one of those dogmas people just mindlessly scream on the internet all day without considering all the cost/benefit analysis for each particular case.
Agreed. At my last company we had everything running in Kubernetes, despite having fewer than 300 active users. It did have some benefits, but scaling wasn't one we needed, and it cost a lot of developer time. Debugging is painful.
The big differentiator is how micro you go. My rule of thumb is to split by scaling requirements (and hardware requirements if you have them). For example, splitting your general business API off from your TURN servers/service, since TURN should be scaled by connections and throughput, and possibly requires higher network bandwidth.
> If your developer count is in the 10s, you are committing professional negligence chasing a microservices architecture.
Such a strong statement. Might a lack of industry experience be driving such strong convictions of yours?
Here are some reasons, off the top of my head, why a company would want to embrace microservice architecture, with all its benefits and complexities, with a developer count in the 10s:
1. You're building a product that touches multiple deep domains and the business has modelled a very aggressive headcount growth
2. You've outsourced a large portion of your development to a number of agencies
3. You have a team of individuals who know nothing but microservices
4. Your chief compliance officer is intimately familiar with the data protection benefits that microservices bring and is leaning heavily into it in their regulatory submissions as a way to compensate for some other gap in the business
5. You're building anything to do with image processing at scale
6. You are a subsidiary owned by a parent company with tons of experience and tooling for microservices
7. One of your VCs has offered up a dev team they own to speedboat your MVP, who specialize in microservices
8. You've received a buyout offer by a party interested in specific IP within your product, with the condition that the IP is isolated from other parts of the system
> 1. You're building a product that touches multiple deep domains and the business has modelled a very aggressive headcount growth
That feels like premature optimization; split when you need to, and not before.
> 2. You've outsourced a large portion of your development to a number of agencies
Then your developer count is likely not in the 10s (you need to count the agency developers). Plus, if you've already outsourced your development in that manner, it suggests you already chose a microservices architecture and tendered accordingly; this feels like a post hoc justification.
> 3. You have a team of individuals who know nothing but microservices
If a team can build a set of microservices, they can build a monolith; the skill sets are not that different. A microservice is, after all, just a really small monolith.
> 4. Your chief compliance officer is intimately familiar with the data protection benefits that microservices bring and is leaning heavily into it in their regulatory submissions as a way to compensate for some other gap in the business
That's an interesting one: you're trading technical complexity for compliance, and it may well be a use case; there is not enough data to comment here. But there are many ways to be compliant with <insert framework here>. Microservices might be one, but whether it is the optimal solution for all involved, well, that depends...
> 5. You're building anything to do with image processing at scale
This doesn't require microservices; it probably requires horizontal scalability. If it's offline processing, you might want a batch process you can turn on and off as required, but that doesn't have to mean microservices. At this point it becomes a semantic argument over what constitutes a microservice, but I would argue the idea of batch processes predates the idea of microservices. Also, just because you might need microservices in a small part of your application stack, the rest of the solution can still be a monolith: a hybrid architecture, if you will.
> 6. You are a subsidiary owned by a parent company with tons of experience and tooling for microservices
Again, tons of experience with microservices translates easily to monoliths: just build a microservice, but bigger.
> 7. One of your VCs has offered up a dev team they own to speedboat your MVP, who specialize in microservices
If it's 10 devs or fewer, does it matter? Build a monolith and optimize when it makes sense to do so.
> 8. You've received a buyout offer by a party interested in specific IP within your product, with the condition that the IP is isolated from other parts of the system
Unless that is your goal from day 0, I'm not sure how you could anticipate this; again, this feels like a post hoc justification. In the unlikely event that the situation does occur, that might be a good time to consider splitting out that functionality.
> Your chief compliance officer is intimately familiar with the data protection benefits that microservices bring and is leaning heavily into it in their regulatory submissions as a way to compensate for some other gap in the business
Great question. Two things you're aiming for with a regulatory submission: 1) Get it approved. 2) Receive back as few rounds of questions as possible from the case officer, as each round delays approval. You can improve 2) by painting a very clear and concise narrative around key concerns pertaining to the particular application you're submitting. Microservice architecture helps to paint that narrative (for CCOs that understand them) as at the high level, it addresses the key concerns (primarily data protection) with simple to convey concepts such as data segregation, service ownership, etc. The implementation of those controls is not as simple, but for the regulatory application, all that matters is that they are sound in practice and simple to convey.
It often ends up as a micro/monolith, a hybrid architecture with few benefits and many costs, especially around deployment and distributed tracing, with many resumes polished along the way. But at least the return to a sensible monolith isn't quite as far.
but... but we'll need this so we can cope with webscale request volumes when our product (that has zero customers currently) launches and immediately goes viral!
(/s, obviously, but I've heard variations on this plenty of times)
This is hn so obviously your comment has to be the top voted comment. I am really tired of this trend here.
No. Microservices vs monolith is not the deciding factor for developer speed or bugs; it wouldn't even be a top-five factor for good developers. Technically, the difference between microservices and a monolith is just a network RPC rather than a function call. In itself, that doesn't make much of a difference. If a developer finds themselves stuck because of a microservice architecture, they were probably a very bad developer in the first place.
If it's a network RPC style microservices architecture, you've built a distributed monolith; every service in a proper microservices architecture should be developed, maintained and deployed 100% independently from any other service. If two services are closely tied, they should be merged together.
Yes this causes overhead; every microservice will have a public API of sorts that will need to be documented and communicated, and any other service consuming it will need to eventually be updated to keep up with changes. If this overhead is not something your organization can afford, you should not be doing a microservices architecture.
Except it is. Microservice doesn't mean distributed setup. For the first problem, yes, a service running in multiple pods causes issues, but even a monolith could face the same issue. StatefulSet-based microservices not only exist, but are a pretty common setup. I would even call multiple containers in the same pod microservices, which basically solves all of the problems.
There are many ways to solve the second problem. Some folks create different API versions; I prefer different deployments.
The last one is a problem for monoliths. I know of big companies where the monolith's compile time is on the order of an hour, even for a small change.
Except it isn't, because a microservice requires an isolated dataset, API abstraction, documentation, versioning, consumer contract, dependency chain, complex release management. It's a radically different design because it forces you to develop the entire system differently to deal with very different abstractions, ways of working, expectations.
A distributed monolith is simpler, but simultaneously more buggy than a centralized monolith.
- In case of errors, do we get a full backtrace of function calls across service boundaries?
- Is the function call as cheap as passing an argument? What if we're passing 1KB? 1MB? 1GB?
- Can we use a debugger to step in and out of those functions?
- Can we spin up a simple test runner to integration test a few levels of function calls together? Preferably something like Jest that has a watch mode so we can quickly run dozens of tests?
- Can all database modifications run in a single transaction so that we know that either all of them happened or none at all?
> Yes. I don't see how this is different than function call.
If you need atomicity _across services_ it's very different, hence why everyone resorts to eventual consistency and all the associated extra complexity. If there's a decent way to have true transactions that span multiple services, I've certainly never seen it.
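To make the contrast concrete: inside a monolith, the all-or-nothing case is just one database transaction. A sketch in Go (the schema and driver choice are invented for illustration):

    package shop

    import (
        "context"
        "database/sql"

        _ "github.com/lib/pq" // hypothetical driver choice
    )

    // placeOrder touches what, in a microservices split, would be three
    // different services' data -- inventory, orders, notifications -- in one
    // atomic unit. Across service boundaries you'd need sagas or outboxes
    // and eventual consistency to approximate this.
    func placeOrder(ctx context.Context, db *sql.DB, userID, itemID int) error {
        tx, err := db.BeginTx(ctx, nil)
        if err != nil {
            return err
        }
        defer tx.Rollback() // no-op once Commit has succeeded

        if _, err := tx.ExecContext(ctx,
            `UPDATE inventory SET stock = stock - 1 WHERE item_id = $1`, itemID); err != nil {
            return err
        }
        if _, err := tx.ExecContext(ctx,
            `INSERT INTO orders (user_id, item_id) VALUES ($1, $2)`, userID, itemID); err != nil {
            return err
        }
        if _, err := tx.ExecContext(ctx,
            `INSERT INTO email_outbox (user_id, kind) VALUES ($1, 'order_confirmation')`, userID); err != nil {
            return err
        }
        return tx.Commit() // all three writes land, or none do
    }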
We all appreciate your contribution to micro services. But let us now choose simplicity and move on with our lives. You do you with your zoo of dockerized babel towers
I use a monolith 90% of the time. I think you missed my point: I just said that I could be similarly productive in a well-thought-out microservices division.
The more services you have, the more of a nightmare your development environment is to set up. Network calls are a lot less reliable than function calls.
And those failure modes are sometimes dangerously hidden behind microservices architectures/frameworks that promise to fix, or at least hide, all these complexities for you.
I find this highly misleading. Facebook is famously a big monolith, and I think so is Instagram. There are plenty of services and a couple of microservices as well, but I don't think anybody would characterize Meta as having a microservices architecture. I have no idea what the authors' agendas are, but something isn't right there.
The article mentions that www, the monolithic PHP code base, is 4.6% of the service instances. Though the paper does not mention the compute allocation. I do not recall how www is deployed, but it’s easy to imagine it’s allocated a lot more resources than a lot of other services are.
Large, unsharded services are generally going to run one instance per machine alone as that's the most efficient way to run most of them (reduce per-instance overhead as much as possible). Small (true "micro") services or other things which have instances per use case (like the inference platform example given in the paper) may use a small fraction of a machine.
There is a Hack monolith (the www tier). There are also a huge number of other services, ranging from micro with a couple instances globally to "larger than almost anyone elses's monolith".
> Services are defined as units of software with well-defined API interfaces, called endpoints (Figure 1). Each service satisfies a specific business use case (e.g., caching a photo feed). There is significant room for interpretation in defining the scope of a business use case...
It seems to me the problem with the microservice concept does not come from the decomposition of a monolith into well-defined stateful/stateless services, but rather from the lack of further principles and tools for doing so in an optimal way.
Imagine a decision process that would ingest some quantitative details (including a topology of dependencies), volumes of information flow, computation, etc., and spit out an architecture. In many cases the outcome might be "monolith".
While fads and bandwagons promising utopia are prevalent in tech, as in almost all other domains, there usually is a core truth that activates them.
As I understand it, the concept of microservices mainly originated to solve organisational bottlenecks, rather than to improve the quality of the software itself.
Microservices are organized by business use case in order to isolate a team so they can focus on deeply understanding the requirements of that specific domain. This is fundamentally a fuzzy and human division of concerns, and has little to do with the performance factors you state, which seem to be a secondary priority. Maintainability and code quality are related concepts, but they don't seem to be the top priority either; it seems more important to enable a team to attain deep knowledge of the problem space assigned to them than to ease implementing solutions.
Sure, microservices are supposed to handle scale, but are we talking about computational scale or organizational scale? It is ambiguous, but I think it's more the latter. That's probably why so many technical issues have surfaced around this architecture after the initial hype: it was not about technical efficiency to begin with, and it only makes sense for very large teams of engineers, if even then.
Just my interpretation, I don't claim to be more qualified than any other professional here to have an opinion on it.
> Many frontend and some backend services also expose numerous HTTP (REST and GraphQL) endpoints; however, they do not have canonical names that we can use for our analyses. For this reason, we limit the endpoint analysis only to Thrift RPCs reported in the dataset from the routing library.
That’s too bad. One of the funnest things is when a backend service calls into www, the monolithic PHP code base. For example, callbacks when an item enters a queue or a video call changes state. And by fun, I mean annoying to debug.
It's cool to hate on microservices here on HN. In my opinion they're not always the answer, but they can bring many benefits: the ability to use different languages or frameworks that might be better suited for parts of the application, the separation of concerns, the ability to split work between teams, outsourcing parts of the app, scaling parts of the app individually, etc. The key is to have clear API contracts for each service, and to enforce them ruthlessly.
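One lightweight way to "enforce them ruthlessly" is a consumer-side contract test run in CI — a hypothetical Go sketch (the service URL and field names are invented):

    package billing_test

    import (
        "encoding/json"
        "net/http"
        "testing"
    )

    // TestInvoiceContract pins the exact response shape this consumer
    // relies on. If the provider renames or retypes a field, this fails
    // in CI before anything ships.
    func TestInvoiceContract(t *testing.T) {
        resp, err := http.Get("http://billing.internal/v1/invoices/42") // hypothetical endpoint
        if err != nil {
            t.Fatalf("contract endpoint unreachable: %v", err)
        }
        defer resp.Body.Close()

        var invoice struct {
            ID         int    `json:"id"`
            TotalCents int    `json:"total_cents"`
            Status     string `json:"status"`
        }
        dec := json.NewDecoder(resp.Body)
        dec.DisallowUnknownFields() // surface additions we haven't agreed on
        if err := dec.Decode(&invoice); err != nil {
            t.Fatalf("response no longer matches the agreed shape: %v", err)
        }
        if invoice.Status == "" {
            t.Error("status field missing or empty")
        }
    }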
I've had this thought for a long time that if you have a completely functional code base (as in no side-effects), making the decision between a microservice approach and a monolith approach is fundamentally transparent. Any functional code can not only be split onto multiple threads, but multiple servers (at the cost of latency).
Except in the microservice approach with multiple servers you have introduced the possibility of network partitioning where microservice A can't reach microservice B. Unless your language/runtime model can deal with that you have hidden that error case in your abstraction. I think a lot of microservice architectures are implemented with the YOLO model where there is no thought to network errors and all calls are assumed to succeed.
Language/runtimes like Erlang & Elixir on the BEAM/OTP that were built with this in mind work well. They are completely functional with all state contained with lightweight processes and can send messages with timeout handling transparently across nodes in a cluster. With OTP you get a supervision tree for those processes that can automatically restart nodes and child processes of nodes.
This was the Scala/Akka approach, and I'm sure there are similar ones in other languages: basically you'd work by communicating, 'pushing' messages to actors instead of calling functions. From a developer's point of view, it then didn't matter if that message went to an actor on the same machine or something halfway across the world. "Don't communicate by sharing memory, share memory by communicating" is the modern adage.
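In that spirit, a toy Go sketch of the message-passing shape (a local analogue only — real actor systems like Akka or OTP add the distribution, supervision, and failure handling):

    package main

    import "fmt"

    // deposit is a message; done carries the reply (the new balance).
    type deposit struct {
        amount int
        done   chan int
    }

    // account is a toy "actor": its state lives only inside this goroutine,
    // and the outside world interacts with it purely by messages. Swap the
    // channel for a network transport and the caller's code keeps the same
    // shape -- the location-transparency idea, minus the network failure
    // modes the comments above warn about.
    func account(inbox <-chan deposit) {
        balance := 0
        for msg := range inbox {
            balance += msg.amount
            msg.done <- balance
        }
    }

    func main() {
        inbox := make(chan deposit)
        go account(inbox)

        done := make(chan int)
        inbox <- deposit{amount: 50, done: done}
        fmt.Println("balance:", <-done) // balance: 50
    }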
Any purely functional app still needs a mutable data store _somewhere_. And if that's all owned by one service (the "root" of your functional app, since everything else is pure by definition), then you're hardly doing microservices.
Multiple servers adds a pretty hefty layer of networking, orchestration, security, failure handling (even in the datacenter), and serialization - even if the business logic stays mostly the same.
obligatory: you are not Netflix/Meta/Google... so you don't need to do this architecture for your own startup. Just spin up a monolith until you have something that needs breaking down and even then it might not need it or you might pivot to doing something else entirely.
Exactly that. I also think a lot of developer teams are too big, too unorganized, and not diligent enough, so they start to interfere with each other's work, which makes them want their own codebases. But no, you need a strong lead developer and strong development practices. There may also be too many cooks in the kitchen, at which point you should probably consider downsizing.