This is such a nice quote that speaks a lot about what it means to be an experienced (senior) software engineer. Our field is such a dynamic one! New tools and techniques appear all the time.
It's easy to fall into the trap of thinking that newer tools are better. I think this is because in some areas of tech this is almost always true (e.g. hardware). But in software, new tools and techniques are rarely strictly better; instead, they provide a different set of trade-offs. Progress doesn't follow a linear pattern; it's more like a jagged line slowly trending upwards.
We think we are experienced because we know how to use new tools, but in reality we are only more experienced when we understand the trade-offs and learn when the tools are really useful and when they are not.
A senior engineer knows when not to use micro services, when not to use SQL, when not to use static typing, when not to use React, when not to use Kubernetes, etc.
Junior engineers don't know these trade-offs; they end up using sophisticated hammers for their screws. It doesn't mean that those hammers are bad or useless, they were just misused.
I said something similar in 2018, "You haven't mastered a tool until you understand when it should not be used."
Recently (as in the past few years), it feels more like it's not trending upwards anymore, just jumping around an equilibrium point and maybe even slowly declining.
> Junior engineers don't know these trade offs, they end up using sophisticated hammers for their screws.
They also end up making hammer factory factory factory factories.
"Why the hell are people so impressed by boring architectures that often amount to nothing more than a new format on the wire for RPC, or a new virtual machine? These things might be good architectures, they will certainly benefit the developers that use them, but they are not, I repeat, not, a good substitute for the messiah riding his white ass into Jerusalem, or world peace."
"Remember that the architecture people are solving problems that they think they can solve, not problems which are useful to solve. Soap + WSDL may be the Hot New Thing, but it doesn’t really let you do anything you couldn’t do before using other technologies — if you had a reason to. All that Distributed Services Nirvana the architecture astronauts are blathering about was promised to us in the past, if we used DCOM, or JavaBeans, or OSF DCE, or CORBA."
Note: those kinds of "selling points" were "promised to us in the past" even then.
Also, nothing distributed is even necessary for architecture astronauts; one can "architect" any task:
The oldest form of "architecture astronaut" food I personally had to fight against was Grady Booch's "Object Oriented Analysis and Design With Applications" from 1991/1994. It resulted in many enterprises wasting immense amounts of time, even in the nineties.
Funny but accurate article! I did not know they saw that as a problem in 2005, which means there could be a lot more factories by now.
This is such a great quote. I would also love to know the origin.
I'll also bite on the when not to use static typing bit. Not using static typing is a bit of a misnomer because you can use a statically typed language and just use `String` (or `Bytes`, `Object`, `Value` or whatever the equivalent is in your language). The question is more of whether to use one of these catch-all structures or to use a more structured domain-specific type. And the answer here is when you don't need all of the structure, don't want to force the whole thing to validate, etc. For example, maybe you have JSON getting read into some complex customer data structure. If you only need a single field out of the structure, and haven't already parsed it into the customer data structure for some other reason, it might be best to just reach in and grab the field you need. You can think about it kind of as the principle of least context but in a data parsing scenario.
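A minimal Python sketch of that trade-off (the `Customer` type, field names, and blob contents are all invented for illustration):

```python
import json
from dataclasses import dataclass

# A blob that could deserialize into a rich customer structure.
blob = '{"id": 42, "email": "a@example.com", "address": {"city": "Oslo"}}'

# Structured route: commit to (part of) the schema, validated on construction.
@dataclass
class Customer:
    id: int
    email: str

def parse_customer(raw: str) -> Customer:
    d = json.loads(raw)
    return Customer(id=int(d["id"]), email=str(d["email"]))

# Least-context route: reach in and grab the one field this code path needs,
# without forcing the whole blob to validate.
def customer_city(raw: str) -> str:
    return json.loads(raw)["address"]["city"]

print(parse_customer(blob).email)  # validated, typed access
print(customer_city(blob))         # -> Oslo, no schema commitment
```

Which route is right depends on how much of the structure this code path actually needs.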
FWIW, there's a very old Russian joke that goes like "a novice doesn't know how to do it, a pro knows how to do it, and an expert knows how to avoid doing it".
I'll bite. When should static typing not be used?
(Note: I agree with your general point)
For any decently large project, though, I prefer static typing.
To take a concrete example that could go both ways: Say you want to parse a JSON blob for some task. On the one end, you could access it through dynamic typing, or tools like jq, that don't need a schema for the entire data format. At the other extreme, you could make typescript definitions defining a schema for the entire format.
The more the same data gets (re)used, the more worthwhile taking the time to define a full schema is. But to download and add type definitions (often out of date and in need of further tweaking) for every once-off API request? Way more effort than it's worth.
In fact it's a good practice in general: when processing the JSON blob, you type only what your processing requires.
What you get is that if, for example, you do your validation but then by chance touch more data than you've checked for, the types will tell you you're in dangerous waters, and you can go update the validations.
This is especially useful if you’re not the original author or if you’ve written it several months back and don’t remember the details.
Static types are really cool that way, and can be treated as a faster-to-write, faster-to-run, always-up-to-date unit test.
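A sketch of that idea in Python, using a frozen dataclass as the boundary around validated data (all field names here are hypothetical):

```python
import json
from dataclasses import dataclass

# Only validated fields exist on the type; anything you didn't check
# simply isn't there, so touching it fails loudly instead of silently
# reading unchecked data.
@dataclass(frozen=True)
class ValidatedOrder:
    order_id: int
    amount_cents: int

def parse_order(raw: str) -> ValidatedOrder:
    d = json.loads(raw)
    assert int(d["amount_cents"]) >= 0  # the validation we actually performed
    return ValidatedOrder(order_id=int(d["order_id"]),
                          amount_cents=int(d["amount_cents"]))

order = parse_order('{"order_id": 7, "amount_cents": 1200, "coupon": "X"}')
print(order.amount_cents)  # validated data, fine
# order.coupon would raise AttributeError: the type is telling you to
# extend the validation before touching that data.
```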
I'm sure there's better ways to do this, but that's an example I'll throw in.
All of which depends on a lot of variables.
Most people disagree, but then they end up writing giant python monoliths with layers of implementation inheritance, dependency injection and functional programming paradigms.
In the end, they try to port it to pretty much any other language, but at that point it’s too late.
Here's the thing I see repeatedly called out as a negative, but it's a positive!
Processes and networks are amazing abstractions. They force you to not share memory on a single system, they encourage you to focus on how you communicate, they give you scale and failure isolation, and they force you to handle the fact that a called subroutine might fail because it's across a network.
> If your codebase has failed to achieve modularity with tools such as functions and packages, it will not magically succeed by adding network layers and binary boundaries inside it
Functions allow shared state, they don't isolate errors. Processes over networks do. That's a massive difference.
If you read up on the fundamental papers regarding software reliability, this is something that's brought up ad nauseam.
> (this might be the reason why the video game industry is still safe from this whole microservice trend).
Performance is more complex than this. For a video game system latency might be the dominating criteria. For a data processing service it might be throughput, or the ability to scale up and down. For many, microservices have the performance characteristics that they need, because many tasks are not latency sensitive, or the latency sensitive part can be handled separately.
> I would argue that by having to anticipate the traffic for each microservice specifically, we will face more problems because one part of the app can't compensate for another one.
I would argue that if you're manually scaling things then you're doing it wrong. Your whole system should grow and shrink as needed.
The problem: distributed systems are hard to get right. Better stay away from them unless you really need them, AND you have the time/resources to implement them correctly. The benefits of microservices are a bad excuse, most of the time.
Don't opt for something very, very hard unless you have to.
Global state still gets pushed out into backend services (redis, postgres) and I can still scale horizontally all day, but there’s no crazy chain of backend interservice HTTP requests to cause no end of chaos :)
Thesis is here: https://harvest.usask.ca/handle/10388/ETD-2015-06-2185
I don't believe there are any papers that show that adding network hops to an application makes it more reliable. I would be extremely interested in any references you could provide.
Here's one paper that discusses building reliable systems from two abstractions - the process and transaction.
That just adds one failure mode to the list of failure modes people ignore due to the happy-path development that languages with "unchecked exceptions as default error handling" encourage.
> Functions allow shared state, they don't isolate errors. Processes over networks do. That's a massive difference.
Except not, because "just dump that on a database/kv-store" is an all-too-common workaround chosen as an easy way out. This problem is instead tackled by things such as purely functional languages like Haskell, or Rust's borrow checker, and only up to a certain degree, at which point it's back in the hands of the programmer's experience; though they do help a ton.
Then maybe we should be criticizing that instead? Like, that'd still happen with Haskell or Rust (or, in my experience, Erlang, with that KV store specifically being ETS). Seems like that's the thing that needs addressed.
There are only two meaningful failure modes - persistent and transient. So adding another transient failure (network partition) is not extra work to handle.
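A sketch of that two-bucket view in Python (the exception names and retry policy are assumptions, not from any particular library):

```python
import time

class TransientError(Exception):
    pass

class PersistentError(Exception):
    pass

def call_with_retry(fn, attempts=3, backoff=0.0):
    """Retry transient failures (timeouts, partitions); let persistent
    failures propagate immediately, since retrying cannot fix them."""
    for attempt in range(attempts):
        try:
            return fn()
        except TransientError:
            if attempt == attempts - 1:
                raise
            time.sleep(backoff)

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("network partition")  # same bucket as any timeout
    return "ok"

result = call_with_retry(flaky)
print(result)  # -> ok
```

A network partition lands in the same transient bucket as any other timeout, so the handling code doesn't grow.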
> Except not, because "just dump that on a database/kv-store" is an all-too-common workaround chosen as an easy way out.
Just to be clear, microservices are not just separate binaries on a network. If you're not following the actual patterns of microservice architecture... you're just complaining about something else.
So what you're saying is that the way to avoid this problem in a microservice architecture, is to be disciplined and follow the right patterns. Then couldn't I just follow the same patterns in a modular monolith (eg: avoid shared state, make sure errors are handled properly, etc) and get the bulk of the benefits, without having to introduce network related problems into the mix?
Sure. Microservice architecture is a set of design patterns and a discipline for knowing how to structure your applications.
Many, including myself, would argue that by leveraging patterns such as separate processes as a focal point for the architecture leads to patterns that are harder to break out of and abuse, but of course anyone can do anything.
Error handling is the easiest one. With any 'service oriented' approach, where processes are separated, you can't share mutable state without setting up another service entirely (ex: a database). Microservices encourage message passing and RPC-like communication instead, and it's much easier to fall into the pit of success.
Could you do this with functions? Sure - you can just have your monolith move things to other processes on the same box. Not sure how you'd get there without a process abstraction, ultimately, but you could push things quite far with immutability, purity, and perhaps isolated heaps.
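A minimal in-process sketch of that direction in Python: message passing over queues, with state owned by exactly one "service" (everything here is illustrative):

```python
import queue
import threading

# A "service" that owns its state and communicates only via explicit
# messages -- no shared mutable memory between caller and callee.
inbox = queue.Queue()

def counter_service():
    count = 0  # state owned exclusively by this service
    while True:
        msg, reply_to = inbox.get()
        if msg == "stop":
            return
        count += 1
        reply_to.put(count)

worker = threading.Thread(target=counter_service)
worker.start()

reply_to = queue.Queue()
inbox.put(("incr", reply_to))
first = reply_to.get()
inbox.put(("incr", reply_to))
second = reply_to.get()
inbox.put(("stop", None))
worker.join()
print(first, second)  # -> 1 2
```

The caller never touches `count` directly; swapping the queue for a socket changes the transport, not the shape of the interaction.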
This is like the one thing that microservices might actually be sort of good at: drawing a few very hard boundaries that do actually sort of push people in the general direction of sanity, e.g. it's easier to have basic encapsulation when the process might be on another computer...
As witnessed by many teams, spaghetti happens just as poorly in a distributed monolith as it does in a proper monolith, it just adds latency and makes it harder to debug.
The boundaries you're imagining are not drawn by the technology nor by the separate codebases, they're drawn by the programmers making the calls. And I guarantee you that the average developer with their usual OOP exposure can understand much more easily where to draw decent boundaries following some pattern like Clean/Hexagonal/Onion/Whatever Architecture as opposed to microservices, where it's far more arbitrary to determine the concerns of a microservice, especially when a use case cuts through previously drawn boundaries.
I think you missed the point - just using separate processes does not guarantee you separate errors and state between services, there's lots of ways to get 'around' that. What if the two services talk to the same database/service? What if there's weird co-dependencies and odd workarounds for shared state? What if one service failing means the entire thing grinds to a halt or data is screwed up?
Now that said, yes, if you use good development practices and have a good architecture microservices can work quite well, but if you were capable of that you probably wouldn't have created a non-microservice ball of mud. And if you're currently unable to fix your existing ball of mud, attempting to make it distributed is likely going to result in you adding more distributed mud instead. In other words, the problem here isn't really a technical one, it's a process one. And using their current failing processes to make microservices is just going to make worse mud, because they haven't yet figured out how to deal with their existing mud.
Microservice architecture is not just using separate processes.
You can 'get around' microservice architecture by not doing it. The point is that if you're familiar with it it's a lot easier to 'accidentally' be successful - or at least, that's the proposition.
> You can 'get around' microservice architecture by not doing it. The point is that if you're familiar with it it's a lot easier to 'accidentally' be successful - or at least, that's the proposition.
I don't disagree with your definition (Though I think it's a hard sell to say that's the "correct" definition, when it really depends on who you ask), but the things you described aren't exactly novel programming concepts unique to or invented by microservices. I could say the same things about OOP - modularity and API design aren't new things.
With that, the idea that things like shared state are removed due to a network being between the services just isn't true; it still requires design effort to achieve that goal - I would argue the same design effort as before. And if your previous design efforts and development practices (for whatever reason) did not lead to good designs, and you're not actually making an attempt to fix the existing issues along with the reasons why you were making such decisions, then you're likely just going to repeat the same mistakes but now with a network in-between.
And yes, to an extent I agree it's not "really" microservices at that point, you're just emulating something that "looks" like microservices when it's really a ball of mud. But can't you just as easily argue they weren't creating a proper "monolith" to begin with?
I don't think anyone has ever claimed novelty in microservice architecture. It's very clearly derivative.
Shared state is moved. You get some level of isolation simply from the fact that there are two distinct pieces of hardware operating on state. You can then move state to another piece of hardware, and share that state between services (somewhat discouraged), but this is a much more explicit and heavy lift than just calling a function with an implicitly mutable variable, or sharing a memory space, file system, etc.
> But can't you just as easily argue they weren't creating a proper "monolith" to begin with?
Maybe. You can say this about anything - there's no science to any of this. I could say functional programming leads to better code than oop, but I couldn't really prove it, and certainly you can still do crazy bad garbage coding things with FP languages. But I would argue that the patterns that make up microservice architecture can help you make bad things look bad and good things look good. It's not magic fairy dust that you can just use to make your project "better" by some metric, but no one informed would ever claim so.
So why not use Haskell (or other pure language)? It's pure, so functions don't share state. And you don't have to replace function call with network call.
Shared state is part of it. Isolation of failure is another. Your Haskell code can still blow up.
Immutability makes that way easier to recover from, but it's just one part.
Of course, microservices are much more than processes over a network, they're a discipline. And I think one can go much further than microservices as well - microservice architecture doesn't tell you how processes should communicate, it's more focused on how they're split up, and leaves the protocol to you. Good protocol design is critical.
Stop on! Its so much easier to enforce separation of concern if there is an actual physical barrier. Its just to easy to just slip bad decision through peer review.
So I totally agree
The comment suggests that just separating things in the popular sense of microservices results in a barrier that enforces separation of concerns. That's how I read it, at least, and I find that to be misleading.
This is the crux I see around popular discourse of microservices. It's often presented without a broader context.
The point, I reckon, is that you should be implementing access control.
Not sure if typo or joke about out-of-order message processing in distributed systems.
> just to easy to just slip
And maybe also about the impossibility of exactly-once message delivery?
So instead people use a single SQL database between 20 microservices.
> give you scale and failure isolation,
Only if you configure and test them properly, and they actually tend to increase failure and make it harder to isolate its origin (hello distributed tracing)
> force you to handle the fact that a called subroutine might fail because it's across a network
They don't force that at all. It's good when people do handle that, but often they don't.
> I would argue that if you're manually scaling things then you're doing it wrong
And I would argue that if people are given a default choice of doing the wrong thing, they will do that wrong thing, until someone forces them not to.
Microservices allow people to make hundreds of bad decisions because nobody else can tell if they're making bad decisions unless they audit all of the code. Usually the only people auditing code are the people writing it, and usually they have no special incentive to make the best decisions, they just need to ship a feature.
The problem might lie in your technical leadership (e.g. making bad design decisions and failing to learn from experience) or from product leadership (e.g. demanding impossibly short turnaround on work, leading to reliance on solutions that are purely short term focused).
In my experience, the best leadership comes from people who don't hyper-focus on one thing, but are constantly thinking about many different things, mostly centered around the customer experience (because that's the whole point of this thing: to make/sell/run a product, not to have perfect software craftsmanship). I find those people rare and usually not working at lower levels of a product.
In a sense, doing the wrong thing is perfectly fine if you don't need to be doing the right thing to provide a great customer experience. Of course maintainability, availability, recoverability, extensibility, etc are all necessary considerations, but even within these you can often do the wrong thing and still be fine. I have seen some truly ugly stuff in production yet the customers seemed happy. (though there were probably some chronically sleepless engineers keeping it alive, which sucks and shouldn't happen)
I don't like microservices because of how poorly they're usually implemented by default. But at the same time, even poorly implemented microservices give some great benefits (more rapid/agile work, reusability, separation of concerns, BU independence), even if you're struggling against some harmful aspects.
this is the closest reference I can find:
The developers that aren't able to write modular code, are just going to write spaghetti network code, with the added complexity of distributed computing.
We end up with way too many developers on a given product, an explosion of systems that are only the least bit architected, but thankfully the vp of engineering didn't have to worry themselves with actually understanding anything about the technology and could do the bare minimum of people management.
Individual minor wins, collectively massive loss.
* there are reasons for microservices at big scales, if everyone is still fitting in the same room/auditorium for an all-hands I would seriously doubt that they're needed.
I would argue it's not a bad thing per se: reducing coordination can speed up some processes and reduce overhead.
But I agree there is a threshold below which it doesn't make much sense.
Anyone doing distributed computing long enough has been at this via SUN RPC, CORBA, DCOM, XML RPC, SOAP, REST, gRPC, <place your favourite on year XYZW>.
That said, I recently ended up looking at the code so far and thinking "I should have used CORBA". And nothing so far managed to dissuade that thought...
Thanks for completing the list.
I also agree that the way to build new software is to build a monolith and when it becomes really necessary, introduce new smaller services that take away functionality from the monolith little by little.
Microservices do have a good use case even for smaller teams in some cases where functionality is independent of the existing service. Think of something like the LinkedIn front end making calls directly to multiple (micro)services in the backend: one that returns your contacts, one that shows people similar to you, one that shows who viewed your profile, one that shows job ads, etc. None of these is central to the functionality of the site, and you don't want to introduce delay by having one service compute and send all data back to the front end. You don't want failure in one to cause the page to break, etc.
Unfortunately, as with much new tech, junior engineers chase the shiniest objects and senior engineers fail to guide them or foresee these issues. Part of the problem is that there is so much tech junk out there on Medium or the next cool blog platform that anyone can read, learn to regurgitate, and sound like an expert, so it's hard to distinguish between junior and senior engineers anymore. So if leaders are not hands-on, they might end up making decisions based on whoever sounds like an expert, and the results will be seen a few years later. But hey, every damn company has the same problem at this point... so it's "normal".
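The graceful-degradation point about independent page sections can be sketched like this in Python (the fetchers are hypothetical stand-ins for backend calls):

```python
# A front end assembling a page from independent backend calls; one
# failing section degrades gracefully instead of breaking the page.
def fetch_contacts():
    return ["alice", "bob"]

def fetch_profile_views():
    raise TimeoutError("views service is down")

def fetch_job_ads():
    return ["backend engineer"]

def render_page(sections):
    page = {}
    for name, fetch in sections.items():
        try:
            page[name] = fetch()
        except Exception:
            page[name] = None  # render this section empty, not the page broken
    return page

page = render_page({"contacts": fetch_contacts,
                    "views": fetch_profile_views,
                    "jobs": fetch_job_ads})
print(page["contacts"], page["views"])  # -> ['alice', 'bob'] None
```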
Almost the entire RPA industry revolves around the idea of supporting this legacy-apps problem -- scraping content and not worrying about the apps breaking.
For my personal projects, I just have a frontend service (HTTP server) and a backend service (API server). Anything more is overkill.
1. Using small-ish (I hate the word "micro"), domain-bounded services leads engineers to think more carefully about their boundaries, abstractions, and interfaces than when you're in a monolith. It reduces the temptation to cheat there.
2. Conway's law is real. If you don't think very deliberately about your domain boundaries as you code, then you'll end up with service boundaries that reflect your org structure. This creates a lot of pain when the business needs to grow or pivot. Smaller, domain-bounded services give you more flexibility to evolve your team structure as your business grows and changes, without needing to rewrite the world.
I'm a big fan of the "Monolith First" approach described by Martin Fowler. Start with the monolith while you're small. Carve off pieces into separate services as you grow and need to divide responsibilities.
A multi service architecture works best if you think about each service as a component in a domain-driven or "Clean Architecture" model. Each service should live either in the outer ring where you interface with the outside world or the inner ring where you do business logic. Avoid the temptation to have services that span both. And dependencies should only point inward.
Carving pieces off a monolith is easier if the monolith is built along Clean Architecture lines as well, but in my experience, the full stack frameworks that people reach for when starting out (e.g Rails, Django) don't lead you down a cleanly de-couple-able path.
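A minimal Python sketch of the dependencies-point-inward rule (all names here are invented; the inner ring knows nothing about HTTP or storage):

```python
# Inner ring: pure business logic, no knowledge of HTTP, databases, frameworks.
def apply_discount(total_cents: int, loyal: bool) -> int:
    return int(total_cents * 0.9) if loyal else total_cents

# Outer ring: an interface adapter. It depends on the inner ring;
# the inner ring never depends on it.
def handle_checkout_request(request: dict) -> dict:
    total = apply_discount(request["total_cents"], request["loyal"])
    return {"status": 200, "total_cents": total}

response = handle_checkout_request({"total_cents": 1000, "loyal": True})
print(response)  # -> {'status': 200, 'total_cents': 900}
```

Because `apply_discount` has no outward dependencies, it can be carved off into a separate service (or kept in the monolith) without touching the business logic itself.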
Out of interest, what does the "frontend service" do in your setup? For my personal projects I generally just go for a single server/service for simplicity.
Key features are domain focused services, DevOps friendly deployment via containerization, and continuous integration.
I stick those two (webserver/static content and API) plus a database in a docker-compose file, and put all three plus a frontend in a single repo. That feels like the sweet spot of "separate but integrated" for my work.
Hmmm... I think I can do better: Service Oriented Architecture... Yeah I like this name. SOA.
Are you telling me I just invented something that's 30 years old? Bollocks!
I assume it’s concepts like a dedicated TLS terminator, Single Sign on, centralised logging, etc?
People will try to quibble that all this is new stuff, but really it is just new names for old ideas.
People ended up treating it like every library project should be separated by a network call.
This is assuming you're converting an existing non-modular monolithic service to micro services. If you're starting from scratch or converting a modular monolithic service then this point is moot. It says nothing about the advantages or disadvantages of maintaining a modular code base with monoliths or microservices which is what people are actually talking about.
On a side note, I've found that creating something again usually leads to messes as you try to fix all the issues in the original which just creates new issues.
That does entirely depend on your language and tooling.
> the latter does not
And that does entirely depend on your processes and internal communication.
If your people keep changing, your chances are much less astronomically bad with a monolith.
At my current place we have a monolith and are trying to get services right by modelling them as a sort of events pipeline. This is what we're using as a foundation, and I believe it addresses a lot of the raised pain points: http://docs.eventide-project.org/core-concepts/services/ (full disclosure: I'm not personally affiliated with this project at all, but a coworker of mine is).
 At one of my previous jobs, I've had success with factoring out all payment-related code into a separate service, unifying various provider APIs. Given that this wasn't a "true" service but a multiplexer/adapter in front of other APIs, it worked fine. Certainly no worse than all the third-party services out there, and I believe they're generally considered okay.
That’s wild. Microservices are mostly beneficial organizationally — a small team can own a service and be able to communicate with the services of other small teams.
If anything, I think a 10:1 ratio of software engineers to services is probably not far off from the ideal.
And a cross-concern fix that a dev used to be able to apply by himself in a day now has to go through 5 teams, 5 programming languages, 5 kanban boards and 5 QA runs to reach production. I never understood the appeal of teams "owning" services.
In my dream world, every engineer can and should be allowed to intervene in as many layers/slices of the code as his mental capacity and understanding allows. Artificial - and sometimes bureaucratic - boundaries are hurtful.
To me, it's the result of mid-to-senior software engineers not being ready to let go of their little turfs as the company grows, so they build an organizational wall around their code and call it a service. It has nothing to do with computer science or good engineering. It is pure Conway's law.
In more mature engineering organizations, you would define a set of maintainers for the service, who will define the contribution mechanisms and guidelines, so that anyone can make changes to the code. This is further enabled by common patterns and service structures, especially when there is a template to follow. Strict assumed "ownership" creates anti-patterns where each team will define their favourite tech stack or set of libraries making it difficult for others to contribute and decreasing the possible leverage effects in the team.
The term 'ownership' is popular in product teams and in engineering career frameworks. In the second example, it's defined as "The rest of that story is about owning increasingly heavy responsibilities". Even GitHub allows defining code ownership through CODEOWNERS files.
The problem is that a sufficiently-complex system is impossible for a single engineer to comprehend in its entirety, so each engineer will end up only understanding a subset of the overall system. Splitting a monolith into clearly-delineated services - and splitting those engineers into teams developing and maintaining those services - helps make that understanding less of a daunting task.
I agree that engineers should be able to move between these different concerns mostly freely (so long as there are indeed engineers covering all concerns), and should be encouraged to learn about those components which interest them, but that clear delineation and a clear notion of "ownership" of different components expedites the process of figuring out who can fix what (since "everyone can fix everything" is unrealistic for all but the most trivial systems), who to call first when something goes wrong, who should be reviewing PRs for a given component, etc.
If you ever find yourself in that situation, rest assured that something went wrong way before the decision to use microservices was made.
If your systems are that tightly coupled you'll have problems regardless of architecture.
Are there examples of the size of these individual services? What are they doing?
* two services that accept requests via telephony protocols (SS7, SIP; these are quite small) and forward the request to:
* the business logic component (large only because of the complexity of the business logic implemented). When there are state changes, it sends a request to:
* the replication module (say, midsized), which ensures that the state change is replicated at The Other Site to handle geographical redundancy (a requirement from our customer).
* There's one other microservice that the business logic uses and that's to send some data to Apple so they can hand it off to iOS phones (all our stuff internally uses UDP (except for the SS7 stuff)---the addition of a TLS based service was too much to add to the business logic component and this is quite small as well).
All of these modules are owned by the team I'm on, namely due to the esoteric nature of it (the telephony protocols, this is real time since it's part of the call path as a cell phone call is being made, etc.). Within our team, we can work on any component, but generally speaking, we tend to specialize on a few components (I work with the telephony interfaces and the Apple interface; my fellow office mate deals with the business logic).
I think the discussion about microservices has suffered more than anyone realises from a lack of shared understanding about what a microservice actually is.
And the teams managing those „services” do full DevSecOps? 10 people working on such component is actually pretty decent team size for that task...
Surely, you can make the creation of services really easy - so easy that it's not even viable to define meaningful business names for the services, thus creating an unmaintainable mess. But still, having an application registry or automation that crawls the cloud resources for services can be done afterwards without significantly impacting the speed of creating new services.
This is my solution, called Dataworks, if anyone's interested: https://github.com/acgollapalli/dataworks#business-case
(Some of those things like introspection and replay-of-events are in the road map, but the core aspects of hotswapping and modification of code-in-db work.)
EDIT: The above was not fully considered. I think the original article makes a really good point about this:
>Splitting an application into microservices gives finer-grained allocation possibility; but do we actually need that? I would argue that by having to anticipate the traffic for each microservice specifically, we will face more problems because one part of the app can't compensate for another one. With a single application, any part (let's say user registration) can use all allocated servers if needed; but now it can only scale to a fixed smaller part of the server fleet. Welcome to the multiple points of failure architecture.
Having a monolith where each feature is deployed separately according to feature flags makes some sense in that you have one codebase, deployed modularly, like microservices, but you still leave yourself open to the "multiple points of failure architecture" as the author describes it. In addition, the feature flags idea doesn't really remove the deployment disadvantages of the monolith, unless you're willing to have different parts of your horizontally deployed application on different versions.
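A minimal sketch of that feature-flag idea, one codebase where each instance only enables the modules it's deployed for (the `FEATURES` variable and route names are hypothetical, just to illustrate the shape):

```python
import os

def handle_register(request):
    return {"status": "registered"}

def handle_billing(request):
    return {"status": "billed"}

# One codebase, many deployable "shapes": each instance registers only the
# routes for the features listed in its FEATURES environment variable.
def register_routes(app, features):
    if "registration" in features:
        app["/register"] = handle_register
    if "billing" in features:
        app["/billing"] = handle_billing

app = {}
features = set(os.environ.get("FEATURES", "registration").split(","))
register_routes(app, features)
```

Two instances of the same binary, started with different `FEATURES`, then behave like two separately scalable "services" — which is exactly where the versioning caveat above bites, since they share one codebase.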
Actors are a simple solution to the same problems microservices solve and have existed since the 1970s. Actor implementations address the problem foundationally by making hot deployment, fault tolerance, message passing, and scaling fundamental to both the language and VM. This is the layer at which the problem should be solved but it rules out a lot of languages or tools we are used to.
So, in my opinion, microservices are a symptom of an abusive relationship with languages and tools that don't love us, grow with us or care about what we care about.
But I also think they're pretty much the same thing as EJBs which makes Kubernetes Google JBoss.
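The actor model mentioned above can be sketched even in a language without VM-level support: an actor is a thread that owns its state and is reached only through messages on a mailbox, never shared memory. A toy version with the Python stdlib (the `CounterActor` and its message names are illustrative, not any particular framework's API):

```python
import queue
import threading

# A minimal actor: a thread that owns its state and communicates only
# through messages on a mailbox queue -- no shared mutable state.
class CounterActor:
    def __init__(self):
        self.mailbox = queue.Queue()
        self.count = 0
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def _run(self):
        while True:
            msg, reply = self.mailbox.get()
            if msg == "inc":
                self.count += 1
            elif msg == "get":
                reply.put(self.count)
            elif msg == "stop":
                break

    def send(self, msg):
        self.mailbox.put((msg, None))

    def ask(self, msg):
        reply = queue.Queue()
        self.mailbox.put((msg, reply))
        return reply.get()

actor = CounterActor()
actor.send("inc")
actor.send("inc")
result = actor.ask("get")  # messages are processed strictly in order
actor.send("stop")
```

Languages like Erlang/Elixir bake this in at the VM level (plus supervision, hot code loading, and distribution), which is the "solved foundationally" point the comment is making.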
No it doesn't. Google "distributed monolith" to read some horror stories.
Bad architecture, or good architecture without enough quality control over time will cause these issues one way or another.
There's no silver bullet for this.
until one engineer says "hmm, why add a new endpoint in their service when we could simply connect our microservice to their database directly?"
That is: the database's job ain't just to store the data, but also to control access to it, and ensure the validity of it. A lot of applications seem to only do the first part and rely on the application layer to handle access control and validation, and then the engineers developing these apps wonder why the data's a tangled mess.
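The "database ensures validity" point can be made concrete with schema constraints: declared once, they reject bad rows no matter which service or ad-hoc script writes them. A small sketch using SQLite from the stdlib (table and columns are hypothetical):

```python
import sqlite3

# Let the database enforce validity, not just store bytes: NOT NULL and
# CHECK constraints reject invalid rows regardless of which caller writes.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE accounts (
        id      INTEGER PRIMARY KEY,
        email   TEXT NOT NULL CHECK (email LIKE '%_@_%'),
        balance INTEGER NOT NULL CHECK (balance >= 0)
    )
""")
conn.execute("INSERT INTO accounts (email, balance) VALUES (?, ?)",
             ("alice@example.com", 100))

try:
    # A buggy caller can't sneak a negative balance past the schema.
    conn.execute("INSERT INTO accounts (email, balance) VALUES (?, ?)",
                 ("bob@example.com", -5))
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```

Access control (GRANT/REVOKE, row-level security) extends the same idea to the "control access to it" half, though that part needs a fuller database than SQLite.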
I did a post about microservices as I've seen them, and I see them more as software evolution matching that of our own biological and social evolution:
Like our own immune system, the thousands of moving parts have somehow evolved to fight infections and keep us alive, but it's easy to not be able to understand how any of it really works together.
For example the deployment aspect:
- monolith: a single deployable unit.
- microservices: multiple independently deployable units.
Multiple teams on a monolith:
- you have to coordinate releases and rollbacks...
- the code base grows, and dependencies creep in between modules that shouldn't depend on each other (unless you have a good code review culture).
- deployments get slower and slower over time.
- db migrations also need to be coordinated across multiple teams.
These problems go away when you go microservices.
Of course you get other problems.
My point is, in the discussion of microservices vs monolith you need to consider a whole bunch of dimensions to figure out what is the best fit for your org.
“Microservices” is just a new name for SOA that ditches the psychological association SOA had developed with the WS-* series of XML-based standards and associated enterprise bloat.
One thing the article fails to mention are the boat loads of tooling out there to address the failings of, and complement microservices architecture, of which Kubernetes is only one.
Sure, they come with their own levels of complexity, but deploying K8s today is orders of magnitude simpler than it was 4 years ago. The same will hold true for similar tooling in the general microservices/container orchestration domain, such as service mesh (it's a lot simpler to get up and running with Istio or Linkerd than it was 18 months ago), distributed tracing (Jaeger/OpenTelemetry) and general observability.
I'd also point out that MS can provide benefits beyond just independent scaling and independent deployment of services. In theory they should also allow for faster velocity in adding new services, all dependent on following effective DDD when scaffolding services. They allow different teams in a large org to design, build and own their own service ecosystem, with APIs as contracts between their services and upstream/downstream consumers in the org. And new team members coming onboard should in theory be able to get familiar with a tighter/leaner codebase for a microservice, as opposed to wading through thousands of lines of a monolith's code to find/understand the parts relevant to their jobs.
The other callout is clean APIs over a network can just be clean APIs internally. This is true in theory but hardly in practice from what I've seen. Microservices tend to create boundaries that are more strictly enforced. The code, data and resources are inaccessible except through what is exposed through public APIs. There is real friction to exposing additional data or models from one service and then consuming it in another service, even if both services are owned by the same team (and moreso if a different team is involved). At least in my experience, spaghetti was still primarily the domain of the internal code rather than the service APIs.
There's also a number of benefits as far as non-technical management of microservices. Knowledge transfer is easier since again, the scope is narrower and the service does less. This is a great benefit as people rotate in and out of the team, and also simplifies shifting the service to another team if it becomes clear the service better aligns with another team's responsibilities.
The idea of microservices is that they are self-contained, not just middleware to a monolithic backend.
If you have a shared data store then you are not really implementing microservices.
In fact, the linked article by Martin Fowler pretty much describes it as the opposite of what you are describing:
No, microservices can handle data from a Bounded Context; that can be its own data, external data, or aggregate data. A Bounded Context is data that is part of a specific Domain that may connect to other Domains that have edges explicitly defined. Therefore the data is decentralized: it can connect to an API that is a monolith, it can interface with messaging services, send notifications over websockets, etc., because it's... a middleware service.
From the article that you linked to debunk me:
> The Guardian website is a good example of an application that was designed and built as a monolith, but has been evolving in a microservice direction. The monolith still is the core of the website, but they prefer to add new features by building microservices that use the monolith's API. This approach is particularly handy for features that are inherently temporary, such as specialized pages to handle a sporting event. Such a part of the website can quickly be put together using rapid development languages, and removed once the event is over. We've seen similar approaches at a financial institution where new services are added for a market opportunity and discarded after a few months or even weeks.
And if I am reading this right, they have a monolith backend, but the frontend doesn't read directly from that; it reads from some 'thing' in the middle? Oh, what's that called? It's on the tip of my tongue. Ah, right, it's called middleware. Because microservices are middleware.
Edit: Oh look that article you linked to debunk me also has the very image I am trying to describe with words:
Isolating something to a simple deployment exposed through an RPC API might make it far easier and straight forward to validate and pass requirements.
Micro-services can be used and misused. Good engineering rarely follows these culture-trends. If it makes sense, it makes sense.
The same applies to systems architecture. Microservices isn't the only solution or the best solution.
Case in point: I've worked on high-frequency trading systems for much of my career. The early systems, circa 2000-2005, were built on top of pub/sub systems like Tibco RV or 29West - this was effectively microservices before the term was used popularly.
What happened around 2006 was that the latency required to be profitable in high-frequency came down drastically. Strategies that were profitable before needed to run much faster. The result was to move to more monolithic architectures where much of the "tick to trade" process happened in a single thread.
Point is: use the right tool for the job. Sometimes requirements change and the tools needed change as well.
You can but you shouldn't unless there's a very good reason (ex: there's a very specific interface only available in a language that doesn't conform to the rest of your services) :)
BTW, for level 1 businesses, I have a boilerplate for Node.js & TypeScript you may want to give it a try: https://github.com/bsdelf/barebone-js
For example: a high level, black box test of a service endpoint requires mocking external dependencies like other services, queues, and data stores. With a large monolith, a single process might touch a staggering number of the aforementioned dependencies, whereas something constrained to be smaller in scope (a microservice) will have a manageable number.
I enjoy writing integration and API tests of a microservice. The ones that we manage have amazing coverage, and any refactor on the inside can be made with confidence.
Our monoliths tend to only support unit tests. Automated end-to-end tests exist, but due to the number of dependencies these things rely on, they’re executed in a “live” environment, which makes them hardly deterministic.
Microservices allow for a healthy Pyramid of Tests.
But in my experience, that thinking leads to bugs where the JSON was correct, but it still triggers an error when you run real business logic on it.
It's an easy trap to fall into because that microservice exists to abstract away that business logic, but you can't just pretend it doesn't exist when it comes to testing.
So stubs may be good for unit tests, but only if there are integration tests to match.
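That "stubs only if there are integration tests to match" point can be sketched with the stdlib. The service and field names below are hypothetical: the unit test stubs the remote inventory service for speed, while a contract-style check pins down the response shape that an integration test should also verify against the live service:

```python
from unittest import mock

# Hypothetical client code: order logic that calls an inventory microservice.
def place_order(inventory_client, item, qty):
    stock = inventory_client.get_stock(item)   # a remote call in production
    if stock["available"] < qty:
        return {"status": "rejected"}
    return {"status": "accepted"}

# Unit test: stub the remote service -- fast and deterministic.
inventory = mock.Mock()
inventory.get_stock.return_value = {"available": 3}
result = place_order(inventory, "widget", 2)

# The trap the comment warns about: the stub's JSON must match what the
# real service actually returns. Keeping the expected shape in one place
# lets an integration test assert the same contract against the live API.
EXPECTED_KEYS = {"available"}
assert set(inventory.get_stock.return_value) == EXPECTED_KEYS
```

If the real service renames `available`, the stubbed unit test keeps passing while production breaks, which is exactly why the matching integration test is not optional.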
If you have 10 microservices, each of which can be on one of two versions, that's 1024 combinations. How do you test that?
Like, services are an abstraction. If one service has to call all other 9 services, and the same occurs for the other 9 services -- then that's a monolith acting over the network.
"New" and "current" are two different versions.
Which downplays your exaggerated 1024 cases to 1.
Each team can't just deploy a new version of their microservice when it makes sense to them.
So your collection of microservices becomes a bit of a distributed monolith, losing some of the classic microservice advantages.
Or so it seems to me. I just read about this stuff, have never used it. Happy to be educated.
I'd venture to say that this is a strong indication that You're Holding it Wrong™
> a high level, black box test of a service endpoint
Then maybe don't do these kinds of high-level black box tests?
> requires mocking external dependencies
...if you're stubbing out everything, then it's not actually a high-level, black-box test. So no use pretending it is and having all these disadvantages.
Instead, use hexagonal architecture to make your code usable and testable without those external connections and only add them to a well-tested core.
See: Why I don't Mock (2014) https://blog.metaobject.com/2014/05/why-i-don-mock.html
Discussed at the time: https://news.ycombinator.com/item?id=7809402
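The hexagonal-architecture suggestion above boils down to: the core logic depends only on a "port" (an interface for I/O), so it's fully testable with an in-memory adapter, and the real network/DB adapter is only plugged in at the edge. A tiny illustrative sketch (function and data names invented for the example):

```python
# Hexagonal ("ports and adapters") sketch: the core logic depends only on
# a port -- any callable that fetches a user -- so it is testable without
# a network or a database.
def greeting_for(fetch_user, user_id):
    user = fetch_user(user_id)          # the port
    if user is None:
        return "Hello, stranger"
    return f"Hello, {user['name']}"

# Test adapter: a plain dict stands in for the real HTTP/DB adapter.
users = {1: {"name": "Ada"}}
in_memory_fetch = users.get

known = greeting_for(in_memory_fetch, 1)
unknown = greeting_for(in_memory_fetch, 99)

# In production you'd pass an adapter that does the real I/O instead, e.g.
# greeting_for(lambda uid: http_get(f"/users/{uid}"), 1)  -- hypothetical.
```

No mocks, no stubbed frameworks: the "mock" is just ordinary data, which is the distinction the linked post is driving at.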
> [All] technical challenges [...] will not be magically solved by using microservices
is the key statement of your article, then you should really consider adding a lot of nuance, or not publishing it at all.
>The main benefit is independency
In the absence of independency, a service development organization will hit a ceiling and fail to scale beyond that. While there may be a whole host of other problems that microservices does not solve, this single problem makes it worthwhile in many cases.
That all said, implementing microservices well or even scaling beyond the point where microservices become useful requires a great deal of engineering discipline. Concepts like configuration as code and API contracts have to become something more than theoretical.
That rules out every monolith I've seen at companies that still did that.
But unfortunately, Microservices, becomes a religion, a cargo cult, and companies have hundreds of tiny little services.
My services are not monoliths. But are they microservices? Don't care. They work. Certainly they are just a couple of services within a network of several hundred, but I work at a large company. And every one of those services has one team responsible for them.
Structural engineers have such books. They need to build something spec'd to hold X tonnes; here are the possibilities outlined in simple drawings.
I need to bump my head on any problem I'm facing and I live in constant doubt that I took the wrong technical decisions.
I mean do they abstract the function of "emailing" out into 50 different micro services. Or 1 micro service?
personally, i'm more a fan of "realm of responsibility scoped services" to decouple technologies/datastores of parts of a system that do not interact by design (for instance, your user account / credentials handling from literally anything else), and then use a system like kafka (with producer-owned format) to have a common data bus that can tolerate services that process data asynchronously (or even things that keep users in the typical "refresh loop") dying for a bit.
That must make meatballs the DBs and parmesan the JS frameworks.
Network partitions are indeed a problem for distributed software in general. By the time microservices are worthwhile, however, the application likely already necessitates a distributed design.
> Remember your nice local debugger with breakpoints and variables? Forget it, you are back to printf-style debugging.
...why? What's stopping you from using a debugger? A microservice v. a monolith should make zero difference here.
At worst, you might have to open up multiple debuggers if you're debugging multiple services. Big whoop.
> SQL transaction ? You have to reimplement it yourself.
...why? What's stopping you from pulling it from a library of such transactions?
I don't even really think this is a "problem" per se. Yeah, might be inconvenient for some long and complicated query, but that's usually a good sign that you should turn that into a stored procedure, or a view, or a service layer (so that other services ain't pinging the database directly), or something else, since it simply means bad "API" design (SQL being the "API" in this context).
> Communication between your services is not handled by the programming language anymore, you have to define and implement your own calling convention
Which is arguably a good thing, since you're able to more readily control that calling convention and tailor it to your specific needs. It also gives ample opportunities for logging that communication, which is a boon for troubleshooting/diagnostics and for intrusion detection.
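One way to see the upside claimed above: when every cross-service call goes through one explicit calling convention, you get a single choke point for logging, tracing IDs, retries, and so on. A minimal sketch as a Python decorator (the `rpc` wrapper and `get_user` stub are invented for illustration):

```python
import functools
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("rpc")

calls = []  # kept for demonstration; real code would rely on the log/trace

# A tailored calling convention: every cross-service call goes through one
# wrapper, giving a single place to add logging, auth, or retry policy.
def rpc(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        record = {"call": func.__name__, "args": list(args)}
        calls.append(record)
        log.info("rpc %s", json.dumps(record, default=str))
        return func(*args, **kwargs)
    return wrapper

@rpc
def get_user(user_id):
    # stand-in for a real network call to another service
    return {"id": user_id, "name": "Ada"}

user = get_user(42)
```

An in-process function call gives you none of this for free; that's the trade-off the comment is pointing at, against the extra failure modes the article lists.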
> Security (which service can call which service) is checked by the programming language (with the private keyword if you use classes as your encapsulation technique for example). This is way harder with microservices: the original Monzo article shows that pretty clearly.
The programming language can do little to nothing about security if all the services are in the same process' memory / address space; nothing stopping a malicious piece of code from completely ignoring language-enforced encapsulation.
Microservices, if anything, help considerably here, since they force at least process-level (if not machine-level) isolation that can't so easily be bypassed. They're obviously not a silver bullet, and there are other measures that should absolutely be taken, but separation of concerns - and enforcing that separation as strictly as possible - is indeed among the most critical of those measures, and microservices happen to more or less bake that separation of concerns into the design.
Just name it "The downsides of microservices" and we'll know that it's your personal opinion. This title might get you more clicks, but it's a turn off for me.
 - https://homepages.cwi.nl/~storm/teaching/reader/Dijkstra68.p...
I would go on a limb and say that CH-titles should be themselves considered harmful. 
Personally, my main issue with them is that they invite (often unfavourable) comparisons with Dijkstra's paper.
“The original title of the letter, as submitted to CACM, was "A Case Against the Goto Statement", but CACM editor Niklaus Wirth changed the title to "Go To Statement Considered Harmful".”
So, Dijkstra’s choice for a title would be “A Case Against Microservices”.
There's an HN discussion too https://news.ycombinator.com/item?id=9744916
I hated the obnoxious tone of the quote when I first heard it, nowadays it makes me smile with a grin.