
Microservices Considered Harmful (2019) - maattdd
https://blog.matthieud.me/2019/microservices-considered-harmful/
======
ceronman
I once read a quote that was something like "You are only an expert in a given
technology when you know when _not_ to use it". Unfortunately I forgot the
exact quote and the author. (If anyone knows please let me know).

This is such a nice quote that speaks a lot about what it means to be an
experienced (senior) software engineer. Our field is such a dynamic one! New
tools and techniques appear all the time.

It's easy to fall into the trap of thinking that newer tools are better. I
think this is because in some areas of tech this is almost always true (e.g.
hardware). But in software, new tools and techniques are rarely strictly
better; instead they just provide a different set of trade offs. Progress doesn't
follow a linear pattern, it's more like a jagged line slowly trending upwards.

We think we are experienced because we know how to use new tools, but in
reality we are only more experienced when we understand the trade offs and
learn when the tools are really useful and when they are not.

A senior engineer knows when not to use micro services, when not to use SQL,
when not to use static typing, when not to use React, when not to use
Kubernetes, etc.

Junior engineers don't know these trade offs, they end up using sophisticated
hammers for their screws. It doesn't mean that those hammers are bad or
useless, they were just misused.

~~~
macspoofing
>when not to use static typing

I'll bite. When should static typing not be used?

(Note: I agree with your general point)

~~~
yashap
When I’m throwing together a quick script, or maybe some minor piece of
automation, Python is my go-to language.

For any decently large project, though, I prefer static typing.

~~~
karatestomp
I'd kinda prefer it then, too. If it's short and I'm just scripting stuff up
it's probably mostly calls to existing code (libraries) so almost all the
static typing would do is tell me when I'm screwing up and give me hints for
valid arguments with types and names, with little or no added overhead. It'd
be _very_ nice.

~~~
MaulingMonkey
Consuming types in that context is nice; defining them is what's generally a
PITA / extra boilerplate / overhead.

To take a concrete example that could go both ways: Say you want to parse a
JSON blob for some task. On the one end, you could access it through dynamic
typing, or tools like jq, that don't need a schema for the entire data format.
At the other extreme, you could make typescript definitions defining a schema
for the entire format.

The more the same data gets (re)used, the more worthwhile taking the time to
define a full schema is. But to download and add type definitions (often out
of date and in need of further tweaking) for every once-off API request? Way
more effort than it's worth.

~~~
seer
Most static type systems do not force you to declare the entire shape of the
data, just the shape of the data you need.

In fact, it's good practice in general: when processing the JSON blob, you
declare only what your processing requires.

What you get is that if, for example, you do your validation but then by
chance touch more data than you've checked for, the types will tell you that
you are in dangerous waters, and you can go update the validations.

This is especially useful if you’re not the original author or if you’ve
written it several months back and don’t remember the details.

Static types are really cool that way, and can be treated as a kind of unit
test that is faster to write, faster to run, and always up to date.
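A minimal TypeScript sketch of this idea (the `UserSummary` type and the field names are made up for illustration): you declare only the fields your code reads, and the rest of the payload stays untyped.

```typescript
// Only the fields this code actually reads are declared; the API may
// return dozens of others, and they simply stay out of the type.
interface UserSummary {
  id: number;
  name: string;
}

function greet(raw: string): string {
  // Cast at the boundary; a runtime check guards the two fields we rely on.
  const user = JSON.parse(raw) as UserSummary;
  if (typeof user.id !== "number" || typeof user.name !== "string") {
    throw new Error("payload missing id/name");
  }
  return `Hello, ${user.name} (#${user.id})`;
}

// Extra fields in the blob are fine: we never declared or touched them.
console.log(greet('{"id": 7, "name": "Ada", "plan": "free", "beta": true}'));
// → Hello, Ada (#7)
```

If code later starts touching `user.plan`, the type checker flags the access, which is exactly the "you're in dangerous waters, go update the validations" signal described above.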

------
staticassertion
> However, your codebase has now to deal with network and multiple processes.

Here's the thing I see repeatedly called out as a negative, but it's a
positive!

Processes and networks are amazing abstractions. They force you to not share
memory on a single system, they encourage you to focus on how you communicate,
they give you scale and failure isolation, and force you to handle the fact
that a called subroutine might fail because it's across a network.

> If your codebase has failed to achieve modularity with tools such as
> functions and packages, it will not magically succeed by adding network
> layers and binary boundaries inside it

Functions allow shared state and don't isolate errors. Processes over
networks do. That's a massive difference.

If you read up on the fundamental papers regarding software reliability, this
is something that's brought up ad nauseam.

> (this might be the reason why the video game industry is still safe from
> this whole microservice trend).

Performance is more complex than this. For a video game system, latency might
be the dominant criterion. For a data processing service it might be
throughput, or the ability to scale up and down. For many, microservices have
the performance characteristics that they need, because many tasks are not
latency sensitive, or the latency sensitive part can be handled separately.

> I would argue that by having to anticipate the traffic for each microservice
> specifically, we will face more problem because one part of the app can't
> compensate for another one.

I would argue that if you're manually scaling things then you're doing it
wrong. Your whole system should grow and shrink as needed.

~~~
mixedCase
> for force you to handle the fact that a called subroutine might fail because
> it's across a network.

That just adds one failure mode to the list of failure modes people ignore due
to the happy-path development that languages with "unchecked exceptions as
default error handling" encourage.

> Functions allow shared state, they don't isolate errors. Processes over
> networks do. That's a massive difference.

Except not, because "just dump that on a database/kv-store" is an
all-too-common workaround chosen as an easy way out. This problem is instead
tackled by things like functionally pure languages such as Haskell, or Rust's
borrow checker, and even then only up to a point, beyond which it's back in
the hands of the programmer's experience; though they do help a ton.

~~~
staticassertion
> That just adds one failure mode to the list of failure modes people ignore
> due to the happy-path development that languages with "unchecked exceptions
> as default error handling" encourage.

There are only two meaningful failure modes - persistent and transient. So
adding another transient failure (network partition) is not extra work to
handle.
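A minimal sketch of that two-bucket view (the error classes and `withRetry` helper are made-up names, not from the original post): transient failures get retried, persistent ones surface immediately, and a network partition is just one more member of the transient bucket.

```typescript
// Hypothetical classification: a failure is either transient (a retry may
// succeed: timeout, network partition) or persistent (retrying is useless:
// bad request, missing resource).
class TransientError extends Error {}
class PersistentError extends Error {}

async function withRetry<T>(call: () => Promise<T>, attempts = 3): Promise<T> {
  for (let i = 1; ; i++) {
    try {
      return await call();
    } catch (err) {
      // Adding a new transient failure mode does not grow this handler;
      // it only adds a member to the transient bucket.
      if (err instanceof TransientError && i < attempts) continue;
      throw err; // persistent, or out of attempts
    }
  }
}
```

Under this view, moving a subroutine across the network adds entries to the transient bucket but no new branch to the caller.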

> Except not, because "just dump that on a database/kv-store" is an all-too-
> common workaround chosen as an easy way out.

Just to be clear, microservices are not just separate binaries on a network.
If you're not following the actual patterns of microservice architecture...
you're just complaining about something else.

~~~
unchar1
>Just to be clear, microservices are not just separate binaries on a network.
If you're not following the actual patterns of microservice architecture...
you're just complaining about something else

So what you're saying is that the way to avoid this problem in a microservice
architecture, is to be disciplined and follow the right patterns. Then
couldn't I just follow the same patterns in a modular monolith (eg: avoid
shared state, make sure errors are handled properly, etc) and get the bulk of
the benefits, without having to introduce network related problems into the
mix?

~~~
fennecfoxen
Because engineering discipline is actually hard. Not necessarily in the "here
is how you do it" sense, just in the sense of getting the buy-in from
engineers and engineering leadership that will make it happen.

This is like the one thing that microservices might actually be sort of good
at: drawing a few very hard boundaries that do actually sort of push people in
the general direction of sanity, e.g. it's easier to have basic encapsulation
when the process might be on another computer...

~~~
mixedCase
I cannot figure out how you can see that. RPC just adds a "Remote" on top of
the "Procedure Call" part, we add a failure mode but the thought process is
the same.

As witnessed by many teams, spaghetti happens just as readily in a distributed
monolith as it does in a proper monolith; the distribution just adds latency
and makes it harder to debug.

The boundaries you're imagining are not drawn by the technology nor by the
separate codebases, they're drawn by the programmers making the calls. And I
guarantee you that the average developer with their usual OOP exposure can
understand much more easily where to draw decent boundaries following some
pattern like Clean/Hexagonal/Onion/Whatever Architecture as opposed to
microservices, where it's far more arbitrary to determine the concerns of a
microservice, especially when a use case cuts through previously drawn
boundaries.

------
dpeck
In my experience, microservices grew to prominence not because of their
technical merit, but because it allowed engineering "leadership" to not have
to make decisions. Every developer or group of developers could create their
own fiefdoms and management didn't have to worry about fostering consensus or
team efforts, all that had to be agreed on was service contracts.

We end up with way too many developers on a given product, an explosion of
systems that are only the least bit architected, but thankfully the vp of
engineering didn't have to worry themselves with actually understanding
anything about the technology and could do the bare minimum of people
management.

Individual minor wins, collectively massive loss.

* there are reasons for microservices at big scales, if everyone is still fitting in the same room/auditorium for an all-hands I would seriously doubt that they're needed.

~~~
pjmlp
And the worst of it is that it just gets rebooted every couple of years.

Anyone doing distributed computing long enough has been at this via SUN RPC,
CORBA, DCOM, XML RPC, SOAP, REST, gRPC, <place your favourite on year XYZW>.

~~~
peterwwillis
Some more hits and misses: ONC-RPC, DCE/RPC, JRMI, MSRPC, RPC over HTTP, MAPI
over HTTP, JSON-RPC, JSON-WSP, WAMP, M-RPC, MTProto, ICE, Cap'n Proto, PRyC,
DRb, AMF, RFC, D-Bus, Thrift, Avro, GWT

~~~
p_l
<pedant>ONC-RPC and DCE/RPC and MSRPC and JRMI are dupes</pedant> ;)

That said, I recently ended up looking at the code so far and thinking "I
should have used CORBA". And nothing so far managed to dissuade that
thought...

------
dpenguin
I agree 99.9% of products do not need a microservice architecture, because:

1. They will never see scaling to the extent that you need to isolate services
2. They don't have zero-downtime requirements
3. They don't have enough feature velocity to warrant breaking up a monolith
4. They can be maintained by a smaller team

I also agree that the way to build new software is to build a monolith and
when it becomes really necessary, introduce new smaller services that take
away functionality from the monolith little by little.

Microservices do have a good usecase even for smaller teams in some cases
where functionality is independent of existing service. Think of something
like LinkedIn front end making calls directly to multiple (micro)services in
the backend- one that returns your contacts, one that shows people similar to
you, one that shows who viewed your profile, one that shows job ads, etc. None
of these is central to the functionality of the site, and you don't want to
introduce delay by having one service compute and send all the data back to
the front end. You don't want a failure in one to cause the page to break, etc.

Unfortunately, as with much new tech, junior engineers are chasing the shiniest
objects and senior engineers fail to guide junior devs or foresee these
issues. Part of the problem is that there is so much tech junk out there on
medium or the next cool blog platform that anyone can read, learn to
regurgitate and sound like an expert that it’s hard to distinguish between
junior and senior engineers anymore. So if leaders are not hands on, they
might end up making decisions based on whoever sounds like an expert and
results will be seen a few years later. But hey, every damn company has the
same problem at this point.. so it’s “normal”.

~~~
Sherl
At least the frontend and backend need to be decoupled in almost any new
development. I work with several legacy apps where we use Python requests
just to collect data. It's a huge pain when the HTTPS certificate expires,
when they change something in a validation header, and when they deploy a new
'field'. Most CRUD applications do need a place where you can collect all the
data after the backend processes all the business logic, without touching the
frontend.

Almost the entire RPA industry revolves around the idea of supporting this
legacy-app problem -- scraping content and not worrying about it breaking.

------
cameronbrown
Microservices were never about code architecture, they were an organisational
pattern to enable teams to own different services. Most "microservices" don't
actually look micro to those implementing them, because it's really just "a
lot of services".

For my personal projects, I just have a frontend service (HTTP server) and a
backend service (API server). Anything more is overkill.

~~~
benhoyt
Agreed. Or as I've heard it said, "microservices solve a people problem, not a
technical one". This is certainly how they were pushed at my current workplace
-- it was all about "two-pizza teams being able to develop and deploy
independently".

Out of interest, what does the "frontend service" do in your setup? For my
personal projects I generally just go for a single server/service for
simplicity.

~~~
cameronbrown
My frontend service handles all user requests (HTML templates, i18n,
analytics, authentication). Backend is exclusively focused on business logic,
interacting with DB, cron jobs, etc... My projects grew into this naturally,
and it was monolith first.

------
z3t4
Take away the word micro from microservices. It's just a buzzword. Now you
have just services. You can have one service that handles email, chat,
payroll, and the website, or you can break them up into independent services.
Ask yourself: does it make sense to have two different services to handle x
and y?
Just don't break something up because of some buzzword mantra. Maybe the
public website is the bottleneck in your monolith, then it might be a good
idea to put it on its own server and scaling strategy so that it doesn't bog
down the other parts of the system.

~~~
eldelshell
Yeah, something in between a monolith and microservices, and I'm going to name
it: services architecture.

Hmmm... I think I can do better: Service Oriented Architecture... Yeah I like
this name. SOA.

Are you telling me I just invented something that's 30 years old? Bollocks!

~~~
Bnshsysjab
Can you articulate SOA without using buzzwords?

I assume it’s concepts like a dedicated TLS terminator, Single Sign on,
centralised logging, etc?

~~~
m463
programs communicating with remote programs.

People will try to quibble that all this is new stuff, but really it is just
new names for old ideas.

[https://en.wikipedia.org/wiki/Remote_procedure_call](https://en.wikipedia.org/wiki/Remote_procedure_call)

------
marcinzm
>If your codebase has failed to achieve modularity with tools such as
functions and packages, it will not magically succeed by adding network layers
and binary boundaries inside it

This is assuming you're converting an existing non-modular monolithic service
to micro services. If you're starting from scratch or converting a modular
monolithic service then this point is moot. It says nothing about the
advantages or disadvantages of maintaining a modular code base with monoliths
or microservices which is what people are actually talking about.

~~~
mic47
If you are starting from scratch (again), you can make a good monolith too,
since you already know a lot about the problem you are solving.

~~~
marcinzm
The issue generally isn't just creating something clean but rather it's
maintaining something clean. Something that will be owned not by you but by
multiple teams whose members change over the course of years.

On a side note, I've found that creating something again usually leads to
messes as you try to fix all the issues in the original which just creates new
issues.

------
drdaeman
Data API over HTTP spaghetti is surely a bad way to do microservices (some
accidental exclusions apply[1]). And if you have to do cross-service
transactions, or jump back and forth through the logs tracing an event as it
hops across a myriad of microservices, it means that either your boundaries
are wrong or you're doing something architecturally weird. It's probably a
distributed monolith, with in-process function invocations replaced with
network API calls - something worse than a monolith.

At my current place we have a monolith and are trying to get services right by
modelling them as a sort of event pipeline. This is what we're using as a
foundation, and I believe it addresses a lot of raised pain points:
[http://docs.eventide-project.org/core-concepts/services/](http://docs.eventide-project.org/core-concepts/services/)
(full disclosure: I'm not personally affiliated with this project at all, but
a coworker of mine is).

___

[1] At one of my previous jobs, I've had success with factoring out all
payment-related code into a separate service, unifying various provider APIs.
Given that this wasn't a "true" service but a multiplexer/adapter in front of
other APIs, it worked fine. Certainly no worse than all the third-party
services out there, and I believe they're generally considered okay.

~~~
ipnon
Yes, the more I develop on the Web, the more I find HTTP lacking as a panacea
for process communication. It's ubiquitous because Web 1.0 operated at a
human-readable level: one request made one response. Now one request makes
many server-side requests, which make one server-side response, which triggers
many client-side requests. No solution today solves all the problems of this
many-to-many communication pattern on the web.

------
plandis
> A recent blog post by neobank Monzo explains that they have reached the
> crazy amount of 1500 microservices (a ratio of ~10 microservices per
> engineer)

That’s wild. Microservices are mostly beneficial organizationally — a small
team can own a service and be able to communicate with the services of other
small teams.

If anything I think a 10:1 software engineers:services ratio is probably not
far off from the ideal.

~~~
axlee
> That’s wild. Microservices are mostly beneficial organizationally — a small
> team can own a service and be able to communicate with the services of other
> small teams.

And a cross-concern fix that a dev used to be able to apply by himself in a
day, now has to go through 5 teams, 5 programming languages, 5 kanban boards
and 5 QA runs to reach production. I never understood the appeal of teams
"owning" services. In my dream world, every engineer can and should be allowed
to intervene in as many layers/slices of the code as his mental capacity and
understanding allows. Artificial - and sometimes bureaucratic - boundaries are
hurtful.

To me, it's the result of mid-to-senior software engineers not being ready to
let go of their little turfs as the company grows, so they build an
organizational wall around their code and call it a service. It has nothing to
do with computer science or good engineering. It is pure Conway's law.

~~~
hrpnk
In my experience, ownership of services is usually defined to make it clear,
within the engineering organization, who will react as first responder when
there are security patches or production incidents to deal with, and also when
there are questions about the service's inner workings. It's especially
required when the documentation of the services is sparse, which is likely to
happen when the change rate in the team is high.

In more mature engineering organizations, you would define a set of
maintainers for the service, who will define the contribution mechanisms and
guidelines, so that anyone can make changes to the code. This is further
enabled by common patterns and service structures, especially when there is a
template to follow. Strict assumed "ownership" creates anti-patterns where
each team will define their favourite tech stack or set of libraries making it
difficult for others to contribute and decreasing the possible leverage
effects in the team.

~~~
axlee
Maybe it is simply a terminology issue, but what you describe I would call
responsibility rather than ownership. Ownership implies strong exclusivity.
Agree with your post otherwise.

~~~
hrpnk
I agree that responsibility is what we are actually looking for.

The term 'ownership' is popular in product teams [1] and in engineering career
frameworks [2]. In the second example, it's defined as "The rest of that story
is about owning increasingly heavy responsibilities". Even github allows
defining code ownership through the CODEOWNERS files.

[1] [https://svpg.com/autonomy-vs-ownership/](https://svpg.com/autonomy-vs-ownership/) [2] [https://open.buffer.com/engineering-career-framework/](https://open.buffer.com/engineering-career-framework/)

------
ts0000
While I agree with the notion of treating microservices with caution, I found
the article a bit too shallow, barely supporting its claim. The second
"fallacy" in particular reads like a draft, and the article overall ends
abruptly.

------
Gollapalli
Microservices have some inherent advantages, mainly that you can manage,
modify and deploy one service at a time, without taking down or redeploying
the rest of your application(s). This is arguably the big thing that is
missing from monoliths: it's hard to change only a single API endpoint in a
monolith, though easier to make a change across the entire monolith when you
have to change something about how the whole system works.

The best compromise that I've come up with would be something that keeps your
entire app in one place, but allows individual portions of it to be
hot-swapped in the running application, and is built to run in a distributed,
horizontally scalable fashion.

In addition, there's a lot to be said for the old way of putting business
logic in stored procedures, despite the poor abstraction capabilities of SQL
relative to something like Lisp. With modern distributed databases, we can
conceivably run stored-procedure code written in something like Clojure,
keeping code close to the database (or rather, data close to the code). That
allows hot-swapping, modification, introspection, replay of events, and all
other manner of things, all while managing the whole thing like a monolith,
with a single application and configuration to deploy, and a more manageable
and obvious attack surface to secure.

This is my solution, called Dataworks, if anyone's interested:
[https://github.com/acgollapalli/dataworks#business-case](https://github.com/acgollapalli/dataworks#business-case)

(Some of those things like introspection and replay-of-events are in the road
map, but the core aspects of hotswapping and modification of code-in-db work.)

~~~
winrid
You can just deploy the monolith horizontally with runtime/feature flags.
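A sketch of that approach (the `FEATURES` variable and feature names are hypothetical): one binary is deployed to every instance, and each deployment group enables only the code paths it needs.

```typescript
// One binary, many instances; each instance enables only the features its
// deployment group needs, selected by a comma-separated flag list.
function enabledFeatures(spec: string): Set<string> {
  return new Set(spec.split(",").filter(Boolean));
}

function startMonolith(spec: string): string[] {
  const enabled = enabledFeatures(spec);
  const started: string[] = [];
  if (enabled.has("web")) started.push("web");         // public HTTP handlers
  if (enabled.has("jobs")) started.push("jobs");       // background workers
  if (enabled.has("billing")) started.push("billing"); // payment code paths
  return started;
}

// Each deployment group sets its own flag list, e.g. FEATURES=web on the
// public fleet and FEATURES=jobs,billing on a smaller internal one.
console.log(startMonolith(process.env.FEATURES ?? "web,jobs,billing"));
```

Same codebase everywhere, but the public fleet and the worker fleet can still be scaled independently.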

~~~
Gollapalli
That solves part of the problem. If you can turn it on and off with a feature
flag, then you probably have some modularity. But for an internal service, or
a SaaS offering, or any number of things where you have one application that
you need to scale, do feature flags really make sense?

EDIT: The above was not fully considered. I think the original article makes a
really good point about this:

>Splitting an application into microservices gives finer-grained allocation
possibility; but do we actually need that ? I would argue that by having to
anticipate the traffic for each microservice specifically, we will face more
problem because one part of the app can't compensate for another one. With a
single application, any part (let's say user registration) can use all
allocated servers if needed ; but now it can only scale to a fixed smaller
part of the server fleet. Welcome to the multiple points of failure
architecture.

Having a monolith where each feature is deployed separately according to
feature flags makes some sense in that you have one codebase, deployed
modularly, like microservices, but you still leave yourself open to the
"multiple points of failure arhitecture" as the author describes it. In
addition, the feature flags idea doesn't really remove the deployment
disadvantages of the monolith, unless you're willing to have different parts
of your horizontally deployed application on different versions.

------
fffernan
For the most part this level of microservice solves the following problem: a
new engineering leader comes in. The new leader wants to rewrite the entire
thing because "it sucks". The business doesn't have the resources for a
rewrite (for the nth time). The new leader and the business compromise by
creating a microservice. Rinse and repeat. Cloud/container/VM tech has really
allowed this pattern to work. The art of taking over an existing codebase and
keeping it going at low cost and low overhead is gone. Nobody's promo packet
is filled with sustainment work. One microservice per promotion. Etc etc.

------
bhntr3
Microservices are the actor model (Erlang or Akka), except they require lots
of devops work, being on call for x services every night, and a container
management system like Kubernetes to be manageable.

Actors are a simple solution to the same problems microservices solve and have
existed since the 1970s. Actor implementations address the problem
foundationally by making hot deployment, fault tolerance, message passing, and
scaling fundamental to both the language and VM. This is the layer at which
the problem should be solved but it rules out a lot of languages or tools we
are used to.

So, in my opinion, microservices are a symptom of an abusive relationship with
languages and tools that don't love us, grow with us or care about what we
care about.

But I also think they're pretty much the same thing as EJBs, which makes
Kubernetes Google's JBoss.

------
MarkMarine
This misses one of the main reasons microservices are nice: it's much easier
to change code that isn't woven throughout a code base. Microservices make the
boundaries between units explicit and force API design on those boundaries.
Yes, you can properly design these abstractions without a service boundary,
but having the forcing function makes it required.

~~~
allan_s
> Microservices make the boundary between units defined and forces API design
> on those boundaries

until one engineer says "hmmm, why add a new endpoint in their service when
we could simply connect our microservice to their database directly?"

~~~
yellowapple
I mean, this ain't entirely unreasonable. Considering the database to be its
own "service" is perfectly valid, and you can control what things a given
client can do through users/roles, constraints, triggers, etc., which you
absolutely should be doing anyway.

That is: the database's job ain't just to store the data, but also to control
access to it, and ensure the validity of it. A lot of applications seem to
only do the first part and rely on the application layer to handle access
control and validation, and then the engineers developing these apps wonder
why the data's a tangled mess.

------
djsumdog
The author is totally right about the HTTP layer/networking stuff. I don't
think you have to re-implement SQL transactions, but you do need a backing
store that lets you acknowledge that a message has been processed, and no
downside to processing the same thing twice (idempotency).
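A sketch of that acknowledgment idea (the in-memory store and message shape are made up; in production the store would be a database table or kv-store): record each processed message ID so a redelivery becomes a no-op.

```typescript
// Hypothetical backing store: remembers which message IDs were processed
// and what result they produced.
const processed = new Map<string, number>();

function handlePayment(msg: { id: string; amount: number }): number {
  // Idempotency check: a message delivered twice is applied only once.
  if (processed.has(msg.id)) return processed.get(msg.id)!;
  const result = msg.amount; // stand-in for the real side effect
  processed.set(msg.id, result); // acknowledge: mark this message as done
  return result;
}
```

With this in place, the delivery layer is free to retry aggressively: duplicates are absorbed by the store instead of double-charging.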

I did a post about microservices as I've seen them, and I see them more as
software evolution matching that of our own biological and social evolution:

[https://battlepenguin.com/tech/microservices-and-biological-systems/](https://battlepenguin.com/tech/microservices-and-biological-systems/)

Like our own immune system, the thousands of moving parts have somehow evolved
to fight infections and keep us alive, but it's easy to not be able to
understand how any of it really works together.

------
jayd16
Hammer considered harmful. Cannot secure screws, says blog.

~~~
HideousKojima
Hammers work great for driving screws if you have an impact driver (the hand
tool kind)

------
je42
I think he is skipping a couple of points.

For example, the deployment aspect:

- monolith: a single deployable unit
- microservices: multiple independently deployable units

Multiple teams on a monolith:

- you have to coordinate releases and rollbacks
- the code base grows, and so do dependencies between modules that shouldn't
depend on each other, unless you have a good code review culture
- deployment gets slower and slower over time
- db migrations also need to be coordinated across multiple teams

These problems go away when you go microservices. Of course you get other
problems.

My point is, in the microservices vs monolith discussion you need to consider
a whole bunch of dimensions to figure out what is the best fit for your org.

~~~
giulianob
Still don't need microservices. What you're referring to is just SOA which has
been around for a couple of decades. Microservices typically outnumber
engineers or aren't too far off.

~~~
dragonwriter
> What you're referring to is just SOA which has been around for a couple of
> decades

“Microservices” is just a new name for SOA that ditches the psychological
association SOA had developed with the WS-* series of XML-based standards and
associated enterprise bloat.

------
jb_gericke
Microservices aren't a panacea by any means, but like any tool, they provide
certain advantages when dealing with certain use-cases.

One thing the article fails to mention is the boatload of tooling out there
to address the failings of, and complement, microservice architecture, of
which Kubernetes is only one example.

Sure, they come with their own levels of complexity, but deploying K8s today
is orders of magnitude simpler than it was 4 years ago. The same will hold true
for similar tooling in the general microservices/container orchestration
domain, such as service mesh (it's a lot simpler to get up and running with
Istio or Linkerd than it was 18 months ago), distributed tracing
(Jaeger/Opentelemetry) and general observability.

I'd also point out that microservices can provide benefits beyond independent
scaling and independent deployment of services. In theory they also allow for
faster velocity in adding new services, all dependent on following effective
DDD when scaffolding them. They let different teams in a large org design,
build and own their own service ecosystem, with APIs as contracts between
their services and upstream/downstream consumers in their own org. And new
team members coming onboard should, in theory, be able to get familiar with a
tighter/leaner codebase for a microservice, as opposed to wading through
thousands of lines of a monolith's code to find and understand the parts
relevant to their jobs.

------
SiNiquity
In my experience, the benefits of microservices are primarily better
delineated responsibilities and narrower scope, and secondary benefits tend to
fall out from these. There are downsides, but the "harmful" effects do not
reflect my experience. I fully grant that more things on a network invite lower
availability / higher latency, but I contend that you already need to handle
these issues. Microservices do not tend to grossly exacerbate the problem (in
my experience anyway).

The other callout is that clean APIs over a network can just be clean APIs
internally. This is true in theory but hardly in practice, from what I've seen.
Microservices tend to create boundaries that are more strictly enforced. The
code, data and resources are inaccessible except through what is exposed
through public APIs. There is real friction to exposing additional data or
models from one service and then consuming it in another service, even if both
services are owned by the same team (and moreso if a different team is
involved). At least in my experience, spaghetti was still primarily the domain
of the internal code rather than the service APIs.

There are also a number of benefits around the non-technical management of
microservices. Knowledge transfer is easier since, again, the scope is narrower
and the service does less. This is a great benefit as people rotate in and out
of the team, and also simplifies shifting the service to another team if it
becomes clear the service better aligns with another team's responsibilities.

------
stillbourne
Microservices are middleware. That's the way I treat them anyway. I build
them as the glue between the backend and frontend. They handle things like
authentication, business logic, data aggregation, caching, persistence, and
generally act as an api gateway. I really only ever use microservices to
handle crosscutting concerns that are not directly implemented by the backend
but have a frontend requirement. The only way that is harmful is if you write
bad code. Bad code is always harmful.

~~~
Closi
You aren’t really implementing a microservices architecture in that case
though.

The idea of microservices is that they are self-contained, not just middleware
to a monolithic backend.

~~~
stillbourne
That statement goes against everything I have learned about SOA and
microservices.

[https://apifriends.com/api-management/microservices-
soa/](https://apifriends.com/api-management/microservices-soa/)

~~~
Closi
The central premise of microservices architecture is that the services manage
their own data.

If you have a shared data store then you are not really implementing
microservices.

[https://docs.microsoft.com/en-
us/azure/architecture/microser...](https://docs.microsoft.com/en-
us/azure/architecture/microservices/design/data-considerations)

In fact, the linked article by Martin Fowler pretty much describes it as the
opposite of what you are describing:

[https://martinfowler.com/articles/microservices.html](https://martinfowler.com/articles/microservices.html)

~~~
stillbourne
> The central premise of microservices architecture is that the services
> manage their own data.

No, microservices can handle data from a Bounded Context; that can be their
own data, external data, or aggregate data. A Bounded Context is the data
belonging to a specific Domain, which may connect to other Domains along
explicitly defined edges. The data is therefore decentralized: a service can
connect to an API that is a monolith, interface with messaging services, send
notifications over websockets, etc., because it's... a middleware service.

From the article that you linked to debunk me:

> The Guardian website is a good example of an application that was designed
> and built as a monolith, but has been evolving in a microservice direction.
> The monolith still is the core of the website, but they prefer to add new
> features by building microservices that use the monolith's API. This
> approach is particularly handy for features that are inherently temporary,
> such as specialized pages to handle a sporting event. Such a part of the
> website can quickly be put together using rapid development languages, and
> removed once the event is over. We've seen similar approaches at a financial
> institution where new services are added for a market opportunity and
> discarded after a few months or even weeks.

And if I am reading this right, they have a monolith backend, but the frontend
doesn't read directly from that; it reads from some 'thing' in the _middle?_
Oh, what's that called? It's on the tip of my tongue. Ah, right, it's called
middleware. Because microservices are middleware.

Edit: Oh look that article you linked to debunk me also has the very image I
am trying to describe with words:

[https://martinfowler.com/articles/microservices/images/decen...](https://martinfowler.com/articles/microservices/images/decentralised-
data.png)

~~~
Closi
Can you describe how that image is what you are describing?

------
toddwprice
I see microservices as a people/team architecture. It's a way to scale up
people and define boundaries around who is responsible for what, without
having to standardize how everyone implements what they are responsible for.
Just expose it as a REST API. Problem solved. And problems created. This isn't
all bad, it just isn't an "everyone should do this and all your problems will
go away" architecture. That architecture doesn't exist.

------
danthemanvsqz
Microservices are good for scaling teams not hardware. If you don't have more
than one team then there is no reason to break up your monolith.

~~~
okamiueru
Seems just a bit too black and white. Surely there can be reasons for
splitting up a monolith. Some domains might require very strict boundaries for
shared memory and concurrent software on a given system. CE Certified class C
medical software comes to mind.

Isolating something to a simple deployment exposed through an RPC API might
make it far easier and more straightforward to validate and pass requirements.

Micro-services can be used and misused. Good engineering rarely follows these
culture-trends. If it makes sense, it makes sense.

------
anonu
I read the explanation and I think the answer is still: it depends. Think
about it this way: in your kitchen you don't just have 1 kind of knife. You
probably have 2 or 3 different kinds of knives if you're doing basic stuff -
and maybe 5 to 10 different knives if you're a top chef.

The same applies to systems architecture. Microservices isn't the only
solution or the best solution.

Case in point: I've worked on high-frequency trading systems for much of my
career. The early systems, circa 2000-2005, were built on top of pub/sub
systems like Tibco RV or 29West - this was effectively microservices before
the term was used popularly.

What happened around 2006 was that the latency required to be profitable in
high-frequency came down drastically. Strategies that were profitable before
needed to run much faster. The result was to move to more monolithic
architectures where much of the "tick to trade" process happened in a single
thread.

Point is: use the right tool for the job. Sometimes requirements change and
the tools needed change as well.

------
jgarzon
Like almost anything, they're harmful when not used for the correct
application; say, using a hammer to insert a screw is not a good idea. One of
my favorite things about using microservices is that you can use multiple
languages. This can grant you the ability to use a language which is better
suited to the task, or let other programmers contribute in their favorite
language.

~~~
status_quo69
> you can use multiple languages

You can but you shouldn't unless there's a very good reason (ex: there's a
_very_ specific interface only available in a language that doesn't conform to
the rest of your services) :)

------
bsdelf
IMO, microservices have two practice levels. Level 1: a single codebase with
multiple entrances (programs). At this level, the application already scales
horizontally and functionally, while still keeping the benefit of code & data
sharing. Level 2: eliminate code & data sharing, use RPC or MQ for
communication, and split the project into multiple repositories. This level
might be regarded as the "true" microservices which are considered harmful
according to the blog post. Generally speaking, if level 1 fits your business,
there is no need to go for level 2. If you do need level 2, well, it is the
complexity itself which leads to the architecture; there is no shortcut.
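
The "level 1" layout can be sketched as one codebase whose shared code is
reused by several program entrances; a minimal Python sketch (all names
hypothetical):

```python
import sys

# shared module: code & data access reused by every entrance
def load_user(user_id: int) -> dict:
    # stand-in for a real shared data layer
    return {"id": user_id, "name": f"user-{user_id}"}

# entrance 1: an API-style handler
def api_main(user_id: int) -> dict:
    return {"status": 200, "body": load_user(user_id)}

# entrance 2: a background worker
def worker_main(user_id: int) -> str:
    return f"processed {load_user(user_id)['name']}"

if __name__ == "__main__":
    # each deployed program selects its entrance:
    #   python app.py api   |   python app.py worker
    entrance = sys.argv[1] if len(sys.argv) > 1 else "api"
    print(api_main(1) if entrance == "api" else worker_main(1))
```

Both "programs" deploy and scale independently while sharing one codebase,
which is the level-1 trade-off described above.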

BTW, for level 1 businesses, I have a boilerplate for Node.js & TypeScript
that you may want to try: [https://github.com/bsdelf/barebone-
js](https://github.com/bsdelf/barebone-js)

------
trentdk
A major benefit to microservices (over monoliths) that I haven't seen
mentioned yet is testability. I find it hard, if not impossible, to achieve a
healthy Pyramid of Tests on a large monolith.

For example: a high level, black box test of a service endpoint requires
mocking external dependencies like other services, queues, and data stores.
With a large monolith, a single process might touch a staggering number of the
aforementioned dependencies, whereas something constrained to be smaller in
scope (a microservice) will have a manageable number.
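
As a sketch of the kind of test being described: a black-box test of a small
handler where its single downstream dependency is mocked (the handler and
client names are illustrative, not from any real service):

```python
from unittest.mock import Mock

# a minimal service handler with one injected downstream dependency
def get_order_status(order_id: int, inventory_client) -> dict:
    stock = inventory_client.check_stock(order_id)
    return {"order_id": order_id, "in_stock": stock > 0}

# black-box style test: mock the one external dependency
def test_get_order_status():
    inventory = Mock()
    inventory.check_stock.return_value = 3
    result = get_order_status(42, inventory)
    assert result == {"order_id": 42, "in_stock": True}
    inventory.check_stock.assert_called_once_with(42)

test_get_order_status()
```

With a large monolith, the same style of test would need mocks for many more
such dependencies, which is the point about the test pyramid.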

I enjoy writing integration and API tests of a microservice. The ones that we
manage have amazing coverage, and any refactor on the inside can be made with
confidence.

Our monoliths tend to only support unit tests. Automated end-to-end tests
exist, but due to the number of dependencies these things rely on, they’re
executed in a “live” environment, which makes them hardly deterministic.

Microservices allow for a healthy Pyramid of Tests.

~~~
BurningFrog
Microservice testing comes with version-combination hell.

If you have 10 microservices, each of which can be on one of two versions,
that's 1024 combinations. How do you test that?

~~~
pojzon
I'm yet to see a system that consists of versions of code other than "new" and
"current". You test against changes only; what you described is some mess in
deployed versions / version management.

~~~
BurningFrog
How is this different from what I'm describing?

"New" and "current" _are_ two different versions.

~~~
pojzon
In that you always test against only the versions you have deployed, plus the
new version of a single service.

Which cuts your exaggerated 1024 cases down to 1.

~~~
BurningFrog
OK, but then you have a very controlled way of deploying each service.

Each team can't just deploy a new version of their microservice when it makes
sense to them.

So your collection of microservices becomes a bit of a distributed monolith,
losing some of the classic microservice advantages.

Or so it seems to me. I just read about this stuff, have never used it. Happy
to be educated.

~~~
pojzon
It's losing "some" advantages of startup-grade microservices and gaining the
maintainability advantages of a "Netflix/Facebook"-level grid... It depends
what your scale is. Shipping shit fast is often not the best solution at that
scale; doing it right is. And I have already explained to someone else in this
thread why that approach is important.

------
Discombulator
Without going into detail on the actual debate, I just want to make a meta
point here: If you are writing an article and the following

> [All] technical challenges [...] will not be magically solved by using
> microservices

is the key statement of your article, then you should really consider adding a
lot of nuance or not publishing it at all.

------
scarmig
Start with a monolith for your core business logic. Rely on external services
for things outside your value prop, be it persistent storage or email or
something else. Keep on building and growing until the monolith is causing
teams to regularly step on each other's toes. By that I mean, continue well
past teams needing to coordinate or needing dedicated headcount to handle
coordination to the point where coordination is impossible. When that point
approaches, allow new functionality to be built in microservices and gradually
break off pieces of the monolith when necessary.

------
halbritt
This blog seems self-contradicting.

>The main benefit is independency

In the absence of independency, a service development organization will hit a
ceiling and fail to scale beyond that. While there may be a whole host of
other problems that microservices does not solve, this single problem makes it
worthwhile in many cases.

That all said, implementing microservices well or even scaling beyond the
point where microservices become useful requires a great deal of engineering
discipline. Concepts like configuration as code and API contracts have to
become something more than theoretical.

------
lowbloodsugar
Teams of ten. Each service is owned by exactly one team.

That rules out every monolith I've seen at companies that still did that.

But unfortunately, microservices become a religion, a cargo cult, and
companies end up with hundreds of tiny little services.

My services are not monoliths. But are they microservices? Don't care. They
work. Certainly they are just a couple of services within a network of several
hundred, but I work at a large company. And every one of those services has
one team responsible for them.

------
hn_1234
In my 4 years of experience with microservices: when people decide to go that
route, please define a couple of things first. 1. How to share the database
when there is too much dependency between two microservices; think
event-driven mechanisms or something like materialized views. 2. Give
developers more importance in this setup, as too much responsibility gets
thrown at them.

------
mbrodersen
If you don't know how to structure a monolith well (using libraries/modules)
then you will 100% fail building well structured micro services. Micro
services take the difficulty of building a well-structured monolith and _adds_
even more complexity (networking, protocols, no global ACID, no global
priority list, inter-team politics etc. etc.)

------
avip
I just wish we had industry standards: if such-and-such is your problem,
this-and-that architecture is the simplest that works.

Structural engineers have such books. They need to build something specced to
hold X tonnes, and there are the possibilities, outlined in simple drawings.

I have to bump my head against every problem I'm facing, and I live in
constant doubt that I made the wrong technical decisions.

------
philliphaydon
I'm unsure what people are building when they say they are building
microservices. I don't understand how a company like Uber or Monzo ends up
with 1500 services to maintain.

I mean, do they abstract the function of "emailing" into 50 different
microservices, or 1 microservice?

------
sascha_sl
microservices are useful, but not for the reasons listed here (or the reasons
often assumed)

personally, i'm more a fan of "realm of responsibility scoped services" to
decouple technologies/datastores of parts of a system that do not interact by
design (for instance, your user account / credentials handling from literally
anything else), and then use a system like kafka (with producer-owned format)
to have a common data bus that can tolerate services that process data
asynchronously (or even things that keep users in the typical "refresh loop")
dying for a bit.

------
timwaagh
I think nowadays most developers pretty much collectively point their browser
towards linkedin.com the moment they hear an architect mutter the phrase
'Service-Oriented Architecture'.

------
andrewstuart
Are microservices trading development complexity for deployment complexity?

------
vyrotek
_Spaghetti over HTTP_

That must make meatballs the DBs and parmesan the JS frameworks.

------
user00012-ab
[https://news.ycombinator.com/item?id=23448489](https://news.ycombinator.com/item?id=23448489)

------
yellowapple
> Network failure (or configuration error) is a reality. The probability of
> having one part of your software unreachable is infinitely bigger now.

Network partitions are indeed a problem for distributed software in general.
By the time microservices are worthwhile, however, the application likely
already necessitates a distributed design.

> Remember your nice local debugger with breakpoints and variables? Forget it,
> you are back to printf-style debugging.

...why? What's stopping you from using a debugger? A microservice v. a
monolith should make zero difference here.

At worst, you might have to open up multiple debuggers if you're debugging
multiple services. Big whoop.

> SQL transaction ? You have to reimplement it yourself.

...why? What's stopping you from pulling it from a library of such
transactions?

I don't even really think this is a "problem" per se. Yeah, might be
inconvenient for some long and complicated query, but that's usually a good
sign that you should turn that into a stored procedure, or a view, or a
service layer (so that other services ain't pinging the database directly), or
something else, since it simply means bad "API" design (SQL being the "API" in
this context).
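
For context on what "reimplement it yourself" tends to mean in practice: a
cross-service write that a monolith would wrap in one SQL transaction is
usually approximated with compensating actions (a saga). A minimal sketch,
with entirely hypothetical steps:

```python
# Minimal saga sketch: run steps in order; on failure, invoke the
# compensating actions of every completed step, in reverse order.
def run_saga(steps):
    """steps: list of (action, compensation) callables."""
    completed = []
    try:
        for action, compensation in steps:
            action()                      # e.g. an HTTP call to another service
            completed.append(compensation)
    except Exception:
        # best-effort rollback, newest completed step first
        for compensation in reversed(completed):
            compensation()
        return False
    return True
```

Unlike a real ACID transaction, intermediate states are visible to other
services and the compensations can themselves fail; this bookkeeping is
exactly the sort of thing the commenter suggests pulling from a library rather
than writing by hand.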

> Communication between your services is not handled by the programming
> language anymore, you have to define and implement your own calling
> convention

Which is arguably a good thing, since you're able to more readily control that
calling convention and tailor it to your specific needs. It also gives ample
opportunities for logging that communication, which is a boon for
troubleshooting/diagnostics and for intrusion detection.

> Security (which service can call which service) is checked by the
> programming language (with the private keyword if you use classes as your
> encapsulation technique for example). This is way harder with microservices:
> the original Monzo article shows that pretty clearly.

The programming language can do little to nothing about security if all the
services are in the same process' memory / address space; nothing stopping a
malicious piece of code from completely ignoring language-enforced
encapsulation.

Microservices, if anything, help _considerably_ here, since they force at
least process-level (if not machine-level) isolation that can't so easily be
bypassed. They're obviously not a silver bullet, and there are other measures
that should absolutely be taken, but separation of concerns - and enforcing
that separation as strictly as possible - is indeed among the most critical of
those measures, and microservices happen to more or less bake that separation
of concerns into the design.

------
nindalf
I wish Dijkstra had named his article on the go to statement[1] something
else. It feels like every other author nowadays wants to use the sense of
authority that "considered harmful" gives them. Like it's obvious and widely
accepted that it's harmful, and they're giving you an FYI.

Just name it "The downsides of microservices" and we'll know that it's your
personal opinion. This title might get you more clicks, but it's a turn off
for me.

[1] -
[https://homepages.cwi.nl/~storm/teaching/reader/Dijkstra68.p...](https://homepages.cwi.nl/~storm/teaching/reader/Dijkstra68.pdf)

~~~
phlakaton
I never quite understood the objection to "considered harmful." To me the very
wording of the phrase is a jest; it's said with tongue firmly planted in
cheek, even if what follows is quite serious, and it fully and effectively
declares its intention to go poke some sacred cow in the eye. I never read it
as overbearing or judgmental. I love it!

~~~
sildur
It feels pompous to me.

------
anm89
Came here to complain about "considered harmful" click bait and was happily
surprised to find everyone else already feels the same.

