
The Death of Microservice Madness in 2018 - Sandman
http://www.dwmkerr.com/the-death-of-microservice-madness-in-2018/
======
shitloadofbooks
I think "microservices" is so appealing because so many Developers _love_ the
idea of tearing down the "old" (written >12 months ago), "crusty" (using a
language they don't like/isn't in vogue) and "bloated" (using a pattern/model
they don't agree with) "monolith" and turning it into a swarm of
microservices.

As an Infrastructure guy, the pattern I've seen time and time again is
Developers thinking the previous generation had no idea what they were doing
and they'll do it _way_ better. They usually nail the first 80%, then hit a
new edge case not well handled by their architecture/model (but was by the old
system) and/or start adding swathes of new features during the rewrite.

In my opinion, only the extremely good developers seem to comprehend that they
are almost always writing what will be considered the "technical debt" of 5
years from now when paradigms shift again.

~~~
dreamcompiler
I call this the painting problem. Painting the walls of a room seems easy to
an amateur: You just buy a few gallons at Home Depot and slap it on. But a
professional knows that prep, trim, and cleanup are 80% of the job and they
take skill. Anybody can slap paint onto the middle of a wall. What's difficult
and time-consuming are making the edges sharp and keeping paint off the damn
carpet.

~~~
cortesoft
So you are saying edge and corner cases are the most difficult?

~~~
metahost
Great pun!

~~~
kazagistar
Not sure if it's a pun, or the literal etymology of the phrase.

------
dvt
Biggest issue with microservices: "Microservices can be monoliths in disguise"
\-- I'd omit the _can_ and say _99% of the time are_.

It's not a microservice if you have API dependencies. It's (probably) not a
microservice if you access a global data store. A microservice should
generally _not_ have side effects. Microservices are supposed to be great not
just because of ease of deployment, but because they also make debugging
easier. If you can't debug _one_ (and only one) microservice at a time, then
it's not really a microservice.
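
A minimal sketch of the isolation being described: the service's core logic
is a pure function over data the service owns, with no calls to other
services and no shared global store (all names below are illustrative, not
from the thread):

```python
# Sketch of a truly isolated microservice's core: a pure function over the
# service's OWN data, with no upstream API calls and no shared data store.
# (price_quote and the catalog shape are invented for illustration.)

def price_quote(catalog, sku, quantity):
    """Compute a quote using only data this service owns."""
    if sku not in catalog:
        return {"error": "unknown sku"}
    unit = catalog[sku]
    return {"sku": sku, "quantity": quantity, "total": unit * quantity}

# Because there are no side effects or upstream dependencies, the service
# can be debugged one-at-a-time, which is exactly the test proposed above:
local_catalog = {"widget": 250}  # the service's own store, not a global one
assert price_quote(local_catalog, "widget", 4) == {
    "sku": "widget", "quantity": 4, "total": 1000
}
```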

A lot of engineers think that just having a bunch of API endpoints written by
different teams is a "microservice architecture" \-- but they couldn't be more
wrong.

~~~
nemothekid
Once when starting a new gig I inherited a "microservices" architecture.

They were having performance problems and "needed" to migrate to
microservices. They developed 12 separate applications, all in the same repo,
each deployed independently in its own JVM. Of course, if you were doing
microservices, you needed docker as well, so they had also developed a giant
docker container holding all 12 microservices, which they deployed to a
single host (all managed by supervisord). Of course, since they had 12
different JVM applications, the services needed a host with at least 9GiB of
RAM, so they used a larger instance. Everything was provisioned manually, by
the way, because there was no service discovery or container orchestration -
just a docker container running on a host (an upgrade from running the
production processes in a tmux instance). What they really had was a giant
monolithic application with a complicated deployment process and an insane
JVM overhead.

Moving to the larger instance is likely what solved the performance issues.
In its place they now had multiple over-provisioned instances (for "HA") and,
combined with other questionable decisions, were paying ~$100k/year for a web
backend that did no more than ~50 requests/minute at peak. But hey, at least
they were doing real devops like Netflix.

For me, I've become a bit more aware of cargo cult development. I can't say
I'm completely immune to cargo-cult-driven development either (I once rewrote
an entire Angular application in React because "Angular is dead"), so it
really opened my eyes to how I could also implement "solutions" without truly
understanding why they are useful.

~~~
eadmund
> They developed 12 separate applications, all in the same repo, each
> deployed independently in its own JVM.

I've dealt with an even worse system, with a dozen separate applications, each
in its _own_ repo, then with various repos containing shared code. But the
whole thing was really one interconnected system, such that a change to one
component often required changes to the shared code, which required updates to
all the other services.

It was a nightmare. At least your folks had the good sense to use a single
repository.

~~~
crescentfresh
> then with various repos containing shared code

What source control system?

Also, from the article:

> even though theoretically services can be deployed in isolation, you find
> that due to the inter-dependencies between services, you have to deploy sets
> of services as a group

This is the situation we are in, like you were.

~~~
eadmund
> What source control system?

Git in our case. And our direction was _not_ to use submodules or anything
like that to make life manageable. It was pretty unpleasant.

------
manigandham
There is no such thing as "microservices", it's just services, otherwise known
as a service-oriented architecture (SOA). A service is a logical grouping of
functionality as it makes sense in your business domain. A small service for a
large company can be bigger than the entire product of a startup; there is no
standard unit of measure.

Computers also don't care how code is deployed and different services can be
bounded by classes, or namespaces, or assemblies, or packages, or processes,
or completely separate APIs reached over the internet on the other side of the
planet.

Microservices can perhaps be defined as more of a deployment model, but even
then it's 99% about team and organization structure. As companies get
larger, there is a trend toward smaller teams in charge of separate
functionalities, each of which creates, deploys, and operates its own
service. This can be effective in managing complexity and creating
efficiency, although it is definitely not absolutely necessary.

All that being said, outside of the major software companies, I have seen
exactly 0 uses of microservices where the benefits were worth the effort, if
any benefits even appeared at all.

~~~
lulmerchant
"Microservice" is certainly a buzzword, but it's not just service-oriented
architecture. Microservice architecture is modular design, with all the
advantages of effectively infinite scale, incredibly flexible orchestration,
and the resilience provided by the public cloud providers. It has its own set
of challenges, and it isn't the right solution for every problem. But it can
be fantastic when used properly. I've written plenty of well-performing API
endpoints in microservices, and I've also done a fair bit of business process
automation with them.

~~~
oblio
And how is that not also service-oriented architecture? That's also supposed
to be modular, and one of the reasons for splitting out services is to scale.
You can then place those services in a cloud...

~~~
bonesss
Microservice architectures are a sub-category of SOA.

Many of the same reasons to use them apply to both, but with a finer
granularity on what is being hosted where and how it is scaling with
microservices. Scaling a service with multiple facets going through it gets
tricky, hence a desire for a more appropriate modularity.

"Placing services in the cloud" kinda papers over why microservices and cloud-
native apps hang together so tightly... Leveraging cloud-provider capabilities
(message queues, streaming services, specialized DBs, etc), often introduces
new kinds of scaling, new requirements, and new deployment needs at the
_component level_. Particularly if you're working in multiple clouds or a
hybrid environment or with teams in different organizations... Tech stacks
might diverge at the component level, developer competencies diverge,
maintenance routines and monitoring and the rest of it too.

A thin service wrapper in front of DynamoDB is gonna have a different
scalability story than a DoItAll service on a VM that can also write to
DynamoDB. Especially if the service is used by multiple apps, or customers in
addition to apps. For a SaaS outfit that can be make or break.

~~~
cookiecaper
"Message queues, streaming services, specialized DBs, etc" have no intrinsic
properties making them "cloud-provider capabilities".

All of these "cloud-provider capabilities" are just normal software that you
can run on your own hardware. In one or two cases, the software was first
exposed via a particular provider's platform, but similar, and very likely
superior software that accomplishes the same goals, is now available for self-
hosting (Dynamo -> Cassandra or Riak, for example). That's about as strong as
the connection gets.

It doesn't follow that "microservices and cloud-native apps hang together"
because the cloud provider charges you a lot of money to access their Redis
server, when you could run your own Redis server.

------
klodolph
Background… I’ve been on good and bad projects that used microservices, and
good and bad monolithic projects.

The madness is going away but the microservices are staying. There are some
rationales for microservices that are conspicuously missing from the article:

1\. Fault isolation. Transcoder stuck in a crash loop? Upload service using
too much RAM? With microservices, you don't even really have to figure out
what's going on, you can often just roll back the affected component.

2\. Data isolation. Only certain, privileged components can access certain
types of data. Using a separate service for handling authentication is the
classic example.

3\. Better scheduling. A service made of microservices is easier to schedule
using bin packing. Low priority components can be deprioritized by the
scheduler very easily. This is important for services with large resource
footprints.

The criticisms remind me of the problems with object-oriented programming. In
some sense, the transition is similar: objects are self-contained code and
data with references to other objects. The 90s saw an explosion of bad OO
design and cargo cult architectures. It wasn't a problem with OO design
itself. Eventually people figured out how to do it well. You don't have to
make everything an object any more than you have to make everything a
microservice.

~~~
maga_2020
WRT #2, the data isolation argument:

It is not clear to me why data isolation, in your view, is exclusive to
microservices.

I have built non-trivial RBAC+ABAC authorization platforms, using a PDP and
an embeddable PEP, and did not find that they were useful only to
microservices, nor that they could only be called via a 'microservice'
pipeline.

In a way, authorization is a separate service, yes, but it should offer an
embeddable PEP (policy enforcement point) that one can link in, or call
out-of-process if needed, from pretty much anywhere (a monolith, or any
runtime component).

Authorization decisions require very, very low latency, as you are
authorizing pretty much every data or function interaction.

In fact, for data interactions, authorization engines offer SQL
rewriting/filtering -- so that the actual 'enforcement' happens at the layer
of the database you are using, not even at the layer of the component that's
accessing the data.
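
A toy sketch of that SQL-rewriting style of enforcement (not any specific
engine's API): the enforcement point appends policy-derived predicates so the
database itself filters out unauthorized rows:

```python
# Illustrative sketch of SQL-rewriting enforcement: the PEP attaches
# row-level policy predicates to a SELECT before it reaches the database.
# (rewrite_query and the filter strings are invented for illustration.)

def rewrite_query(sql, user_filters):
    """Append policy predicates so enforcement happens in the database."""
    if not user_filters:
        return sql
    predicate = " AND ".join(f"({f})" for f in user_filters)
    joiner = " AND " if " where " in sql.lower() else " WHERE "
    return sql + joiner + predicate

q = rewrite_query(
    "SELECT * FROM orders",
    ["tenant_id = 42", "region = 'EU'"],
)
# → "SELECT * FROM orders WHERE (tenant_id = 42) AND (region = 'EU')"
```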

~~~
klodolph
I think you may have misread my comment. I said "authentication" and you are
talking about "authorization".

Authentication can be very easily centralized in a separate service,
authorization is a completely different beast. Authentication often involves
access to high-value data such as hashed passwords, authorization does not.

------
wst_
I found out recently that people too often think about microservices in the
context of the broader solution, as if it were one app, just scattered
around. I made that mistake in the past, too. The longer I work with
microservices, the clearer it is to me that teams implementing them should
forget about the big product and just focus on the service, as if it were a
product itself. Assume that anyone can use it, for any purpose they like, as
long as they stick to the contract, and you'll be fine.
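
A tiny sketch of "serve whoever sticks to the contract": the service
publishes a schema and validates every request against it, without caring who
the caller is (the contract fields below are made up):

```python
# Sketch of contract-first service behavior: anonymous callers are served
# if and only if their request matches the published contract.
# (CONTRACT and its fields are hypothetical.)

CONTRACT = {"required": {"sku": str, "quantity": int}}

def accept(request):
    """True iff the anonymous caller honored the published contract."""
    for field, ftype in CONTRACT["required"].items():
        if not isinstance(request.get(field), ftype):
            return False
    return True

assert accept({"sku": "widget", "quantity": 3})
assert not accept({"sku": "widget"})                    # missing field
assert not accept({"sku": "widget", "quantity": "3"})   # wrong type
```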

I tend to have two layers of design now. One is the big picture, which treats
services anonymously: just black boxes that respond to input. The goal here
is to build the solution the way kids build things from building blocks.

The other layer depicts services as separate beings. They treat all their
clients anonymously. They have a contract to fulfill, and whoever plays by
the rules gets served all right. They should be treated as completely
separate projects, with their own backlogs, release strategies, etc.

Now, if you had a product that utilizes certain data, would you let some
anonymous guy from the internet tap into it directly? No need to answer, I
guess.

Edit: typo

~~~
sooheon
IOW, good, clean function composition.

~~~
mettamage
While this is a simplification, I often catch myself thinking: isn't
programming just creating functions, and functions of functions, all the way
down? And at each layer we call them something different because of the
context we're in.

Input --> Stuff happens --> Output

Again, it's a simplification, although to be fair, I sometimes don't see how
-- other than the feeling that I'm ignoring too much context (e.g. the
underlying hardware, or networks, or REST API endpoints).
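
The "functions of functions all the way down" picture can be written out
literally as composition (a generic sketch, nothing domain-specific):

```python
# Each layer is just a composition of the layer below it:
# Input --> Stuff happens --> Output, as one composed function.

from functools import reduce

def compose(*fns):
    """Right-to-left function composition: compose(f, g)(x) == f(g(x))."""
    return lambda x: reduce(lambda acc, f: f(acc), reversed(fns), x)

parse = lambda s: int(s)           # "input" layer
double = lambda n: n * 2           # "stuff happens" layer
render = lambda n: f"result={n}"   # "output" layer

pipeline = compose(render, double, parse)
assert pipeline("21") == "result=42"
```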

~~~
bambataa
And not just programming - all of computing really. Logic gates are composed
into more complex ones that are composed into even more complex ones and so on
upwards.

------
mikekchar
Just for the record, I'm one of the people who thought that putting a CORBA
ORB inside GNOME was a _fantastic_ idea. We're all young once!

Microservices are just another way for us to do premature subsystem
decomposition -- because we _always_ think that we can build components with
stable APIs that will be small, clean and reusable. It's even more fun to put
that subsystem into a different process because, who doesn't like a little bit
of extra latency in their processing? I jest, but it's not such a silly idea.
By making sure everything is in another process and using the most
inefficient IPC system available (TCP/IP), you ensure that nobody is going to
do the stupid things people tend to do with threads. The multi-processing
aspect appeals to people because it helps them break the problem down into
isolated chunks and reason about them.

The key here, though, is to realise that you almost never need
multi-processing. The design challenge is the same whether you isolate your
processing in different processes or not. However, it's _much_ easier to
refactor your code when you haven't put roadblocks in your path first. If you
design it that way, it is easy to extract the functionality into a separate
process if you need to (or even a thread, if you happen to work on an OS
where thread processing is more efficient than process processing).

In short, don't practice "I must protect myself from the stupid programmers"
programming and instead concentrate on writing good code with your coworkers.

------
CryoLogic
The best use I've found for microservices is highly isolated, well-defined
stateless functions that make a significant (read: compute-intensive) change
to some data and drop it somewhere else, e.g. image compression.

Now you can use this microservice anywhere and just change a few params in how
you call it and you have avatars, thumbnails, etc.
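
A hypothetical sketch of that kind of parameterized, stateless transform
service; the presets and job-spec fields are invented, and a real service
would call an actual image library and write the result to object storage:

```python
# Sketch of a stateless transform service: the same well-defined operation,
# reused for avatars, thumbnails, etc. just by changing a few params.
# (PRESETS and the job-spec fields are hypothetical.)

PRESETS = {
    "avatar":    {"width": 64,  "height": 64,  "quality": 80},
    "thumbnail": {"width": 160, "height": 120, "quality": 70},
}

def transform_request(image_key, preset):
    """Resolve caller params into one well-defined, stateless job spec."""
    if preset not in PRESETS:
        raise ValueError(f"unknown preset: {preset}")
    return {"source": image_key, "op": "compress", **PRESETS[preset]}

job = transform_request("uploads/cat.png", "avatar")
assert job == {"source": "uploads/cat.png", "op": "compress",
               "width": 64, "height": 64, "quality": 80}
```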

~~~
napsterbr
So basically a library? :)

~~~
philipkglass
It's like a library that you can call regardless of language ecosystem.

I joined a company that had a large, old, and mature selection of services
written in PHP. If I'd tried to rewrite that mature code in a different
language I probably would have wasted a lot of time for little benefit. If I'd
had to write new code in PHP just to access old code as libraries that would
have been a problem too. But functionality was exposed over HTTP APIs that
could be used from any language, any runtime.

------
ChicagoDave
I've delivered two major applications (400k users, critical internal apps)
using micro-services in the cloud, reducing cost and increasing
continuous-delivery capabilities.

There are definitely special cases, but overall, after 33 years building
software, the combination of domain-driven design, PaaS, micro-services, and
continuous delivery is the most productive paradigm I have ever seen.

~~~
virmundi
Please go on. Can you provide details?

~~~
cube2222
Not the OP, but having also worked on multi-million-user apps, off the top of
my head: zero-downtime deployments; small failure domains (if you make a bad
update to a service, only the related functionality suffers, the rest keeps
working); frequent small deployments (like, several times a day); easy and
quick integration testing (because you only have to test the functionality of
one service, not the whole system); and easier debugging, because if one
piece of functionality isn't working, it's easy to analyze only the logs of
the service responsible for it (and optionally move on to others later,
having identified some part of the cause). Also, every microservice is a
clean slate, so you can quickly learn from your mistakes and try out new
approaches. (Not diametrically different, but you have the ability to iterate
more.)

Edit: another one, is that if you keep the microservices actually small and
well described by an API, you can easily, quickly and safely heavily
refactor/rewrite old services.
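
One way to see why such rewrites are safe: a consumer-side contract test pins
the externally visible behavior, so the internals can change freely. The
endpoint shape here is hypothetical:

```python
# Sketch of a contract test: any implementation of the service, old or
# rewritten, must produce the same externally visible shape.
# (The handlers and response fields are invented for illustration.)

def check_contract(handler):
    """Pin the API contract, not the implementation."""
    resp = handler({"user_id": 7})
    return (
        isinstance(resp, dict)
        and set(resp) == {"user_id", "display_name"}
        and resp["user_id"] == 7
    )

def v1_handler(req):   # the old service
    return {"user_id": req["user_id"], "display_name": "anon"}

def v2_handler(req):   # the heavy rewrite
    return {"user_id": req["user_id"],
            "display_name": f"user-{req['user_id']}"}

# Both pass, so the rewrite is safe from the consumer's point of view:
assert check_contract(v1_handler) and check_contract(v2_handler)
```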

------
DanielBMarkham
There's a ton of stuff here -- enough for a book or two. I loved the article
although I might take issue with several of the smaller points. It's way too
much to go into on an HN thread.

There's just so many topics that all come together here: stateless
programming, pure FP, NetOps, DevOps, and so on. It feels like one of those
cross-discipline discussions where it's either really simple -- or really
complex. In fact the core skill here is managing complexity across multiple
domains up and down the entire stack. It all depends on the fine details and
which choices you make.

I just finished a really good book which may seem unrelated to the current
topic but is actually spot on: "Domain Modeling Made Functional". It uses
F#, but the principles involved apply no matter which language you're using.

[https://pragprog.com/book/swdddf/domain-modeling-made-
functi...](https://pragprog.com/book/swdddf/domain-modeling-made-functional)

~~~
dwmkerr
Hi Daniel,

It's weird you mention that, I was just chatting to a colleague the other day
on this topic, and I rooted out this old article which I loved:

www.slideshare.net/ScottWlaschin/ddd-with-fsharptypesystemlondonndc2013

I was lucky enough to work on a big F# project a while ago and really enjoyed
the experience, it was the first time I'd done any functional programming in a
professional context and I miss it now that I'm doing more JavaScript and
Node.js!

Thanks for the kind words on the article, much appreciated!

~~~
DanielBMarkham
Hey Dave,

No problem. There was a ton of good stuff in there! I saved it to refer back
to it later. :)

As a tech coach, I teach good programming skills, which mostly ends up being
TDD and other good coding skills for OOP folks. But I've become a huge pure FP
fan over the past decade, learning it all on my own just by struggling through
it.

So as a coach, I'm seeing more and more of these microservice train wrecks.
I'm also seeing a ton of people who don't understand microservices come online
and bitch about how they don't work.

Wlaschin's book goes into building a pure FP system from the top-down --
something I never considered doing (I've never coached or worked in a large
pure FP shop). Of course, if you build it out that way, it really doesn't
matter how you splice up the code or where it goes. You can do monolith,
serverless, microservices, whatever.

I liked the process described in the book a lot. It describes to me how you
can have 50 people and 100 microservices and make it all work. (Although
everything you point out is still an issue, many of the pure FP techniques
mitigate them.) Feels a lot like BDUF, though, which kinda bugs me.

I sat down with Uncle Bob last month and spent a week showing him F#. TDD
didn't seem to work that well, but maybe I missed it. There's just nothing to
test in simple transforms, and you want to keep using simple transforms as
much as you can. I think we need to evolve a TDD-like process for growing
systems in pure FP from the ground up. I'll probably cogitate on this for a
few years and end up writing something. If you know of any good resources,
please send them my way!

~~~
bajablast
Personally, I have learned that TDD and unit tests are only really useful in
OOP settings that have side effects and state mutated over time. But with a
more FP approach, all you need to do is verify the feature works, since you
know that state can't change the output.

I also kind of feel that TDD has a cult-like following, especially with the
Rails crowd.

------
paulddraper
[https://en.wikipedia.org/wiki/Law_of_conservation_of_complex...](https://en.wikipedia.org/wiki/Law_of_conservation_of_complexity)

Broadly speaking, microservices make small, limited changes/update less
complex, and large, sweeping changes/updates more complex.

This is why generalizations like "larger, established organizations [which
make mostly relatively small changes] should lean more toward microservices
than smaller, younger organizations [which make mostly relatively large
changes]" hold true.

~~~
cottsak
This alone almost rules microservices out for the majority of us from a cost-
benefit point of view.

------
cocktailpeanuts
Here's my experience: some features are extremely well suited to
microservices, and I wanted to use them. But I don't. Why?

1\. Setting up a "Microservices" architecture itself is a convoluted process.
If you're not a Netflix or Amazon that will run hundreds of microservices, the
up-front cost and time is NOT at all worth it. I would rather just run an app
on Digitalocean.

2\. IF I decided to bypass step 1 and just use existing vendors like AWS,
Google, Microsoft, etc., first I don't feel good about it because I know it's
in each vendor's best interest to implement lockin, which conflicts with my
own interest.

3\. Lastly, none of the existing cloud function vendors have good user
experience. AWS Lambda, Google cloud functions, etc. all require you to go
through all the authentication related stuff before running a simple function,
not to mention all the constraints that come with "function based paradigm".
Again, definitely not worth it unless you're trying to run hundreds of these
functions.

I think part 3 is critical, because I was willing to ignore parts 1 and 2 as
long as I could get my function up and running really easily. But none of the
vendors make it easy. Sure, there are frameworks that let you do that, like
Apex and Serverless, but those also require setup. I would rather just run a
DigitalOcean server for $5 which, by the way, can run several of those
services as a regular server, without me having to spend time dealing with
the complexities of function servers.

I thought about how this could be improved. Sure there can be some "Heroku for
functions" SaaS, but nobody is excited about building SaaS startups nowadays
because they know they can't compete with large vendors in the very long run.
(Unless you're comfortable with just building a lifestyle business, which is
totally fine, but I personally won't rely on a lifestyle business SaaS)

The only way this can be improved is if the root vendors like Amazon, MS, and
Google actually implemented better UIs, but none of them are really focused
on this, since the reason they're in this business in the first place is not
to serve consumers but to make sure they have leverage in this space. (If AWS
dominated 90%+ of this market, the rest of the companies would be at great
risk in the long run.)

------
hamandcheese
I feel like several of the purported benefits of microservices could be
realized in a monolith as well.

Independent development - just stick to your own part of the app.

Reusability - write a library

Isolated deployments and deployment velocity - I suppose not isolated but
there’s no reason you can't achieve great velocity with a monolith.

A less often cited advantage that I do think has a lot of merit is the
ability to choose different languages and tools for different problem
domains.

------
drej
What I "love" about some microservice _implementations_ (so not microservices
in general), is that people use fast and scalable data platforms to analyze
data in microseconds, but then slap a REST API in front of them, ideally with
slow authentication and logging (and ideally in a different availability
zone), so instead of waiting 20 ms for your metric, you get it in 3 seconds.

I love (without quotes) microservices for their isolation principle; people
can kill off/rewrite parts of the system without affecting other parts,
written possibly in a whole different language, etc., but let's not abuse it.
If you put every tiny function behind an API, what you'll get is a slow and
unmaintainable mess.

------
narayanbs
Being a veteran programmer, I find it funny how the article sounds like the
stuff we used to read back in 2004-2005, during the heyday of Spring and EJB.

~~~
kovacs
You took the words right out of my brain as I was scrolling through the
comments. My internal dialogue literally was "This is EJB all over again",
which admittedly may be a bit too much but as they say history never repeats
but often rhymes :-)

In this go-around I think discrete, well-factored services clearly have
merit, but it's really a question of just how often you truly have something
that's a good candidate for this pattern. From what I've seen in some places,
it's used as a reason to rewrite systems and not worry about doing a good job
creating well-abstracted software, because the integrity of the overall
system is some other team's problem.

This article is really well written and does a great job shedding light on the
truth. Thank you for sharing.

~~~
dwmkerr
Thanks for the comments. When I was writing the article I kept on thinking
that really this is just the same old fundamental problem in a new guise,
which is in many ways just how to manage component boundaries and
dependencies, particularly when you are looking at a system which is being
handled by more than one person.

------
bayonetz
The issue with designing an internal service without a consumer in mind is
that it tends to bloat the functionality and interface to allow for a variety
of speculative-at-best use cases. The issue with designing an internal
service with a consumer in mind is that you tend to build a "distributed
monolith". It's like a fricken Zen riddle. That's why it takes a master to
navigate the middle ground successfully.

------
justinwp
It appears to be hip to bash microservices now. We have some very clear
reasons that keep us on the service-oriented path:

\- scaling based on very disparate resource requirements across our api

\- security isolation of critical backend infrastructure

\- language requirements and dependency management

\- accommodating different skills and groups of skills across our organization

------
linkmotif
People talk about building monoliths like it’s some simpler alternative. On
the JVM it’s not.

When I first started with Java one of the coolest things was that I could
embed anything. I could embed Elasticsearch in my web service. Wow! Forget
services calling each other! I’ll prototype real quick with everything in one
JVM!

But that proved impossible! :( It turned out that until Java 9, the JVM had a
single namespace for all loaded classes. For me this was the greatest driving
force for building out smaller services. You had to be very careful about
dependencies and couldn't just include Elasticsearch -- or much of anything
else -- in your web service even if you wanted to. You'd be in immediate,
unresolvable dependency hell. I can imagine the insane artifacts people
produce when they try to build monoliths on the JVM. Even with JavaScript you
now have one huge flat dependency namespace. Good luck building a monolith.

I can only conclude that all the people on this thread who are painting
"monoliths" as some easy alternative that developers ignore due to some
inexplicable desire to make things complicated have no idea what they're
talking about.

~~~
corpMaverick
I think the recommendation is Monolith First

[https://martinfowler.com/bliki/MonolithFirst.html](https://martinfowler.com/bliki/MonolithFirst.html)

You start with a Monolith and break it out as needed (i.e., when you find out
that it's not a good idea to embed Elasticsearch).

You avoid premature boundaries that can become a significant constraint.

~~~
linkmotif
My experience in attempting to build a monolith—and, oh I tried—was that it's
basically impossible.

Forget embedding Elasticsearch for the purpose of running it inside your
monolith; just the transport-layer drivers (supposing you're not using
drivers that work over HTTP) basically add all of Elasticsearch onto your
classpath. Good luck building your monolith after that. And that's true for
so many things you'd want to use. If you want to build a monolith you
basically can't have any dependencies -- or you have to be very experienced
in dependency management.

This is why I have a microservice that fronts all my Elasticsearch queries.
Its artifact has the Elasticsearch drivers, and all other microservices that
need to access data from Elasticsearch talk to this service over gRPC. Any
time I want to upgrade Elasticsearch or change something about a query, I
only need to do it in one spot. It's great.

------
gfiorav
Nice article but I'd point out two things:

1\. 'Death' is a bit too extreme. I think everybody is excited at first and
tries to find an excuse to use microservices. We're just seeing a decline in
that initial push, that's all.

2\. I think the point about transactions is fair, as well as the expertise and
team concerns, but the rest are mostly "it's too hard and complex". It'll get
better with time. I think that being a bit uncomfortable as a dev is not too
bad, that's what we're here for (assuming there's a real need of course).

~~~
dwmkerr
Thanks gfiorav for the comments!

1\. I think here I was going for 'death' more in terms of the end of the
hype, rather than the approach!

2\. Agreed. As people get more familiar with the patterns, tools, code and so
on, it does get easier. The point is more that it can be a hell of a journey
:)

------
mattbillenstein
Microservices used mainly as a code-organization mechanism are just a bad
idea. Build a well-organized monolith first -- structure your code as
in-process services so that you can factor those pieces out individually in
the future.
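
A sketch of that in-process-services idea (the `BillingService` interface and
its methods are invented for illustration): callers depend only on a narrow
interface, so extracting the component into its own process later means
swapping the implementation, not the callers:

```python
# Sketch of an in-process service boundary inside a monolith: the rest of
# the code talks to billing only through a narrow interface, so it can be
# factored out into a separate process later without touching the callers.

from abc import ABC, abstractmethod

class BillingService(ABC):
    @abstractmethod
    def charge(self, account, cents): ...

class InProcessBilling(BillingService):
    """Today: lives inside the monolith."""
    def charge(self, account, cents):
        return f"charged {account} {cents}"

class RemoteBilling(BillingService):
    """Tomorrow: same interface, backed by an RPC/HTTP client."""
    def __init__(self, client):
        self.client = client
    def charge(self, account, cents):
        return self.client.post("/charge", {"account": account,
                                            "cents": cents})

def checkout(billing):
    # The caller never knows (or cares) which implementation it got.
    return billing.charge("acct-1", 999)

assert checkout(InProcessBilling()) == "charged acct-1 999"
```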

I see smaller teams heaping in tons of complexity for no real gain, and at
actual, measurable cost.

------
ec109685
These two statements don't reconcile:

> Being both a developer and an operator is already tough (but critical to
> build good software)

> Yes, with effective automation, monitoring, orchestration and so on, this is
> all possible

This argues for having a separate ops team to handle the complexity centrally
(e.g. operating a kubernetes cluster, providing standard building blocks), so
developers can concentrate on their services.

~~~
philwelch
You can and should have a cross-functional team (containing both devs and
operators) to build and operate your infrastructure automation. But the
operation of the individual services still falls on the teams that own them.

~~~
ec109685
Right, but the complexity the author complains about is then not on the
individual services’ developer teams.

~~~
philwelch
That's the point, yes?

~~~
ec109685
Right, if you have a strong team supporting orchestration (or outsource to a
cloud), the need for every developer to be an operations specialist goes down,
yet the team can still support a larger number of services efficiently.

------
Mc_Big_G
Next up, mixing logic and presentation like it's 1999 _cough_ php _cough_
react native

~~~
cc81
Have you seen the talk "Rethinking best practices" from back when React was
first launched?

[https://www.youtube.com/watch?v=x7cQ3mrcKaY](https://www.youtube.com/watch?v=x7cQ3mrcKaY)

It addresses this and is pretty great in my opinion. Of course one does not
have to agree with it, but it is not the same as the mess back then; usually
the "separation of concerns" is more a separation of technologies than of
concerns.

As an example:

Let us say we have a shopping basket in the top right corner of our web
page. For me there is a big difference between the logic of calculating the
total sum of the items in it and collapsing/expanding it. One is
"business logic" and one is "presentation logic" and they are not the same.

And I also bet that if for example HTML or CSS had native collapse/expand then
people would consider that presentation as well and not to mention all the
logic you usually have (loops, conditions) in a normal template language.

------
gshulegaard
This is a well thought out and well reasoned article.

It resonates on many of the pain points of microservices. I also nearly broke
out laughing at his definition of "microservice madness":

> Netflix are great at devops. Netflix do microservices. Therefore: if I do
> microservices, I am great at devops.

Because it is so hilariously accurate it's almost painful.

I do want to point out something the author implies, which is that there _are_
benefits to microservices as a pattern...and as a pattern it is likely here to
stay. It's an advanced pattern that comes with tradeoffs and complexity
increases on the ops side. In the best case you're trading developer
complexity for operational complexity and perhaps also getting better
reliability (but this is not a guarantee).

Additionally, projects like Kubernetes, Istio, and Envoy are all tools aimed
at making certain operational complexities of microservices easier...so the
operational trade offs are likely to change dramatically this year.

But in the end we can all still hope that madness dies in 2018 though.

------
he0001
One thing I’ve noticed with microservices is the business rules duplication.
Everyone tries to solve their problem so the same rules start popping up (ofc
implemented differently and mostly unknowingly) in all these microservices.
The problems start when the business requirements change and you need to
change all of the microservices.

------
linkmotif
Will someone enlighten me as to why a microservice should have no dependent
services? Seems reasonable to me.

~~~
virmundi
Service dependencies indicate that your domain is not properly broken down,
or that the services are communicating incorrectly. They can share
information, but it's probably better to do so via events and data copies.

For example, you have a centralized security service. It handles the things
related to accounts, profiles and group membership. You have a recipe service.
It handles things like finding recipes, adding them to "books", etc. The
recipes need to be guarded. They are shown only to members of certain groups.

You could have the recipe service call the security service each time to get
the caller's groups, then compare those groups to the allowed groups. But if
that security service goes down, you're borked.

If you use events instead, you'd keep a copy of the profile and its groups in
the recipe service. Every time that data changes, the service gets an event
with the details. Now they're decoupled by sharing data asynchronously. It is
not a perfect system: the recipe service's copy might fail to update, and then
a user who should see the data can't.

The trade off with events is that you can upgrade and redeploy services more
easily. If you find a bug in the user profile service, bring it down, update
and restart. No other service goes down with it.
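A minimal sketch of that event-driven approach (all service, event, and field names here are hypothetical, just to make the shape concrete):

```python
# The recipe service keeps a local copy of group membership, updated
# by events from the security service, so it never has to call the
# security service at request time.
local_groups = {}  # user_id -> set of group names (local copy)

def handle_membership_event(event):
    """Apply a membership-changed event from the security service."""
    user_id = event["user_id"]
    if event["type"] == "group_added":
        local_groups.setdefault(user_id, set()).add(event["group"])
    elif event["type"] == "group_removed":
        local_groups.get(user_id, set()).discard(event["group"])

def can_view_recipe(user_id, allowed_groups):
    """Authorize against the local copy; no cross-service call."""
    return bool(local_groups.get(user_id, set()) & set(allowed_groups))

handle_membership_event({"type": "group_added", "user_id": "u1", "group": "bakers"})
print(can_view_recipe("u1", ["bakers", "chefs"]))  # True
```

If the security service goes down, reads keep working off the local copy; the cost is the staleness window until the next event arrives.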

~~~
linkmotif
Thank you for the answer.

All of this makes sense but also seems unavoidable to some degree. Seems like
a set of tradeoffs you make by going microservices that you should be wary of.

Having services fully state-decoupled via unified log is interesting. I’ve
considered this but it seemed a bit complicated in terms of then being able to
scale the services that need, say, local copies of user profiles.

~~~
virmundi
It can be; heck, it is difficult if a service has to have access to a full
profile. If that is the case, that service should probably be part of the
service that owns the profile. But you might not need the full profile.

Here are two examples. Provided you don't get much churn in key profile data,
you might be able to use a JWT to store group membership, for example. All
services can read the token. The token provides the groups necessary for the
access operation.

Another example is only copying parts of the data. My current project has
profiles and memberships. Interestingly the app side code doesn't actually use
that information directly. A copy of group membership is held in the DB. When
the user executes any operation, the queries themselves check for rights. When
a user wants to read a list of recipes, part of the query is "and has read
permission". The same is true of updates. The system queries for the resource
to update. Only those that the user has an explicit right to modify are found.
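The second example, with the permission check embedded in the query itself, can be sketched like this (table and column names are made up; sqlite stands in for the service's own DB):

```python
import sqlite3

# Group membership is copied into the service's own DB, and every
# read query joins against it, so "and has read permission" is part
# of the query rather than a separate call to a security service.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE recipes (id INTEGER, title TEXT, read_group TEXT);
    CREATE TABLE memberships (user_id TEXT, group_name TEXT);
    INSERT INTO recipes VALUES (1, 'Sourdough', 'bakers'), (2, 'Ramen', 'chefs');
    INSERT INTO memberships VALUES ('u1', 'bakers');
""")

def readable_recipes(user_id):
    # Only rows the user has an explicit right to read are found.
    rows = db.execute("""
        SELECT r.title FROM recipes r
        JOIN memberships m ON m.group_name = r.read_group
        WHERE m.user_id = ?
    """, (user_id,))
    return [title for (title,) in rows]

print(readable_recipes("u1"))  # ['Sourdough']
```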

------
daxfohl
Why is scaling independently a pro? I've heard that argument before and never
understood it.

If I have one api that is low throughput and one that is high, say requires 2
and 20 nodes respectively, then you need 22 nodes whether they are two
independent microservices or a monolith. I'd think the monolith would actually
be easier to manage as you don't have to worry about how to divvy up the
resources. All 22 nodes run the same code. Sure you could do the same with a
k8s cluster autoscaling (I assume), but still, I don't see how this makes
anything strictly easier than a monolith scaling. If anything it seems like
one more thing to worry about.

But I've heard the argument multiple times, so wonder if I'm missing
something.

~~~
zenlikethat
It sounds like you're conflating a binary barrier with a service barrier here.
In your example you have two cleanly defined services which happen to be in
the same binary. The issue with monoliths is when they all do the same thing
because the code is a big ball of mud that "cheats" by calling shared
subroutines and doesn't have a clearly delineated API. Therefore you have only
one knob to turn to scale.

The 20+2 distinction isn't clear because each part of the code is causing
action-at-distance on the others. Failures become a lot harder to isolate.
That's what people mean by scaling is hard with a monolith.

~~~
daxfohl
That doesn't answer the question though. Suppose the apis are completely
independent. Say one is a chess server and the other generates haikus. Granted
that's silly architecturally to put into a single binary, but that's not the
question. My question is why specifically managing scalability becomes easier
when deploying them independently. My thought is that it actually becomes more
difficult as you have to manage each one independently, whereas if they were a
single binary, all you have to care about is the net sum of your resource
needs.

~~~
rurounijones
Usually because they have different usage patterns.

The chess API has a daily cycle between 10-1,000 TPS and is CPU intensive.
Haiku is usually 1-10 TPS, apart from Fridays (when everyone's device gets a
new one) and holidays, when it spikes to 100,000 TPS, and it is IO intensive.

Scaling a single service with both of these API endpoints being called in
different patterns like the above is a pain. Splitting them allows you to
choose different host types (e.g. more CPU, or memory/SSD, etc.) and makes
scaling (especially planned/dynamic scaling) easier.
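To put rough numbers on it (every capacity figure below is made up, purely to illustrate the shape of the argument):

```python
# Toy illustration of independent scaling: each service is sized for
# its own peak on a node type suited to its workload, instead of one
# uniform fleet sized for the combined worst case.
def nodes_needed(peak_tps, tps_per_node):
    return -(-peak_tps // tps_per_node)  # ceiling division

# Split: chess on CPU-optimized nodes, haiku on IO-optimized nodes.
chess = nodes_needed(1_000, tps_per_node=50)
haiku = nodes_needed(100_000, tps_per_node=5_000)

# Monolith: one general-purpose node type handles each workload less
# efficiently (say 25 TPS of chess or 2,500 TPS of haiku per node).
mono = nodes_needed(1_000, 25) + nodes_needed(100_000, 2_500)

print(chess + haiku, mono)  # 40 80
```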

~~~
daxfohl
Thanks, and to add to that, it occurred to me that they may have different
criticality too. Some services you may want to scale very aggressively because
a failure would be catastrophic. Whereas other services may be even more cpu
intensive on average but failures are acceptable so you let them run at 90%
load. I'd imagine this would be a far more difficult balancing act if they
were both in the same process.

For example if you had your chess AI engine running in the same monolith as
your web server, it could slow down your response time to the point of
timeout. But if they were separate services, your web server could stay snappy
and give a meaningful response to the problem. "our ai service is overloaded
right now, but here is a nice haiku while you wait."

Though still, I'd think of that as a fairly advanced use case. Not something
small projects should have to think about.

~~~
corpMaverick
I think your original question is a good one. It must be thoroughly proven and
not just taken as gospel.

You may find cases where decoupling a service is a good idea. That doesn't
justify decoupling everything by default. The more you decouple, the more
rigid the whole system becomes.

------
joneholland
I took his guide at the end and apparently we are supposed to do
microservices, which is good, because we do, as an engineering org of 800+
people.

~~~
jrs95
And sadly some of us are doing them for no apparent reason with fewer than 10
people

------
bluepeter
> In fact, if you have stateless services, then I'd be inclined to consider
> skipping microservices altogether and consider using a serverless model.

What? Isn't a serverless model necessarily a microservices solution?

------
mattknox
It's fascinating to me that people don't point out that you can get many of
the benefits of microservices with all your code in a monolith, indeed, all in
a single runtime. Of the 4 benefits he lists (1: independent development, 2:
independent deployment 3: independent scalability, 4: reusability) you can
mostly get 2-4 by deploying your monolith to different server pools that are
deployed and routed to independently.

Microservices become necessary for large dev teams, but probably a lot later
than most teams think.

------
eecc
Yeah, finally. uS architectures quickly devolve into a haphazard home-grown
variant of an EJB design.

You've gotta tip your hat to Sun: despite the OO crap and XML-itis, the
underlying ideas were - and are - sane.

Now, just as EJB was rightfully considered massive overkill for all but the
most large-scale applications, and just as simple servlet-based designs
(later on controller, service, persistence designs) were often more than
enough to get the job done, the same applies to uServices.

------
luikore
I think another (important) cause of microservice is politics:

When every department wants the easiest job with max outcome and the boss
decides to split the cake equally, then an easy job X can be cut into N
services which are usually named "micro", resulting in a huge project of
complexity (X/N)^N for each team.

------
adambyrtek
Great article, but I'd like to dispute one point:

> A new version of the subscriptions service may store data in the
> subscriptions database in a different shape. If you are running both
> services in parallel, you are running the system with two schemas at once.

Microservices should manage their own separate data stores and communicate
with others through a well-defined API. Only then services can evolve
independently, and each team is free to change the internals (including schema
migrations) without coordinating with everybody else.

Multiple services sharing the same database is the perfect example of the
"monolith in disguise" anti-pattern mentioned in the article, with all of the
costs of microservices but few benefits.

~~~
paulddraper
They _might_ be talking about a slow rollout.

~~~
adambyrtek
If that was the case then it wouldn't be anything specific to microservices.

------
jakozaur
Rule of thumb: divide the number of backend engineers by 5 and you get the
optimal number of micro-services.

~~~
niteshade
Expected: 50 Actual: 0.4

------
ramses
"Real world systems often have poorly defined boundaries" ... I have seen
teams getting hit by that multiple times. As usual, it is a bad idea to try to
make everything fit a single model before attempting to understand the
consequences.

------
HumanDrivenDev
So it's backlash time already?

A lot of technologies get a lot of hype over the years. But often the backlash
is no less ignorant or fashion driven.

Is there a silent majority of us that look at these hype trends, consider
them, and make a level-headed decision to see if they fit our problem? I never
got into micro-services, because I worked on small teams and it seemed like
over-engineering. But it's an idea that I tucked away in the back of my head,
that I'd still use if the situation calls for it, HN backlash be damned. The
same goes for nosql databases or dynamic scripting languages or anything else
that's now irrationally hated.

------
sidhuko
We are such a totalitarian part of the species. Someone writes about
microservices and my next project I'm building services for each feature
because some manager heard it increases reliability. The fact is it's always
science. If you don't want your inconsistent dataset from somewhere else in
the corporation taking down your application then chunk a part of the
application into your own container to manage calls rather than allowing the
event loop to block up. It doesn't have to be 'micro', but it has to exist
for some reason; if not, it is just your developers trying to be cool.

------
abvdasker
My two cents would be that tooling makes all the difference with
microservices, especially in terms of developer productivity. Having tooling
which allows me to test changes to a microservice locally and easily is a huge
win, and without that tooling microservice development becomes much more
difficult. Making it easy to test a service locally -- including its network
interactions -- lets me be more confident that my code will work in production
and reduces a lot of the cognitive overhead of developing one component among
thousands.

------
marcobehler
Because I find debating the pros and cons of microservices somewhat
cumbersome, I tried to let the code speak and recorded a screencast series on
exactly this topic. ([https://www.marcobehler.com/series/9-microservices-the-
good-...](https://www.marcobehler.com/series/9-microservices-the-good-bad-
ugly) ) I feel that especially in the Java world, microservices are an utter
abomination most of the time. Comments more than welcome!

------
geodel
At my workplace, the idea is to distort, disparage, or destroy applications
and replace them with some Spring Boot thingie. Voila, now we have a 'Next
Generation Platform' based on microservices.

------
xtrapolate
Decoupling code/infrastructure is not a bad practice. As is so often the
case, "Microservices"/"Monoliths" mean different things to different people,
and your mileage may vary. Difficulties may be introduced going either way.

Choosing the right tool for the job is difficult, there are many variables at
play. There's a lot of value to people sharing their perspectives/experience
about these issues. At the same time, I think the overall discussion shouldn't
be so dichotomous.

------
ntrepid8
I've often thought that micro-services were more of a software engineering
management solution rather than a technical one.

From the perspective of a high-level architect it makes delegating certain
types of responsibility a bit easier and potentially reduces the need for
communication between teams while maintaining centralized control of the
overall system.

However, it always seems to come with such a high overhead that IMHO it's
almost never worth doing unless there really isn't another option.

------
TekMol
I have recently been thinking about doing microservices by simply having a
shared MySql DB that all modules connect to.

So for example one developer can create the backend and one developer can
create the frontend. The codebases can be completely independent. One could
use PHP+Laravel and the other one PHP+Symfony for example.

Frontend and backend would live on their own servers. And simply having the
IP, login and PW of the shared DB set in their project.

What do you guys think about such an approach?

~~~
saryant
I've been down that road. Eventually it turns into a nightmare because
evolving your DB means making simultaneous changes to multiple applications.

"We think we can drop this column, someone figure out which of our eight apps
using this DB might be using it still"

I much prefer putting a single service layer in front of the DB that speaks
Thrift or protobuf and letting all clients interface with that instead.
Evolving Thrift services is fairly straightforward and allows you to make
changes without needing every app to keep up.
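The "single service layer in front of the DB" idea, sketched in plain Python (class, field, and method names are made up; in practice the interface would be a Thrift or protobuf IDL rather than an in-process class):

```python
# Clients call the user service's API; only the service touches its
# storage. Dropping a column is then an internal change, invisible to
# callers as long as the API response shape is preserved.
class UserService:
    def __init__(self):
        # Internal storage; this "schema" is private to the service.
        self._rows = {"u1": {"name": "Alice", "legacy_flag": 1}}

    def get_user(self, user_id):
        row = self._rows[user_id]
        # Public contract: only these fields are promised to clients.
        return {"id": user_id, "name": row["name"]}

svc = UserService()
print(svc.get_user("u1"))  # {'id': 'u1', 'name': 'Alice'}
# The service can later drop 'legacy_flag' internally without any
# client changing, unlike clients querying the DB tables directly.
```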

~~~
TekMol
How does an additional layer make changes to the DB easier? If a column is
dropped what will your layer do when a request for data from that column comes
in? What if the request joins it with other columns from other tables?

~~~
cookiecaper
It's better because if you have one application that has authority over that
area, you only have to answer this question once.

The answer will depend on the data in question, of course; maybe it is fine to
serve stale data for a while, maybe you need to write to one DB, read from
another, and combine in-process, etc., until the change fully propagates. But
the impact needs to be localized to whatever extent is possible.

If you have several applications accessing the database directly, it makes the
database everyone's problem, instead of just the one thing's problem. Then
_everyone_ has to know about the downtime and come up with their own strategy
to mitigate. They can't say "Well we'll just trust what we get from Service
A", because they don't actually get info from service A; they get info from
service A's underlying datastore.

Worse, in most cases like this, there will just be one global database for
everything, so schema changes, database restarts, etc., necessary for one
thing can have negative effects, both direct and indirect, across the entire
ecosystem. If Bob's Service decides it needs to do a massive reindexing and
Alice's Service is on the same DB, even if they're using completely
independent tables, etc., the performance hit is going to affect both. If Bob
changes his schema and Alice reads or writes directly to those tables (e.g.,
Alice's service updates a column in records originally inserted by Bob's
service), now Alice has to know about the change, plan for it, and coordinate
her deployment in sync with Bob, etc.

That kind of thing is what people mean when they say "distributed monolith".
There is no real "private" and "public" space where one service provider could
reasonably offer a stable API but change things as necessary on the back-end.
Nothing is really independent. All you've done is make a monolith that is much
harder to coordinate, manage, debug, and understand.

~~~
MBCook
This was the problem at my last job. We had a few applications and each had
their own database, but they were all on the same server.

Over time they all learned to reach into each other's databases. The truth is
we had ONE database arbitrarily divided into three schemas, each with
different traditions.

As load increased it became a nightmare and a literal single point of failure.
If one app misbehaved or took a load spike all the rest would slow/fall down.
Even though huge chunks of the applications had nothing to do with each other
they couldn’t be scaled independently.

We were working very hard, slowly, to detangle it without blowing everything
up.

No application should _ever_ have direct access to another application’s
database. It’s going to go wrong. The temptation is too great. And by the time
you realize it the technical debt it has caused may be MASSIVE.

------
acd
Agree with the author that Microservices are harder to operate in certain
aspects. For example latency tracing with Microservices is harder when you
have many services to check latency through, what service call caused the
latency. Doing error tracing with Microservices is also harder because now you
have to follow a web request through many different Microservices.
Microservices in some cases may have worse performance than a monolith. With
Microservices data no longer does in local memory or local cache, you have to
traverse the network through an API calls to fetch data. Imagine that you have
a monolith that can look up data in a hash table that is one memory/cache
access versus a micro service network call. In the micro service you have to
make a network call for a memory lookup. A memory access may take between
0.5ns-100ns versus a network call may take 10000ns. That Microservices may be
between 20000 times to a 100 times slower accessing remote memory in another
Microservice than accessing it locally in the same monoliths memory. Thus one
could argue that in some cases performance are worse with Microservices than a
monolith. You can still scale stateless monoliths buy putting a load balancer
in front of the monolith. There is also increased overhead of running
Microservices, instead of having one Java process/NodeJS process for a
monolith you may have several hundred nodeJS processes Microservices running
in Docker containers. Each NodeJS process takes memory for NodeJS itself. An
important aspect of Microservices which are good is that they allow a part of
the system to be restarted independently instead of restarting a whole
monolith.
[https://radlab.cs.berkeley.edu/people/fox/static/pubs/pdf/j0...](https://radlab.cs.berkeley.edu/people/fox/static/pubs/pdf/j03.pdf).
Sometimes you may want to rewrite a component that is also much easier to
rewrite with Micro services. Compare a Mainframe from the 60s-80s running a
monotlith without unit testing / integration testing versus modern micro
services with testing. Replacing parts or the whole main frame monolith will
be very very scary.
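The back-of-the-envelope slowdown spelled out (order-of-magnitude figures only, taken from the rough numbers above):

```python
# A local memory access vs a network round trip for the same lookup.
memory_access_ns = (0.5, 100)  # rough range: L1 cache hit .. RAM access
network_call_ns = 10_000       # rough in-datacenter round trip

slowdown_vs_cache = network_call_ns / memory_access_ns[0]
slowdown_vs_ram = network_call_ns / memory_access_ns[1]

print(int(slowdown_vs_cache), int(slowdown_vs_ram))  # 20000 100
```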

Latency Numbers Every Programmer Should Know
[https://gist.github.com/jboner/2841832](https://gist.github.com/jboner/2841832)

However, I would argue microservices have been done before; the idea is not
new. Microservices are a realization of the same principle as UNIX/Linux,
the UNIX philosophy: write programs that do one thing well, and make the
output of one program the input of another.

Microservices are not a new idea; they are a reimplementation of the UNIX
philosophy for the web.

The UNIX philosophy
[https://en.wikipedia.org/wiki/Unix_philosophy](https://en.wikipedia.org/wiki/Unix_philosophy)

"Make each program do one thing well. To do a new job, build afresh rather
than complicate old programs by adding new "features"." compare with
Microservice definition by wikipedia "The services are small - fine-grained to
perform a single function."

Improving reliability with recursive reboots: microservices on Kubernetes at
least implement micro-reboots in case of component failure.
[https://radlab.cs.berkeley.edu/people/fox/static/pubs/pdf/j0...](https://radlab.cs.berkeley.edu/people/fox/static/pubs/pdf/j03.pdf)

Like most tech, microservices probably follow the Hype Cycle
[https://en.wikipedia.org/wiki/Hype_cycle](https://en.wikipedia.org/wiki/Hype_cycle)

------
wiremine
Looks like Microservices have rounded the "Peak of Inflated Expectations" and
are heading into the "Trough of Disillusionment"

[https://en.wikipedia.org/wiki/Hype_cycle](https://en.wikipedia.org/wiki/Hype_cycle)

------
fibo
I use microservices and have experienced many benefits. More generally, they
follow the Unix philosophy.

------
crb002
I use a monolith AWS Lambda function. No need for snowflakes unless some
carry huge binaries.

------
Chiba-City
Reminds me of distributed architectures like CORBA turning into spaghetti in
the 90's.

~~~
flukus
Seems like every ten years or so a new architecture comes along and quickly
devolves into being yet another distributed RPC mechanism. There were CORBA,
SOAP, SOA, and now microservices. All promised to fix what came before, and
all ended up repeating the same mistakes in practice.

Can any of the old timers here name any predecessors to CORBA?

~~~
charlieflowers
Java RMI, which had the misguided goal of making objects (the whole world was
fully in the grip of OO religion) appear the same, regardless of whether they
were local or remote to you.

~~~
Chiba-City
I was going to mention RMI, but couldn't recall the TLA. Does anyone else
remember LANS and Windows NetDDE? That got end user machines involved in
"distributed" computations.

~~~
charlieflowers
COM+ also had distributed transactions. I was fortunate not to work with it
too much.

~~~
flukus
Was this in the protocol? I thought that was around the same time as DTCC
which is pretty good IMO.

~~~
charlieflowers
I believe COM+ came along and subsumed DTCC, but I'm not sure.

------
delfaras
> When I started writing the answer to this question, it turned into a long
> description, so I'm actually going to separate it into another article and
> publish it a little later.

eheh, on an article about micro service madness

------
didibus
Very good article. Though some of the downsides are also problems a
distributed monolith could face, but overall I agree with the article. You
have to consider the trade offs.

------
juanmirocks
For my small team of 2, but with an increasingly more complex codebase, the
orchestration of (micro)services indeed causes headaches. There is no silver
bullet.

------
dpeck
Good. More often than not it is overengineering that is blessed by technical
management because it means they seemingly have fewer personnel issues.

------
styfle
> If your boundaries are actually not well defined, then what happens is that
> even though theoretically services can be deployed in isolation, you find
> that due to the inter-dependencies between services, you have to deploy sets
> of services as a group.

This is the part I keep getting hung up on. Even with perfect boundaries,
there always seems to be a cross-cutting feature that would require touching
all of the services.

------
ramses
The author omitted an important one: microservices are the new source of
spaghetti code.

When trying to implement as multiple microservices something that should
actually be a single service—i.e., the ms's are not useful as independent
units—answering a single API request is an amazing mess of distributed calls
all over the place and unnecessary serialization/deserialization of queries.

------
z3t4
If you can separate something out into its own completely independent and
reusable module, you should always do so. Also, if something can scale by
adding concurrency, that is often a good strategy. For example in conferences
where every participant is supposed to talk, you can divide them into groups,
so many people can talk at the same time but in small groups.

------
tuananh
microservices bring more advantages for management than for technical people:

\- easier to manage (smaller teams)

\- easier to hire (we can hire good people with different skillsets)

\- easier to scale

\- and so on...

------
Daycrawler
Call me back for the Death of Cargo Cult Programming altogether.

------
cottsak
Someone ping me when HN shows a headline which reads "The Death of JS Madness
in ??".

Same issues right? Complexity === pain and we can't seem to grasp this as
developers. Are we the most self-flagellating people on the planet?

------
fragosti
Still really important for scaling teams.

------
jlebrech
what we need is a polyglot PaaS that only runs microservices; Lambda is the
closest to that ideal right now.

------
Ericson2314
Microservices (in practice) are like polyglot: throwing away type safety for
no good reason. This shit needs to stop.

~~~
saryant
You don't _have_ to use JSON-over-HTTP. You can use something like Thrift or
Protobuf which gives you statically-typed inter-process communication.

~~~
Ericson2314
You're completely missing the point. I shouldn't be able to connect two
interface-mismatched services in the first place.

------
stevenmays
I posted this a few days ago... I wonder why HN didn't keep the original post.

[https://news.ycombinator.com/item?id=16159589](https://news.ycombinator.com/item?id=16159589)

~~~
axelfontaine
This is the 9th time it's been posted:
[https://hn.algolia.com/?query=The%20Death%20of%20Microservic...](https://hn.algolia.com/?query=The%20Death%20of%20Microservice%20Madness%20in%202018%20&sort=byPopularity&prefix&page=0&dateRange=all&type=story)

~~~
acd
An improvement to the Hacker News website would be to compute and compare
hashes of submitted links so that the same link is not reposted multiple
times. Later submitters of a link could then be redirected to the first post
of it.
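A naive sketch of that idea (assuming, optimistically, that duplicate submissions use the exact same URL string):

```python
import hashlib

seen = {}  # url hash -> item id of the first submission

def canonical_hash(url):
    # Naive canonicalization: hash the exact URL string.
    return hashlib.sha256(url.encode("utf-8")).hexdigest()

def submit(url, item_id):
    """Return the first post's id if the link was seen, else register it."""
    h = canonical_hash(url)
    if h in seen:
        return seen[h]  # redirect to the first post
    seen[h] = item_id
    return item_id

print(submit("http://example.com/a", 1))        # 1
print(submit("http://example.com/a", 2))        # 1 (duplicate caught)
print(submit("http://example.com/a?utm=x", 3))  # 3 (same page, missed)
```

As the last line shows, exact-string hashing is easily defeated by extra query parameters, which is the obvious weakness of this scheme.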

~~~
krapp
It wouldn't work - it's trivial to add meaningless query parameters or
anchors that would change the hash but still lead to the same content. And
stripping them wouldn't work because some sites use them to route to content.

What _might_ work is hashing the text and outbound link content of submitted
pages, and building something like a similarity index of text, metadata and a
graph of links, but that would probably still be fragile, and definitely too
much effort for a site with as little traffic as this.

~~~
striking
They could capture the canonical URL from the meta tags in the page. I don't
think they do currently.

~~~
krapp
Assuming a site has one, although most news sites probably do. Facebook Open
Graph and other social media tags are worth looking for as well.
Unfortunately, they're not always trustworthy.

