
Enough with the microservices - mpweiher
https://aadrake.com/posts/2017-05-20-enough-with-the-microservices.html
======
23david
_Remember, if you have a DevOps Team, then you are absolutely not doing
DevOps. Developers should be involved in managing everything about their
applications, including infrastructure._

Any sufficiently complicated infrastructure that has uptime requirements and
significant revenue associated with it is going to have a DevOps Team (or the
equivalent) ultimately responsible for ensuring that things are working. I
guess it's possible to turn your entire dev team into part-time DevOps
engineers, while still calling them Software Engineers, but I've usually found
that doesn't work long-term and causes employee retention issues. It's like
saying your company does 'No-Support' because you don't hire Support
Engineers, while in fact you've enlisted your Software Engineering team to
handle all support requests.

Also, if you're working in a regulated field like Healthcare or Finance, or
anything that touches PII, your developers often can't have access to deploy
code directly to production. Again, you could maybe work around this in the
short-term by turning all developers into developers+devops, but they're
different skillsets.

~~~
throwanem
Yeah, I was doing "DevOps" back when it was still called "wow, good sysadmins
are really expensive! I know..."

The rebranding is cute and all, and the tooling's a fair bit better, but
nothing I've seen suggests that combining these two full-time roles in the
same people is any more a good idea today than it was fifteen years ago.

~~~
knightofmars
Thanks for the flashbacks of supporting failing services at 3:00 AM. :D Even
today, I am amazed at how absolutely terrible this type of decision making was
and still is. The idea that you can "save some money" by having your engineers
supporting your infrastructure "part time" is always going to cost you more
in the long run, because of the long-term effects of having someone unskilled
(in the area in question) attempt to do the job of a system admin (bad
security practices, etc.) and the turnover you end up with due to burnout.

~~~
karmajunkie
Whenever I've been in this position, or run teams this way, it's been less
about saving money and more about making sure developers have to live with
their own bad calls on tech. I've worked in places where dev teams would just
throw code over a wall with the equivalent of a sticker saying "Works on my
machine!" and weren't the ones getting woken up in the middle of the night
when that overeager query taps out the memory limits, and it's a recipe for
headbanging.

All things being equal (i.e. I've got the money to have someone whose role is
more ops than dev but still does both), having that person "own" the
production configuration is valuable, but developers still need to be in touch
with what their code does in production. Otherwise you eventually end up with
the equivalent of a cool interaction design that's damn near impossible to
implement on the web (another pet peeve of mine...)

~~~
knightofmars
There's a fundamental difference between holding engineers accountable for the
downstream impact of their technical decisions and making engineers take on
the added responsibilities of an additional position without compensating them
accordingly for the added responsibility. In an organization that prioritizes
stability there is an appropriate balance between engineering and system
administration as well as potential overlap given the right boundaries and
understanding of job responsibilities. The engineering team will be held
accountable by the system administration team and changes will happen because
of it.

The inherent failing in this structure is when one of two things happens: one,
the system administration team does not have the appropriate channels (and
clout) to push back against the engineering team; two, the technical teams
(both engineering and system administration) can't get technical debt paid
off properly due to an improperly structured project management process.

Anecdotally, I've been witness to the second issue a number of times. If there
isn't an immediate understanding of ROI for a proposed change then it isn't
prioritized to be worked on. The thought process is generally along the lines
of, "Engineers are an expensive resource, having them working on something
that won't make the company money is obviously not the priority."

While some of this is on the engineering leadership as their job is to provide
insight into ROI for technical matters there also needs to be a balance where
the non-technical leadership trusts the technical leadership to know when to
prioritize projects with non-obvious ROI.

~~~
karmajunkie
Definitely agree with much of that, though I don't think it detracts too much
from my main point. Most of the companies I work with don't have the luxury of
dedicated ops people, and I think those that do still insulate developers too
much from the production environment.

------
mirekrusin
Microservices are just one option of refactoring that can be considered at
later stage of life of the system.

Even the big players who use microservices started with a monolith. Many
people miss this important detail - they started as a monolith that was
later refactored/modularised/split/microserviced. It doesn't mean that, the
moment they did it, humanity had found a better way of writing software called
"microservices". It just means that, at that stage of the project's life, it
made sense. Starting with a monolith is still, in most cases, the best way to
write projects, even if they later evolve into microservices.

Starting projects with maximum split into microservices is, in most cases,
just a plain, stupid idea.

The most important thing when starting a project is avoiding friction at all
levels - dev setup, contribution, deployments, database evolution
(migrations), interaction between different parts of the system (it's easier
to just call a function from a module than to do RPC - which involves
implementing RPC on the other side, managing its deployment, keeping
interfaces in sync, etc.)...

Microservices are for mature services with crystallized interfaces. They
emerge naturally and the split is obvious at later stage - this information is
not available at the beginning.

~~~
ChicagoBoy11
I've mostly focused on front-end stuff, but it's amazing to me how much advice
similar to "start off doing <simple-even-if-perhaps-not-proper> and then later
when the problem space becomes clearer or you begin to suffer <then-do-proper-
refactor-or-concern-separation>" hits a chord with just about everything I've
ever worked on.

Even when doing incredibly small front-end projects, I have a hard time
thinking of times when I didn't regret optimizing or modularizing things
before it was obviously necessary. I would usually later discover that it was
either useless or that I had built the wrong abstractions and needed to start
over again. Conversely, every time I've spun something up quickly using the
most straightforward patterns and optimized or refactored only when it became
abundantly clear that it was necessary, the right way to abstract the code
felt like a no-brainer and I would end up with very useful abstractions which
allowed me to build meaningfully on top of them.

~~~
pdimitar
As a full-stack dev this is my exact experience as well. As Martin Fowler says
in his book Refactoring -- don't get ahead of yourself, code your way out of
the business problems, stay on top of your code, refactor periodically (even
if it's only to remind yourself what a module does!) and the best patterns
for your project will emerge by themselves.

------
afpx
There are good points in there, but I disliked some of the scare quotes.

Microservices were not just concocted by a team at Netflix, and everyone then
followed. Instead, microservices emerged across many different companies and
teams concurrently. The architectural style was a natural reaction to many
simultaneous forces that were being applied across the broader development
ecosystem.

Of course, I'm biased because I built two similar architectures around the
time that "microservice" was becoming a buzzword, and I only learned that this
type of architecture had a name much later. My team and I just did it that way
because we were trying to find the architecture that worked best for us, our
tools, and our environment. That is, the form followed the function, and this
was the type of design that naturally resulted.

~~~
mdekkers
_Microservices were not just concocted by a team at Netflix_

In the distant past, "microservices" were called "loosely coupled
architectures" and these have been around since the early 80's I believe, but
probably earlier. There isn't anything new about microservices, and they are a
great idea in theory. In practice, there are all kinds of challenges, and
these will have to weigh in on the architectural decision to deploy this
particular pattern.

I am genuinely curious as to why we in information technology have such an
astounding capacity for re-inventing (and re-labeling) the wheel. I rarely see
as much "Not Invented Here" and "Not In My Backyard" as I do in our field.

~~~
pjc50
The field is really badly taught. Formal CS does exist, but the corresponding
architectural and craftsmanship issues often aren't really taught.

Conversely, the field is full of autodidacts. It's much easier to teach
_yourself_ by doing. Companies like it this way and fill themselves with smart
kids who reinvent the wheel. Let's not forget that often it's _easier_ to
reinvent the wheel than look for existing solutions, especially with the risk
of running across a software patent (never read these!).

~~~
JustSomeNobody
That's fine. But it still couldn't hurt people to actually learn some
computing history.

~~~
pjc50
Where would you suggest they start? (Serious question, I've no idea how you'd
find this stuff as a curriculum rather than just cultural immersion)

~~~
JustSomeNobody
I don't think there is a single starting place. It really depends on the
context.

As an example, we're talking about microservices, so one could start looking
into things that people brought up, like the Unix Philosophy.

The thing developers really need to constantly be aware of is this - if you
think of it today, another developer has already thought of it 30 years ago.
Go find out what he knows.

~~~
Intermernet
I remember my dad telling me that a "science history" class would be mostly
useless from a practical standpoint as the study of science would include all
of the really important bits of the history of science as a matter of course.
This may be accurate when approaching physics, but CS is a very young field.

Just reading Knuth can give you a fantastic window into what has been done
before. Looking into the history of Bell labs and Xerox PARC can make you
understand the reality of what you suggest.

I'm not in any way suggesting that there's nothing new to invent in CS, but
the current mindset among many devs is that the state of the art is akin to
quantum mechanics following Newtonian physics by a couple of decades.

The current revolutions in CS are mostly capability based rather than concept
based. The ideas are relatively old, but the technology has allowed them to
come to fruition.

I think it would be beneficial for all new CS practitioners to have access to
a history of CS: they would realise not just that the current trends aren't
particularly new, but that there are a huge number of equally old ideas that
are probably ready for prime time now, but weren't feasible when first
suggested.

~~~
pjc50
History and Philosophy of Science is definitely a field of study:
[http://www.hps.cam.ac.uk/](http://www.hps.cam.ac.uk/) - it definitely
doesn't come through from learning the subject in the normal manner, because
that tends to present a "finalised" view of science that doesn't include any
of the dead ends or lengthy mysteries.

HPS for CS wouldn't be a whole graduate course, but would be a worthwhile
module in a larger course.

~~~
pacaro
I'm pretty certain that I'm not the only Cambridge HPS graduate here.

It would be interesting to see some of the thinking behind current approaches
to HPS applied to development teams, both as teams and as part of the larger
community of developers.

------
hinkley
I keep forgetting the old axiom: don't use technology to solve a social
problem.

The biggest problem I have with microservices is that they lock you into a
particular data flow up front. And the only times I've worked on a project
where the data flow didn't change substantially between the first
implementation and having a rich set of features and many customers? Those
were the projects that never got anywhere.

No battle plan survives contact with the enemy, and microservices are making
decisions early on that are difficult to change later. Because you've picked
one decomposition and then erected fences around the parts.

~~~
hobofan
> The biggest problem I have with microservice is that they lock you into a
> particular data flow up front.

You say that as if that's an unalterable fact.

> No battle plan survives contact with the enemy, and microservices are making
> decisions early on that are difficult to change later. Because you've picked
> one decomposition and then erected fences around the parts.

If you know about that problem, then don't do that! :P

In my experience, if you slice your microservice boundaries with the
philosophy that each of them should be able to be sold as a white-label
product (or, more realistically, be reused for a completely different product
internally), microservices can survive pretty large pivots of the
startup/project with few changes.

~~~
ojosilva
I agree. Microservices enable startups to pivot very easily, or to implement
or diverge into new sales channels quicker than with monolithic solutions.

The same applies to larger businesses. With microservices, e.g. large banks
can easily offer API services, enabling new revenue streams that were nearly
impossible to offer before.

Microservices are an excellent way to deal with future changes - the unknown
unknowns, as some would put it.

~~~
JamesBarney
What size of startup do you think benefits from a microservices architecture?

~~~
hobofan
If you have people that are experienced with microservices I think it can even
benefit a 2 person team.

The anecdote that comes with that:

I joined a startup as the second developer. The first developer had built
everything in a microservices architecture, with the expectation that the dev
team would grow significantly in the next 6 months. One of my first tasks was
setting up CI/CD for the microservices so we could launch the product - a
pretty routine setup for me, since I've done the same thing ~5 times before.
Then startup things happened and we stayed a 2 person team for a long, long
time.

In the beginning, I was skeptical whether the microservice architecture was
really the right choice there, but overall I think we came out ahead. There
were a lot of instances where we needed to upgrade libraries to fix bugs or
get newer features and only had to upgrade a very small part of the whole
codebase, allowing us to iterate faster. This would not have been possible in
a monolith. We also experienced some of the microservices downsides, like
slower refactoring and network connections being less reliable than in-process
function calls, but with the right approach and tools they were not much of a
problem.

Microservices are not a free lunch, but they can be a cheap and tasty one.

------
Animats
This is partly a limitation of the mediocre inter-process communication system
in the Linux/Unix world. Calling another service is a Big Deal.

I've done a complex robotic system which had about ten processes running on
QNX. Most of them were running some microservice - GPS, INS, LIDAR, mapping,
logging, short-term vehicle control, etc. This worked fine. That's because QNX
does interprocess communication well. MsgSend/MsgReceive is like making a
subroutine call on the send side. The receive side is more like an event loop.
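The blocking send / event-loop receive shape described above can be sketched in Python (this is a toy model of the pattern, not the actual QNX MsgSend/MsgReceive/MsgReply API; the `gps_service` and its reply are made up for illustration):

```python
import queue
import threading

# A minimal sketch of QNX-style synchronous message passing: the
# sender blocks until the receiver replies, so calling another
# service feels like a subroutine call, while the receive side
# runs as an event loop.

class Channel:
    def __init__(self):
        self._inbox = queue.Queue()

    def send(self, msg):
        # Like MsgSend: block until the service replies.
        reply_box = queue.Queue(maxsize=1)
        self._inbox.put((msg, reply_box))
        return reply_box.get()

    def receive(self):
        # Like MsgReceive: block until a request arrives.
        return self._inbox.get()

    @staticmethod
    def reply(reply_box, result):
        # Like MsgReply: unblock the waiting sender.
        reply_box.put(result)

def gps_service(chan):
    # Event-loop style receive side, one request at a time.
    while True:
        msg, reply_box = chan.receive()
        if msg == "position?":
            Channel.reply(reply_box, (47.6, -122.3))

chan = Channel()
threading.Thread(target=gps_service, args=(chan,), daemon=True).start()
print(chan.send("position?"))  # blocks briefly, then prints (47.6, -122.3)
```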

This sort of thing is common in robotics. ROS does something similar, although
the interprocess communication is slower. Usually you have dummy services for
simulation purposes, so you can run the operational code in a simulated
environment. We could run the system for real, or run it entirely with
simulated inputs and outputs, or could put the robot vehicle up on blocks and
run the system with fake inputs while operating the real vehicle, engine
running and wheels spinning but going nowhere. Everything could be run on one
desktop, or on the vehicle's own computers, or partially split. There were
shell files to launch the various configurations. We could plug in shims
between services and watch the data go by.

I don't understand the fear of multiple intercommunicating processes. Even on
Linux, there are decent ways to distribute. They're not as good at hard real
time as QNX, but they work.

~~~
okreallywtf
It would be interesting to see an IPC system that abstracted away where the
other process was running, treating a process on an entirely different
machine or VM like another local process.

Since scalability is probably a bigger concern in web architectures than in
robotics, you usually can't count on two services being able to use local
IPC. I would imagine that in many web architectures, being able to scale
systems independently and dynamically makes up for the overhead of the
communication protocols.

~~~
osullivj
You can get this kind of decoupling from a pub sub msg bus. And async too.
Much better than point to point HTTP.

~~~
Animats
ROS is pub/sub, but that works because most of the data traffic is one way,
sensor->compute->actuator. ROS isn't usually used for function call type
operations. If nobody's listening to a sensor, that doesn't hurt the sensor
any.
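The decoupling property both comments describe - a publisher that is unaffected by whether anyone is listening - fits in a few lines. A minimal sketch (topic names and payloads are invented for illustration, not from ROS or any real bus):

```python
from collections import defaultdict

# A toy pub/sub bus: publishers don't know who (if anyone) is
# subscribed, so an idle topic costs the sensor nothing.

class Bus:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, payload):
        # One-way, sensor -> compute style delivery; no reply expected.
        for handler in self._subs[topic]:
            handler(payload)

bus = Bus()
readings = []
bus.subscribe("sensor/lidar", readings.append)
bus.publish("sensor/lidar", {"range_m": 4.2})
bus.publish("sensor/unused", "dropped")  # no subscribers: a harmless no-op
print(readings)  # [{'range_m': 4.2}]
```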

------
contingencies
Lots of problems can benefit from well defined interfaces: security,
complexity, maintenance, HR/project matching, need for parallelized or
decoupled development (eg. multi-team/timezone), use of existing codebases,
etc. As always the devil is in the details.

It is an old maxim in programming that correctly modeling the data is a huge
percentage of the design. For example:

 _Pike's 5th Rule: Data dominates. If you've chosen the right data structures
and organized things well, the algorithms will almost always be self-evident.
Data structures, not algorithms, are central to programming._ - Rob Pike,
Notes on C Programming (1989)

In that sense, in nontrivial problem spaces, if forced to generalize, I am
more for spending time carefully developing interfaces (i.e. a paradigm more
closely aligned, in a network-services context, with the microservices model -
older coders with maintenance chops) than for immediately writing actual code
(i.e. the approach of the keyboard-happy iterative tweaker - young coders with
fire-and-forget habits).

Any real world project lies somewhere between these extremes.

~~~
CuriouslyC
What is central to a program is not the data structure or the algorithm but
the thing it intends to accomplish for human beings. If you start with a data
structure and say "what can we let people do with this?" you're going to write
bad software.

~~~
kotojo
I don't feel they are saying start with the data, before you even have an
idea. More, "What data do we need, and how do we structure it for this
problem?".

------
rektide
The article found three rather boring, very technical concerns to harp on
about what microservices are supposed to deliver and why they're not the
bee's knees.

What microservices actually work towards is a viable strategy for the Two
Pizza rule, where teams can be kings and queens of their own kingdoms and
drive their own agenda forward without needing to consult everyone else
working on the monolith. Containerizing your software allows you to
containerize your culture; it keeps the ancient, legacy, top-down hierarchical
culture and praxis set forth long ago from dwelling on forever in the monolith
your whole company must collectively, lurchingly keep trying to push forward.
Free yourself from the more brutal pieces of Conway's Law. Create an
organization that can keep trying new ideas, one that allows teams the freedom
to work without always bumping elbows with others.

At the end of a somewhat different thread amid these comments, gloverkcn
happened upon a wonderful synopsis:

 _The problem is that it's easier to grab the people sitting next to you to
design something than to schedule a meeting with groups you rarely see. This
is a key driver of Conway's law._

[https://news.ycombinator.com/item?id=14416752](https://news.ycombinator.com/item?id=14416752)

Microservices & their platform infrastructure are the answer to make this not
a problem, to free you from tight organizational grips of Conway's Law.

For small and new companies, technical and organizational structure has not
yet accrued. These are not major problems in the early stages, because
everything is small enough to be changed easily anyway. But as time goes on,
as the software or the head count grows, the liberty to keep innovating and to
pick up new ideas and new technologies is a liberty that has to be fought for.
Making your way from a one-service to a many-service organization comes with a
lot of complexity and cost, but it is a key step to allowing diversity,
innovation, and technical growth, particularly for multi-department
organizations.

~~~
sciurus
Did you read the section titled "Dependencies"? That was about social as much
as technical concerns.

I don't think the author disagrees with the goal of reducing dependencies or
coupling between dev teams. What they disagree with is whether moving to
microservices is necessary to achieve this. They think "a monolith, composed
of well-defined modules, with well-defined interfaces" can get you the upside
without the downsides.

~~~
rektide
Good callout. To speak more directly to that particular section: I'd say you
can't decouple people or systems socially without decoupling them
technically.

Trying to upgrade from Netty 3 -> Netty 4 (like Twitter's Finagle had to do)
in anything less than a big bang? Say hello to remarkable levels of pain and
suffering - probably shading your own Netty and pointing other dependent
libraries at that shaded version. No matter how much decoupling you try to
build into APIs within your monolith, getting the thing to build and ship
together forces teams to interact, coheres everyone to common technical
underpinnings, and exposes everyone involved to the technical risk taken on by
any one team. Decoupling may allow some modularity, some ability to replace X
with Y, but it will always have to be done within a common framing - a common
framing that, as the project grows, will become immensely harder to un-stick
and push forward, and much more resistant to experimentation and great leaps
forward.

Any software project has only so much diversity and risk budget: a willingness
to innovate technically that depletes with use. Even if you "decouple" your
monolith, if everyone has to build and ship together you are going to be
forever bound into deep interdependencies. Trying to update a logging
framework, or trying out reactive functional programming, is going to
encounter vastly more resistance when you're working on the same sandcastle as
perhaps hundreds of other devs and dozens of other teams. And if the
resistance is low? Then you're in even more trouble - your monolith grows
boundlessly, with everyone exposed to the rapidly growing risk and diversity
of other teams' experiments.

Isolation, containerization, is a good defense. I think Conway would approve.
Apologies, the paragraphs above are a bit thought-soupy - haven't the time to
edit.

~~~
sciurus
No apologies needed. You highlighted precisely the issue that is causing us to
move to proper "microservices" at Eventbrite. Without splitting our monolith
into separate processes, migrating to newer versions of Django or to Python 3
simply isn't tractable for us. Luckily we've already decoupled much of the
monolith by moving functionality into a homegrown service framework that lets
you run services either remotely or colocated in the same process without code
changes.

------
pulse7
Microservices have their place, but - as often happens - they were overhyped
to be a solution to everything and for everybody... Now the cooldown period
begins...

~~~
theparanoid
The 80s version was microkernels.

~~~
pjmlp
Well, I have one powering my mobile phone radio station.

------
timothycrosley
It depends how you do microservices. There are middle grounds. One big gain
of microservices is that they guarantee things are separate and can be handled
by separate teams if the need arises. That doesn't mean you need to start out
that way. For instance, in Python I use hug to create my microservices
([https://github.com/timothycrosley/hug](https://github.com/timothycrosley/hug)),
then I can just install them to create a "monolithic" service that consumes
all the microservices. The great thing is that hug allows you to expose each
one both as a webservice and as a Python library, so I can consume it as a
Python library with no overhead until the need to split is evident, and then
split up the services with very little work. Of course the need may never
arrive, but the modularity that is forced when using microservices pays
dividends quickly regardless.
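The dual-exposure idea above can be sketched in plain Python (a hypothetical stand-in; hug's actual decorators and type annotations differ, and `expose`, `ROUTES`, and `handle` are invented here): the same function stays a zero-overhead in-process callable and is also registered on a route table an HTTP layer could serve.

```python
import json

# Registry mapping URL paths to plain Python functions.
ROUTES = {}

def expose(path):
    def decorator(fn):
        ROUTES[path] = fn
        return fn  # returned unchanged: in-process callers pay no overhead
    return decorator

@expose("/add")
def add(a, b):
    return a + b

# Monolithic consumption: just a function call.
assert add(2, 3) == 5

# Microservice-style consumption: dispatch the same function as if a
# web framework had routed a JSON request to it.
def handle(path, body):
    return json.dumps(ROUTES[path](**json.loads(body)))

print(handle("/add", '{"a": 2, "b": 3}'))  # prints 5
```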

~~~
mattmanser
I, my, I, I. Sounds like you _really_ don't need microservices.

~~~
StavrosK
Also refer to the handy flowchart I created in my post
([https://www.stavros.io/posts/microservices-cargo-
cult/](https://www.stavros.io/posts/microservices-cargo-cult/)) for judging
whether you need microservices:

[https://www.stavros.io/files/images/microservices-cargo-
cult...](https://www.stavros.io/files/images/microservices-cargo-
cult/flowchart.png)

------
kpmah
It always bothers me that developers who are working with a team they can't
trust to build a disciplined, modular application advocate moving to a more
technically challenging architecture as a solution.

~~~
maxxxxx
That's what I am thinking too. How are people who can't write decent libraries
suddenly able to define microservices and their boundaries?

~~~
korzun
You can't sell a decent library to stakeholders.

------
jpalomaki
I would love it if each microservice article started by defining what it
means by "microservice". Wikipedia says "In a microservices architecture,
services should be fine-grained and the protocols should be lightweight."[1],
but that leaves a lot of things open, especially what "micro" means
size-wise.

When these kinds of things are not defined, people may take good advice and
apply it the wrong way. You read articles about how large companies are
embracing microservices, but "micro" for them might mean something very
different than it means for a smaller company. Just like with "big data".

[1]
[https://en.wikipedia.org/wiki/Microservices](https://en.wikipedia.org/wiki/Microservices)

------
js8
It seems to me that the answer really depends on application.

There are two kinds of scalability requirements. Some applications scale
almost linearly with the number of users - for example, Google Maps. That
happens when users interact with each other only in limited ways. For such
applications, horizontal scaling of a monolith is almost always a better
answer than microservices, and splitting the data before processing is almost
always a better solution than Spark or Hadoop.

The second kind of scalability requirement is where users interact, so the
processing required scales more than linearly (quadratically) with the number
of users. Social networks are the example: the more users you have, the
quadratically more messages you need to deliver among them. In this case,
microservices (and Spark and Hadoop) are probably better, since you can't
solve the problem just by scaling the monolith horizontally.
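The two regimes above come down to simple arithmetic, sketched here (the "map tiles" and "messages" workloads are illustrative stand-ins):

```python
# Per-user work grows linearly with the user count, while all-pairs
# interaction work grows roughly with its square.

def linear_work(users):
    return users  # e.g. each user fetches their own map tiles

def pairwise_work(users):
    return users * (users - 1)  # e.g. messages between every ordered pair

# Doubling the users doubles the linear work...
assert linear_work(2000) == 2 * linear_work(1000)
# ...but roughly quadruples the pairwise work.
print(pairwise_work(2000) / pairwise_work(1000))  # ~4.0
```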

~~~
haggy
> For such, horizontal scalability of a monolith is almost always a better
> answer

This would be scaling vertically since your only option is to increase the
resources dedicated to that single application instance (more CPU, RAM, Disk,
etc). The definition of a monolith is a single application running on a single
platform so scaling a true monolithic application horizontally (adding more
instances of the application) is not usually possible.

~~~
js8
Maybe I just understand the word "monolith" differently. I understand it as an
application that cannot easily be broken into pieces that run on different
machines. So the only option is to scale it horizontally, if at all.

What I mean is that you can e.g. run two monoliths on two machines, and
process half of your users on the first machine and the other half on the
second. If the users don't need to interact (or only in a limited way - for
instance, you recalculate something each day or so), horizontal scaling alone
is a good enough solution.
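The scheme above amounts to routing each user to one of several identical monolith instances by a stable hash of the user id (the machine names here are invented for illustration):

```python
import zlib

MACHINES = ["monolith-1", "monolith-2"]

def route(user_id: str) -> str:
    # crc32 is stable across runs, unlike Python's built-in hash(),
    # so the same user always lands on the same machine.
    return MACHINES[zlib.crc32(user_id.encode()) % len(MACHINES)]

assert route("alice") == route("alice")  # deterministic routing
assert route("alice") in MACHINES
```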

------
manigandham
This whole thing is about polyglot development and deployment.

"Micro" is a meaningless prefix. This is all SOA - service oriented
architecture. A "service" can be anything, it's a vague definition of whatever
is a natural encapsulation of a bit of logic in your application (or company).
This encapsulation can be easily done with separate classes, namespaces, or
even packages, while still running together in the same process.

In the end, you're putting some binaries on a server. The machine doesn't care
how often you do that or how many different binaries you choose to use, so the
only real reasons are multiple languages that aren't compatible in the same
process or massive apps/organizations that need to have completely separate
projects to make forward progress.

For everything else, microservices are a silly solution to no actual problem.

~~~
l1ambda
This. For languages with a good module system, it's almost trivial to use
modules either in-process (e.g., monolithically), or out-of-process exposed
via HTTP or some other network interface (as microservices). Then the
discussion becomes about writing modular code rather than microservices.

------
alpeb
The separation of concerns inherent to microservices is such a great
advantage that in my opinion it's critical even for small teams. You can
bring an extra hand onto the team without them having to understand other
parts of the code to do their job. A monolith with clearly delimited packages
will give you this too, but it won't, for example, allow you to place each
service in its own repo, which would also give you the ability to limit code
access. This requires a lot more up-front investment, but it's worth it over
the long run, unless your project has a very limited life-span - which is
rare, and often unknown anyway.

~~~
korzun
> The separation of concerns inherent to microservices is such a great
> advantage

It is not. Not even close.

You have to manage separate onboarding processes, ensure the standards are
completely up to date, ensure that training is extremely relevant and covers
every single critical component of the system.

You have to maintain up to date documentation of integration state, data flow,
testing capabilities, etc.

> You can bring an extra hand into the team without them having to have to
> understand other parts of the code to do their job.

That sounds awful.

> won't allow you for example to place each service under its own repo which
> would also provide you with the ability to limit code access

Let me guess: you are not paying for engineering talent? The notion of having
distributed teams is, for most companies, so ridiculous it's not even funny.

What happens if a lead person on team X and Y quits? Are you going to retain a
full-time person to manage your deployment process now?

What happens if your budget is cut in half and you need to fire half of the
team? Do you expect other engineers to pick up something they have not touched
for the duration of their stay without slowing down business?

> That requires a lot more up-front investment, but it's worth it over the
> long run

None of that is true.

------
HeroOfAges
I've seen monoliths running on 10 year old technology using runtimes and
development kits that have been deprecated or are no longer supported by their
maintainers. These applications are being used in production today by billion
dollar companies. With microservices this never has to be the case. I think
it's madness for a company to be running old software with security
vulnerabilities and performance liabilities because no one wants to touch
what's basically a digital Jenga tower. Then you have people writing articles
like this one. I think we're beyond fads and buzzwords here. There are real
world advantages for building software this way. If you're afraid to embrace
the future, please step aside. You're making my job far more difficult than it
should be. These are my observations as a 40-year-old developer.

------
mafro
The free Microservices vs SOA book gives an excellent rundown of the
difference between microservices and SOA. It's really useful to have this in
mind when having any conversation about microservices.

[http://www.oreilly.com/programming/free/microservices-vs-service-oriented-architecture.csp](http://www.oreilly.com/programming/free/microservices-vs-service-oriented-architecture.csp)

"One of the fundamental concepts to remember is that microservices
architecture is a share-as-little-as-possible architecture pattern that places
a heavy emphasis on the concept of a bounded context, whereas SOA is a share-
as-much-as-possible architecture pattern that places heavy emphasis on
abstraction and business functionality reuse."

------
fallous
Two things jump out at me initially. First, the assertion that cargo-culting
is bad is hardly revelatory, regardless of the paradigm. Secondly, the five
point list at the end of the diatribe is exactly what any transition from
legacy monolithic system would follow in order to pursue a microservice
infrastructure.

I can't decide if the author actually has a problem with an appropriately
deployed microservice architecture (and no, you do not need 100 engineers to
support such a thing... I replaced a legacy back-end system for a $100M/yr
revenue company with such an architecture using 4 devs) or if this is just a
misapplied generic rant about cargo culting that has been applied to
$THINGIREADABOUTTODAY.

~~~
adamdrake
Hi, author here. Thank you for your feedback, and I'll be sure to take it into
account in future writing.

I don't have any preference for or against microservices, as long as the
benefits they bring outweigh the drawbacks which come along with such an
approach.

The goal wasn't so much to rant against cargo culting as to provide some
counterpoints, along with a more measured progression for moving towards
microservices.

I agree that the points at the end should be what such a transition looks
like, but it's amazing how often companies skip the first two steps and try to
immediately break up their monolith as a form of technical bankruptcy, which
is rarely warranted.

Thank you again for the comments!

------
gagan2020
I agree overuse of almost everything is bad. Currently I am working with
another seasoned architect, who is a strong proponent of microservices and AWS
Lambda along the lines mentioned in the article. In our discussions I have
always argued: let's build the traditional portions (user management,
permissions, payments, etc.) in the traditional way (we are using Django, so
in Django), and whatever background tasks we have, let's do those the
microservices way.

So in practice, we are implementing a system in Django that will be deployed
to AWS Lambda via Zappa or SAM, while all the usual task-queue tasks will be
separate microservices (Lambda functions). We are implementing a fusion of the
two, and I personally see a future in that.

~~~
mrweasel
>I agree overuse of almost everything is bad.

One of our customers is developing a service which in turn consists of four
smaller services. They are completely separate: one provides the core
functionality and the rest are supporting tools, but the core can run without
them. To me that's a completely reasonable separation, and it allows multiple
teams to work in parallel.

One of those "services", however, itself consists of 10 to 15 smaller
services, all of which communicate via HTTP. The idea, as I understand it, is
that these are small components that can be reused in other projects if
needed. What I don't understand is why these aren't just made into libraries
that can be reused, rather than kept as micro-services.

This project is an example of microservices done both right and wrong.

~~~
gagan2020
That's why what I'm trying seems more right to me. Those task-queue tasks we
put on a queue anyway and then process through workers. They usually do
computation- or IO-intensive work, or send out emails, notifications, etc. So
it's logical to deploy those separately, to an AWS Lambda kind of service, to
decrease total service turnaround time.

------
adamconroy
As always, it depends. Personally I have been working for a corporation where
we use a microservice approach and it has been great for productivity and
turnaround times on projects, and the systems/services are quite easy to
understand. We are full CI. Our microservices maybe aren't as micro as others:
we have a pattern of having monolith 'data services' and micro
business/functional/api services. The monolith 'data service', effectively a
layer that exposes CRUD over data, in some cases has 10 microservices that
interface with it. I could go on and on, but it works very well for us.

------
lprd
Could anyone explain what microservices are? I'm still learning as a junior
dev and haven't quite understood what this word means. How would you break
down an existing monolith into smaller 'micro-services'?

~~~
asadjb
Here's one of the sources I see most often referred to when people talk about
microservices. Maybe it will help you understand them.

[https://martinfowler.com/articles/microservices.html](https://martinfowler.com/articles/microservices.html)

~~~
lprd
Thank you :)

------
matt_s
As with any trendy technology or architecture approach, you should first be
asking "What problem are we trying to solve?"

That's where the discussion should start. Better yet if that is a business
problem, then dive into technical solutions. Many times technologists take a
new (but not really) silver bullet and try to fit it in somewhere without
understanding if it solves a problem or not.

It also seems like we like to reskin solutions with new names and the industry
picks up on this, starts their marketing engines and the guys in suits come
around trying to sell the new silver bullet.

------
ris
As of late I've come to a new belief as to the _real_ reason for
microservices' current popularity. DevOps, deployment, and whatever-
hyperopaque-cloudy-service-amazon-have-launched-this-week are the cool & fun
things happening at the moment. Having loads of microservices to manage simply
gives you plenty of toys to play with.

Personally, the microservice-y project I'm currently working on makes me want
to burn my face off every day.

------
quadcore
It seems to me the overhead of microservices in terms of _productivity_ is
O(1) - but I could be wrong. If the overhead is O(1), then the question
becomes whether or not you already have the framework to do microservices. If
you don't, then yeah, maybe you should do a monolith.

edit: that even sounds like business opportunity.

~~~
AstralStorm
It is O(n log n), like any kind of communication. A higher constant factor
can kill you too.

See Erlang, and theorems also proven for clique networks.

~~~
quadcore
I was referring to productivity, not network communication. _an O(1) pain in
the ass_ as pg wrote.

~~~
AstralStorm
Developer communication, deployment complexity... more entities, more
overhead.

------
Yhippa
Just like Agile and vendor-based software solutions of the past it seems like
microservices are following the same path. I've seen projects starting off
with microservices because someone several pay grades above deemed it so. Lots
of resume-driven-development done here.

------
tlrobinson
This makes me wonder, are there any systems that make it easy to build a
modular monolith that can relatively easily be split into microservices in the
future?

Erlang (OTP?) comes to mind (though I have very little experience with it so I
could be off)

~~~
bpicolo
It's totally possible to write code that's straightforward to port later. I'd
say the biggest blockers tend to be database- and code-organization-related.

If you make a component that is the sole entrypoint to its concerns, give it
its own database, and have an ORM or database context layer that is capable
of dealing with that, moving to a microservice is as simple as taking that
same code, exposing an HTTP layer on top of it, and making your original
interface call the service instead.

If other code is joining into the relevant tables, skipping said interface
and doing its own thing with the guts of it, you'll have issues.

This can of course be done in any language, really, though database tooling
can definitely vary in how easy it makes things.
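A minimal Python sketch of that pattern (the component name and schema are hypothetical): one class is the sole entrypoint to its own tables, so it can later be moved behind an HTTP layer without touching callers.

```python
import sqlite3
from typing import Optional

class AccountStore:
    """Sole entrypoint to the accounts data -- no other code touches
    its tables. Because callers only ever see this interface, the same
    class can later sit behind an HTTP layer and become a microservice
    without changing them."""

    def __init__(self, path: str = ":memory:"):
        # The component owns its schema; nothing else joins into these tables.
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS accounts "
            "(id INTEGER PRIMARY KEY, email TEXT)"
        )

    def create(self, email: str) -> int:
        cur = self.db.execute(
            "INSERT INTO accounts (email) VALUES (?)", (email,)
        )
        self.db.commit()
        return cur.lastrowid

    def get_email(self, account_id: int) -> Optional[str]:
        row = self.db.execute(
            "SELECT email FROM accounts WHERE id = ?", (account_id,)
        ).fetchone()
        return row[0] if row else None
```

The anti-pattern to avoid is any other module issuing `SELECT ... FROM accounts` directly; once that happens, extracting the component means chasing down every such query.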

------
shusson
I wish the author had discussed microservices in the context of domains and
models. If you model your application and find distinct, isolated domains,
then consider creating separate services for them.

~~~
kyberias
But again, why? Making them services means distribution and that has a cost.
You must have some reasons to make them distributed. You should first consider
simply having the isolated domains as modules in your application monolith.

~~~
bostonvaulter2
Does anyone have good references for this kind of modules in Rails? Would it
basically be a gem?

~~~
mhw
It doesn't have to be: you can use (Ruby) modules, classes and namespacing to
split the application into logical pieces. Then start to enforce encapsulation
between these pieces, so all access to a given piece of the system goes
through a small, well defined and documented interface.

 _If_ you are thinking about migrating to microservices at some point, it's
also beneficial to try to limit the areas of the database schema that each of
these pieces accesses. There's no point having lots of microservices running
over the top of a single monolithic database schema: extracting microservices
should imply extracting the relevant part of the database schema and putting
it into a private database with operational access only through the
microservice. You can make this much easier while still working in a
monolithic application by being more disciplined about how much of the schema
a piece of the application needs to know about.

------
dorian-graph
Refreshing to hear (again).

Does anyone know of some good (modern) examples of monoliths that adhere to
these, and other, principles?

------
ninjakeyboard
counterpoint: [https://martinfowler.com/articles/dont-start-monolith.html](https://martinfowler.com/articles/dont-start-monolith.html)

------
the_cat_kittles
sometimes the idea of microservices seems like an attempt at taking the unix
philosophy of doing one thing and doing it well, and adding on the additional
requirement of doing it over the net. i don't think it's a bad idea, but
surely it's not worth the effort in lots of cases.

~~~
erikbye
The Unix philosophy is popularly simplified to "do one thing and do it well",
sure, but that simplification is not accurate. Many Unix programs do much more
than one thing. What the Unix philosophy actually is:

"The Unix philosophy emphasizes building simple, short, clear, modular, and
extensible code that can be easily maintained and repurposed by developers
other than its creators. The Unix philosophy favors composability as opposed
to monolithic design."

Which is a lot more akin to what's now called microservices. The Unix
philosophy has always been about software modularity, so do we really need
'microservices' these days to describe such an approach? The meaning is not
clearly defined anyway: what one company describes as a microservice
architecture might not be described as such by another. I certainly don't
need any buzzwords (i.e. re-invented terminology) to describe a modular
software system, and whether your processes' IPC mechanism is HTTP/JSON,
pipes, or sockets, we are talking about exactly the same kind of system.

~~~
the_cat_kittles
i agree the buzzword is pretty useless. i appreciate the more in depth
discussion of the unix philosophy as well. thanks

------
faragon
TL;DR: microservices is yet another religion.

------
partycoder
The projects I've seen reach technical bankruptcy have, without exception,
been monoliths.

With microservices you have the ability to take one service and fix it,
reimplement it, or replace it if necessary. You can also use whatever
technology makes the most sense for that service, as long as it speaks the
same protocols and sticks to the same interfaces.

With monoliths, there's always the analysis paralysis related to the possible
ramifications of a change, which slows down everything.

~~~
conradk
You could also have a "monolith" in a single language or in multiple
compatible languages (C, Rust, C++, etc.) and have different features/modules
separated into different libraries. These libraries could then act as the
"microservices".

Developers could be tasked with working on these libraries, which would have
0% overhead (since they are just pieces of code that can be used from another
library), instead of creating REST/SOAP/RPC APIs that come with HTTP/SOAP/RPC
overhead.

This means you can start small, with a single public REST/SOAP/RPC API: less
infrastructure, less server config, etc. And if at some point you need more
scalability, you can always turn the library code into an actual microservice
(i.e. a separate REST/SOAP/RPC API) the day you absolutely need it for
horizontal scalability or whatever.

I have seen this pattern used in the Rust world a lot. Projects expose both a
library and a binary. And the binary is built on top of the library. This
means you have 0 overhead if you want to use the binary's features, since you
can simply include the library and get going.

Also, I think that having tools to support deprecation is a big advantage
when working with a monolith. I think Rust has built-in support for compile-
time deprecation notices. PHP, for instance, doesn't have that particular
feature yet, as far as I know. Symfony tries its best by adding deprecation
notices _when you call_ a deprecated method/function. But you have to call it
to realize it's deprecated.

I'm not here to promote Rust or hurt PHP, I actually love both. The statements
above apply to a lot of other languages as well. I just took the examples of
Rust and PHP because these are the ones I've worked with the most.

@xxs's comment is spot on as well.

~~~
partycoder
If you divide your project into self-contained components with a scoped set
of responsibilities that are not coupled to the rest, that's fine as well.
I've used that pattern with great success.

Microservices are one way of achieving this but, as you describe, not the
only way. Discipline is another.

Now, as more people get involved, or release deadlines get tighter, the
chances that tech debt gets introduced increase. Unless you go back and repay
the tech debt, the situation worsens.

In that sense, microservices are some sort of cap on the impact of tech debt
from rushed deadlines and lack of discipline.

~~~
potatoyogurt
Just saying that you're using microservices doesn't guarantee that you'll be
avoiding or capping tech debt. I've seen some hopelessly entangled
"microservices" that are so full of assumptions about how other parts operate
that they're essentially unusable anywhere except for one or two places. You
need that discipline whether you are separating components by service
boundaries or just by organization of code within a monolith.

I agree that microservices tend to make it harder to accrue tech debt past a
certain point, though. But they can also dramatically complicate the process
of figuring out what is going on in an app if log files are spread across a
ton of places (which is mainly a tooling problem).

~~~
partycoder
That's what log aggregators are for.

~~~
imhoguy
This is true, but meaningful aggregation is not easy. You need correlation
identifiers to tie together the log entries that belong to a single user
request. This complicates the protocol between distributed services,
especially if they are in different business domains with APIs using
unrelated resource identifiers. Usually the correlation ID is passed along
via e.g. an HTTP header or a separate API parameter. The issue also exists in
a monolith, but there loggers usually take care of it with thread-local
variables or some other contextual storage.

~~~
partycoder
You will have to do it anyway, regardless of how you structure your project,
since logging is considered a cross-cutting concern.

------
threeseed
This frankly adds nothing new to the conversation.

And if you find scaling microservices harder than a giant stateful monolith
well then clearly you've done something wrong.

Likewise local development should be far easier if you define your APIs and
contract boundaries properly.

~~~
sundvor
I somewhat recently worked on a huge monolith where certain parts were "off
limits"; I was told, to the effect of, "no one changes those classes, we
don't really understand them". That business is not going too well now.

I searched the article for "domain" and "bounded contexts", found nothing.
Article is not without merit, however; Sam Newman in his book [1] cautions
about going too aggressive, i.e. splitting down into too many microservices
before the domain is fully understood. On the other hand, having systems cross
obvious domains is a definite warning sign (of disaster).

So the motto might be, use them, but in moderation (as with a lot of things).

1) [https://smile.amazon.com/Building-Microservices-Designing-Fine-Grained-Systems/dp/1491950358](https://smile.amazon.com/Building-Microservices-Designing-Fine-Grained-Systems/dp/1491950358)

~~~
ex_amazon_sde
Amazon has been a pioneer of SOA. There's no "micro" in SOA, for a reason.

