
Monoliths Are the Future - feross
https://changelog.com/posts/monoliths-are-the-future
======
rubyn00bie
I couldn't agree more with this article.

Most people think a micro-service architecture is a panacea because "look at
how simple X is," but it's not that simple. It's now a distributed system, and
very likely the worst of the worst: a distributed monolith. Distributed
systems are hard. I know; I build them.

Three signs you have a distributed monolith:

1. You're duplicating tables (information) into another database without
transforming the data into something new (adding information). In other
words, the worst cache ever; enjoy the split-brain. [1]

2. Service X does not work without Y or Z, and/or you have no strategy for
dealing with one of them going down.

2.5. Bonus: there is likely no way to meaningfully decouple the services.
Service X can be "tolerant" of service Y's failure, but it can never
function without service Y.

3. You push all your data over an event bus to keep your services "in sync"
with each other, trampling the idea of a "transaction." The event bus pushes
your data further out of sync over time, making you think you need an even
better event bus... You need transactions, and (clicks over to the Jepsen
series and laughs) good luck rolling those on your own.

I'm not saying service-oriented architectures are bad, and I'm not saying
services are bad; they're absolutely not. They're a tool for a job, and one
that comes with a lot of footguns and pitfalls, many of which people are not
prepared for when they ship that first micro-service.

I didn't even touch on the additional infrastructure and testing burden that a
fleet of micro-services brings about.

[1] Simple tip: Don't duplicate data without adding value to it. Just don't.

~~~
smoyer
We've moved a lot of services into Kubernetes and broken things up into
smaller and smaller micro-services. It definitely eliminates a lot of the
complexity for developers ... but you trade it for operational complexity
(e.g. routing, security, mis-matched client/server versions, resiliency when
a dependency isn't responding). I still believe that overall software quality
is higher with micro-services (our Swagger documents serve as living
interface control documents), but don't kid yourself that you're going to
save development time. And don't fall into the trap of shrinking your
micro-services too small.

~~~
Aperocky
You’ll have endless race conditions to deal with, even when storage is central
and unique.

We learned that, and we're still learning it.

~~~
idiocratic
Unique storage for multiple services sounds like a recipe for disaster. One of
the purposes of splitting services is to decouple parts of the code at a
fundamental level, including storage and overall ownership thereof. You're
probably better served with a modular monolith if you really can't break
storage up.

~~~
Aperocky
No, only one service is reading/writing; everything else just calls that.
Still, things get quite muddled when it involves talking to multiple other
teams and needing to keep everything in sync.

~~~
idiocratic
Ok, but then what's the point of splitting it in the first place? The way I
see it is to split your domain so that a team owns not only the code, but also
the model, the data, the interface and the future vision of a small enough
area. If a service owns all the data, then someone who needs to make any
change is bottlenecked by it and they would need knowledge beyond their
domain. So the key is defining the right domains (or domain boundaries).
Unfortunately most people just split before thinking about the details of this
process, so the split will sooner or later hit a wall of dependencies.

~~~
Aperocky
We need synchronous workflows and also asynchronous workflows; that was the
primary reason. Now, that doesn't mean it must be split, but since we're
running on multiple hosts anyway it wasn't hard to split the asynchronous
functions off into a separate batch service.

------
BrentOzar
I'm a database guy, so the question I get from clients is, "We're thinking
about breaking up our monolith into a bunch of microservices, and we want to
use best-of-breed persistence layers for each microservice. Some data belongs
in Postgres, some in DynamoDB, some in JSON files. Now, how do we do
reporting?"

Analysts expect to be able to connect to one system, see their data, and write
queries for it. They were never brought into the microservices strategy, and
now they're stumped as to how they're supposed to quickly get data out to
answer business questions or show customers stuff on a dashboard.

The only answers I've seen so far are either to build really complex/expensive
reporting systems that pull data from every source in real time, or do
extract/transform/load (ETL) processes like data warehouses do (in which the
reporting data lags behind the source systems and doesn't have all the
tables), or try to build real time replication to a central database - at
which point, you're right back to a monolith.

Reporting on a bunch of different databases is a hard nut to crack.

~~~
MadWombat
> Some data belongs in Postgres, some in DynamoDB, some in JSON files. Now,
> how do we do reporting?

One of the key concepts in microservice architecture is data sovereignty. It
doesn't matter how/where the data is stored. The only thing that cares about
the details of the data storage is the service itself. If you need some data
the service operates on for reporting purposes, make an API that exposes that
data and make it part of the service. You can then architect layers around
it: maybe write a separate service that aggregates data from multiple other
services into a central analytics database and report from there, or keep
requests real-time but introduce a caching layer, or whatever. But you do not
simply go and poke your reporting fingers into
individual service databases. In a good microservice architecture you should
not even be able to do that.
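A minimal Python sketch of that aggregating-service idea (service names, endpoints, and record shapes here are all hypothetical):

```python
# Toy sketch: each service exposes a read API for its own data; a
# separate aggregator copies those exports into one analytics store.
# Analysts query the store and never touch service databases.

def users_service_export():
    """Stand-in for calling the users service's export API."""
    return [{"user_id": 1, "country": "DE"}, {"user_id": 2, "country": "US"}]

def orders_service_export():
    """Stand-in for calling the orders service's export API."""
    return [{"order_id": 10, "user_id": 1, "total": 25.0},
            {"order_id": 11, "user_id": 1, "total": 40.0}]

def aggregate():
    """The 'reporting service': joins API exports into one dataset."""
    users = {u["user_id"]: u for u in users_service_export()}
    return [{**o, "country": users[o["user_id"]]["country"]}
            for o in orders_service_export()]

# Analysts run their questions against the aggregated store only.
report = aggregate()
revenue_by_country = {}
for row in report:
    revenue_by_country[row["country"]] = (
        revenue_by_country.get(row["country"], 0) + row["total"])
print(revenue_by_country)  # {'DE': 65.0}
```

The point isn't the join itself; it's that the only coupling is to each service's public export API, not its storage schema.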

~~~
echelon
> But you do not simply go and poke your reporting fingers into individual
> service databases.

This is why I distrust all of the monolith folks. Yes, it's easier to get your
data, but in the long run you create unmaintainable spaghetti that can't ever
change without breaking things you can't easily surface.

Monoliths are undisciplined and encourage unhealthy and unsustainable
engineering. Microservices enforce separation of concerns and data ownership.
It can be done wrong, but when executed correctly results in something you can
easily make sense of.

~~~
smoe
> It can be done wrong, but when executed correctly [...]

Quite the self-fulfilling prophecy there.

> Yes, it's easier to get your data, but in the long run [...]

Systems can and should be evolved and adapted over time, e.g. by deploying
components of the monolith as separate services. You can't easily predict what
the requirements for your software are going to be in, say, 10 years.

And depending on the stage a company is at, easy access to data for business
decisions outweighs engineering idealism.

~~~
TeMPOraL
> _easy access to data for business decisions outweighs engineering idealism_

I think there are different levels of sophistication of "engineering
idealism". GP talks about "data ownership", and I get the desire to keep the
data a microservice is responsible for locked in tightly with it. But let's be
precise why it's good: because isolating responsibility reduces complexity.
_Not_ because code has some innate right to privacy.

In my own engineering idealism, there's no internal data privacy in the
system. Things should be instrumentable, observable in principle. If an
analyst wants to take your carefully designed internal NoSQL document
structure and plug it into an OLAP cube for some reason, there must be a path
to doing that; if that's an expected part of the business, the service needs
to have it on the feature list, that this should be doable without degrading
the service.

Software needs to be in boxes because otherwise we can't handle it mentally,
but the boxes really shouldn't be that black.

~~~
edmundsauto
Isolating responsibility reduces complexity for that piece of code. It
increases complexity for assembling the whole thing into a holistic package,
which is usually the analytics team's primary need.

YMMV, but the tradeoff is less complexity for the SWE/prod department, and
more for the analytics team.

------
closeparen
My employer adopted microservices for a very specific reason: it became nearly
impossible to deploy the monolith. With hundreds of commits trying to go out
every day, the probability that at least one would break something approached
1. Then everything had to be rolled back. Getting unrelated concerns into
separate deployable artifacts rescued our velocity.

It came with many of its own challenges, too! A great deal of infrastructure
had to be built to get from O(N) to O(1) infrastructure engineering effort per
service. But we did build it, and now it works great.

There is a reason monoliths were traditionally coupled with quarterly or even
annual releases gated by extensive QA.

~~~
tootie
The solution to this is just writing modular code and using an artifact
repository. It's a model I've rarely seen attempted even though it's much
easier than microservices and serves the same purpose.

You can have individual dev teams, with their own repos, backlogs,
stakeholders, etc., all working at their own paces. They build modules (jars,
nuget packages, npm modules) and deploy semver versioned artifacts to a repo
like Nexus or JFrog. Any frontend/consumer applications can build towards
versions of those modules and upgrade on their own schedule. Only the
consumers need to worry about deployment.

This gives you the organizational flexibility but not the infrastructure
overhead.
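For example, a consumer app's build file in this model might look something like this (Gradle syntax; the module names, versions, and repo URL are all hypothetical):

```groovy
// Hypothetical consumer build.gradle: the app pins published modules
// and upgrades each one on its own schedule.
repositories {
    maven { url "https://nexus.internal.example.com/repo" } // internal artifact repo
}
dependencies {
    implementation "com.example:billing-core:2.4.1"   // team A's semver-published module
    implementation "com.example:search-core:1.7.0"    // team B's module
}
```

Each team ships a new artifact version whenever they're ready; only the consumer's deployment schedule matters.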

The discriminating factor that makes microservices necessary is whether these
individual services have divergent hardware needs.

~~~
therealdrag0
My favorite project to work on was a modular monolith. It was a single
deployable but each component (vertical) had its own maven-module.

~~~
tootie
When developers complain about Java being verbose and not a great language for
coders, I counter that it doesn't exist to solve programming problems, but
rather organizational problems. The killer feature that launched Java wasn't
crap like checked exceptions, it was javadoc. Strict, self-documenting APIs
are 10X more valuable than any intrinsic language feature.

~~~
timwaagh
I don't care about it not being great; it's good enough as a language. But
Maven... but WebSphere... Java is a hellish platform that ordinarily would not
win against interpreted languages or those which focus on fast compilation.
But it runs literally everywhere, including your toaster, but more importantly
on mainframes, which also run the real MVP, a.k.a. COBOL. "Write once, run
anywhere" remains a killer feature no other platform has replicated as well.

~~~
sk5t
I think you are putting Java in the pillory today, for the worst aspects of
its ecosystem a decade ago.

Outside of perhaps stodgy banks, technical folks are not choosing to run their
JVM projects today on Websphere. Gradle is pretty darn nice.

~~~
timwaagh
It's what my employer uses; they don't really have much choice because
they're running on IBM hardware.

------
eternalny1
What is old is new again.

I've been a software engineer for over 30 years and have dealt with companies
always trying to jump on the next bandwagon. One company I worked with tried
to move our entire monolith application, which was well architected and worked
fine, over to a microservices-based architecture and the result was an
unstable, complex mess.

Sometimes, if it's not broke, don't try to "fix" it.

I can say the same regarding a lot of what is going on in the JavaScript
ecosystem, where people are trying to replicate stuff that works fine in other
languages in JavaScript. Mostly because they are only familiar with JavaScript
and don't realize this stuff already exists and doesn't need to be in
JavaScript.

~~~
thrower123
I can't imagine your level of cynicism. I've only been at this for ten years,
and the number of times I've seen the wheel come full circle and old ideas
come back into vogue, the problems with them rediscovered, reactions to those
problems, and then the thing that preceded them again take precedence is
somewhat depressing. At best I feel like we are grinding ahead a few inches
each cycle.

~~~
tylerjwilk00
It's like the movie Benjamin Button but with a case of amnesia every three
years.

~~~
thrower123
Sometimes it feels more like Memento

------
sytse
At GitLab a couple of years ago we had to fight the temptation to split the
application up into hundreds of micro-services.

I'm glad we did, and today GitLab has a big monolith but also a ton of
services working together:
[https://docs.gitlab.com/ee/development/architecture.html#com...](https://docs.gitlab.com/ee/development/architecture.html#component-diagram)

I did an interview about this yesterday
[https://www.youtube.com/watch?v=WDqGaPGBZ9Y](https://www.youtube.com/watch?v=WDqGaPGBZ9Y)

~~~
silasb
Your architecture is similar to something that we want to get towards. How do
you handle standing up all these pieces (or even a subset of these pieces) in
the dev environment?

------
mikepk
I'm tempted to write a blog post... I bristle a little when microservices are
described as "best practice". Monolith vs microservice is really about
_people_ and _organizations_. Monoliths make sense in some contexts and
microservices in others, but the deciding factor is really the size of the
team and number of people working on different functional contexts.

The best analogy I can come up with is that monoliths in larger organizations
are a manifestation of Amdahl's law: the overhead of communication and
synchronization reduces your development throughput. Each additional person
does not add one person's worth of throughput once you cross a critical
head-count threshold (mythical man month and all that).

I'm not describing this clearly so I should probably actually commit to
writing out my thoughts on this in a post describing my experience with this.

------
andrew_n
Spot on. The metaphor I typically use here is cleaning up a mess vs spreading
it around. If you have a really big mess and spend a year or two rearranging
it into dozens or hundreds of smaller messes, yes the big obvious mess is
gone, but the overall amount of mess has likely gone up and by segregating
everything you’ve probably made it much harder to someday get to a clean
state.

If you’re moving to microservices because the number of people working on a
project is growing too large to manage and you need independent teams, great.
If you’re refactoring to microservices because “we’re going to do everything
right this time,” this is just big-rewrite-in-disguise.

Whatever engineering quality improvements you’re trying to make—tech stack
modernization, test automation, extracting common components, improved
reliability, better encapsulation—you’re probably a lot better off picking one
problem at a time and tackling it directly, measuring progress and adjusting
course, rather than expecting a microservices rewrite to magically solve a
bunch of these problems all at once.

~~~
karmakaze
Complexity (as in unintended/unexpected behaviour) varies with N^p where p >
1.0, so having N messes of 1/N size is a definite advantage and _does_ make
it easier to clean up the little messes.

~~~
andrew_n
It depends on what the messes are. Separating into different services adds
significant overhead to addressing cross-cutting concerns.

If the modules of your system are already relatively independent with well-
defined interfaces, microservices would be fine and yes would make changes
like upgrading the language runtime version easier.

But when I think of messy, tangled, poorly-tested code that prompts people to
start talking about needing to refactor to microservices, I’m thinking about
different sorts of problems. The messiness I usually see has to do with lots
of missing abstractions, lots of low-level code reading and writing directly
to files and message buses and databases and datastores instead of going
through some clean API. This makes it really hard to change things, because
instead of updating some API backend, you have to find and update all the low-
level accesses.

Now the problem is, typically when going to microservices, people aren’t
looking at the question of, “What common stuff can we pull out to make all our
messy code simpler?” They’re taking the existing, messy modules, with lots of
cross-cutting shared abstractions dying to get out, calling the existing
module a service, and putting a bigger barrier around it.

There are many ways to approach the problem of moving to cleaner, simpler
abstractions, and microservices can help. But you can easily go to
microservices without addressing all the needless complexity, instead
crystallizing that complexity in the process, and many organizations end up
doing exactly that.

------
tynpeddler
There are two big reasons to go to microservices (note that the exact
definition of microservice can vary a lot).

1. Organizational streamlining. If the team working on the monolith becomes
too large, then coordinating and pushing out changes quickly can become
incredibly difficult. One rule of thumb I've heard is the two-pizza rule: if
two pizzas can't feed the team working on a system, it's time to break up the
system.

2. Horizontal scaling. If some components of your workflow require much more
computing power than others, then it makes sense to break up your system to
move computationally intensive tasks into their own services.

While there are lots of other decent reasons to break up a system, if you
can't invoke at least one of the two above reasons, you may be shooting
yourself in the foot. I think he's dead on when he points out that if you
don't have engineering discipline in the monolith, then you won't have it in
the microservices.

~~~
keanebean86
I have this idea for a new framework/language. I'm sure it either already
exists or is a dumb idea in practice, but anyway.

You build a monolithic application. Everyone works on the same code base.
Things are broken up into modules/classes/packages. From the programmer's
point of view it's just like working on a standard Java project or something
similar.

The magic happens at the method and module boundaries. When the application is
first started everything works normally. Methods call other methods using
addresses. As the application runs some parts of it become hotter than other
parts. At some trigger point an included process starts that spins up 1+ cloud
instances. Only the hot code is deployed to the instances. If necessary the
instance is load balanced on multiple nodes. You configure the triggers and
whatnot as part of the applications config. The framework/language would
either come with support for popular cloud services or allow you to create
whatever system you need to create the instances.

My hypothetical language/framework would proxy all method calls and remap
object instances to the new instance(s). If the extracted code cools down
enough, it is integrated back into the main monolith. At that point proxying
is turned off and the methods use addresses again.

Using this approach you get all the advantages of a monolith (interface
compatibility checked by the compiler, not needing EVERY service to write its
own HTTP code, etc.). Of course you can't optimize latency as easily, and
merging is harder with monoliths. There are undoubtedly a hundred other
reasons why this is a terrible idea.
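For what it's worth, the dispatch part of the idea can be faked in a few lines. Here's a toy Python sketch where the "remote" hop is simulated in-process and every name (threshold, function, the call log) is made up for illustration:

```python
import functools

# Toy sketch of "automatically extract hot code": proxy every call,
# and above a call-count threshold route it through a remote stub.
# A real version would serialize arguments and go over the network.
HOT_THRESHOLD = 3
remote_calls = []  # record of calls that were "shipped off-box"

def autoscale(fn):
    """Proxy calls to fn; once it's hot, dispatch 'remotely'."""
    state = {"calls": 0}

    def remote_stub(*args, **kwargs):
        remote_calls.append(fn.__name__)  # pretend this crossed the network
        return fn(*args, **kwargs)

    @functools.wraps(fn)
    def proxy(*args, **kwargs):
        state["calls"] += 1
        if state["calls"] > HOT_THRESHOLD:
            return remote_stub(*args, **kwargs)  # hot: route to an "instance"
        return fn(*args, **kwargs)               # cold: ordinary local call
    return proxy

@autoscale
def render_thumbnail(n):
    return n * n  # stand-in for expensive work

results = [render_thumbnail(i) for i in range(5)]
print(results)       # [0, 1, 4, 9, 16]
print(remote_calls)  # the 4th and 5th calls went "remote"
```

The hard part, of course, is everything this sketch waves away: serialization, shared state, and the failure modes the replies below get into.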

~~~
thinkingkong
It's not a terrible idea, but introducing a network boundary adds all sorts
of constraints and issues that a normal program flow can safely ignore.
Network partitions don't happen within a local system. I'd read up on CAP and
the Fallacies of Distributed Computing; they'll more or less explain the
challenges.

~~~
jayd16
>a normal program flow can safely ignore.

This isn't really true, other than that network errors are more likely than a
machine getting shut down; you should really be writing your code as if
something could go wrong at any moment.

~~~
daxfohl
It's a different class of error, though. You can have code that ensures
transactional integrity everywhere in the case of power failure, but that
alone doesn't mean it handles network partitions, 429 responses, timeouts,
token expirations, corrupt responses, etc. efficiently or correctly.
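A toy Python sketch of that distinction (the exception names are illustrative stand-ins for timeouts, 429s, and the rest):

```python
import time

# Remote calls fail in ways local calls never do, and the failure
# classes need different handling: retry transient ones, surface
# the rest immediately.
class Retryable(Exception): pass       # timeout, 429, transient partition
class NonRetryable(Exception): pass    # corrupt response, expired token

def call_with_retry(fn, attempts=3, backoff=0.0):
    """Retry only the transient class; let everything else propagate."""
    for attempt in range(attempts):
        try:
            return fn()
        except Retryable:
            if attempt == attempts - 1:
                raise  # out of retries
            time.sleep(backoff * (2 ** attempt))  # exponential backoff

failures = {"left": 2}
def flaky():
    """Fails twice with a 'timeout', then succeeds."""
    if failures["left"] > 0:
        failures["left"] -= 1
        raise Retryable("timeout")
    return "ok"

result = call_with_retry(flaky)
print(result)  # "ok", after two transient failures
```

Code written only for the power-failure case has none of this machinery, which is the point: surviving crashes and surviving a network are different problems.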

------
bob1029
We wound up taking a similar journey:

Monolith => Microservices => Monolith

I wouldn't say the journey was completely pointless, because the fact that we
had to deploy 10+ services to make a single environment whole required us to
build extremely powerful CI/CD management tools that we happen to be able to
re-use in the (new) monolith case today. This journey was also a really good
growth and learning opportunity for the team. Everyone who has touched this
project and has seen both ends of the distributed<=>monolith spectrum is now
radicalized towards preferring the monolith approach.

On the trip back into a monolith, we didn't just stop with the binary outputs
of our codebase. We also made the entire codebase a monorepo. We have a single
solution (VS2019) within that monorepo which tracks all of our projects.
Prior, we had upwards of 15 different repositories to keep track of. Being
able to right-click on a type, select "View all References" and legitimately
get every possible reference to that type across the entire enterprise is the
most powerful thing I have seen yet in my career.

------
sebringj
My experience with microservices has been just a shift in worries: I don't
worry about scale or ssh configs, but instead I worry about CloudFormation
and CloudWatch, or billing impact. It has also been a challenge to get local
testing to work easily, and quite a lot of meetings and discussions have been
used up on that alone. I don't find the microservice pitch from a developer
perspective to be easier at all; actually it's harder overall. I like the
approach of gcloud or Elastic Beanstalk better for cloud, as you get
auto-scaling but can still do local testing easily with a monolith. The use
case for microservices, IMO, is more that you have a couple of highly-used
sets of functionality that are disproportionate to your monolith and can be
split out to save money, not building everything around microservices and
pretending everything is easier. Personally, I feel my cognitive load
increases when using microservices purely.

------
holoduke
Designing an application from scratch as pure microservices is, in my
opinion, over-engineering for possible future performance issues. Splitting
your application into many services requires a lot of thinking and designing;
challenges with syncing, communication, etc. are not always easy to deal
with. That's why I agree with starting as a monolith, but with architectural
principles that still give you multiple modules/components. I would, for
example, never split up the database into multiple databases.

~~~
9dev
I'm totally with you here, but from my experience with a chaotic team, people
end up doing weird things when they have access to the full database from
every microservice: say you have an authorization service and several
applications with a public API that rely on tokens signed by that service.
Suddenly we get a requirement to automatically generate a user account when
something happens in another application. I didn't watch what my devs were
doing for a second, and suddenly the offending application does an insert on
the users table. The developer even went the extra mile to copy-paste the
token generation method into their code.

I ended up restricting database access, but wouldn't have thought that was
necessary. To me it was obvious all services should only communicate via their
public API, but I guess that's not so much of a no-brainer as I thought.

~~~
uncletaco
> The developer even went the extra mile to copy-paste the token generation
> method into their code.

I had to make an authorization service and the idea to use a single authorizer
to handle token generation and authorization was shot down by management due
to worry about "lambda startup times". I complained that the startup time is
less than a second (for nodejs) and honestly would not be an issue. They gave
the task to someone else to have the token generated in their service. The
developer did it by copying the code I'd written into their own service
verbatim.

This is why I don't like microservices as we do them. Management would rather
we wrote small programs with a lot of duplicated functionality in many repos
instead of writing a large program where we can enforce some discipline. This
is also better for them because they meet with us individually to ask for
functionality rather than have design or architecture meetings where we can
push back on implementation details.

~~~
baq
Why is management making engineering and architectural decisions? Reeks of
micromanagement. Tell them to solve people problems and let engineers solve
technical problems. Update your resume regardless of outcome.

------
reading-at-work
I think the underlying point, as expressed by the author, is that trendy new
architecture patterns will never be a panacea for bad engineering, though
that's often how they're implicitly sold.

~~~
rjkennedy98
Is it actually sold as a way to fix bad engineering? I've literally never
heard that before. I think almost everyone knows that microservices are hard.

On the other hand they are often sold as a way to increase developer velocity.
And I do sometimes wonder if that is the case (based on personal experience).

~~~
mrits
We did a massive rewrite of a 7-year-old giant code base to microservices,
gradually, over a couple of years. One of the few positives we gained from it
was that when we crashed, we didn't bring down everything at once. The
biggest negative was that we ended up with an extremely noisy event bus that
made backups and diagnosing out-of-sync issues very problematic.

------
markbnj
I often agree with Kelsey Hightower, but there are so many things he doesn't
mention here. For example, being able to independently deploy components frees
up certain kinds of development workflows. Distributed components also scale
and fail independently, and you can use nifty things like message queues
between them to provide resilience and soak up load spikes. I'm sure the
pattern has often been applied in the wrong use cases, and that many people
have over-applied it, but "the monolith is the future" seems just as wrong as
"microservices are the future." We are nowhere near the size of a large
bank... or even a small bank, and yet we've benefited from a distributed set
of independently deployable and scalable components. You can call them
microservices, or not. I can think of ways we could restructure on a
monolithic backend, but just noodling on the idea leaves me with more
constraints than benefits. Idk, it's a thought-provoking statement at least,
but I sort of wish we'd stop reacting to fads with anti-fads.

~~~
0x445442
Being able to independently deploy components is only useful if you can test
the deployed components in isolation. This requires extremely well thought out
interfaces and is really what makes the components independent.

It seems in many cases teams have decided to port their highly coupled
monoliths over to highly coupled distributed monoliths and now they have the
worst of both worlds.

------
MosheZada
Why do we need to choose between monolith and microservices? What about
simply "services"? A monolith doesn't have to be split into 50 microservices;
it can be split into 3 services.

~~~
js8
> What about simply "services"?

Let's call them "tiers". I think 3 is a reasonable number.

~~~
disintegore
"Tiers" imply some sort of vertical partitioning. These "non-micro" services
may coexist on a layer, transcend boundaries, overlap, etc.

In any case, there's already such a term. It's called SOA:
[https://en.wikipedia.org/wiki/Service-oriented_architecture](https://en.wikipedia.org/wiki/Service-oriented_architecture).
Microservices arguably evolved out of this.

------
zackmorris
What's really going on here is that a remote procedure call (RPC) to a
microservice or REST API is conceptually equivalent to calling a function in a
library specified by an interface in a header file. There is an incredible
amount of handwaving that obfuscates minutiae around synchronous blocking vs
asynchronous callbacks/promises/async-await but there is no reason why we
can't convert from the distributed to local paradigm losslessly.

What I'm not seeing is any attempt to go in the opposite direction. A compiler
should be able to look at ordinary code and slice it up into microservices
automagically, converting the header interfaces to API specifications like
OpenAPI/Swagger. We should literally be able to write a monolithic program in
any functional or C-style imperative language and get a conversion to a bunch
of lambda functions. If that doesn't work, then something is seriously wrong
(probably having to do with determinism, like inadequate exception handling
for timeouts, etc).
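The local/remote equivalence is easy to demonstrate. Here's a toy Python sketch where an in-process JSON round trip stands in for the wire and for the generated RPC stub (function and field names are made up):

```python
import json

# Toy sketch of "RPC is just a function call": the same function can
# be invoked directly, or through a fake 'wire' that serializes the
# arguments and result the way a generated microservice stub would.

def price_quote(symbol, qty):
    return {"symbol": symbol, "total": qty * 10.0}

def make_remote(fn):
    """Wrap fn so calls round-trip through JSON, like an RPC stub."""
    def stub(*args):
        request = json.dumps({"args": args})          # client serializes
        payload = json.loads(request)                 # 'server' deserializes
        response = json.dumps(fn(*payload["args"]))   # server runs + serializes
        return json.loads(response)                   # client deserializes
    return stub

local = price_quote("ACME", 3)
remote = make_remote(price_quote)("ACME", 3)
print(local == remote)  # True: same contract either way
```

Going the other direction, as described above, would mean a compiler emitting the `make_remote` layer (plus the API spec) from the plain function signature, instead of a human writing that glue by hand.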

So frankly, the first day I saw lambdas, I was skeptical. I don't understand
the point of writing all of the glue code by hand. Incidentally, I reached
this same conclusion after manually building a large REST API around the JSON
API standard just before GraphQL went mainstream and made a mockery of my
efforts.

I think that the HTTP spec and things like separation of concerns serve a
purpose for human readability. But we're well past the point where the gains
made by the early internet are providing dividends in today's highly-
interoperating stuff like Rust, Go and Node.js. Basically 90% of the work done
today would be considered a waste of time (bike shedding and cargo culting) in
the 1980s and 1990s. Just my two cents.

------
dcwca
The author does not seem to understand when to correctly apply microservices.
There are two basic use cases: 1) Different parts of your solution have
different load patterns and it is economically beneficial to scale them at
different rates and 2) Different teams need to be able to work & ship
autonomously. It's not at all about technical merits or architectural beauty.
It's about people and costs.

~~~
Glyptodon
I kind of think the items (1) and (2) you list don't automatically mean micro-
services, so much as they mean separation of concerns can be beneficial.

Isn't there room for a middle ground with modularity that can live in between
a full blown monolith or a full blown microservices pattern, particularly for
operations that are more medium scale?

~~~
unlinked_dll
Yea just take the ideas that apply to your problem domain and implement them
in a way that's sane. Design/architecture patterns are much more useful as
templates to be specialized for your problem.

I really don't like the dogmatic view of architectures. Leaves no room for
craftsmanship, and it's only really useful for creating code monkeys that have
to follow a spec and need to be interchangeable cogs in the machine.

------
philipkglass
I have seen monoliths successfully transition parts of their functionality
into small services. I have not seen a microservice-first approach work very
well. When you're building something new, your intuitions about which parts
are going to be tightly coupled and which parts are going to be relatively
independent are just guesswork.

Once you've iterated on a monolith enough to see which parts are relatively
independent and would actually _benefit_ from decoupling, _then_ you can move
them into separate services.

One example that comes to mind: I wrote a recommendation service that also
handled user feedback events. This was the easiest way to start. After about a
year I saw that we were iterating faster on the event processing than on the
actual rec delivery. We were also deploying this monolith across more machines
mostly to scale up event handling capacity. So we broke the high volume event
handling out into a separate service that was smaller and optimized
exclusively for event processing.

------
DanielBMarkham
With respect to the author, who probably is a much smarter person than I am,
this is yet another in a long, long series of HN articles that should be
grouped under "I don't know what the hell X is, but I was an expert in it, and
I can tell you it sucks"

I've seen X be a dozen things: UML, databases, User Stories, Functional
Programming, Testing... It's too much to list.

Yes. If you do it that way it will hurt, and you should stop. I don't know
this author, but I suspect that many people who jump into microservices are
not getting the foundations they need to be successful. The idea that
microservices are just broken-up monoliths is a big clue. They're spot on
about marketing and spend, though. In this community we're quick to hype and
sell things to one another whether it's a good idea or not.

I've seen some great criticisms of microservices, some of which made me pause.
Now, however, I think there's a reasonable way through the obstacles. It
doesn't have to be a mess. Nothing is a magic bullet, but just about anything
will work if your game is good enough. You don't buy a bright and shiny to make
your game better. Doesn't work like that.

~~~
davidy123
Kelsey Hightower is a rather pivotal person in the Kubernetes world. It's
unusual that he's basically cautioning people not to use the system he's so
involved in. His point is that many people are doing microservices wrong.

~~~
DanielBMarkham
Kubernetes is a deployment strategy. It should be orthogonal to microservices.

I'll delete the comment if I was unnecessarily cruel or missed the sarcasm. It
was not intentional. But it is important to understand that you want to think
of persistence and deployment coupling as independently of your microservices
strategy as possible. The vast majority of problems we see with people
implementing microservices come from people carrying baggage over from some
previous project or pet technology. K8S's great. It's just not relevant here.

~~~
davidy123
You're right; though k8s is often associated with microservices, you can
deploy a monolith with it. But there's a disconnect: as an expert in
associated areas, he's saying people aren't doing microservices properly, and
you're saying to just do them properly.

------
JMTQp8lwXL
It's okay to ship a bunch of services together, if you can be serious about
keeping hard boundaries between subsystems. Microservices force you to do this
(e.g., your microservices might have to communicate via REST APIs, but they
can't access each other's internal implementation details).

Your customers do not care about your monolith. They don't see a monolith; all
they see is features. Untangling it may or may not be the right choice.

In a certain set of situations, the path forward, instead of trying to
untangle your monolith, is (if you so desire) to create new services that
actually are true microservices, and keep your monolith as-is.

~~~
weego
I agree with your words, but the realist in me has to point out that
microservices don't force you to do anything; it's a pattern, not a highly
opinionated and restrictive framework.

There are plenty of clusterfuck hybrids out there with services sharing
database state etc. Anything can be an antipattern when you add people into
the mix.

------
bkanber
I've settled on a compromise in this debate. Halfway between monoliths and
microservices is the shared-library model. Instead of creating a microservice
for your image processing, break it out into a standalone NPM or Composer or
whatever module, then use that in your monolith. Gives you good separation of
code and responsibilities, gives you good upgrade paths for your monoliths,
avoids the overhead of microservices.
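For concreteness, a minimal sketch of that shared-library model, with a
hypothetical `@acme/image-tools` package standing in for the broken-out NPM
module (the package name and the trivial resize are invented):

```typescript
// Hypothetical contents of a standalone package (e.g. "@acme/image-tools"),
// versioned and published independently of the monolith that consumes it.
export interface ResizeOptions {
  width: number;
  height: number;
}

export function resize(pixels: Uint8Array, opts: ResizeOptions): Uint8Array {
  // Real resampling elided; the point is the package boundary, not the math.
  return pixels.slice(0, opts.width * opts.height);
}

// Inside the monolith this would be `import { resize } from "@acme/image-tools"`;
// the call is an ordinary in-process function call, no network hop involved.
const thumbnail = resize(new Uint8Array(1024), { width: 16, height: 16 });
console.log(thumbnail.length); // 256
```

The monolith upgrades the library by bumping a version number, same as any
other dependency.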

~~~
buckminster
Just like we were doing with COM and DLLs more than twenty years ago! It
worked very well.

~~~
bkanber
Yes, exactly! Everything old is new again.

------
thayne
This is based on the assumption that the purpose of microservices is to split
up code. And that isn't a good reason to use microservices, because it is
possible to modularize code and still combine it into a monolith. But for me
the purpose of microservices is to isolate different pieces of functionality
in order to increase stability. Microservices allow you to scale the
resources available to each service independently and allocate the
appropriate resources for each, deploy changes to individual services one at
a time rather than changing the entire application at once, have different
SLAs for different services, and, if done right, gracefully handle failure
modes in one service without taking down the whole application.

------
0x445442
The article touched on it and I've experienced the same thing... The added
complexity of having to manage 10s to 100s of different code bases,
pipelines and deployment concerns is a huge downside and should be considered
before adopting the new shiny.

------
LennyWhiteJr
One thing a microservice architecture does really well is enforce bounded
contexts. Oh you want to access that data? Well you need to go through the
public API because it exists in a separate process. In a monolith it's all too
easy to just 'call this piece of code and grab what I need' (no one will ever
know). Project isolation can help but it's not a silver bullet.
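That in-process discipline can at least be approximated where the module
system is the boundary; a sketch with an invented billing module whose
backing table simply isn't exported:

```typescript
// Sketch of a bounded context inside a monolith. The "billing" module and
// its data are made up; the point is that invoiceTable is module-private,
// so other code can only go through the sanctioned function below.
type Invoice = { id: string; cents: number };

const invoiceTable = new Map<string, Invoice>([
  ["inv-1", { id: "inv-1", cents: 4200 }],
]);

// The public API: the only export, hence the only way in.
export function getInvoiceTotal(id: string): number | undefined {
  return invoiceTable.get(id)?.cents;
}

console.log(getInvoiceTotal("inv-1")); // 4200
console.log(getInvoiceTotal("inv-9")); // undefined
```

Of course nothing stops someone from adding another export in a code review
lapse, which is exactly the "not a silver bullet" part.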

The author makes good points though, there are many places doing microservices
because it's the hip thing to do and a monolith would easily suffice. But if
you have independent software teams in your org that should be able to deploy
code independently, then microservices make a lot of sense.

As in all things engineering - it depends :)

~~~
cc81
That is true and when you are at big enough scale it probably works.

But when you have chosen a cool new microservice architecture for your team to
implement, and you grab that small user story that spans 3-4 different
services, things suddenly go from "hey, easy implementation and refactor, and
the compiler will tell me if I fucked up" to something much more time
consuming and error prone.

In an ideal world that would not happen, of course. Just like, in an ideal
world, a monolith is built correctly as well.

------
djsumdog
Microservices are really about engineering around things. It's about making
your team's stuff resistant to everyone else deploying things that break
stuff. It's turning development into a weird biological system-hybrid that
I've written about before:

[https://battlepenguin.com/tech/microservices-and-biological-systems/](https://battlepenguin.com/tech/microservices-and-biological-systems/)

I agree with the author on a lot of points. You shouldn't start out with
bricks. You should build the house first, and once you get that figured out,
only then should you turn the individual rooms into modular building blocks.

------
rantwasp
as always the truth is somewhere in the middle. monoliths make a lot of sense
when you're starting out and you can see all your code in one place and you
can build, test and deploy everything together. as the service grows there are
arguments to be made around splitting it (based on usage patterns, loads,
etc).

the thing that most people don't get is: microservices are not free (now
you're doing all this devops stuff N times and you have to think long and hard
about changes that need to happen across api boundaries). The anti-pattern is
that you take your monolith and you split it in 10 but apart from actually
doing all this work you still treat it as a monolith (ie you still do mono-
repo because it's convenient, the deployment still happens at the same time
for all services, you centralize everything when it comes to logging and
metrics and you even force people to do things in a certain way when it comes
to their service). Everything grinds to a halt and now you're more concerned
about "growing" the team to fix the issues that popped up and maybe chasing
the new shiny thing to keep your resume up-to-date. Even worse people start
feeling like they "own" their service and now the decisions that are made are
maybe locally optimal but who cares about global optimization.

So my take is: start with a monolith and in 85% of the cases you'll be just
fine forever. you don't need all the bells and whistles to get the job done.
Introduce new things only to solve actual pain-points, and when you do,
actually think through what it means to introduce them (so go N->N+1 and
never 1->N)

~~~
crimsonalucard
>as always the truth is somewhere in the middle

This is the greatest illusion of our time. We tend to think of all things in
the world as a "middle" between extremes, like apples and oranges. If this is
your viewpoint then you are biased; the truth is just as likely to lie at an
extreme as in the middle.

~~~
rantwasp
fair point. the likelihood of it being 100% oranges (or apples) is 0, so not
the true middle, but in between

------
whack
I agree that for most startups, monoliths are all you need. But at some point,
as your engineering organisation keeps growing in size, there are other
benefits to microservices. Benefits not addressed in the article.

Data isolation. Allow individual teams/services to own their own data stores,
and prevent any other team/package/service from reading or writing to their
data store, and inadvertently breaking the associated invariants.

Performance isolation. Prevent one team/feature hogging too much
memory/cpu/io, and negatively impacting every other team as well. Debugging
performance hogs in a sufficiently large monolith becomes infeasible at a
certain point.

Deployment isolation. Allow individual teams to make code updates and
deployments whenever they want, without having to be tied down by a company-
wide deployment process.

Language/dependency isolation. Allow different teams to use whatever language,
dependencies, and dependency versions make most sense, for their use case.

At bigger companies that have hundreds or thousands of engineers, monoliths
simply do not scale, and need to be broken down into more manageable pieces.
It's unfortunate that smaller companies start cargo-culting these same
practices without thinking critically about whether they actually need them.

------
c3534l
I don't think we're facing new problems with cloud computing and large
systems. We're facing the same problems we've been faced with again and again
since the very beginning of computer science. Over time, the ease and scale of
what you can do with computer resources increases and we have to organize and
understand the existing resources we have.

But we keep reinventing the same solutions at each scale. At one time we had
to invent functions to enforce segregation of responsibilities and create
abstractions and shorthand. We had to group these together in modules and
libraries. We had this clump of programs running on a computer that we had to
organize into an operating system. Now an operating system is nearly a program
or function and people are regurgitating the Unix philosophy and the end-to-
end principle like it's a new thing. In the end, we're going to wind up with a
well-architected series of integrated microservices which present a
comprehensible interface to users through a handful of abstractions that have
proven useful over the years.

Computing is cheap enough that we can now talk about meta-computing, a higher
level of abstraction from a computer, which is multiple layers of abstraction
on top of each other. Now we just have to build the next layer. And I think it
will basically be a sort of meta-operating system. The same things, but we'll
call it "orchestration" and "microservices" instead of a file/process/whatever
manager and threads.

At the moment, however, we're still offering piecemeal services and products
and so we don't have many fully formed concepts of what it is to build a cloud
system. So things are still a bit chaotic, but at some point in near future
we'll get there.

------
NightlyDev
A lot of people seem to think that monoliths are a bad thing and
microservices a good thing. That isn't the case. There is nothing wrong with
monoliths.

Trying to make everything work as microservices just for the sake of it, or
because it sounds cool, is just a terrible idea.

Start out with a monolith, and if you later see a need to create a
microservice, then do it, when you have more knowledge about the boundaries
of the service.

I love creating high performance services and playing with containers. It
sure is cool to have microservices that can scale linearly over a lot of
machines. I also enjoy using the latest frameworks.

But guess what: my first ever service just uses a cheap dedicated server,
serves an average of ~250 highly dynamic webpages each second while still
using less than 7% CPU, on PHP and MariaDB. The last 12 years have resulted in
about 6 hours of downtime. A couple of hours planned, a couple as a result of
denial of service attacks and a couple when there was a power issue at the
datacenter.

So what I'm trying to say is that more complicated doesn't mean that it's
better.

------
ChrisMarshallNY
I think it depends on the application.

Some are best served with aggregates; some with monoliths.

For myself, I have always developed in a "layered," and "modular" manner, with
discrete subprojects; each, given its own configuration management and
lifecycle. The resultant applications tend to be "monolithic," but some are
parts of a larger, loosely-connected architecture.

Works for me, but YMMV.

------
andrewrothman
My approach to this is pretty simple. For existing systems, don't refactor
from one to the other unnecessarily. If you do decide to refactor, do it in
small steps, one piece at a time.

There are huge advantages to both patterns. For newer systems, if there's a
clear enough split such as "backend" and "frontend" (where frontend is a
statically-hosted SPA) then it could be advantageous to keep the codebases and
deployments separate.

If data is shared between services, then keeping the code to interact with the
data all in one service is likely most useful.

I like to use a few services, with one often ending up being the large
"monolith" potentially with a few supporting microservices on the side as it
makes sense. "As it makes sense" means that the service has a specific
individual encapsulated concern. Billing could be a good example, depending on
how it integrates with the rest of the system.

I find microservices very useful to encapsulate independent concerns and for
experimentation (don't want to rewrite the whole app using some new tech, but
the billing service is small enough to give it a shot). The main problem
points are the glue that holds it all together, duplicating code shared
between services, and changing apis / data schema.

Ultimately, it's best to know what you and your team are/will be most
comfortable with managing based on everyone's skillsets and the product at
hand. If you spend time to understand the differences between the patterns in
practice, and remain realistic about the advantages and disadvantages of both,
you can arrive at an informed decision that works well for your team.

And lastly, make sure you pick something and then build your product. These
details don't mean anything to your customers. If you made the wrong choice,
you'll know when it's the right time to switch.

------
artellectual
I think most people who flock to Micro Services are looking for a better
design / architecture choice for software. The thing that Micro Services can
teach you is the Single Responsibility Principle and learning to segregate
responsibility. You need to define a clear scope for the 'modules' in your
system.

At the end of the day if you don't architect your system correctly Monolith /
Micro Services won't help you.

For me and my team, I now have one ideology regarding this topic: I don't
care whether it's a monolith or microservices, as long as I can have clear
segregation of responsibility between the different modules. Our company now
has a monolith (core banking app) with modules that handle their own
responsibility and communicate; whether that's over HTTP or an internal
communication bus we developed doesn't matter. We can easily move modules out
into a separate service if we need to.

What determines whether we move things into their own service? A few things.
If we need to deploy / scale something independently, we will decide to take
on the overhead and move things out into their own external service. Or if
something has a specific security requirement that will increase the
complexity of the overall system, we will isolate that and deploy it
separately. Otherwise we keep things as a monolith. For example, in banking
there are many things, like the ledger / transaction data, that are highly
sensitive and have certain security requirements, like being hosted on a
cloud that meets certain standards. We will deploy this part on GCP.

People seem to love to stereotype and find a one-solution-fits-all. There is
no such thing. Everything in engineering requires a deep level of
understanding of the problem and making choices as the problems present
themselves.

I believe most apps can start their life out as a monolith, and can grow and
divide as needed. There just isn't a one size fits all for anything in tech.
That's what I've learned.

~~~
jackcosgrove
I think the monolith vs microservice question is more about organizational
needs than anything else. Technically neither is superior, and the debate
pales compared to the need to only hire quality developers. Good developers
can make either pattern work, and bad developers can break either pattern.

~~~
artellectual
You said it man. I agree with you.

------
fyndor
The argument I have always heard is that microservices are a way to solve
problems arising in large organizations with multiple teams trying to push
code all into the same repository and managing the deployments from such an
organization. I get why separating the different teams' services into
separate silos helps, but the fact that the solution puts a network between
the different parts where there was none feels wrong. You are trading one
problem for another problem. Networks are slower and less reliable than
calling a function in the same application.

Microservices may be the solution to this problem right now, but I believe
someone is going to come up with some other solution (tooling etc) that allows
you to get the benefits of migrating to microservices without having to add
an unnecessary network layer just to solve an organizational problem.

------
theflyinghorse
The last monolith I worked on was a 6.5M LOC FinTech SaaS product. It was
extremely hard to work with from a development perspective.

\- Every change you made could break things elsewhere in a surprising way.

\- Deploying changes was a nightmare - we had volunteer teams be on daily
rotations of merging because merging was so incredibly difficult.

\- Different teams and groups of teams would acquire this tribal knowledge of
how to do things in their corner of the system. You needed to acquire the
tribal knowledge before you could start to work in that region of code.

\- Build times were atrocious! Sometimes folks would come up with a way to
only build a part of the application and that would be considered innovation.
"hey, 15 minute build instead of 1h!"

\- Our QAs were STRESSED

Today I am convinced that the product was several applications masquerading as
one. I am not willing to subject myself to that again.

------
osdev
Modular Monoliths would be a healthy middle-ground between monoliths and
micro-services. My comment on the same topic of modular monoliths is here:
[https://news.ycombinator.com/item?id=21853902](https://news.ycombinator.com/item?id=21853902)

------
jeffnappi
I'm happy to see this article on the front page; it certainly resonates with
my perspective. However, I'm not sure it's really an either-or situation.

One thing I've noticed is that big tech is taking advantage of giant mono-
repos, while everyone else is stuck with 10s-100s of git repositories
haphazardly connected and managed. For example - most off-the-shelf CI systems
and VCS platforms smaller organizations are using are per-repository (GH, GH
Issues, CircleCI, etc).

Managing micro-services would be a far easier task when all of the services
(and infrastructure as code) live in the same repository, changes can be
staged across multiple services at once, and tests are automatically run for
only the necessary dependencies.

Are there solutions for effective mono-repo management outside of FAANG? Am I
wrong? :)

------
maytc
Not sure where in history people started to believe that micro-service
architecture is a simpler and easier-to-operate architecture. Each service
needs the same operational overhead as one monolith.

The big blocker most monoliths face as the application gets bigger and is
deployed onto more and more machines is that _releases become a bottleneck_.
Scaling a monolith's application is difficult because partial rollout is
usually not possible, as "services" are often tightly coupled.

Micro-service architecture forces services behind a set of APIs. While the
APIs may have breaking changes, each can be independently deployed. In other
words, teams can do releases at their own pace.

The main cost-benefit analysis here is: how important are independent
releases vs. the cost of operational overhead?

------
one2know
Perhaps the solution is a single server where one can drop a zip file
containing their microservice and have it automatically deploy without the
hassle of setting up a new pod/server/container. Congratulations you have
reinvented java servlets 1.0 circa 1996.

~~~
zelly
My prediction: I think soon we will see public clouds provide WebAssembly
hosting, where you upload a single WebAssembly binary and it just works. Your
whole application is compiled and bundled to that WebAssembly. There will even
be glue code that helps migration from a microservices architecture by
stitching them together in parallel containers, all compiled down to
WebAssembly of course.

------
wellpast
If you’re choosing to hop the network as a means of decoupling code, that’s
absurd, because you can decouple much more cheaply with PL and build
facilities, without introducing a network hop.

But there are problems that justify a service. Decoupling code is just not one
of those problems.

------
mstump
I call B.S. He's conflating two things. One is a deployment infrastructure,
another is an application pattern. I can deploy a monolith in k8s and I can
deploy micro-services on a single server or a fleet of on-prem servers using
any of the legacy dev-ops deployment automation tools. Sure k8s makes doing
micro-services easier because it’s got a lot of the raw building blocks
necessary to handle the CD process but in no way are the two concepts related.
They’re tackling different problems. He's making the same mistake that he's
criticizing others for by equating the two. What he is really describing is an
organization doing a “big bang” refactor with poor planning and execution.

------
coding123
I was trying to find that article someone wrote about how everyone doing
microservices should have about 500 databases now. (actually I can't remember
fully what that article was) But I was just hoping that whoever wrote it could
provide details as it's clear the pendulum is definitely swinging back to
monoliths.

Also I feel like all tech goes this way. Years ago to do "big data" you had
drill, kafka, HDFS, pick your cloudera or hortonworks, roll up your HBase,
your storm, spark - hire a team to install it.

It seems like now all we do is purchase Elastic cloud, and write a one-off
spark script or pandas job and call it a f*cking night.

I guess things go in circles.

------
marsdepinski
90% couldn't write a good monolith. 90% can't write good micro-services. Both
share the same problem of modularity and separation of concerns. One comes
with much lower development and operational costs for the competent. The other
hides and creates new problems. I've developed on both architectures and there
are use cases for each. It's never all or nothing. I can't even imagine how
many engineers would be needed now to maintain the solid money making
monoliths I've worked on with teams up to 8 over a decade ago. When the money
dries up we'll be back on the monolith train.

------
beardedman
I wish we could appreciate the middle ground more & stop evangelising
extremes. Our work is complex & deserves considered approaches that don't
always fall within the current zeitgeist of our industry.

------
trey-jones
So many problems arise from the idea that "we are going to do this thing to
the maximum extreme and exclusion of other possibilities."

Just look at the US legislature right now. Anyway, it doesn't have to be
Monoliths vs. Microservices. It can be a compromise. Perhaps the microservices
are a bit less segmented than we have been imagining. It might be OK for a
microservice to do more than one job. As the highlight shows, the fundamental
ingredient is Engineering Discipline. If we strive for that it might work out
in a Monolith, Microservice, or somewhere in between.

------
pbiggar
We have a similar thesis in Dark. People do microservices because they have to
(well, some people have to, some people do them cause they're shiny). But in
terms of actually understanding your application, microservices really don't
help. And they bring so much complexity with them.

Our concept is to allow the decoupling of microservices, with the tooling of a
monolith. Kinda hard to describe and we haven't done it yet, but basically
give you the ability to write it as a monolith, but also have the separate
scalability/deployment of a microservice.

------
_bxg1
I think the greater takeaway is, "there aren't many technical recommendations
that can be made in a broad, sweeping way". Anyone who says differently is
just selling you a trend. The right solution is based on the use-case. This is
the third or fourth time we've oscillated between monoliths and microservices
(under different names, of course). It won't be the last. It happens in every
corner of the industry. There is no one-size-fits-all solution to _anything_ ,
as seductive as that idea is.

------
rschloming
I would agree with this article a lot more if it said that most people don't
understand the problem microservices are trying to solve, but instead I think
it contributes to the confusion.

It's true that a microservice doesn't magically create cleaner code, better
designs, or anything like that. It can actually make all those things harder.
Designing good remote APIs is hard, maintaining consistent code quality over
lots of different codebases is hard.

All a microservice does is give you a way to independently release the code
that lives behind a small chunk of your larger API (e.g.
[http://apis.uber-for-cats/v2/litter-boxes](http://apis.uber-for-cats/v2/litter-boxes)). This is
why a good API gateway that's built for microservices is one of the first
tools you actually need, and can get you surprisingly far.
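The core of that gateway capability is just prefix routing from one public
API onto independently released backends; a toy sketch (the internal
hostnames are invented for illustration):

```typescript
// Toy version of what an API gateway does for microservices: map path
// prefixes of one public API onto separately deployed backends.
const routes: Array<[prefix: string, backend: string]> = [
  ["/v2/litter-boxes", "http://litter-box-service.internal"],
  ["/", "http://monolith.internal"], // fallback: everything else
];

function resolveBackend(path: string): string {
  // First matching prefix wins; "/" always matches, so a route is found.
  const [, backend] = routes.find(([prefix]) => path.startsWith(prefix))!;
  return backend;
}

console.log(resolveBackend("/v2/litter-boxes/42")); // http://litter-box-service.internal
console.log(resolveBackend("/v2/cats"));            // http://monolith.internal
```

Carving one prefix out of the monolith's API is exactly the "release this
chunk independently" move, with everything else still falling through to the
monolith.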

It turns out that despite the complexity, this is an enormously valuable
capability in a lot of different situations. Say you have a monolith that you
can only release once every six months and you urgently need to get a new
feature out the door. Or maybe half your code can't change very fast because
it's mission critical for millions of users, but the other half wants to
change really fast because you're trying to expand your product.

Of course the big bang refactor into microservices that he describes isn't
really going to help you in any of these situations, but then again big bang
refactors don't tend to help in much of any situation regardless of whether
microservices are involved. ;-)

------
oneplane
That is a rather pessimistic view... While it's true that microservices aren't
the solution to all problems it does simply extend the separation of concerns
into infrastructure which is practically a perpetuation of good software
design. (just like if you do it right, you should have loose coupling and high
cohesion)

Microservices make a lot of sense when you release often, run multiple
versions, have a lot of people working on independent components or have a lot
of different scalability needs. A monolith can only scale in its entirety and
often only vertically. That means that even if just one component cannot be
locally optimised the whole application has to scale up.

If you only have a single application or task to build software for (i.e. a
CRUD system for a CMS) then it makes no sense to split that out. Just like it
makes no sense to build your own crypto, do your own CRM, do your own RDBMS,
or do your own filesystem for that matter. That would just be adding
overhead and engineering complexity where none is required.

While bad engineering will be bad engineering no matter how it's engineered,
that doesn't make a whole pattern bad just because a lot of people apply it
wrong. That goes for microservices as well as monoliths (and XaaS).

------
notJim
I dunno if I'm taking crazy pills, but it seems like the
organizational/deployment-level concerns supposedly solved by microservices
are much better solved by continuous deployment, and it's weird to me that
this seems to be such a contrarian viewpoint.

Continuous deployment alleviates merge/coordination issues by integrating
small changes frequently, which makes conflicts rare. Deploys are safer, again
because you're deploying small changes often. And if something bad does go
out, you can "roll forward" instead of rolling back, by reverting the bad
commit. This is less harmful to velocity, because it doesn't require rolling
back the other good commits in the deploy along with the bad ones.

I have less experience with microservices than with continuous deployment, but
they seem to bring a lot of problems. Microservices take the fixed costs of
deploying an application and multiply them by the number of services. Instead
of centralizing one team to update dependencies and infrastructure for the
whole application, every team has to spend 10-20% of their time doing that
work. In the monolith case, everyone on the engineering team is familiar with
the single codebase and architecture. But in microservices land, there are
often more microservices than engineers. So when an engineer leaves, they pass
off a whole pile of code, infrastructure, and architecture patterns that
almost no one has any familiarity with. I do think you could avoid these
problems, but overall microservices seem very high risk for little reward.

The one case I really see for services is when you have tasks with different
load characteristics. But in that case, you can still have N monoliths (for
small N), rather than the massive proliferation of microservices.

~~~
cc81
I think it is about autonomy and planning as well. You own your API, you
plan your features, your backlog will be filled, and you will deliver it. You
will sunset it, and when you on-board new people into the team, they only
need to learn your part.

This requires quite a big application though for it to be worth it in my
opinion.

------
dathinab
The answer is, as often, somewhere in between.

You might always want to break up a monolith, but there is indeed little
reason to do it with micro services. You could just use modules. Or better,
break it into a number of libraries with well defined interfaces, which you
then compose into one monolith binary.

But there are very good reasons to split out some code into services (which
might or might not be micro services, just not in the same process).

One is that it makes it easier to use more than one programming language.
Normally you should avoid that, but there are sometimes reasons for it, for
example if 80% of what you need is implemented in a library available in that
language.

Another one is that you can have different reliability constraints for
different parts of the system (like the number of instances handling load in
parallel).

Another one is reuse between different systems (e.g. sharing user management
via OpenID Connect).

Another one is that you can upgrade part of the system without stopping other
parts.

....(a bunch more)

So in the end I would break it into parts and compose those parts into a
number of services, but I would not bother with the whole "micro" part and
other cloud marketing BS (because that's what it has degraded to).

------
dvirsky
Clear boundaries and contracts are just a side effect of working in a service
architecture, I've found. If this is your reason for doing it - just define a
style guide on how to isolate code and make sure people (or procedures) keep
it.

But there are many other valid reasons for services - different deployment
cycles, better resource utilization, faster and safe deploys, etc. It's just
about using the right tool and thinking about implications.

I don't know if such a framework exists, but I really want a system that
abstracts this to a certain degree - while the contracts between parts of the
system are defined, whether any module works as a service with its own
deployment policy over network or as part of a monolith is not expressed in
application code but as a configuration, and code generation handles the
underlying logic. So you can write your app as a modular monolith, but when
you think that for operational reasons there is a reason to spin off some part
of it as a service, you reconfigure your build rules instead of your code.
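A rough sketch of what such a system could look like (all names are hypothetical, not an existing framework): callers depend on a contract, and whether that contract is served in-process or over the network is a configuration decision rather than an application-code decision.

```python
class BillingContract:
    """The contract application code depends on; transport is a detail."""
    def charge(self, user_id: str, cents: int) -> str:
        raise NotImplementedError


class LocalBilling(BillingContract):
    """Runs inside the monolith process."""
    def charge(self, user_id, cents):
        return f"charged {user_id} {cents}c in-process"


class RemoteBilling(BillingContract):
    """Stands in for a generated network client (gRPC/HTTP)."""
    def __init__(self, base_url: str):
        self.base_url = base_url

    def charge(self, user_id, cents):
        # A real implementation would issue an RPC to self.base_url here.
        return f"charged {user_id} {cents}c via {self.base_url}"


def make_billing(config: dict) -> BillingContract:
    # Spinning the module off as a service is a config change, not a code change.
    if config.get("billing") == "remote":
        return RemoteBilling(config["billing_url"])
    return LocalBilling()


billing = make_billing({"billing": "local"})
print(billing.charge("u1", 500))
```

In practice code generation would produce the remote client from the contract; the application only ever sees `BillingContract`.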

------
Jupe
All designs have trade-offs. When trade-offs appear, you either accept them or
mitigate them...

If it's important to know how many blue widgets are bought at night in Europe,
vs. how many blue watcha-ma-call-its are bought in the evening in the US, and
your location, orders and product data are in separate micro-services, you are
kinda out of luck.

And, as mentioned by others, replication and API wrappers on micro-services
suck for reporting.

If you built an eventing system, you'd be better off tapping into that to
update the central reporting data store (warehouse/lake/etc.). I've used this
myself to "good" effect (some chance of failures, a little behind the times,
etc.).

The central database may be "monolithic" in nature, but at least you'd be
_able_ to report on the data. If you expect to modify data in the feeding
micro-service's databases, then yes, you do have a monolith. But, if it's
"just" for reporting, it's like a dynamically-updated replica of the pertinent
data for your reporting.

------
kjs3
_“We’re gonna break it up and somehow find the engineering discipline we never
had in the first place.”_

This, in a _single sentence_ , captures all of my misgivings/discomfort/etc
with the mad rush to micro-services in my organization. To wit: if we had
the operational maturity to really effectively take on micro-services, we
wouldn't need to rush into it.

------
AzzieElbab
That spiral thing just keeps spinning and spinning. Of course a monolith is an
easier thing to work with now, after we figured out how to write big
multithreaded services in a half-reliable way. We also have large boxes that
do not fail too often and do not cost more than 10 smaller boxes orchestrated
with all the crap you need to run microservice-based systems.

------
fmakunbound
I knew I could ride the micro-services fad out.

It was only ever useful for the massive deployments that only massive systems
like Facebook's needed. However, their engineering teams dominated the
discourse and others followed, pretending they too had the same requirements
even though their engineering teams were small and their systems far simpler.

~~~
y-c-o-m-b
> pretending they too had the same requirements even though their engineering
> teams were small and their systems far simpler

Cue Hadoop, AI/ML, block-chains

------
collyw
The way I look at it, a microservices architecture is the equivalent of a
monolith with a number of unreliable network connections, held together at the
devops layer. You are pushing complexity from the application layer to the
devops layer. Personally I don't see that as an advantage in itself.

------
tflinton
Distributed monoliths (or micro services) do have some advantages:

1\. Easier for users to see "who owns what" (albeit a module pattern could fix
this as well).

2\. Different hardware resources or scaling for different parts of a monolith
really isn't possible. If one module requires 16GB, then every time you scale
horizontally you must have at least 16GB; you're at the mercy of your worst
module in the monolith.

3\. Deploys are very difficult, and as you scale past 10 developers it becomes
increasingly difficult to push up (it takes one person's bad commit to keep
everyone in the organization from deploying).

4\. Security boundaries are easier to define, each "module" in a monolith
effectively has access to all resources for all modules.

5\. Polyglot setups are easier; depending on the base language, you could do a
lot of transpiling in a monolith, but.. ew.

6\. HTTP status codes and request paths can give you a clear view of how calls
are happening in your system. In a monolith you'll generally only get stack
traces on errors, not on successes; usually you need to invest more in static
analysis and APM tooling for a monolith.

7\. Microservices can be cheaper when you scale, you don't have the GCD of
memory/CPU/disk requirements as you do in a monolith.

8\. GCD of implementation details: if one request requires a sticky session,
all of your requests require sticky sessions...

9\. More complex and long builds, most monoliths have component-based hot
reloads, but even those can take 30s to a minute in my experience, and a full
build, that's at least 20.

10\. Harder to unit test. This can vary by language, but without clear
boundaries and resource definitions monoliths can be very tricky to unit test.
Microservices/distributed monoliths are inherently smaller, with clearly
declared resources, so it becomes easier to find where and how data flows
through them.

~~~
cc81
>3\. Deploys are very difficult, and as you scale past 10 developers it
becomes increasingly difficult to push up (it takes one person's bad commit to
keep everyone in the organization from deploying).

I feel that is way too low a number. If you don't have good enough engineering
practices with branching, pull requests/code reviews, and unit/integration
tests to handle 10 people, then microservices will be painful as well.

I would say like 3+ teams at least to really justify it? You can do it earlier
but I don't see it as a necessary benefit.

>7\. Microservices can be cheaper when you scale, you don't have the GCD of
memory/CPU/disk requirements as you do in a monolith.

Yes, but by default they are more expensive until you reach a certain scale
and it needs to be a specific type of scaling.

------
denieus
The title claims that the monolith is the future, but fails to explain why.
Either way, let's focus on two items:

> We’re gonna break it up and somehow find the engineering discipline we never
> had in the first place.

Indeed. I worked at a company that had a monolith, but the project was
structured in modules. Every module had a Facade, which was the official way
of communicating. Although in practice you could access other modules'
entities, you weren't allowed to do that. As you can imagine, this rule was
broken many, many times. Developers would look at the entity, see the data
they wanted was there, and ignore the facade right away, plain and simple.
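The facade rule described above can be sketched like this (all names hypothetical): the module publishes a facade as its official entry point, but inside a shared runtime nothing stops a caller from reaching past it to the entity.

```python
class _OrderEntity:
    """Internal to the orders module; not meant for outside use."""
    def __init__(self, total: int):
        self.total = total


class OrdersFacade:
    """The official way into the orders module."""
    def __init__(self):
        self._orders = {"o1": _OrderEntity(100)}

    def order_total(self, order_id: str) -> int:
        return self._orders[order_id].total


facade = OrdersFacade()
print(facade.order_total("o1"))    # the sanctioned path
print(facade._orders["o1"].total)  # the forbidden shortcut; in one process,
                                   # only discipline prevents this
```

Move the orders module behind a network API and the second call simply stops being possible, which is the enforcement the comment is describing.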

If you split your project into separate services, and those services aren't in
the same runtime application, there is no way to break this rule anymore. The
team that didn't follow the rules has no other choice: it has to go through
the APIs. Even better, they won't design the API themselves most of the time.
Whoever maintains the service will want it to be cohesive, and will not care
that much about the other team's need for an urgent fix. Putting in
workarounds becomes way harder, and this change alone improves design a lot.

The second point is that anyone who has shared the same service/application
with another team has probably faced the situation where you couldn't deploy
(or were too afraid to do so) because the other team pushed a lot of new code
to master. You suddenly don't know if the deploy will break everything or not.
Before you know it, you're spending a lot of time coordinating with many
people about whether you can deploy or not. Something that should be in
production in a few minutes sometimes gets delayed for days.

Of course microservices are not a silver bullet, and there are teams that will
benefit a lot from a monolith. With that said, I find it hard to believe that
monoliths will come back in companies where the development team has grown to
more than a few developers, because the trade-offs are not worth it.

------
notacoward
I'm living this nightmare now. The service I work on now is split up into a
score of micro-services. Information about a single object is splattered
across several services, databases, and caches. Little thought given to
consistency, less to how these different bits of information will be combined
in ways they obviously need to be. Coordination problems everywhere, creating
bugs and sapping performance. Global resource/load control is practically
impossible, as is rigorous testing.

Some of these are general distributed-system problems. Some would be less
severe with a better but still microservice-based architecture. But _in
practice_ the microservice message that a lot of people get is that you should
make every trivial bit of functionality its own service, and that road leads
to disaster.

------
parhamn
The truth in this is that these things just depend heavily on both culture
(e.g. communication patterns, "This is infra team's problem", etc) and
technology choice limitations (e.g. dependencies, api boundary safety, general
safety, etc). And probably a ton of other things.

The underlying technology is a bigger deal than people give it credit for, I think.
I've written frameworks and complex applications at prior workplaces to try to
manage microservices well. Now (cloudsynth), we use go + grpc + typescript and
everything feels like it can be isolated/sharded if and when it needs to.
Golang and webpack have great tooling for splitting things off, isolating
dependencies, etc.

Sometimes you don't have to live in the bimodal world of MicroServices vs
Monolith.

------
jy2947
I think we all want 3 things -

1\. Back-end services with clear boundaries, that decouple concerns based on
dev teams' domain responsibilities, with fewer dependencies among each other
and a respected source of record. This is very much what "micro-services" are
for.

2\. Middle-tier services to consolidate or aggregate back-end APIs to serve
the front-ends (especially the mobile apps) and take care of the business
logic. Back-end folks all love micro-services, but someone must put them all
together... GraphQL so far seems to fit this bill.

3\. Analytics and reporting. This is a totally different animal from product
development, and has almost opposite requirements. This is where your ETL or
Data Lake or Data Pipeline is used, along with your preferred BI or analytics
tooling.

------
durden0
Do not really agree. Monoliths aren't the answer for bad engineering practices
any more than micro-services are.

The fundamental problem a lot of companies have, especially Fortune 500 type
legacy shops, is that they haven't accepted (at the C-suite level) that they
need to become tech companies to compete with the startups that are eating
their lunch.

Switching to micro-services to try and deploy more features while starving
your development teams of talent and funding won't make you a tech company. If
you want faster development + more features then you need lots of development
teams, and large development teams means micro-services so that you don't have
a slow to change, interdependent mess after a year or two.

------
paracorp
This blog post and the top response is an example of an echo chamber.

Microservices, when done right (driven by well defined bounded contexts) are
simpler to develop and iterate against; but that's not why we do
Microservices!

You should not do Microservices without considerable experience in authoring
integration tests, a clear understanding of the domain, observability tools,
and a team that can handle debugging distributed systems.

Bonus: You do not need a distributed system if you are working out of a single
data center. You should not do Microservices if you think they're cool. You
should not title your blog post claiming Monoliths are the future. If your
future has a horizon of never scaling out then yes I guess they are ...

------
languagehacker
I had trouble making the jump here from SOA to k8s. Those are orthogonal
things. What a monolith solves and what microservices solve are also
orthogonal things.

Many companies move to microservices so that they can evolve different parts
of their platform at different rates, and invest _differently_ in different
business domains and product applications. Attracting talent for a problem in
higher demand is one example of the lever you can pull, but so is writing a
part of the application in R for data science or Java for stream processing,
and hiring from a richer or different talent pool as a result.

------
one2know
Pretty much microservices happened because of managers. Managers are promoted
to high level jobs such as Director or VP based on number of direct reports.
This won't happen with a small team of devs. So they need an infrastructure
team, a devops team, a SRE team, a QA team, a microservices core team, an
access management team, network engineering team, AWS integration team, and
the list goes on. What was once a four person team is now a 50 person team
costing 10 million per year, but, hey, the guy/girl gets his VP promotion for
setting all this up.

~~~
Udik
I don't know why you were downvoted; the importance of projects (and their
leaders) is measured by team sizes and budgets. So increasing costs makes the
project and its leaders more important and valued.

~~~
one2know
It's essentially the same argument the article is making: managers want to
spend and hire. But there are lots of managers at work right now reading HN,
so downvotes.

------
shanemlk
Yet another extremely vague Hacker News article building controversy due to
its failure to communicate anything tangible. Is this article talking about
monolith deployment architectures or monolith code bases?

If it's about monolith deployment architectures, the use case is really
important.

If it's about monolith code bases, you need to define what a monolith code
base even means, because that could mean anything. Are we talking storage
size, custom written code lines, framework architecture, or just the number of
people and/or teams building the underlying technology?

------
rjpn
It depends a lot on the app you're building. If you are a startup, it
definitely is much easier to build a monolith and focus on the product
features. Micro-services may not have that much upfront build cost, but as
your product grows it takes a lot of engineering effort and budget to maintain
them. You can have an engineering team of 20 people maintaining a monolith
which can serve a few million customers. The same product broken into
micro-services will require 4-5 teams of 8 people. It is much easier to hire
for a single skillset and grow the team.

------
ping_pong
Microservices are only useful when your engineering org is too large and you
need to enable the engineering process to scale. If you're at the point where
you can't deploy code because the engineering team is stepping on each other,
that's the point where microservices come in handy.

But it also requires a heavy investment in dev-ops and on-call. Because when
one small thing fails, it becomes catastrophic in ways you can't imagine. So
there's a huge tradeoff between engineering convenience and actual customer
impact and uptime risk.

------
jasonlotito
You know, at some point, I get tired of these rehashed ideas. This isn't a new
thought, this isn't a unique perspective. Yes, we fully realize that you
shouldn't jump on the newest fad just because you want to. Microservices
aren't a silver bullet. All the things that have been talked about for
years/decades now.

Monoliths aren't the future. They never left. Rather, they are still an
option, along with microservices.

Blindly adopting anything is silly and error prone.

It gets tiring hearing the same advice preached every few years about a new
technology.

------
tjpnz
To each their own. I've worked with both; success with micro-services is
largely a function of the organisation. My current company started
transitioning 18 months ago. I wouldn't say it was easy, but we are in a far
better position now than we were with our monolithic architecture. Our success
is largely due to the company understanding that big changes would be
necessary: things like org charts, engineering culture, engineer
responsibilities, and job descriptions (you would be forgiven for thinking we
only hire SREs).

------
mfDjB
Perhaps there is a problem where people are splitting a perfectly good
monolith into microservices, but I do wonder how do you deal with large scale
machine learning without microservices? I am a rather small operation and I
still have models taking several gigabytes worth of memory and ANN indexes of
about the same size, which clearly couldn't operate in a monolith unless it
was a massive machine, and even if it could not every request would
necessitate such power. How does a dogmatic monolith approach solve these
problems?

~~~
franciscop
Is there any problem with separating the ML into a different service/machine,
and everything else together? Then you can treat the ML in the same way you
treat an external service, or your DB or Redis (if external). While no longer
a pure monolith that certainly doesn't qualify as a microservices
architecture.

Note: no idea what I'm talking about, I'm genuinely curious if that's a valid
solution.

~~~
mfDjB
You could do that, but then I wonder if that isn't going down the road of
microservices? The ANN service, for example, would need to interface with the
database if you want some kind of real-time ANN service.

------
dron01
My experience is that devs, devops, and admins are striving for better and
more robust operations and software development workflows that in the end will
help the company in multiple ways. So they read what's new, and learn and
adopt new approaches and tools. But the BI department just wants to keep
working with Excel, because they don't know anything else and don't want to
spend any effort learning anything new. So now you have a conflict: scary
microservices vs. good old comfy DB select queries and Excel.

------
eof
As someone in the middle of destroying a monolith, I hope the title is not
true. While distributed-yet-still-deeply-coupled systems are of course
possible, they are at least harder to create.

For me though the most important thing is grokability. Our monolith is at the
point where literally no one on earth can understand the whole thing.

Even if the system is complex, the individual deployables being fully
understood by some number of engineers is extremely valuable and drastically
reduces search space for the cases where things don’t go as planned

~~~
MockObject
> For me though the most important thing is grokability. Our monolith is at
> the point where literally no one on earth can understand the whole thing.

Well, then you're really going to love it when the concerns are spread across
different codebases connected by APIs!

~~~
disintegore
Introducing boundaries between problem domains should drastically reduce
cross-cutting concerns. It also makes issues easier to find.

Not that this isn't possible in a "well engineered" monolithic system, but
design constraints are usually better than hoping for engineering discipline.

------
mjgs
I personally quite like the "double monolith": one for the website and one for
the API. But I think there are some really large deployments, like AWS for
instance, where using microservices makes a lot of sense. It's tricky to pull
off operationally, though, because as the organisation changes shape, the code
base needs to drastically change too, and probably a lot of teams just can't
keep up. Monoliths are more rigid, but they don't crumble as easily.

------
Shicholas
Microservices can work, and fwiw, here are 3 things I've learned the hard way:

1\. Use a package manager, and export interfaces to a given service in each of
the consumers. It's great that GitHub now offers it for node.

2\. Create a DAL library for I/O to a given database (e.g. PostgreSQL, or
Mongo) that can be consumed by other services.

3\. Enforce styling company-wide with one source of truth. We use GitHub
Actions, so we can enforce styling with a shared GitHub Action.
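A hedged sketch of the DAL-library point (names and the dict-backed "store" are hypothetical stand-ins, not a real driver): one library owns all I/O to a given datastore, so consuming services share queries instead of each talking to the database driver directly.

```python
class UserDAL:
    """The one library allowed to touch the users datastore."""
    def __init__(self, store):
        # `store` stands in for a real connection (e.g. psycopg2, pymongo);
        # a plain dict is enough to illustrate the boundary.
        self._store = store

    def set_email(self, user_id: str, email: str) -> None:
        self._store[user_id] = email

    def get_email(self, user_id: str):
        return self._store.get(user_id)


# Consuming services import UserDAL rather than the driver itself.
dal = UserDAL({})
dal.set_email("u1", "a@example.com")
print(dal.get_email("u1"))
```

The win is that query logic and schema knowledge live in one versioned package instead of being copy-pasted across every consumer.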

------
LawrenceHecht
Monoliths are "part of the future". Data shows that there is dissatisfaction
with using both monolithic and microservices architectures at the same time.
In 2017, 66% were using a hybrid approach, but only 54% in 2019. See the
interactive graphic at the bottom of
[https://thenewstack.io/observability-and-elk/](https://thenewstack.io/observability-and-elk/).

------
eightysixfour
Monoliths and Microservices are technical expressions of organizational
structure. Talking about them in only a technical sense misses the forest for
the trees, IMO.

~~~
davnicwil
Completely agree and actually think this is the root of the problem.

People look at what really big successful companies are doing and draw
inspiration from that. Problems come when they mix up cause and effect, and
then view things through the wrong lens.

As you say, microservices are a mostly organisational, partly technical,
effect of having to scale a huge technical org. But when taking this end state
and viewing it purely through a technical lens (as, naturally, technical
people are wont to do), it's rather easy to convince yourself that it's
actually the cause of this huge technical org's success.

~~~
cc81
Yes, I had a discussion with a very senior colleague (but not a developer)
about why he thought we must enforce microservices in our applications from a
technical perspective. He thought it was incredibly important to be able to
scale parts of it independently, yet it is a normal CRUD internal application
with a couple of thousand users.

Of course he looks at Netflix and Google as best practices, but that
discussion probably cost more in salary than our servers do.

------
Legogris
I feel a lot of the excitement for microservices came from fatigue and
frustration with overly complex monoliths. Turns out the complexity was just
moved one abstraction layer up, into the architecture.

I think a lot of the principles in classic OOP design (SOLID) can be applied
to microservice systems: Classes/Objects <> Services.

------
choiway
I don't think the microservice vs monolith process is as cut and dry as people
make it out to be. Microservices are hard because abstractions are hard. I've
found working on abstracting code within a monolith usually results in better
microservices. Haven't seen a good process for starting a project with a
series of microservices.

------
bitL
We need AI-driven microservices to manage all the versioning, deployment,
fault-tolerance, and distributed log-debugging mess... Humans should not be forced
to write anything complex in microservices. The only good thing is that it
brings many engineering jobs and managers can build huge teams, looking great
to their superiors.

------
rudolph9
A system composed of micro services can be thought of as one monolith.

This debate of monoliths vs. micro services is like debating what integers you
sum to arrive at 10. 10 + 0 vs. 4 + 4 + 2 ?

Everything has tradeoffs. Let’s focus our discussions on methods for
understanding the problem set and weighing the tradeoffs of potential
solutions.

------
debt
Code is meant to be thrown away. Not to mention there exists
industrial-strength automated code refactoring, which will only get cheaper
and smarter.

So write code you can deploy today, monolith or microservices; in the
not-too-distant future we'll be able to cheaply refactor it at scale into any
style you want.

------
oconnore
Why do people ever think that one or another extreme is the most optimal
solution? In real life the answer is basically _never_ "go all the way".

By this principle, the right answer is probably along the lines of: a few
services, carefully curated. Something like "macroservices" (plural).

------
vlttnv
From what I understand the author is against a bad implementation of
microservices. I don't see a good argument against microservices but rather
"We just chopped up a monolith and things are not better therefore
microservices are bad". Am I missing something?

------
dksidana
I can guess that the following pattern will emerge: "develop like
microservices, but deploy like a monolith."

So a container will have a group of services rather than hosting a single
service. Something similar will happen for databases, where different
databases will live on the same host.

------
mbrodersen
If you can’t write well functioning monoliths, then you most definitely will
fail implementing the same system using micro-services. Micro-services have
all the complexity of the monolith _plus_ the complexity of a distributed
system added on top.

------
Merrill
The adoption of a new technology can be an occasion to reorganize, reassign,
and create new effective teams in the organization. It may allow the solution
of people problems whether or not it actually addresses any technical
problems.

------
qaq
Once you move to microservices, you are in the land of distributed systems, so
unless you are at a scale that leaves you no choice, be very aware of this.
You are multiplying edge cases by a factor of 100 in many cases.

------
cryptozeus
This is like saying microservices are the future. Both claims are incorrect;
of course this is decided on a case-by-case basis, and one should keep an open
mind, know the pros and cons of both, and apply either one when needed.

------
jonathannorris
The problem he is describing is solved with a Mono-repo, not a Monolith.

We've seen great success with a Mono-repo that enables the sharing of code
across Micro-services, and enforcement of code and deployment processes.

------
yahyaheee
Eh, I think 5 years from now you’ll be able to deploy and manage a distributed
architecture like a monolith. There is so much energy in this space and the
problems are being solved; we’re just not there yet.

------
jpswade
Basically if you can’t do monoliths well, you can’t do microservices well
either.

However, using more services and being less concerned about the servers
underneath is an opportunity that shouldn’t be ignored.

------
pier25
The problem is not that micro-services/monoliths/serverless are good or bad,
it's people picking something for the wrong reasons. Everything has pros and
cons.

------
snambi
Totally agree with this article. Split the application when necessary. But
creating 10s/100s of micro-services is a maintenance as well as a runtime
nightmare.

------
NeoBasilisk
Ok, I read 5 years ago that microservices are the future and that monoliths
are dead. I'm glad we're making progress.

------
rapind
Microservices are just the new EJB. Those of us who know better will only
increase abstraction when necessary.

------
ryanmarsh
FTA _“You know what we should do? We should break it up. We’re gonna break it
up and somehow find the engineering discipline we never had in the first
place.”_

Both arguments are wrong. There is no substitute for engineering discipline
and no paradigm will save you from a lack of it.

I care not whether you build microservices or monoliths, but please sir, do
not blame the paradigm when your team can't do anything right.

------
elsonrodriguez
Having microservices means you can have 20 CI/CD pipelines that don't work
instead of just 1.

------
tudelo
I have never worked on a real 'production' monolith. I wonder what i'm missing
out on :)

~~~
eej71
For me a single monolithic stack is far more debugable.

~~~
topkai22
Depends on how big that monolith gets. Microservices can definitely be
overkill, but at some point it's a real liability not to have well-isolated
abstractions. Yes, you can do that within a monolith, but the temptation to
break isolation is just greater when it's all in the same folder/repo.

------
patsplat
Don't debate coding standards beyond the 2 pizza rule. No more than 8
engineers per service.

------
andrewstuart
If you have a monolith, then microservices are the future.

If you have microservices then monoliths are the future.

~~~
oblosys
The monolith/microservices discussion always reminds me of this 'Sandboxing
Cycle' xkcd: [https://xkcd.com/2044/](https://xkcd.com/2044/)

------
carapace
This is incoherent, it reads like a bad AI wrote it. It's like a textual
Rorschach test.

------
franzwong
I am just thinking if I use different kinds of SaaS, is that also treated as
microservice?

------
nerder92
Microservices solves an organizational problem not a software one.

------
hderms
Obligatory mention that services are mostly an enabler for organizational
scaling and, in my opinion, shouldn't generally be something considered for
only technical reasons. Having hundreds of engineers working on a monolith is
at least as challenging as having ten engineers working on ten separate
services.

There is also a clear distinction, in my mind, between microservices
philosophy and 'macroservices', as I call it. Buying into a system with more
services running than engineers is very different than having a number of
teams, each working on their own single or handful of services.

I would argue that the organizational scaling derived from microservices
resembles diminishing returns somewhere in the domain between a single service
(monolith) and more services than engineers (microservices).

------
pojntfx
Write a monolith first, then decompose into microservices.

That's it.

------
rodrigods
The fact this even reached the Hacker News front page surprises me; maybe
because the writer is "famous"? A purely sensationalist title with very poor,
marketing-driven content.

------
macca321
Shame we don't do HATEOAS, just RPC-over-HTTP

------
tonfreed
Yep. I call this resume oriented development, which I think is in the Gotime
podcast he links at the bottom here. Martin Fowler wrote a good article about
the same thing
[https://www.martinfowler.com/bliki/MonolithFirst.html](https://www.martinfowler.com/bliki/MonolithFirst.html)

tl;dr: If you don't understand the problem domain, build a monolith following
sensible engineering principles to get going ASAP and then split it out when
you understand where the functional lines actually are.

------
macca321
Feature toggles are the answer here

------
asdfasdf1231
> Now you went from writing bad code to building bad infrastructure

just write bad infrastructure as bad code. Get on with the times

------
shamanreturns2
the real wtf is why these cargo cult religions around fetish buzzwords exist.

------
mikorym
FYI this is about monoliths vs. micro _services_ , not micro _kernels_.

------
vira28
Repeat with me.

Life is a circle. Life is a circle. Life is a circle.

------
awinter-py
eat your heart out stanley kubrick

------
aidenn0
TL;DR: Modularization is good, but if you don't have a modular monolith, you
won't build modular microservices.

------
crimsonalucard
Stop calling it monolith. If you want to create a new trend you need a new
buzzword to trick everyone.

I got one: Macroservices!

------
baron816
Maybe the title should be “Bad Engineering is the Future”.

~~~
baron816
I think I need to explain this joke. I don’t mean that monoliths are bad
engineering, just that, as the article suggests, microservices don’t prevent
bad engineering. And in many regards bad engineering is inevitable without the
right culture.

