
Monolith First (2015) - levosmetalo
https://martinfowler.com/bliki/MonolithFirst.html
======
taurath
If you don't have a product yet or the parameters could change quickly with
new business insight, you need to be able to change it fast. With
microservices you will be spending half your time figuring out orchestration,
building data flows that people can understand, and doing ops. Last startup I
was in delayed their launch date for >6 months because of their architecture.
Way too many people think they need it, but a load balanced monolith can take
you from 0 income to able to hire more engineers.

~~~
eropple
I do devops. I consult for startups. And while it would make me a lot more
money in the short term to fuel their microservice-first, sparkly-architecture
aspirations, this is exactly the approach I take when I pour some water on
that. Your Big Ugly Monolith will get you where you're trying to go if
_anything_ will. You don't need services, you don't need microservices, you
don't need some bloggable-as-heck Kubernetes setup with pet sets holding your
billionty different datastores--you need a webserver and _one_ data store and
_maybe a cache, eventually_. You grow from there.

Where I _do_ push, though (and this often surprises people because "what does
a devops person know about code architecture?"--the answer is "a lot, both
from writing it and seeing it badly written"), is _hard_ demands of app
statelessness and an encouragement of business logic internals that are
functional in nature, with I/O, wireups, etc. handled at the outer edges of
the application. Functional-core-imperative-shell lends itself to
decomposition later if you need services (and I say "need services" because
the size of the set of companies that "need microservices" is within epsilon
of zero)--you replace in-process procedure calls with a network layer because
you built your app with clean, bright-line divisions.
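A minimal sketch of that functional-core-imperative-shell shape in Python (all names here are invented for illustration): the business rule is a pure function, and the I/O lives in a thin shell with its dependencies injected.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass(frozen=True)
class Order:
    subtotal_cents: int
    coupon: Optional[str]

# Functional core: pure, deterministic, trivially unit-testable.
def total_cents(order: Order) -> int:
    """Apply a flat 10% discount for the (hypothetical) SAVE10 coupon."""
    if order.coupon == "SAVE10":
        return order.subtotal_cents * 90 // 100
    return order.subtotal_cents

# Imperative shell: all I/O and wiring live out here, passed in as callables.
def handle_checkout(fetch_order: Callable[[str], Order],
                    charge_card: Callable[[str, int], None],
                    order_id: str) -> int:
    order = fetch_order(order_id)   # I/O: e.g. a database read
    amount = total_cents(order)     # pure computation
    charge_card(order_id, amount)   # I/O: e.g. a payment gateway call
    return amount
```

If you later decompose into services, only the shell changes: `fetch_order` becomes a network call, and the core stays untouched.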

Rule of thumb: if you have two "services" talking to the same set of data in
the same general-purpose datastore (i.e., not pub-sub, not opposite ends of a
job queue), they're the same service.

~~~
Animats
_Rule of thumb: if you have two "services" talking to the same set of data in
the same general-purpose datastore (i.e., not pub-sub, not opposite ends of a
job queue), they're the same service._

Why? The whole point of a general-purpose database is to allow multiple
applications to use the same data. Consider an online ordering system for
merchandise. There's a service for checking product availability. There's a
service for building a shopping cart. There's checkout and payment, the only
one that faces the outside world that needs high security. There's customer
order tracking. On the back end, there are various services which deal with
fulfillment, shipping, accounting, and reordering. There may also be
customer-relationship systems which can read order data for marketing purposes. Each of
those functions can be worked on independently.

~~~
dasil003
In this case I think it's proper to consider the database as a service in its
own right. This is the way it always used to be done, and there are
significant advantages, such as being able to focus on safeguarding the data,
and leveraging ACID and constraints/stored procs to make the application code
less error-prone.

The downside is you have a potentially hard scalability ceiling, and you have
coupling of every downstream service to the schema which means all teams have
to coordinate with the DB team for schema changes.

I think startups now always go the vertical services route without thinking
too hard about it because A) they all aspire to be Amazon/Facebook/Google even
though 99.99% will never face that scale and B) resume-driven development.

~~~
rdnetto
A variation on this would be to put the database behind a service that
abstracted over the schema, though that only works for basic CRUD queries and
not complex aggregations. This service would probably evolve from the
monolith.

~~~
collyw
You mean like an API?

------
phamilton
It all comes back to Conway's Law (your software will look like your
organization).

Microservices allow and require low coupling in the organization. If you want
to reduce coupling in your org, you'll be well served by microservices. If you
want tight collaboration in your org, you'll be well served by a monolith. As
orgs grow into multiple independently executing units, a monolith starts to
limit the ability to independently execute.

~~~
mixedCase
I believe a simpler explanation is that it's easier for some devs to cleanly
separate concerns when they're forced to do so by the constraint of process
separation, rather than language modules/packages, where it's too easy for a
junior dev to break the architecture with a single import.

Keeping a monolith's concerns cleanly segregated does require a small amount
of discipline.
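One way to get some of that discipline by machine inside a monolith is a build-time check that fails when a forbidden import appears. A rough standard-library sketch in Python (package names and the rule set are invented for illustration):

```python
import ast

# Which packages may not import which (invented names for illustration).
FORBIDDEN = {
    ("billing", "inventory.internal"),  # (importer package, banned package)
}

def boundary_violations(module_pkg: str, source: str):
    """Return imported names in `source` that cross a forbidden boundary."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        else:
            continue
        for name in names:
            for importer, banned in FORBIDDEN:
                if module_pkg.startswith(importer) and name.startswith(banned):
                    hits.append(name)
    return hits
```

Run over the tree in CI, this turns "a junior dev breaks the architecture with a single import" into a failed build instead of a merged commit.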

~~~
XorNot
I would propose that an alternate explanation is that when the processes are
_actually_ separate in deployment, you filter out strong personalities (or
management types who are overly involved in things outside their area) from
dominating the development process as well.

Stopping someone working in one area from deciding they just don't like the
look of the other is a benefit all its own.

------
morphemass
Just about every company I've interviewed with recently has been breaking its
monolith up into microservices for some reason.

When I've done this in the past I had a key goal: reliability. The cost was
about 10x the development effort of the monolith in order to add an extra 9 to
the reliability. The monolith was wonderful for getting up and running quickly
as a business solution but it actually crippled the business because they had
failed to identify how essential reliability was. KYC.

Personally I've come to the conclusion that the main benefits of SOA/MSA are
not necessarily technical but more organisational/sociological. Having
distinct silos of activity/responsibility, separate teams and communications
channels; all can make a large project more manageable than the monolith by
allowing the lower level problems to be abstracted away (from a management
perspective).

~~~
srtjstjsj
Reliability is not important to a startup.

[http://www.whatisfailwhale.info/](http://www.whatisfailwhale.info/)

~~~
JshWright
Reliability is not important to _many_ startups.

There are plenty of industries where solid reliability is a hard requirement.

------
gluczywo
_even experienced architects working in familiar domains have great difficulty
getting boundaries right at the beginning. By building a monolith first, you
can figure out what the right boundaries are_

This line of thought reaches back two decades and was expressed in a wonderful
essay "Big Ball of Mud"
[http://www.laputan.org/mud/](http://www.laputan.org/mud/)

EDIT: updated with the quote

~~~
maxxxxx
A lot of people don't have the discipline to write decent libraries so they
need the overhead of microservices to force them to structure their code
reasonably. It seems to me you can have exactly the same boundaries between
components that you get through microservices by just having good component
separation.

~~~
eropple
This is exactly true. And so is the reverse: competently written code can be
segmented out into a SOA by replacing your internal procedure calls with a
network call.

~~~
Terr_
Caveat: If it's acceptable to have significant latency or the interaction is
asynchronous.

~~~
eropple
Sure, but if it's not acceptable to have significant latency then they're
probably the same logical service, yeah?

------
FRex
The common pattern he mentions reminds me of the concept of 'semantic
compression' (one big function and lots of variables first, then break it up
into structs, classes, functions, etc.) by Casey Muratori:
[https://mollyrocket.com/casey/stream_0019.html](https://mollyrocket.com/casey/stream_0019.html)

It's a very nice and natural way to write code: do it all horribly dirty
first, and only when a sizeable portion is ready, start cleaning it up and
making it look and read well.

Both are basically "good comes from evolving/refining bad".

------
dankohn1
It may be a bit simplistic for HN, but you may enjoy a talk I've given,
"Migrating Legacy Monoliths to Cloud Native Microservices Architectures on
Kubernetes", and especially the visual metaphor from slide 26 on of chipping
away at a block of ice to create an ice sculpture.

[https://docs.google.com/presentation/d/105ZgwafitwXH6_sWevFH...](https://docs.google.com/presentation/d/105ZgwafitwXH6_sWevFHHUerciuv4ckDQ_CXjGPjv0Y/edit#slide=id.p35)

------
jakozaur
Rule of thumb: the number of full-time backend engineers divided by 5, rounded
up, is the number of microservices you can afford.

E.g. if you have 500 engineers, having 100 microservices is fine. If you have 3
engineers and try to have 20 microservices, you are wasting tons of time; you
should do a monolith.
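The rule of thumb as arithmetic (just a sketch of the heuristic above, not a law):

```python
import math

def affordable_microservices(backend_engineers: int) -> int:
    """Full-time backend engineers divided by 5, rounded up."""
    return math.ceil(backend_engineers / 5)
```

So 500 engineers can carry 100 services, while a 3-person team gets a budget of one, i.e. a monolith.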

------
navalsaini
I agree, monolith first, and I have proposed talks to a few JS conferences on
this topic. However, I have not worked at a company that uses a microservices
architecture at big scale (like Uber, Instagram, etc). I am keen to
understand: (1) What does it mean to run a microservices architecture from an
org point of view? (2) How are principles like "3 depths deep" enforced? (3)
How does a developer decide when to create a new microservice vs when to reuse
one? (4) Who manages the persistence layer and associated devops tasks
(backups, failover, replica sets, etc)? Those are mostly the uncovered bits
for me. I came across a very recent talk by Uber along these lines - JS at
Scale
([https://www.youtube.com/watch?v=P7ek4scVCB8](https://www.youtube.com/watch?v=P7ek4scVCB8)).
I think a few talks on the organizational side of microservices would give
people a clear idea of whether they really need one. Also, though startups use
the term microservices, their architectures do not in reality have as many
boxes as the Uber systems most of us hear about in talks. Startup
microservice architectures do have single points of failure, and they just
break things up to make it easier to scale beyond 100 or so concurrent users.
The decomposition is mostly between tasks that are IO bound (serving APIs) and
tasks that are more CPU bound (some primary use case). So startups using
microservices may not be that bad, actually. They could just mean that they do
an RPC over Redis for some computationally intensive use case.

~~~
ChristianGeek
What is "3 depths deep?" Google doesn't turn up anything.

~~~
navalsaini
Services call other services in a microservice architecture. Three depths deep
would mean that there are only three network hops and no more (it's strangely
similar to Inception's three-dreams-deep adventure). Each network hop adds
latency and a threat of failure from the network layers. So a three-levels-deep
rule is usually followed to keep failure rates low, make debugging easier,
keep time-to-find-the-culprit low, etc. in a microservices architecture.
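The compounding is easy to see with illustrative numbers (the 50 ms and 99.9% figures below are assumptions for the sketch, not from the comment): serial latency adds with each hop, while availability multiplies.

```python
def chain_stats(hops: int,
                per_hop_latency_ms: float = 50.0,
                per_hop_availability: float = 0.999):
    """Serial latency and combined availability for a synchronous call chain."""
    latency_ms = hops * per_hop_latency_ms
    availability = per_hop_availability ** hops
    return latency_ms, availability
```

With these numbers, three hops already means 150 ms and roughly 99.7% availability; ten hops means 500 ms and roughly 99.0%, which is why deep call chains get capped.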

------
alexandros
We started resin.io with a microservices architecture from day one, and we are
still happy with the result. It was very painful to get it up and running, but
once that was over, we were good to go. The boundaries we defined early on are
still solid, and the result works well. One critical detail however, is that
all our persistent state lives in one place, minus specific well-understood
exceptions. Arguably, starting with microservices helped us define strong
boundaries we weren't tempted to blur over time.

All this said, I do sometimes wish we had started with a monolith, if only
because we paid the microservices tax in deployment and infrastructure
maintenance way too early, long before we had the scale to warrant it. I feel
starting with a monolith would have probably meant more progress in less time,
though with a risk of not being able to refactor smoothly when the time came.

Overall a hard call to make, since I'm happy with the result, but wonder about
the pain it took to get here, and at the same time counterfactual universes
are hard to instantiate...

~~~
olingern
Any advice you would give to a team going this route?

And, I've found that integration tests help keep my sanity if a refactor
occurs anywhere. Any learning(s) for adapting to refactors across services?

~~~
alexandros
I think keeping all the data in one place has been one of the smartest things
we did. If we'd been crazy enough to give each microservice its own
persistence, we'd be neck deep in chaos by now. It only happened for one
service due to reasons that we should have ignored at the time, and it keeps
biting us in the rear to this day. Thankfully we're getting closer to
reversing that mistake, oh happy day.

------
nichochar
I disagree with this, but only because it assumes you're working with a
conventional single-process language, like Java, Python or C++.

I think if you design fault-tolerant microservice-based services with
something like the Erlang BEAM VM, things will work out well, since you're
being very careful about message passing from the beginning.

------
bsder
Fowler failed miserably when building a monolith. See:
[https://en.wikipedia.org/wiki/Chrysler_Comprehensive_Compens...](https://en.wikipedia.org/wiki/Chrysler_Comprehensive_Compensation_System)

Why should we believe his statements about microservices?

Personally, my experience in microservices vs monolith has been as follows:

If your system needs fast update with quick rollout of new features, monolith
is probably superior. Being able to touch everything quickly and redeploy is
generally quicker in a monolith.

If your system needs to be able to survive component latency/failure,
microservices are probably superior. You will have hard separation that
enables testability from the beginning.

Overall, I find the monolith vs microservices debate insipid. We have _LOTS_
of counterexamples. Practically everybody writing Erlang laughs at people
building a monolith.

~~~
true_religion
From reading the wiki article, it says the goal was to replace the payroll
system for 89,000 people by 1999. By 1997, it went live in a staged rollout
for 10,000 employees. Within the same year, the performance of the software
improved 8300%.

Sadly, Chrysler was bought[1] out in 1998 and the project was canceled in the
year 2000 for unknown reasons. In 1999, when it was intended to be fully
deployed, Chrysler was in the middle of a reorganization, which would lead to
layoffs of more than 21,000 people over the next 3 years.

I'd guess organizational politics had more to do with the cancellation than
anything the development team did wrong.

[1] Oh, sure it was labeled "a merger of equals", but consider that the share
price fell to 50% within a year afterward and Chrysler was eventually split
off and sold... it just makes me think of the Time Warner + AOL merger.

~~~
bsder
Starting here also gives an interesting view of whether or not it was a
"success":
[https://books.google.com/books?id=Nxi7O7FCdIEC&pg=PA43&lpg=P...](https://books.google.com/books?id=Nxi7O7FCdIEC&pg=PA43&lpg=PA50&ots=n0leM5OZaQ&focus=viewport&dq=chrysler+comprehensive+compensation+post+mortem)

To be fair, he has an axe to grind, but C3 was quite far from a success.

To be fair, big projects fail. A lot. For many reasons. So, I'm not going to
blame the people involved.

And I find Fowler to actually be probably the most level-headed of the bunch
to come out of that project.

However, I _WILL_ apply requisite amounts of skepticism when those same people
start peddling their "expert knowledge" about subject matter in which they
provably didn't "beat the average".

~~~
acdha
That reads like a lot of accusation without much evidence, and the author's
credibility was immediately called into question for me when they dismissed
Y2k as a non-issue because it didn't produce widespread problems, ignoring all
of the problems which had been fixed in the previous decade. (This is like the
people saying closures after a snow storm weren't necessary because there was
no traffic, ignoring the reason why)

The only real lesson I feel comfortable drawing from that is that we need more
diverse case-studies than C3 because very few other projects are going to have
the same environment with a huge company, multi-divisional politics and then a
merger happening shortly into the project, one of the best defined problems,
etc.

------
kishorepr
Like others have pointed out here, it's incredibly hard to know the
application boundaries up front, which are required for building
microservices.

I think solutions that are a hybrid of monolith and microservices work out
well. As another person pointed out, this can be fairly easily achieved by
having a monolith with multiple sub-projects to get separation of concerns.
The code is all in one place, so it's easier to design and refactor. You can
also deploy different sub-projects as microservices if you need to later on.
So it's basically a monolith with separately deployable sub-components.

Once boundaries are clearly understood, it becomes easier to physically
separate services.

------
olingern
> Almost all the cases where I've heard of a system that was built as a
> microservice system from scratch, it has ended up in serious trouble.

I wholeheartedly disagree with this point.

I've found that if I build monolith first, it becomes harder to draw the line
of how to separate endpoints, services, and code within the system(s).

If I design in a "microservice first," or just a service oriented design -- I
find that there is much more clarity in system design. In terms of exposing
parts of a system, I find that the microservice first approach makes me
consider future optimizations, such as caching policies, whereas, in a
monolith, I would proceed in a naive, "I'll figure it out later" approach.

Each school of thought has its downsides. Monoliths move fast and abstracting
parts of the system later that arise as bottlenecks is a tried and true
pattern; however, there aren't too many product / business folks who want to
hear:

"Hey, we just built this great MVP for you. It probably won't handle
significant load, so we're going to go off in a corner and make it do that
now. Oh yeah, we won't have time to develop new features because we'll be too
busy migrating tests and writing the ones we didn't write in the beginning."

The flip side is, microservice first has a lot of overhead, and (as things
evolve in one system) refactoring can be extremely painful. This is an okay
trade off where I'm at... for others, maybe not so much.

~~~
sbov
> significant load

Please define significant load.

~~~
tunesmith
Sometimes it's not about load, but speed of innovation. A huge complex
monolithic codebase might not have a lot of load, but it can still limit a
team's ability to experiment with new features due to big-ball-of-mud
entanglement. Decomposing areas into services might enable faster innovation
more than refactoring the whole monolith would.

~~~
acdha
That seems orthogonal to me: is adding a network boundary really the only way
to enforce basic software engineering practices? It seems just as likely that
the same organizational issues would lead to e.g. learning that your data
model is wrong and part of the app now needs to dispatch thousands of queries,
and fixing this is harder than refactoring a couple parts of the same
codebase.

(Note: I'm not saying microservices are bad – I just think that the process
which lead to that ball of mud will unfold similarly with a different
methodology)

------
decisiveness
What many seem to have missed from this is the bit at the end where Fowler
concedes:

> I don't feel I have enough anecdotes yet to get a firm handle on how to
> decide whether to use a monolith-first strategy.

after linking and mentioning points of a guest post [1] (with which I strongly
agree) which argues against starting with a monolith. A key part from that
post:

> Microservices’ main benefit, in my view, is enabling parallel development by
> establishing a hard-to-cross boundary between different parts of your
> system. By doing this, you make it hard – or at least harder – to do the
> wrong thing: Namely, connecting parts that shouldn’t be connected, and
> coupling those that need to be connected too tightly. In theory, you don’t
> need microservices for this if you simply have the discipline to follow
> clear rules and establish clear boundaries within your monolithic
> application; in practice, I’ve found this to be the case only very rarely.

[1] [https://martinfowler.com/articles/dont-start-monolith.html](https://martinfowler.com/articles/dont-start-monolith.html)

------
yellowapple
Even if you're building a monolith, though, you're generally well-served by a
monolith that pretends to be a bunch of microservices - i.e. it could be split
into microservices easily if the need arises, kind of like how some "hybrid"
OS kernels could (in theory) be split into proper microkernels if the internal
function calls were replaced with messages (the NT kernel is built this way,
IIRC). Each part of this "chunky" monolith should provide a proper internal
API, and no other part should have to call into that part's internal
functions.

This should be easy to achieve in most "object-oriented" languages (like Ruby;
a Rails monolith should have no problem being structured this way, even if
quite a few of the ones I've seen in the wild seem to forego this). Erlang
(and Elixir by descent) is also well-suited to this, since you can break your
application into a collection of processes that - whether individually or in
combination with other processes - can act like their own little
microservices.

------
lisa_henderson
Last year I worked at an electronic publishing firm which had wasted $3
million and 5 years on a Ruby On Rails application which was universally hated
by the staff, and which we replaced with 6 clean, separate services. The
problem with the Ruby On Rails app is that it was trying to be everything to
everyone, which is a common problem for monoliths in corporate environments.
But the needs of the marketing department were very different from the needs
of the publishing department. A request for a new feature would come in from
the blogging/content division which would be added to the Ruby On Rails app,
even though it slowed the app down for everyone else.

Six separate services allowed multiple benefits:

1.) each service was smaller and faster

2.) each service was focused on the real needs of its users

3.) each service was free to evolve without harming the people who did not use
the service

There was some duplication of code, which suggests a process that is the exact
opposite of "Monolith First":

Start with separate services for each group of users, then later look to
combine redundant code into some shared libraries.

------
rukuu001
Here's Matt Ranney talking about how Uber's microservices-first approach
allowed them to scale their workforce super fast; also how those microservices
became a kind of decentralized ball of mud:

[https://www.youtube.com/watch?v=nuiLcWE8sPA](https://www.youtube.com/watch?v=nuiLcWE8sPA)

------
Havoc
I'd say the more correct interpretation is "don't introduce the complexity of
modularity too early"

------
garganzol
Everyone who eats the food of a thought leader like Martin Fowler eventually
meets a trap. Shiny ideas "that sound interesting" are like a candle flame to
a moth.

I created a simple rule a long time ago: <insert name of a "thought leader"
here> last.

------
sctb
Discussion from a couple of years ago:
[https://news.ycombinator.com/item?id=9652893](https://news.ycombinator.com/item?id=9652893).

------
marichards
Modular monoliths can be a simpler medium. Modules of functionality that work
on their own (with in-memory integration tests) can easily be tested,
separated into microservices, or assimilated into a monolith. Be wary of
runtime functions shared between modules, as they will strictly couple the two
and risk side effects on each other, tending towards spaghetti. But for
monolith quick wins they can help with sharing management-dependent resources
like database transactions.

------
rukuu001
You (and I) are almost certainly going to get it wrong first time around.
Which approach is most forgiving of errors? I'd say monolith.

------
y2hhcmxlcw
At what point will corporations that still design massive systems as an
unmaintainable monolith figure out they can architect things better and save a
ton of developer dollars? At what point do they start taking good points from
articles like this and break those systems up into microservices or some other
solution?

~~~
losteric
When they realize business managers don't know jack about computers, and
delegate more authority to engineers and/or hire product/technical managers.

Development processes and software architecture follow from business process
and architecture... it's hard to be agile and develop services with clean
separation of responsibilities when business insists on monolithic hairball
project reqs with fixed deadlines.

(aka Conway's Law:
[https://en.wikipedia.org/wiki/Conway%27s_law](https://en.wikipedia.org/wiki/Conway%27s_law)
)

~~~
y2hhcmxlcw
I wonder at what point the financial pressure to stop designing bad software
becomes so high that it overrides the political pressures that created the bad
designs and practices. To a community like HN it's just normal everyday
thinking to design at least a decent web application, but at some companies
that's seen as either visionary and impossible, or even immature. At some
point it seems there would be so much money on the line to trim the number of
man-hours going into maintenance nightmares that they would fix it. I
sometimes wonder if big companies will wake up to this across the country and
there will be big layoffs because they adopt modern architecture and don't
need so many people. Does this seem feasible, or will Conway's Law hold even
as the financial pressure to do better really goes up? Or will the rewrites
take even more people, and therefore there won't be layoffs/pressure on the
job market?

~~~
losteric
Well, Conway's Law states that code reflects the organization's bureaucracy...
bad software means bad leadership and decision making, likely spread
throughout the company. Companies will root out those inefficiencies if and
only if they are doing poorly. Deep cultural changes are hard to drive if the
company is doing relatively well, no one wants to take the "risk" of trying to
improve.

------
oDot
There is a middle-aged, and it's building a monolith that's anticipated to
being broken down.

~~~
oDot
Just wanted to point out that I was swyping, lol.

------
stuartaxelowen
I quite like the "web server and stream processors first" strategy, since it
will take you much farther and retain the same code efficiencies as the
monolith, but will also give more operational efficiency at minimal extra
cost.

------
holografix
Monolithic 12-factor apps, where you can abstract some of the requirements
out to managed services: a DB service, an email service, etc. Someone already
mentioned it here, but stateless app processes are a must.
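A minimal 12-factor-style sketch (the variable names are illustrative): backing services are located via the environment, so the app process itself carries no state and can be replaced or scaled out freely.

```python
import os

def load_config(env=None):
    """Read backing-service locations from the environment, with dev defaults."""
    env = os.environ if env is None else env
    return {
        "database_url": env.get("DATABASE_URL", "postgres://localhost/dev"),
        "smtp_host": env.get("SMTP_HOST", "localhost"),
        "cache_url": env.get("CACHE_URL"),  # optional managed cache
    }
```

Swapping the managed DB or email provider then means changing the deployment environment, not the code.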

------
jaxn
I think the same argument should be made for NOT writing tests for a
prototype.

Build something useful, fast. Then refactor. Write tests when refactoring or
fixing a bug, but not when prototyping.

~~~
invisible
One issue with this approach is that refactoring can cause bugs or a change in
behavior, so you're risking bugs twice (on the initial write, then when doing
the refactor+tests). If you could guarantee the code was testable from the
start, then maybe this would help with the approach you outlined.

------
a_imho
Might be OT, but what is the opinion on Martin Fowler in general?

~~~
acdha
He's smart and experienced. That doesn't mean he's always right but I would
consider what he says and reason through disagreements. Most commonly, I find
the right/wrong arguments are actually reflecting the fact that the underlying
environments aren't as similar as they might seem at first glance. Someone
giving advice based on working on an F500 team with 200 developers will have
seemingly-bizarre priorities to a 5 person startup, just as advice from
someone at Google used to handling multiple orders of magnitude more traffic
is likely expensive overkill for that small team.

~~~
eropple
I agree with this. I used to be a lot more down on his work, but it wasn't his
fault so much as all the wannabes who bought it uncritically. (Much the same
as stuff like Kubernetes--you aren't Google, they have Google problems, you
don't have Google problems, stop automatically adopting Google solutions to
problems that are a-web-server problems.)

~~~
guscost
> you aren't Google, they have Google problems, you don't have Google problems

Don't underestimate how hard it can be for developers to accept this. The
hacker community can become a tedious game of one-upsmanship at times, and
it's way too easy to slip into "imposter syndrome" mode. Often, the people
barking loudest about the newest ideas have slipped themselves, and are just
trying not to appear clueless.

But _cluelessness is fine_. It's the default state of being, we all need to be
comfortable (if not satisfied) with it.

I don't want to suggest that Martin Fowler is clueless, of course. He has
described _many_ prudent battle-tested techniques that can be absolutely
essential in context. If you haven't seen his article on Collection Pipelines,
it's relevant to all kinds of modern programming:
[https://martinfowler.com/articles/collection-pipeline/](https://martinfowler.com/articles/collection-pipeline/)

------
tomerbd
There is a really interesting discussion here, but I need to quit my day job
to read it all :O

