
In Defence of Monoliths - nkurz
http://techblog.bozho.net/in-defence-of-monoliths/
======
unoti
There's a dangerous, contagious illness that developers of every generation
catch, one that causes them to worry about architecture and getting "street
cred" even more than they worry about solving business problems. I've fallen
victim to this myself, because street cred is important to me. But it's a trap.

An important meta-idea that encapsulates all of this: remember to solve your
business problems as your first priority, and implement technology in service
of that goal.

It's far too easy to put various technology decisions in the driver's seat,
ahead of the business problems. This leads to all kinds of anti-patterns over
the years. For example, I've seen software development organizations destroy
themselves trying to do 100% pure UML-first software using Oracle's
Designer/2000 and Developer/2000 tools, because that was just the "right" way
to do things. I've seen organizations destroy themselves in the morass of
trying to do things with Enterprise Java Beans (EJB) because it was just the
right way to do things. I've seen companies spend 10x more time getting their
automated build processes and super duper full test coverage going than they
spend actually writing software.

And I've seen companies navel-gaze over their processes and methodologies and
buzzword compliance far more than they worry about meeting the needs of their
customers. If you've ever worked for a company with a 15 year history of
ironclad determination to use all of Microsoft's latest preferred data access
methodologies, then you have first-hand knowledge of another example of this.

I'm not saying microservice architecture is a worthless fad, but I am saying
that putting your business needs into the done-basket absolutely must be the
first priority; never lose sight of it. I'm all for being wary of technical
debt, but "technical debt" isn't nearly as dangerous as whatever you call that
contagious illness that infects developers and makes them put architecture
before business needs.

~~~
Walkman
I agree with you, but the opposite of this is also horrible. I'm working for a
company that only focused on business needs, and the result is horrible: a
terrible, unwieldy codebase, full of bugs, with no security at all. It's hard
to develop features, it suffers from feature creep, and it has absolutely no
test coverage at all.

It's important to find balance.

~~~
unoti
You're totally right: balance is the key. At least where you work there's a
deployed product in production, though it is difficult and unwieldy. When you
go the architecture astronaut direction, it's very easy to never make it
into production at all!

------
timothycrosley
It depends _how_ you do microservices. There are middle grounds. One big gain
of microservices is that they guarantee things are separate and can be handled
by separate teams if the need arises. That doesn't mean you need to start out
that way. For instance, in Python I use hug to create my microservices
[https://github.com/timothycrosley/hug](https://github.com/timothycrosley/hug).
I can then just install them to create a "monolithic" service that consumes
all the microservices. The great thing is that hug allows you to expose each
one both as a web service and as a Python library, so I can consume it as a
Python library with no overhead until the need to split is evident, and then
split up the services with very little work. Of course the need may never
arrive, but the modularity that is forced when using microservices pays
dividends quickly regardless.
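
The library-or-service duality described above can be approximated with the
standard library alone. This is a rough sketch of the idea, not hug's actual
API; the `add` function, handler class, and route are invented for
illustration:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse
from urllib.request import urlopen

# The business function: consumable directly, as a plain library call.
def add(a: int, b: int) -> int:
    return a + b

# A thin HTTP wrapper over the same function, for when the module
# needs to become a standalone service.
class AddHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        qs = parse_qs(urlparse(self.path).query)
        result = add(int(qs["a"][0]), int(qs["b"][0]))
        body = json.dumps({"result": result}).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *_):  # keep request logging quiet
        pass

server = HTTPServer(("127.0.0.1", 0), AddHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

local = add(2, 3)  # in-process: no serialization, no network hop

# Same logic, one RPC hop later: what the call becomes after a split.
url = f"http://127.0.0.1:{server.server_address[1]}/add?a=2&b=3"
with urlopen(url) as resp:
    remote = json.loads(resp.read())["result"]

server.shutdown()
```

Nothing about the caller changes except the transport, which is exactly why
deferring the split is cheap.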

~~~
eropple
This is more or less what I'm doing right now with Dropwizard. I started
building this app as a set of separated microservices, but was finding it to
slow me down more than I wanted it to--so I flipped it upside-down and loaded
every "application" into a single Dropwizard service. Which is uncomfortably
like application servers, but I'm using this almost exclusively to get around
doing the inter-service wire-up that I'd have to do manually, rather than
letting my DI container do it. Each module's still separated, and if we need
to fork services off in the future, it's about twelve lines of code.

------
cpitman
> decentralize all things – well, the services are still logically coupled, no
> matter how you split them.

I think this is missing the point. It isn't just about decentralizing
_services_ , it is also about allowing you to decentralize _teams_ and
_decision making_. With very large teams working on a single monolith,
features that are complete often cannot be deployed because of the larger
organizational overhead of planning and coordinating a release. The higher
coupling in most monoliths also restricts a team's ability to try new things
and take risks.

If performance of technology is the only metric that matters to you, then yes,
microservices are probably a horrible idea. If you are having difficulty
scaling your _teams_ , then it might be worth looking at.

~~~
pdpi
> The higher coupling in most monoliths also restricts a team's ability to try
> new things and take risks.

I think it's you who's missing the point. That right there is why.

Turning your monolith into a micro-service architecture should add up to
basically:

\- Wrap your internal modules in your favourite form of RPC

\- Replace the modules with stubs that call the RPC

If you need more work than that, the problem with your application isn't being
a monolith, it's having functionality be way too tightly coupled.

If turning your application into a micro-service architecture actually _is_
that simple, then you already have well-delineated modules that your teams can
focus on. Deployment is just about integrating the latest stable version of
each module, and the decision process within each team needs only respect the
contract around the interface they provide -- same as a micro-service.
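
The two steps above can be sketched in Python. Everything here is invented for
illustration (`Pricing`, the prices, the transport), and the "RPC" is a plain
callable standing in for whatever mechanism you favour (HTTP, gRPC, ...):

```python
from abc import ABC, abstractmethod

# The module contract teams program against; a split only swaps the
# implementation behind it.
class Pricing(ABC):
    @abstractmethod
    def quote(self, sku: str, qty: int) -> int: ...

# Step 0: the in-process module inside the monolith.
class LocalPricing(Pricing):
    PRICES = {"widget": 250}

    def quote(self, sku, qty):
        return self.PRICES[sku] * qty

# Steps 1-2: wrap the module in RPC and replace it with a stub that
# satisfies the very same interface.
class PricingStub(Pricing):
    def __init__(self, transport):
        self._transport = transport

    def quote(self, sku, qty):
        return self._transport("quote", {"sku": sku, "qty": qty})

# Server side of the "RPC": dispatches onto the original module.
def fake_transport(method, params, _impl=LocalPricing()):
    return getattr(_impl, method)(**params)

monolith = LocalPricing()
split = PricingStub(fake_transport)
```

If the two objects aren't interchangeable behind `Pricing`, the problem was
the coupling, not the monolith.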

Saying you need a micro-service architecture to keep a sane internal structure
to your application is a symptom that you need to review your engineering
practices, because people aren't respecting the interfaces, and it's throwing
out the baby with the bathwater.

~~~
to3m
Don't forget that an RPC call can take far longer to complete than a function
call. Loose coupling doesn't mean your program was written expecting
operations to routinely take 1ms to return results.
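
The gap is easy to make concrete. A toy comparison, with a `sleep` standing in
for serialization plus a network round trip (the 1ms figure is illustrative):

```python
import time

def add(a, b):
    return a + b

def add_over_rpc(a, b, simulated_latency=0.001):
    # Stand-in for serialization + network round trip: even a fast
    # intra-datacenter hop costs on the order of a millisecond.
    time.sleep(simulated_latency)
    return a + b

N = 200

t0 = time.perf_counter()
for _ in range(N):
    add(1, 2)
local_s = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(N):
    add_over_rpc(1, 2)
rpc_s = time.perf_counter() - t0

# The in-process call is orders of magnitude cheaper than the "RPC".
```

Code that casually makes a call in a loop is fine in a monolith and a
disaster once that call crosses the network.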

~~~
pdpi
I was replying to a comment that ended with "If performance of technology is
the only metric that matters to you, then yes, microservices are probably a
horrible idea". I figured that it'd be more effective to make my point without
bringing performance into the discussion.

------
rcconf
We've had a lot of issues with microservices. They're extremely difficult to
debug and end up being too generic. I'd argue that the reason we introduced
microservices (a massive code base) has merit, but that doesn't mean they are
simple to use or maintain.

Here are some examples:

1\. A chat service (race conditions, synchronization issues between the game
server and chat service that had to be debugged using sequence diagrams)

2\. A payment service that handled Facebook, PayPal, and other payment
methods. (race conditions, complicated integration, difficulty upgrading for
different products, complicated code base to handle multiple use cases)

3\. An authentication service (complicated protocol, hard to integrate)

4\. A worker service for handling bcrypt (because blocking a single-threaded
server for 0.5 seconds is not acceptable; race conditions)

5\. A tracking service (could have probably just been a library you included
instead of hitting an API)
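
The motivation behind item 4 can be sketched without a separate service at
all. bcrypt itself is a third-party package, so `hashlib`'s PBKDF2 stands in
for the expensive hash here, and the names are illustrative:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

# Stand-in for bcrypt: a deliberately expensive password hash.
def slow_hash(password: bytes, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

pool = ThreadPoolExecutor(max_workers=4)

# The request handler submits the hash and keeps serving other
# requests instead of stalling the single-threaded event loop.
future = pool.submit(slow_hash, b"hunter2", b"per-user-salt")

# ... event loop continues handling traffic here ...

digest = future.result()  # collect once the worker finishes
pool.shutdown()
```

Whether this lives in-process or behind an API is exactly the kind of call
that shouldn't default to "new service".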

Core issues:

\- Race conditions

\- Synchronization issues

\- Complicated to upgrade/maintain for multiple products

\- Become too generic and solve multiple problems (services tend to be used
for multiple products in the company)

\- Extremely difficult to debug

\- Complicated error handling for when a service is not reachable

Pro-tip: don't share one database between multiple products for a given
service; deploy a new instance of the service for every product, each with its
own database.

I think I'm scratching the surface here, but I would be really careful with
introducing this kind of architecture when you can do it all in one server.

~~~
debacle
I don't mean to sound crass, but it sounds like you implemented microservices
poorly. You could s/microservices/threads/g on your post above and it would
point much more to a problem with your implementation rather than a problem
with threads.

~~~
rcconf
Writing multi-threaded code is hard, writing micro services is even _harder_.
I don't think your argument holds here.

Micro services are hard to write, so don't write them unless you absolutely
have to.

~~~
debacle
I think in 2015 we should be pretty good at writing code without race
conditions, especially considering the tools made available to us by our
programming languages.

------
abritishguy
I work for a startup bank in the UK (Mondo). Our Go microservices architecture
allows us to maintain velocity whilst staying secure.

The core banking services that actually move money around are isolated from
the more "fluffy" customer facing ones that we want to be able to push updates
to several times a day. We have to have incredibly rigorous procedures for
updating services that control money, if we had a monolithic architecture then
these procedures would have to be used even if the change was simply cosmetic.

------
DanielBMarkham
Being a new "convert" to microservices, and coming from a classic OOA/D/P
background, I read these critiques with great interest. I keep waiting for one
that shows me what I'm missing.

What I'm finding, however, is that many of these authors have such a broad
understanding of what microservices are that they miss any benefit. Then, of
course, they complain about there not being any benefit, natch.

I suspect -- and what I have feared -- is that the term "microservices" has
been co-opted and rebranded by a variety of vendors and proponents. The goal
here is to sell products and services, not necessarily solve problems.

There is certainly a huge wheel of hype in the technology world, where things
become cool, then old hat, then abused, then nobody does them anymore, then
they return under a new buzzword.

Having said that, in the future I'm going to always make a point of defining
exactly what I mean instead of just using the term "microservice". My current
definition is something like this:

\- Pure FP

\- Unix Philosophy

\- Each microservice has less than 150LOC

\- Common code, types, and persistence functions are moved to shared libraries
to reduce interop concerns

Not sure if this makes a difference in the discussion, but I know that it will
help me keep straight whether various authors actually have quibbles with
microservices -- or are just re-applying their pre-existing OO thinking to a
place where it doesn't necessarily map so well.

~~~
vskarine
Don't you think 150 LOC is a bit extreme on the low side? I don't see how this
is possible unless you compose services out of other services... and in that
case it's probably a nightmare to debug anything. Can you elaborate a bit on
the types of things this worked great for?

~~~
DanielBMarkham
I can't walk through a solution to a complex domain inside of a HN comment,
but I can provide an overview of the theory.

Take a large monolithic app or framework. What I've found is that if a problem
is properly coded, while you may have tens or hundreds of thousands of LOC,
the actual code doing the work is quite small, on the order of hundreds or
thousands of LOC. The rest of it is all "wiring".

Moving to pure FP means that a lot of the code structure of OOP disappears and
you're left with just the critical pieces. This consists of composed functions
performing translations on immutable chunks of data. Your functions become
microservices and the composition of functions, scheduling, and moving around
of data become Net/Dev Ops.

There are many ways to fall off the path here. You start using mutable data,
you start associating services with business domains, you start coupling
microservices together more tightly than necessary -- there's a ton of ways
you can accidentally screw up, and then you're probably better off with a
monolithic app.

Microservices should be like ls, cat, or chmod -- small pieces of composable
functions that run directly in the O/S. They don't blow up the system when
they fail, they do one thing and only one thing, they're configurable using
command-line switches, they're transport-independent, and so on.

~~~
emilyst
From GNU coreutils:

    $ wc -l src/ls.c
    4980 src/ls.c
    $ wc -l src/cat.c
     768 src/cat.c
    $ wc -l src/chmod.c
     570 src/chmod.c

~~~
codahale
Obviously, those are monoliths in dire need of decomposition.

------
debacle
An honest question - has Martin Fowler ever written any _code_ that makes him
so much of an authority on software design?

I don't disagree with everything he's said, but what contributions has he made
that make him so much of an authority on how I write code?

~~~
jjbiotech
That's an impressive logical jump you've made there. Starting with a man
simply writing a blog post with his opinion on software architecture, all the
way to implying that he's an authority on writing code.

He never said he was an authority...

~~~
cableshaft
You Google "Martin Fowler" and the snippet of text that accompanies his
personal website is "Object-oriented programming expert and consultant, one of
the leaders in refactoring, author of the book 'Refactoring: Improving the
Design of Existing Code'"

He's a self-proclaimed "expert" and a "leader". Yes, he's basically saying
right there that he is an authority on writing code.

That being said, I read some of his Refactoring book a while ago, and it's
mostly good advice from what I remember.

------
Glyptodon
One thing I don't think gets hammered on enough is that if your organization
is not large, microservices will create a lot of overhead for little gain.

If you have few developers most of the time it's a lot more efficient to have
a modular monolith than it is to have 50 entirely different rocks.

Decentralizing teams, decision making, and allowing increased independence of
agency within a larger org is great. Trying to do the same thing when you only
have 5 people is usually borderline crazy.

------
_pdp_
As always, there is room for both styles of programming. Some applications
simply do not require this type of architecture. Testing a monolith is
certainly easier than testing thousands of microservices. Debugging is also a
lot easier. Dependency management is also a hell of a lot easier.

Yes, microservices (or simply services) are very good for large-scale
applications, but they are simply overkill for any startup project. Why not
concentrate your efforts on selling your product first vs building the perfect
infrastructure that no one uses? When the time comes to upgrade, well, upgrade.
Amazon is a good example of a large company making the move from monoliths to
services, and they executed well. You can also do the same thing when it is
absolutely required. You know, refactoring!

------
patrickmay
One advantage of a microservice architecture not mentioned in the article is
the ability to scale services independently of other services. Being able to
fire up more instances to address a bottleneck is often much simpler than
managing threads in a monolith.
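
With container orchestration this is mostly declarative. A hypothetical
Compose file (service names and images invented for illustration) where only
the CPU-bound service is scaled out:

```yaml
# Only the bottleneck service gets extra replicas; the rest of the
# system keeps a single instance each.
services:
  web:
    image: example/web:latest
  thumbnailer:
    image: example/thumbnailer:latest
    deploy:
      replicas: 6   # scale this service independently of the others
```

The monolith equivalent is scaling everything together, whether the other
parts need it or not.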

~~~
bozho
that is true in some very rare cases. CPU-intensive bits of the application
should definitely be separated, so that they can scale independently. But
that's not the main point of microservices.

~~~
acdha
I don't think I've ever seen a piece selling the benefits of microservices
which didn't mention the scaling or reliability benefits as a primary
motivation.

------
exelius
Yeah, my general rule is that unless your production environment is already so
complex that you have multiple people who do strictly DevOps work, you should
probably stick to a monolith. It's a lot easier to transfer state between
multiple deployments of the same monolith than it is to go full microservices.

Microservices are a gigantic pain in the ass. They increase operational
complexity _significantly_ , and they will require you to make a lot of
investments in building/buying/implementing infrastructure around things like
logging, monitoring and config management. Your number of possible variables
in QA explodes exponentially, and suddenly you have to worry about things like
API versioning.

If all of this sounds easier than dealing with the tech / organizational debt
in your monolith, then microservices may be right for you. Otherwise, save
yourself the trouble and focus on scaling your business instead of your
technology platform.

------
Sleaker
I'm not advocating for microservices, but I think the blog post fails to
appreciate the difference between patching and deploying a single microservice,
which doesn't require restarting your entire environment stack, and patching a
single module in a monolith, which requires restarting the entire monolith...
Maybe that's not an actual issue, but the author seems to disregard the fact
that restarting a single microservice is explicitly different from restarting
the entire monolith, for the very reason that time may become an actual factor.

~~~
lemmsjid
Not disagreeing with you, just adding some color.

We have a monolithic application (well, an application composed of many
libraries that have a dependency tree, but live in the same process) that
contains individually deployable services that talk to one another over RPC.
So when you're patching part of the codebase, you can deploy a branch to the
servers that are executing the code you've just changed, thus no need to
restart the whole thing. But you still have the benefits of shared libraries,
moving functionality between services, collapsing service calls that are no
longer performant, adding new calls temporarily, etc.

In this model, you can introduce and remove service calls when it's useful, as
opposed to as a result of how the code was put together. You will often do
this for performance reasons, but just as often you'll do it for deployment
flexibility. When you want to prototype something new, you can create a new
service, branch the codebase, and have it talk to the rest of the
infrastructure via service calls. When you're done you can keep it as a
service, or, just as often, you can collapse the service back into other
services so we don't have excessive RPC calls.

When would we consider moving to services backed by different codebases? If
our company were to grow so large that highly disparate teams would want to
manage their own dependency trees--that's around when it makes sense to me to
go microservice. (I'm not sure I'd call it microservices then, more SOA.)

------
thecourier
I have seen cases where a group of stateless monoliths would have solved a
business case more efficiently than the micro-services alternative.

I'm not saying micro-services are a fad, because they are great for isolating
domains and degradation of service, but that comes with a high amount of
boilerplate code for remote invocation and serialization.

I would personally say: if your project has not reached 10,000 lines of code,
do not go that way. Also, if you can solve the problem at hand with a cluster
of stateless monoliths, keep it simple.

Dear friends, keep Occam's razor at hand.

------
bullen
The important thing is to improve tools, and with a monolith you can't do
that.

So build a distributed PaaS with hot-deploy, then a distributed HTTP database;
then you can use microservices without any of the problems mentioned
(complexity becomes a non-problem, since everything shares the same
complexity and very quickly becomes bug-free; overhead is removed by the PaaS)!

The important step with microSOA is that each developer can choose his tools.
That trumps everything else.

------
csears
I think the choice of monolith vs microservice should be largely driven by the
size of your team. Some guidance I heard on a podcast recently (can't recall
which) was to only consider microservices if you had more than 50 people
trying to deploy code in the same app/system.

