
The microservices cargo cult - stelabouras
http://www.stavros.io/posts/microservices-cargo-cult/
======
song
Yes, a thousand times yes! Microservices are yet another tool in the box but
they shouldn't be used on everything. And it makes no frigging sense for a
startup or any new project to start with microservices...

The main advantages of microservices are in scaling and in reducing the
complexity of a big system, but those advantages only make sense once you
have enough traffic that you have to scale, or once your system has become
complex enough to warrant them.

When first starting development, the most important thing is speed of
development, to get feedback from users as soon as possible. It's much faster
to develop a clean, well-optimized monolith than to spend a lot of time
developing a whole bunch of microservices. And while thinking in terms of
microservices will help you better conceptualize your software architecture,
at this stage you don't have all the information needed to have a clear idea
of what the final architecture will be, and you'll often end up with
microservices that are divided in suboptimal ways, causing a lot of pain.

~~~
iofj
The article didn't even mention a lot of the drawbacks:

Changing anything means changing half a dozen programs. Potentially in
different languages.

Building any new feature into the program means you get to "walk the
dependency tree", making sure no serialized stuff from a new version gets sent
to an old version. Good luck with circular dependencies.

Related: deleting a field ... never going to happen. We're talking years of
planning. Any field added ... ever ... has to be taken along for the ride for
years. Oh and don't even think about deleting the code that interprets and
supports the field for the same reason.

Also related: best of luck with the interactions between "oh we're not going
to do that after all, sorry about asking you to already push that and update
the database" and the "we really need to do this next thing right now"
features. And the

Constant serialization overhead. People go overboard with this microservice
crap and the result is that 99% of your program's CPU time is spent converting
objects to JSON and back (which is very expensive due to constant alloc'ing
_), and you have 10-100 times the normal memory overhead.

Microservices should be like optimization: build your program without using
them and then figure out where they'd make sense.

_ Yes, I know, you can sort of avoid it these days with Cap'n Proto and
FlatBuffers.
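To put a rough number on that serialization cost, here is a toy measurement (hypothetical function names, ignoring the network entirely) of just the JSON marshal/unmarshal round trip a service boundary adds to every call:

```python
import json
import timeit

# The in-process version: an ordinary function call.
def lookup(user_id):
    return {"id": user_id, "name": "user%d" % user_id, "active": True}

# The same call behind a service boundary pays, at minimum, a
# marshal/unmarshal round trip for both the request and the response.
def lookup_via_json(user_id):
    request = json.loads(json.dumps({"user_id": user_id}))
    return json.loads(json.dumps(lookup(request["user_id"])))

direct = timeit.timeit(lambda: lookup(42), number=100_000)
boxed = timeit.timeit(lambda: lookup_via_json(42), number=100_000)
print("direct %.3fs, via json %.3fs (%.1fx)" % (direct, boxed, boxed / direct))
```

The real gap is larger still, since this omits sockets, framing, and latency.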

~~~
matwood
> Building any new feature into the program means you get to "walk the
> dependency tree"

Same with monoliths, except it is often a worse experience. At least with
microservices I _know_ the interface is all I have to worry about. In
monoliths of any size inevitably someone has reached into parts of the program
they shouldn't have just to get something 'done quickly'. And this is one of
the main benefits of microservices - enforcing the interface boundaries.

> Related: deleting a field ... never going to happen. We're talking years of
> planning. Any field added ... ever ... has to be taken along for the ride
> for years. Oh and don't even think about deleting the code that interprets
> and supports the field for the same reason.

That's just poor design, and it happens just as much in monoliths. The db is
almost always the challenge when removing a field. I could argue that
microservices make it easier, since the service providing access to that field
could remove it from the db and then dummy it out until clients are updated.
Also, why wouldn't someone remove the field from all the clients when removing
it from the supplier?
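The "dummy it out" idea might look like this in Python (field and function names hypothetical): the column is gone from the service's own db, but the serializer keeps emitting the field with a neutral default until every client is updated:

```python
# Fields already removed from the db, with the defaults old clients expect.
LEGACY_DEFAULTS = {"fax_number": None}

def serialize_user(row):
    """row: a dict fresh from the db, which no longer has 'fax_number'."""
    payload = dict(row)
    for field, default in LEGACY_DEFAULTS.items():
        payload.setdefault(field, default)  # old clients still see the key
    return payload

print(serialize_user({"id": 1, "name": "Ada"}))
# {'id': 1, 'name': 'Ada', 'fax_number': None}
```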

With that said, I agree that microservices should be something that happens
organically from a monolith. Think about an amoeba that reaches a certain size
and only then do parts split off. I also think there is some ambiguity to what
constitutes a _micro_ service. I'm sure my idea of proper granularity is
different from others.

~~~
marcosdumay
> At least with microservices I know the interface is all I have to worry
> about.

You wrap your code inside a reusable library if you want to encapsulate it and
expose only a small public interface. You don't start a new service for that.

~~~
tracker1
It doesn't ever work that way in practice for larger applications though...
once you have multiple teams with many developers working on something, those
library boundaries fade, and tbh sometimes they should just be bypassed in
favor of more shallow layers (for a monolith).

Once there are more than a handful of people working on anything no one person
understands the details of the whole. Smaller, specialized services can at
least act as and enforce those boundaries as upstream comments said.

~~~
zo1
> _" Once there are more than a handful of people working on anything no one
> person understands the details of the whole. Smaller, specialized services
> can at least act as and enforce those boundaries as upstream comments
> said."_

Until you have so many services that you also need someone who understands the
details of how all the different services interact.

Really, if you can't stop your developers from leaking across separation-of-
concerns boundaries, then they'll do it in any architecture, monolithic or
microservices-based.

~~~
tracker1
I think you can have separated services by concern without getting too deep in
terms of complexity though... I wouldn't use the term "micro" though... I also
think if you are only doing mutation, you can combine them, or do libraries in
a control service... there are lots of ways to skin the issue...

Digging down through 7 dll projects in one solution, then 11 in another to add
a small feature that should take 10 minutes taking 4 days isn't so much fun...
same would go for 30+ service projects...

I'm also not big on starting with smaller/micro services either... or ORM, or
DI/IoC for that matter... Build the monolith and break pieces off as needed.
I'm fine with that... I was saying it's easier to herd cats when they're not
in the same space as each other.

------
togusa
I'm in the middle of a microservices mess that was forced upon us. I have
nothing positive to say. If you're in the SaaS space already and it's not a
greenfield project, it's orders of magnitude better to deploy lots of smaller
identical monoliths than it is to try and build and deploy lots of services
and manage the contracts and complexity between them.

Major problems I've seen are: per transaction performance sucks due to the
network or IPC channels, development friction, logical complexity,
infrastructure complexity, managing contracts between services, debugging
failures, monitoring performance, bootstrapping new staff and the biggest of
the lot: headspace.

If you want to succeed, at least in the short term, just keep your monolith
tight and fast and without sprawling infrastructure requirements. Single
machine, single process, single storage engine (or database), single messaging
system. Then scale that to multiple instances. If your site deployment
requires at least 20 machines due to sprawl, you're going to be even more
screwed when you throw microservices at it, not less. If your application is
incredibly complex, it's not going to work either. The problem domain needs to
be small and easy to consider as it's difficult to cleanly extract a chunk of
your average monolith into a standalone concern.

There are also people with technical authority in many companies who blindly
follow the latest fad without real consideration of suitability, risk
assessment or accountability. If someone starts waving microservices, AWS and
everything else around, make them defend that position, and everyone needs to
assume that it isn't the default end game.

~~~
aartur

      > it's orders of magnitude better to deploy lots of smaller identical monoliths than it is to try and build and deploy lots of services and manage the contracts and complexity between them.
    

This article from 2003 by Martin Fowler is exactly about that:
[http://www.drdobbs.com/errant-architectures/184414966](http://www.drdobbs.com/errant-architectures/184414966)

~~~
togusa
Indeed. I remember reading that back then!

------
aartur
Microservices are advertised as a means to modularization, but that's what
programming-language modules are for - they are defined at the source code
level and can be freely used in different runtime components without
network/ops/version-management headaches. Once you have your module defined
that way, you can think about exposing it as a microservice where that makes
sense for your use case.

Imagine that each Python module ran as a microservice. For many modules this
would lead to huge performance degradation: a regexp module, for example, can
be called thousands of times per second, the running time of a call is usually
short, and replacing an in-process call with a network call would give a
100-1000x slowdown.

But if you take a different use case of the same module - complex regexps
running on large texts, potentially causing out-of-memory errors - then packing
the module into a microservice can make sense: separate processes can have
large caches, and an out-of-memory error terminates an instance of the
microservice only, not the calling process.
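The fault-isolation half can be sketched with a separate process (the worker code here is a hypothetical stand-in that simulates OOM): the worker dies, the caller carries on:

```python
import subprocess
import sys

# Stand-in for the regex worker: it dies with a (simulated) MemoryError.
worker_code = "raise MemoryError('simulated OOM while matching')"

proc = subprocess.run(
    [sys.executable, "-c", worker_code],
    capture_output=True, text=True,
)
print("worker exit code:", proc.returncode)  # non-zero: only the worker died
print("caller is still running")
```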

Generally, I think the advice should be to always use source code modules in
the first place, and to create microservices from these modules only for
specific use cases involving runtime needs like caching, fault tolerance, or
scalability.

~~~
togusa
Most of the runtime needs in this space can be managed with circuit breakers,
be they temporal or resource-based, without having to provide separate
processes or memory spaces.

For example, your regex could be given 20ms to run and a maximum of 8MB of
heap before it is interrupted. This can happen on a single thread and drop
back to the caller if an exception is thrown. I'd love to see a language
feature which defines the maximum stack and heap for a particular scope, i.e.
in C#:

    
    
       Breaker.Heap(8.MiB(), () => {
           Breaker.Time(200.Milliseconds(), () => {
               // risky operation
           });
       });
    

(we already do the time breaker, but not the heap)

Edit: correct calling convention.

Edit 2: add missing extension method brackets

~~~
aartur
Interesting. Is the circuit breaker a feature of .NET running on Windows?

To my knowledge, it can't be implemented on Linux + Java or Python (a thread
can't be terminated from outside, and some syscalls involve a whole process).

~~~
togusa
It's a library I wrote.

The time breaker is actually quite complicated. It is a wrapper that sets up
some global parameters on the thread for timeouts on async/await calls and
handles the timeout conditions. It integrates with our own async wrappers for
external http calls, message delivery, query execution etc. It only enforces
that all aggregate async calls will complete or fail by the end of the timeout
period. Realistically this is usually around the 500-800ms space as load
spikes can break everything otherwise.
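A rough Python analogue of the time breaker (a sketch, not togusa's library) can be built on `concurrent.futures`; note the caveat raised earlier in the thread - without runtime support you can only stop waiting for the work, not the work itself:

```python
import concurrent.futures
import time

def time_breaker(fn, timeout_s):
    """Give fn a time budget; raise TimeoutError if it overruns."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn)
    try:
        return future.result(timeout=timeout_s)
    finally:
        pool.shutdown(wait=False)  # stop waiting; the thread may run on

result = time_breaker(lambda: "ok", timeout_s=1.0)

tripped = False
try:
    time_breaker(lambda: time.sleep(1.0), timeout_s=0.05)
except concurrent.futures.TimeoutError:
    tripped = True

print(result, "| breaker tripped:", tripped)
```

A heap breaker has no portable equivalent here, which is exactly the missing language feature being wished for.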

------
daxfohl
This leaves open the question of what _are_ microservices. Are they _of
necessity_ completely isolated units, deployed with Docker and Kubernetes on a
CoreOS cluster, communicating only via HTTP, each with an independent
database? Yes, this seems a bit much for most projects.

There are midway options though. Even the lowly batch job is a good way to get
some of the decoupling without having to go "all-in". I find batch jobs and
message queues give me 80% of the benefit of "microservices" with only 5% of
the pain.

In fact, someone needs to write an article on the "levels" of
"microserviceness" (which certainly has multiple dimensions and branches) and
point out the benefits and drawbacks of each level.

Of course the end game being: "a Docker container for each line of code."

~~~
daxfohl
> a Docker container for each line of code

Yes, this is the surest way for developers to guarantee the code itself is
always bug-free. Everything becomes "just an ops problem".

2017 predictions: buzzy job title: Ops-Dev. Labview-style graphical
programming
[http://www.ni.com/newsletter/app/largeimage?lang=en&imageurl...](http://www.ni.com/newsletter/app/largeimage?lang=en&imageurl=%2Fcms%2Fimages%2Fdevzone%2Fpub%2Fnrjsxmfm912163998723206173.jpg)
becomes the new hotness. Amazon and Google introduce new graphical programming
workflow coordination services for lambda. Nothing ever gets finished.

(2018 predictions: google discontinues this service)

(2019 predictions: every machine instruction now belongs to its own docker
package; the successor to Kubernetes is marketed as a distributed asynchronous
virtual CPU. Data centers begin requiring small nuclear stations to generate
the electricity to power them. Still nothing gets finished).

(2020 predictions: someone rediscovers this "heroku" thing; things start
getting done again. Hot new industry: nuclear waste disposal)

(2119 prediction: nuclear stations powering 2019's data centers are still
running at full power but servers are all dormant; nobody wants to turn off a
server "just in case").

~~~
arethuza
"graphical programming workflow coordination services"

Those things are always sold to CIO types based on the idea that "you won't
need developers", and then you end up with some eldritch graphical abomination
where being able to visualise things just makes it worse....

Draw a rough diagram and then write your code - fine, generate a diagram from
code - fine, generate code from a diagram - nightmare.

------
markbnj
I think this article isn't very useful. It's unfortunate that we have this
human need to blow things up and then deflate them. I dislike the term
"microservices" because for most organizations all it does is plant a flag all
the way across the yard from "monolithic." Suddenly the guys at the top have a
new buzzword and the engineers are all trying to make their services as small
as possible so they're micro enough.

Most of the gotchas the article mentions aren't logical consequences of
decomposing into smaller services at all. You don't have to have different
data stores for each service. You don't need to "marshal data" between
services. If a service needs to call a service it's just a client like any
other client, so if we want to call standard http request/response handling
"marshaling" I guess it will sound more complex and scary. Breaking a
monolithic app into smaller pieces doesn't increase complexity, it reduces it.
And to the extent you have more things to monitor that probably means you can
now monitor and control things that were more or less invisible outside the
log data in the monolithic architecture.

More importantly, decomposing a problem into logically related areas of
functionality that can execute separately allows you to make the most
efficient use of compute resources, and it is consistent with the idea of
favoring multi-processing over multi-threading. In almost every way, groups of
simpler things collaborating make much more sense than large complicated
things that do it all. It's only when we create these knights in shining armor
that people start feeling like they have to be knocked off their horses. Use
the tools and techniques that make sense.

------
bru
Some of the weaknesses can be tempered by not using HTTP to communicate
between the microservices:

- "slowdowns on the order of 1000%"

- "bunch of code necessary to marshal/unmarshal data [...] there are always
dragons in there."

And also problems of versioning, data integrity, etc.

I've had those problems in a microservices architecture. They're the kind of
thing solved by protobuf[0]. Your servers exchange small, efficient structured
data and you get tons of other benefits ({un,}marshaling for free, integrity,
versioning, ...).
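For a sense of what that buys you, here is a minimal, hypothetical schema sketch (message and field names invented for illustration); the numbered fields are what make versioning cheap, since old readers skip field numbers they don't know, and removed numbers can be reserved so they're never reused:

```proto
syntax = "proto3";

message User {
  int64 id = 1;
  string name = 2;

  // Field 3 ("fax_number") was deleted; reserving it prevents the
  // number and name from ever being reused with a different meaning.
  reserved 3;
  reserved "fax_number";

  bool active = 4;  // added later; old readers simply ignore it
}
```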

Potential downside: a language you want to use having no protobuf API.

Finally, I see another downside to the microservices architecture: it may be
decided that the smaller, decoupled code bases should be stored in multiple
VCS repos. This turns into a nightmare: a single bugfix may span multiple
repos and there is no clean built-in way to link commits across them, you
still have to sync the interfaces (e.g. with git submodules), etc. This is a
thing I've witnessed firsthand, and proposals to merge the repos were
dismissed because "we [were] using a microservices architecture". Yes, it's a
mistaken implementation of the microservices paradigm, but it still happens.

edit: I recommend protobuf not out of preference over other equivalent
solutions, but because it's the only one I know and have used. Alternatives
are mentioned below.

0: [https://developers.google.com/protocol-buffers/](https://developers.google.com/protocol-buffers/)

~~~
3pt14159
I really recommend against using protobufs. There are long-standing bugs that
Google just refuses to fix in the public version. I can't remember what they
are off the top of my head, but I know a semi-prominent YC company that uses
them and they pull their hair out all the time.

Just use zerorpc. It's more reliable than zeromq + protobufs and it comes
with a bunch of freebies, like built-in heartbeats, streamed responses, etc.

~~~
mdup
How about Cap'n Proto? Their marketing is great, but I don't recall seeing
many experience reports, so I'm asking for feedback here.

~~~
3pt14159
First I'm hearing of it. Looks really promising, but without any experience
with it I cannot really say.

~~~
swsieber
They wrote it specifically for sandstorm.io, so they're dog-fooding it.

------
davedx
So I think there is a parallel to this with the whole "everything must be in
tiny modules on npm" movement in the JavaScript community. If you do this, you
end up with lots of repositories, a rigid separation of concerns, a network
between you and your dependencies that will get hit a lot unless you wield
some npm link sorcery, and a bunch of extra complexity.

A modular monolith application is what people have been writing since people
thought up the notion of modules. Enforce proper discipline when building your
app out and you won't need these physical walls between your functional areas.

I'm currently reading SICP, and the notion of using "block structure" in Lisp
to compartmentalize and encapsulate functional areas of code is introduced in
_Chapter 1_.

Get the basic stuff right before you start introducing complex systems to
split up your software.

~~~
MichaelGG
And there's probably a parallel to "one file per type" with tons of little
types. It feels like you're making progress and doing stuff. You've not just
done one task, you've deployed changes to 20 services. It also feels like
you've made things more abstract than they actually are. It also feels more
like Real Serious Programming.

I've used a Twitter API client (works fine) that implements itself across _7_
separate libraries. All same project, they all work together, and they aren't
used in any other system. Just separated things out for the fun of it. One
library for Twitter client "Factories". One for Twitter "Credentials". Another
for Twitter client "Security". Zero benefit to the user or to the project. But
it certainly makes things seem more important, eh?

~~~
alexro
It totally makes sense to have separate libraries even for one project if they
can be implemented by different people and tested separately. I'd shout for
that design any time of the moon.

~~~
MichaelGG
They can't be deployed separately. There's no point in splitting them up.
Testing and development aren't affected.

~~~
mdpopescu
Deployment is not the only thing that matters. Development is also important.
I am actually working on a web scraping tool, where I'm building the
interpreter and my friend the IDE. The likelihood of them ever being deployed
independently is low, but the separation is helping us focus.

------
sz4kerto
One thing that is usually overlooked: do microservices fit your team
structure? A team who spends all day together can manage a monolith very well,
synchronous releases (everything-at-once) are not a problem. If you don't need
24/7, then it's even better.

However, if you're a distributed team (maybe across timezones), where quick
discussions are difficult and 'costly', then microservices might be worth the
effort. Managing the deployment and operations is more difficult, but
sometimes much less coordination is needed when people communicate through
APIs and not Skype and Slack.

~~~
danmaz74
As others have already pointed out, you don't need microservices to achieve a
good modularization of your code. Then have each team work on a different
module, with clearly defined APIs between them. No need to separate them with
HTTP calls.

~~~
federico3
The main benefit of microservices is that each team can _deploy_ their service
independently.

This is really useful for reducing the level of interaction (and pressure)
required between teams.

~~~
marcosdumay
It may reduce development interaction, but it'll certainly turn testing into a
nightmare.

If you have teams that communicate badly, you'll need people specialized in
deployment and you'll be assigning issues to them. That's not a good situation
anyway, but it's the least bad of them.

------
dunkelheit
Having read one of the success stories linked in the beginning
([http://philcalcado.com/2015/09/08/how_we_ended_up_with_micro...](http://philcalcado.com/2015/09/08/how_we_ended_up_with_microservices.html))
I think I am starting to get it. That was a rather candid article describing a
team which after having got burned by a sprawling "monorail" architecture
decided to split it up into services (nothing micro- there) based on
_organizational_ viewpoint (if there is some knowledge about a part of
application shared by 3-4 engineers it makes sense to split it into a separate
module so it can be developed faster). And as I am inferring from the pictures
their services don't call each other much so it is really close to a modular
monolith. So if "migrating to microservices architecture" really means
"splitting that damned monorail into something more manageable" then it is a
really good idea after some point.

------
donpark
This phenomenon is just part of human nature. Same thing happened with OOP,
design patterns, TDD, etc.

To apply microservices effectively, you should first build the monolith,
modularizing at the source code level and adding choke points as needed.
Over time, microservices will naturally roll off the monolith, not unlike
boulders rolling off mountains after rain or an earthquake. Don't go
dynamiting in anticipation.

~~~
eponeponepon
You don't necessarily need to _build_ the monolith - _design_ it first, for
sure, but then pick apart the design and see where the individual components
lie.

~~~
donpark
Right. And I'm saying most folks either don't know how to do that or can't
because of constraints like time and fluidity. The design you start with and
the design you end up with are more often than not miles apart.

------
angdis
I can't help but think that much of the intent behind "cargo-culting" is
simply people building up their resumes for future career development.

If you want to work in a sexy new technology, but you need to develop
experience in that new stuff to be marketable it is totally understandable to
try to build up skills by forcing the implementation of over-sized solutions.

In other words, many employers aren't willing to take on folks if they don't
have the requisite experience with some new stack, and that compels folks to
gain that experience any way they can, including "cargo-culting" stuff that
isn't necessary, just for the experience gain.

~~~
StavrosK
That hasn't been my experience, from the people I've seen. It's mostly just
excited engineers wanting to play with a new thing (which is great) but doing
that in the most available place, their job (which is bad).

The advice I give them is "do whatever you want in your house (or your side
project), but critically evaluate your business needs and only use what makes
sense for your business".

Personally, I have a very low-traffic guinea pig side-project that I like
working on, and I just try every new thing there.

~~~
angdis
Well, yes, but I would say that the desire to "play with a new thing" is very
much tied to career development.

~~~
StavrosK
The people I know aren't doing it in a conscious way to further their career,
they just like playing with new things. The career advancement is incidental.

~~~
jakeway
I'm willing to bet that it is a bit of both.

------
DanielBMarkham
The article is a little weak, but well worth the read.

I love the microservices concept, but fair warning: as bad as OO has gotten
over the past 20-30 years, microservices promise to be even uglier.

Why? Because not only are you mucking around in the code, _you're also
mucking around in how everything connects to everything else in your cloud_.

Just like we saw vendors come out with click-and-drag ways to create new
classes, now we're seeing vendors start to sell "pre-finished" microservices.
Get the disk out of the box, boot it up, fill out a couple of forms, and
voila! Now you have microservices.

That worries the living crap out of me because microservices _are_ the
architecture of the future. You just can't get from here to there using a
magic bullet. Learn you some pure FP, make everything composable using the
Unix Philosophy, and keep your LOC to a bare minimum. Toss off every damn
thing you don't need.

As much as I know they are the way forward, I have a bad feeling that
consultants will have plenty of billable time coming up straightening out a
lot of messes.

------
jakozaur
Rule of thumb: divide the number of full-time backend engineers by 5 and you
get the ideal number of microservices :-).

Too many microservices is a complexity mess; too few means you have a monolith
that is hard to iterate on.

~~~
StavrosK
Looks like I can afford half a service, then :P

~~~
jakozaur
Looks like you can afford to have a monolith and not worry about this
microservice thing :-P.

------
calpaterson
I don't agree that scaling up is an unqualified advantage of microservices. In
practice you have to coordinate communication between your services. Though
some of this communication will be asynchronous publish-subscribe (i.e. speed
unimportant), some communication will necessarily be synchronous.

If one heavy part of your rails app takes up 90% of the processing time, there
is nothing wrong with just getting a bigger machine for the whole app. The
bigger CPU/memory/whatever will be spent on the heavy part and the rest will
be normal.

For most businesses, scaling is not a problem - they can just get bigger
machines. Having to re-implement transactions across your microservice
architecture really is a problem. Very often transactions need to cross
microservice boundaries, and that really requires a lot of thought.
~~~
alexro
>> that really requires a lot of thought

Yes, sure, but those are resources spent right. Not like the monolithic
spaghetti crap where you waste time trying to figure out what went wrong -
99% of businesses' everyday activity.

~~~
pathsjs
Spent right? Implementing a protocol to coordinate transactions (if it is
possible at all, depending on the various backends you work with) to get what
you could have already done in your database?

~~~
alexro
You can keep transactions in one part of the system and the queries in
another. CQRS as one example.
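A toy in-process sketch of that split (all names hypothetical): commands go through a write model that enforces invariants, while queries hit a read model updated from the emitted events:

```python
class WriteModel:
    """The transactional side: commands validated here."""
    def __init__(self):
        self.balances = {}

    def deposit(self, account, amount):           # command
        if amount <= 0:
            raise ValueError("deposits must be positive")
        self.balances[account] = self.balances.get(account, 0) + amount
        return ("deposited", account, amount)     # event

class ReadModel:
    """The query side: a denormalized view built from events."""
    def __init__(self):
        self.view = {}

    def apply(self, event):                       # projection
        _, account, amount = event
        self.view[account] = self.view.get(account, 0) + amount

    def balance(self, account):                   # query
        return self.view.get(account, 0)

write, read = WriteModel(), ReadModel()
read.apply(write.deposit("alice", 100))
read.apply(write.deposit("alice", 50))
print(read.balance("alice"))  # 150
```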

------
krisdol
I see the strengths and weaknesses in the article, and the complaints from all
the comments here, but I still find the trade off of microservices worth it.
It requires sophisticated ops and well-defined deployment tools and dev
environments, but we have to handle ten billion requests a month on our stack.
The ease with which we handle that scale, and the speed at which engineers get
to iterate and deploy, makes microservices all the more worth it.

~~~
twic
If you used those sophisticated ops and well defined deployment tools and dev
environments to develop a monolith, wouldn't it be just as easy?

As for scale: are you suggesting that having extra code executing and network
calls being made in your system makes it _more_ scalable, rather than less?

Ten billion requests a month is 3805 requests per second on average; I'd guess
that means 10,000 requests per second in the peaks (correct me if I'm wrong!).
Is this considered challenging scale today? I'd buy four DL380s and call it
done.

~~~
krisdol
>If you used those sophisticated ops and well defined deployment tools and dev
environments to develop a monolith, wouldn't it be just as easy?

No, it wouldn't allow us to be as fast. Most of our services are under 200 LOC
(not a policy, just happens to be the point where people seem to split things
out). The idea is that any service can be rewritten completely in a few days.

There are no tie-ins to any platform, compiler version, syntax, or language.
This might sound like chaos, but it's a huge productivity gain, as I feel full
ownership over features I write. Naturally, we aim for good docs and code
coverage, and use continuous deployment and integration tools to keep
everything green.

As for scale, any microservice can be run across any number of instances
without having to scale up the entire platform. This allows us to identify hot
areas and deal with them effectively.

We don't use network calls (well, not HTTP or TCP) to communicate between
services. Services themselves are pretty transport-independent and work well
over TCP, but NATS is the transport of choice at the moment for inter-service
communication.

------
amelius
> Data segregation: Since all your data now lives in different data stores,
> you’re responsible for relationships between data. What would be a simple
> cascading delete in a monolith is now a complicated symphony of
> dependencies, calls and verifications.

IMHO, this is the biggest problem with microservices: "Transactions" are not
available in a microservice environment. You'll have to work really hard to
get anything that comes close.
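One common workaround, sketched here in Python with hypothetical step names, is a saga: run each step and, on failure, run compensating actions for the steps that already succeeded:

```python
def run_saga(steps):
    """steps: list of (do, undo) pairs. Returns True if all steps ran."""
    compensations = []
    try:
        for do, undo in steps:
            do()
            compensations.append(undo)
        return True
    except Exception:
        for undo in reversed(compensations):  # compensate in reverse order
            undo()
        return False

log = []

def fail_payment():
    raise RuntimeError("payment service is down")

ok = run_saga([
    (lambda: log.append("reserve stock"), lambda: log.append("release stock")),
    (fail_payment,                        lambda: log.append("refund")),
])
print(ok, log)  # False ['reserve stock', 'release stock']
```

Unlike a real transaction, a saga gives no isolation: other requests can observe the intermediate states, which is exactly the "complicated symphony" the quoted article warns about.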

~~~
rch
That sounds more like one example of a suboptimal design than a valid
indictment of the whole paradigm.

------
rndn
I think a cargo cult also has something to do with signaling, sort of like a
status symbol ("They can't really be Y if they are not X!" -> "Look at us how
X we are!"). It's a self-reinforcing meme that is used as a heuristic for
value estimation, but usually fails catastrophically because of its heuristic
and self-reinforcing nature.

~~~
debacle
I think that's more a result of the kind of person that adopts a cargo cult
mentality.

~~~
rndn
What do you mean exactly? I mean that the status symbol aspect of a cargo cult
is a crucial part of its self-reinforcement mechanism.

------
mschuster91
I don't get the trend to split up everything as micro as possible.

Use a proper framework like Symfony (or, if like many people all you want is
a CMS, Drupal) supporting MySQL master-slave or multi-master replication and
separation of web frontend and file hosting, host it on AWS (or plain old
dedicated servers), put in Cloudflare if you're scared of DDoS kids, and be
done. If you need SSO, use the provided SSO plugins, or an LDAP backend if
SSO is only required across the platforms you provide.

Said architecture can be built and run on a single server and if you're
dealing with spikes you just spin up a couple frontend servers and be done.

~~~
scient
I don't think you get it... In most cases we are talking about applications
beyond the complexity of a configurable CMS. Also, assuming that replication
solves your (potential) database bottlenecks does not account for write-heavy
scaling.

~~~
mschuster91
Many people (and especially consultants) exaggerate the customer's demand
for a complex system.

I have seen people shipping SAP to small brick-and-mortar stores with a tiny
webshop...

------
ed_blackburn
Agree with the premise. An excellent example of premature optimisation, or
YAGNI. An alternative is to factor your code by business capability / bounded
context, as microservices endorse. Factor the code as such, but don't deploy
the logical partitions as physical ones.

Keep it all in one deployable artefact, in-process, for as long as you
possibly can. Use an in-proc message bus first; don't dive into RabbitMQ
until you know you need it. As soon as you take on the infrastructure cost of
HTTP, message queues, monitoring, a ballooning fleet of boxes / VMs, and
deployment complications, you'll notice the spike in operational expenditure.

Grow your architecture organically.
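
The in-proc-bus-first idea can be sketched in a few lines. This is only an
illustrative shape, not tied to any particular framework; the topic names and
payloads are made up:

```python
from collections import defaultdict

class InProcBus:
    """Tiny synchronous pub/sub bus with the same API shape you'd
    later back with RabbitMQ."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._handlers[topic].append(handler)

    def publish(self, topic, message):
        # Synchronous, in-process delivery. Swapping in a real broker
        # later only changes this method, not the calling services.
        for handler in self._handlers[topic]:
            handler(message)

# Usage: two "logical services" inside one deployable artefact.
bus = InProcBus()
received = []
bus.subscribe("order.placed", received.append)   # fulfilment service
bus.publish("order.placed", {"order_id": 42})    # ordering service
```

The point of keeping the `publish`/`subscribe` shape from day one is that the
logical partitions already communicate through a boundary, so promoting that
boundary to RabbitMQ later doesn't require rewriting the callers.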

------
k__
If I learned one thing in Software Engineering it's "modularization matters
most". And microservices seem to be the epitome of this concept.

If you have to work with different people, you need a way to minimize
dependencies between them.

Also, the more encapsulated things are, the less the starting skill of a
person matters. You just need people who get things done. Later you can switch
out the bad modules easily. Which is a huge economic factor.

I can't count the hours I've spent fixing horrible monoliths, or the years it
took to replace them.

But if there is a horrible microservice, you can replace it in a fraction of
the time.

~~~
jasonkester
Sadly, often the reason you need to work with different people is that you've
architected your product to be so complicated that it needs lots of guys
working on it.

It's amazing how much a team of one can do if you don't saddle said team with
arbitrary complexity such as a microservices architecture. Maybe you'll need
to scale to that level one day. But you'll definitely want to ship. One guy
and a sane architecture can do that.

------
oldmantaiter
TL;DR Microservices have their place, and can be useful for certain
environments, but they are not a fix-all.

They can be pretty nice for multi-tenanted development environments. Sure,
you could use any of the other isolation techniques, but being able to
provide an environment that can be started quickly (and somewhat easily,
depending on the rest of the services required) is valuable. Not to mention
that the popularity of container systems and their ease of understanding
(Dockerfile vs. RPM spec) means that other people can hack away at the dev
environment without having to know the ins and outs of building proper
packages (although they should learn).

Now, for a production environment, I would never move to a microservices
architecture for the reasons listed in the article and my own dislike for
adding overhead and complexity to solve "issues" that can be easily dealt with
using tools that have existed for years (proper packaging with dependencies
etc..).

------
copsarebastards
I agree with the YAGNI-ish approach, but talking about micro services as if
they provide modularity is entirely off-base. The decision to use micro
services should be driven by _scalability_ , not _modularity_. If you're
saying that it's going to be terrible to make changes to your codebase, simply
bolting micro services on top of that is going to make things worse.

A well-designed micro service architecture is modular in that each micro
service is basically a nice wrapper around either a query or an update. But
you can organize your application into an API of queries and updates without
micro services.
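
That organization can be sketched in-process. The module and data here are
invented for illustration; the point is that callers only ever see an
explicit API of queries and updates, never the storage behind it:

```python
# A module exposing an explicit API of updates (writes) and queries (reads).
# Callers depend only on these functions, so the module could later be
# extracted into a service without touching any call site.

_articles = {}  # storage detail, hidden behind the API

def create_article(article_id, title):
    """Update: register a new article."""
    _articles[article_id] = {"title": title, "votes": 0}

def upvote_article(article_id):
    """Update: record one upvote."""
    _articles[article_id]["votes"] += 1

def get_article(article_id):
    """Query: read an article's current state."""
    return dict(_articles[article_id])  # return a copy, not internal state

create_article(1, "Microservices cargo cult")
upvote_article(1)
```

The modularity comes from the query/update boundary itself; whether a network
sits behind it is a separate, later decision driven by scale.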

To be honest, if you don't at least intuitively understand this, you have no
business architecting a production system large enough that this matters.

------
lectrick
Many of the advantages of microservices can be achieved by refactoring your
monolith code to be less monolithic.

I would suggest using functional styles wherever possible, plenty of isolated
unit testable code, and a hexagonal architecture
[http://alistair.cockburn.us/Hexagonal+architecture](http://alistair.cockburn.us/Hexagonal+architecture)
that pushes all the I/O, mutation, side effects, etc. to the very boundary of
your code. Also see Gary Bernhardt's "Boundaries" talk for more interesting
thought in that vein
[https://www.youtube.com/watch?v=yTkzNHF6rMs](https://www.youtube.com/watch?v=yTkzNHF6rMs)
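
A minimal sketch of that "push side effects to the boundary" shape; the
business rule here is invented purely for illustration:

```python
# Functional core: pure, isolated, unit-testable with no mocks.
def apply_discount(order_total, customer_is_loyal):
    """Pure business rule: 10% off for loyal customers on orders over 100."""
    if customer_is_loyal and order_total > 100:
        return round(order_total * 0.9, 2)
    return order_total

# Imperative shell: all I/O and mutation live at the very boundary.
def checkout(order_total, customer_is_loyal, save=print):
    total = apply_discount(order_total, customer_is_loyal)  # pure call
    save(f"charged {total}")                                # the only side effect
    return total
```

The pure function can be tested exhaustively without any test doubles; only
the thin shell needs an integration test.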

------
agentultra
There is also a distinct lack of tools for debugging coordination and
scheduling problems in a _micro-service_ (or, as they used to call it in my
day, Service Oriented Architecture) system.

In an asynchronous RPC scenario, does Microservice A listen for the
appropriate response message from Microservice B before continuing work on
Request X99? Does it respond to all messages in the appropriate order? What
happens in a cascading failure scenario when the back-end system Microservice
B relies on is taking too long due to bad hardware/burst traffic/DDOS/resource
contention?

Do you have tools that can analyze your program for critical sections where
you need explicit locking and ordering mechanisms? Do you have analysis tools
that provide guarantees that your fancy distributed architecture is
complete/correct?

These are just a sample of the things OpenStack has to think about -- a
micro-service architecture for managing, orchestrating, and authenticating
access to data-center resources. It's a hard, hard problem and an ongoing
effort by thousands of well-paid engineers across the globe to get right.

I have no doubt that a small team of talented developers could stand up a
system of APIs around their core services to get a system running. However I
can guarantee that they will be making huge trade-offs in terms of correctness
and reliability.

At least with a _monolith_ (is that a pejorative?) application you do have
tools to analyze and debug your code that work well and have been battle-
tested for a couple of decades. I suspect you would produce fewer bugs if you
were constrained for developer talent and time.

------
jscruz
Microservice architecture is good for evolving a monolith project that needs
to scale when dealing with a huge number of calls. It's great to be able to
experiment with different implementations and technologies and to do A/B
testing. It enforces single-responsibility modules at the architecture level,
avoiding bad practices if you are dealing with different/remote dev teams.

There are challenges, though. One of them is that when implementing
microservices you need a cultural change in your business to be able to adapt
to the change. You need to deal with a more complex architecture, you need to
implement your own solutions for managing that architecture, spend time
defining a devops culture if there is none, ...

Businesses are usually pretty different from one another, so you cannot
expect the same solution to work for your problems (for example, using the
Netflix approach as a silver bullet).

I've heard "microservices" presented as the goal so many times, just as "big
data" gets presented as the solution. Again, we should analyze what our
problem is and what we want to solve before reaching for the new shiny thing
and making everything over-complicated.

~~~
debacle
Do you think you can do microservices without devops? I see the correlation
you're making but I think a more traditional organizational structure can work
fine.

~~~
jscruz
I don't think it's impossible, just harder. As long as you have great
collaboration and communication between teams, you can probably achieve the
same thing.

------
cookiecat
Martin Fowler identified a lot of the same tradeoffs in this video:
[https://www.youtube.com/watch?v=2yko4TbC8cI](https://www.youtube.com/watch?v=2yko4TbC8cI)

One benefit I haven't seen mentioned yet: microservices are effective at
reducing the mental "page size" when working on any particular part of the
system.

------
debacle
I think some of these points are gross exaggerations.

> You immediately increase the things your servers have to do tenfold.

Really? It's _ten times_ as much work to implement microservices?

> Personally, I’ve seen slowdowns on the order of 1000% when moving to
> microservices (yes, ten times slower).

Then you implemented your microservices _wrong_.

I think that the author's understanding of the goals and purposes of
microservices is maybe a bit misguided. Microservices are about front-loading
scaling problems, not about having a clean architecture or smaller codebase.
If you never need to scale, you don't need microservices (but you're probably
wrong).

The flowchart at the end of the post really underscores for me that this
author's argument is not genuine. He holds up this shibboleth of a
"monolithic" architecture, something that doesn't really exist in 2015.

~~~
talonx
> Really? It's ten times as much work to implement microservices?

No, it says the _servers_ have to do tenfold more work, not _you_ to implement
them. Whether that's correct or not is another discussion.

------
simonpantzare
The application I work on most of the time is largely monolithic and usually I
have no problems with that. Some parts have been extracted to their own
codebases and are deployed separately because of performance reasons.

We also separated the main backend/API codebase from the frontend, mostly
because the frontend devs prefer to work within the Node ecosystem instead of
Python/Django, and so that we don't have to think too much about
synchronizing deployments. The backend tests also take quite a long time to
run compared to the frontend tests, so the separation is nice for the
frontend devs that way too.

What I sometimes would like better infrastructure support for, though, is
throwaway prototypes/projects that can live in their own codebases and have
access to all the regular databases, blob storage and so on, as well as
databases that are private to the prototype and that I can experiment with
freely, with no risk of doing something bad to the important
databases/storage.

I would also like these prototypes to be able to register themselves with the
load balancer to take care of everything under `/halloween-experiment/` for
example and have the load balancer add headers like `X-UserEmail`, `X-UserID`,
`X-IsEmployee`, and so on so that I don't have to implement
authentication/authorization in every prototype.
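
A prototype behind such a proxy only has to trust the injected headers. A
sketch using the header names from above; everything else (the function, the
`"true"` encoding) is an invented assumption, not a real API:

```python
def current_user(headers):
    """Identity from proxy-injected headers; None when unauthenticated.

    The load balancer is trusted to strip these headers from outside
    requests, so the prototype never handles raw credentials itself.
    """
    email = headers.get("X-UserEmail")
    if email is None:
        return None
    return {
        "email": email,
        "id": headers.get("X-UserID"),
        "employee": headers.get("X-IsEmployee") == "true",
    }

# A request that passed through the proxy:
user = current_user({"X-UserEmail": "a@example.com", "X-UserID": "7",
                     "X-IsEmployee": "true"})
```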

Today these types of prototypes need to live next to the "important" code so
that they can use the same CI pipeline and easily be made public or visible to
employees and use real data.

I'm following projects like [https://getkong.org/](https://getkong.org/) with
interest, and together with everything happening around Docker such as EC2
container service or Kubernetes, as well as projects for service
discovery/configuration like etcd or Consul, it feels like we're getting there.
There are just so many projects to keep track of, and you need to figure out
how to make them all part of your CI pipeline. :)

------
junto
I love his little workflow diagram embedded in the article:
[http://media.korokithakis.net/images/microservices-cargo-
cul...](http://media.korokithakis.net/images/microservices-cargo-
cult/flowchart.png)

~~~
macca321
I wonder if the box on the right should read "You need automated regression
tests"

~~~
bbrazil
It should read "split up your monolith".

Just because one extreme isn't working for you doesn't automatically mean the
other extreme is the right solution.

------
pjmlp
Not only that: microservices are just Sun RPC and CORBA reborn, and we all
know how those worked out.

------
evantahler
Develop as a monolith, deploy as services: engines.

[http://tech.taskrabbit.com/blog/2014/02/11/rails-4-engines/](http://tech.taskrabbit.com/blog/2014/02/11/rails-4-engines/)

------
jnet
"As with everything, there are advantages and disadvantages"

The author focuses on microservices; however, I think there is a larger point
to be made. It is not that some particular architectural pattern is bad or
good, it's that when you don't fully consider the requirements of your
application and apply some pattern or technology just because it's the hot
item this week, you are going to end up with problems. This has less to do
with microservices, in my experience, and more to do with less-technical
managers making decisions for a project they don't fully understand.

------
lolive
Damn! Now what is left as the next big thing?

~~~
StavrosK
Whatever was the next big thing after SOA! Just look at the Slashdot archives
for 1990, we can probably figure it out quickly.

~~~
josteink
A micro-services service-directory micro-service?

Maybe we should call it MDDI?

------
tibbon
At my job we've avoided microservices thus far. 90% of our deployments are
just to Heroku. Every now and then I lament in my mind that we aren't using
the coolest new tools (Docker, microservices and all the things that come
with), but what we have works _really well_ , and we can easily scale up by
10x and things will still work.

Every time I think of the mess it would cause to break things up into
microservices, I'm glad we aren't doing it -- yet. When the time comes, we'll
roll out services as needed, but that day isn't today.

------
agentgt
The problem with microservices for us has been the composition of operations.
Yeah, we use the Rx* observable patterns and they help, but the code is still
unintuitive for new developers if the language is basically
procedural/imperative. Even with languages like Scala it still gets
confusing. And even in a language where threads are cheap (golang), you still
have to compose the operations.

I have been meaning to see if there are microservice frameworks for Haskell
similar to Hystrix (which is what we use).
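
The composition-plus-resilience shape being described can be sketched with
asyncio. The timeout-and-fallback wrapper is roughly the guarantee a
Hystrix-style library provides; the service names and timings are invented:

```python
import asyncio

async def fetch_user(user_id):
    await asyncio.sleep(0.01)          # stands in for a fast network call
    return {"id": user_id, "name": "alice"}

async def fetch_recommendations(user):
    await asyncio.sleep(10)            # a downstream service that is too slow
    return ["widget"]

async def with_fallback(coro, timeout, fallback):
    # Hystrix-style guard: bound the call and degrade instead of hanging.
    try:
        return await asyncio.wait_for(coro, timeout)
    except asyncio.TimeoutError:
        return fallback

async def profile_page(user_id):
    user = await fetch_user(user_id)   # sequential: recommendations need the user
    recs = await with_fallback(fetch_recommendations(user), 0.05, [])
    return {"user": user, "recommendations": recs}

result = asyncio.run(profile_page(1))
```

Without the guard, the one slow downstream call stalls the whole composed
request; with it, the page degrades to an empty recommendation list.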

------
lobster_johnson
Yet another article that misses a huge aspect of microservices: _Reusability_.
(I'm going to borrow from an older comment [1] here.)

Almost all of the swathe of microservices we've developed internally are
general-purpose. We've built a dozen or more user-facing apps on top of them.
If I wanted to build a new app today, I would typically sit down and write a
Node + React app, configure some backends, and I'd be done. I don't need to
write a new back end because I can just call our existing services.

If you look at what a modern web app is, most apps these days are actually
stupidly similar. They typically need things like:

* User accounts

* Authorization with existing OAuth providers (e.g. Facebook)

* Some kind of database to store and search structured content

* Notifications (email, text, push)

* Storing images or video

* Syncing data from external sources

* Analytics

We have generalized, reusable microservices that do all of this.

Let's say I want to build a HN-type link aggregator with comments. I will use
our document store to store the links and the comments in a nice hierarchical
structure. I will use our login microservice that mediates between an identity
data model and an OAuth account registry. I can use our tiny microservice
devoted to recording up-/downvotes. I can use our analytics backend to record
high-level events on every UI interaction.

I can write this without a single new line of backend code.

This ability to "pick and mix" the functionality you need is the real,
largely undiscovered beauty of microservices, in my opinion. It's the same
idea that makes AWS attractive to many people: you're building on the
foundation of thousands and thousands of hours of work, and reusing it.

We just whipped up a new site recently where 95% of the work was purely on the
UI, since all the backend parts already existed. The remaining 5% was just
code to get data to the system from a third-party source, plus some
configuration.

Reusability requires that you plan every microservice to be flexible and
multitenant from day one. It's a challenge, but not actually a big one.

Is it possible to do this monolithically? Sure. I would be afraid of touching
such a beast. We have very few issues with code devolving into "legacy", for
example; the strict shared-nothing APIs ensure that half-baked, client-
specific hacks don't sneak into the codebase. If anything messy happens, it
happens in the client app, and that's where it should happen. Eventually
you'll throw the app away, but the backends remain.

------
wiremine
Of course microservices are just another tool in the toolbox.

I think what's frustrating is the lack of support in moving from a monolith to
a microservice architecture. I haven't built a lot of them myself, but it
feels like you're rolling your own framework/architecture whenever you need to
make the transition. Is that anyone else's experience, or is it just not
possible to codify best practices?

~~~
dmux
Currently in the process of separating several monolithic applications into
services. We've found success in identifying individual "resources" that are
present and common in each of the monoliths. Once identified, we implement
that resource end-point in our service and swap out the existing code with
calls to the service. Divide and conquer.
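
That swap is easiest when the resource is first hidden behind a small
interface inside the monolith, so call sites can't tell a local
implementation from a remote one. A sketch with invented names:

```python
# Step 1: inside the monolith, hide the resource behind a small interface.
class LocalInventory:
    def __init__(self, stock):
        self._stock = stock            # in-process data, as in the monolith

    def quantity(self, sku):
        return self._stock.get(sku, 0)

# Step 2: a drop-in replacement that calls the extracted service instead.
class RemoteInventory:
    def __init__(self, http_get):
        self._get = http_get           # injected transport (e.g. requests.get)

    def quantity(self, sku):
        return int(self._get(f"/inventory/{sku}"))

# Call sites depend only on the shared `quantity` shape, so the swap is
# a one-line change where the object is constructed.
def can_ship(inventory, sku, wanted):
    return inventory.quantity(sku) >= wanted
```

Injecting the transport also makes the remote variant testable with a fake
before the real service endpoint even exists.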

------
Randgalt
The "micro" in microservices is the issue. It reminds me of the "No-SQL"
movement. The truth is that EVERYONE has a multi-tiered architecture. The only
question is how many tiers you need. It's always more than 1.

~~~
dragonwriter
I think one thing that microservices and some other models correctly point to
is that the "tiers" of the traditional 2-tier, 3-tier, and n-tier
architectures aren't universal, and that there are good reasons for some
systems to have components arranged in some manner other than stacked,
front-to-back layers.

------
aterreno
If you design your software with a bad architecture you will have problems;
if the services and their data are 'cut' in the wrong way you will get
performance (and other) problems.

That's valid for functions, state, APIs and service stores.

------
dorfsmay
The thing that nobody addresses, and especially not the microservices gurus,
is how you know where and what to split into microservices.

When does splitting a service add enough value that it is worth the cost of
performance and added complexity?

~~~
parasubvert
We should grab lunch to chat about this.

Simply put, you split a microservice when you need to split teams.

Microservices aren't a solution to a technical problem, they're a solution to
a social/organization problem (described by Conway's law).

------
lkrubner
I think it is fascinating how an idea can emerge with a fuzzy meaning and, in
the space of 2 years, become rigidly associated with a narrow set of
technologies which will surely be much more temporary than the idea itself,
thus forcing people, after 3 or 4 more years, to come up with a new word for
roughly the same idea.

In the summer of 2013 I was working at Timeout.com and we were trying to
reinvent the architecture of the site. Timeout.com had spent several years
using the PHP framework Symfony to build a massive monolithic CMS, and the
thing was a disaster. It was shockingly slow. If you ssh'ed into the
datacenter and tested the response time of the system under ideal conditions,
from one computer in the data center to another, the average response time
was 10 seconds!

This led to a long internal debate. I advocated for what I called "An
architecture of small apps", because at that time none of us had ever heard
the word "microservices". I did not hear that word until March of 2014, when
Martin Fowler wrote his essay:

[http://martinfowler.com/articles/microservices.html](http://martinfowler.com/articles/microservices.html)

But back in the summer of 2013, with permission, I published the whole
internal debate that we had had at Timeout.com:

[http://www.smashcompany.com/technology/an-architecture-of-
sm...](http://www.smashcompany.com/technology/an-architecture-of-small-apps)

You will notice that you don't see the word "Docker" in my essay, nor do you
see it in Martin Fowler's essay. And in my essay, I suggest we use ZeroMQ to
bind our apps together.

But 2 years after we had our internal debate, I've noticed that more and more
people now associate "microservices" with a very specific set of
implementation details: Docker, Kubernetes, HTTP and Service Discovery.

I acknowledge that these 4 technologies can be combined in very powerful ways.
I currently work at the startup incubator run by NYU, and I get to eavesdrop
on what the folks at lsq.io are doing, since they sit next to me. And I get
that Pelly is a frighteningly smart guy doing extremely cutting-edge stuff. I
totally admire everything they are doing.

However, I personally feel that I'm following a microservices strategy, and
yet what I'm building is still a lot like what I described in my essay of
2013.

July 30th, 2013 [http://www.smashcompany.com/technology/an-architecture-of-
sm...](http://www.smashcompany.com/technology/an-architecture-of-small-apps)

------
sre_ops
Example of microservices that just work (tm): the Internet.

------
datamiller
monolithfirst.com

fight the future

