
MonolithFirst - r4um
http://martinfowler.com/bliki/MonolithFirst.html
======
ecoffey
For me the big takeaway is this:

Refactoring across the call stack is orders of magnitude easier than
refactoring across a socket.

Sacrificial or not, you can still write the Monolith as "Service Oriented";
it's just that the boundary is the call stack. Especially if you're
comfortable with IoC and DI.

Building on the latter, I've had success stubbing out hardcoded concepts that
I know will come from an as-yet-unwritten service in the future. Then you
start pushing those hardcoded things "down and out"; e.g. it was hardcoded in
App A, but now App A requests it from App B, and App B just has the hardcoded
thing.
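
A rough sketch of that shape (my illustration, with hypothetical names): App
A codes against an interface, a hardcoded stub satisfies it today, and a
remote client can replace it later without touching callers.

    // The boundary App A codes against; only the binding changes over time.
    public interface FeatureFlags {
        boolean isEnabled(String flag);
    }

    // Step 1: the hardcoded thing lives in App A.
    final class HardcodedFlags implements FeatureFlags {
        @Override
        public boolean isEnabled(String flag) {
            return "new-checkout".equals(flag); // the hardcoded thing
        }
    }

    // Step 2: App A asks App B instead; App B may still answer from a
    // hardcoded table until the real service exists.
    final class RemoteFlags implements FeatureFlags {
        private final java.net.http.HttpClient http =
                java.net.http.HttpClient.newHttpClient();
        private final java.net.URI base;

        RemoteFlags(java.net.URI base) { this.base = base; }

        @Override
        public boolean isEnabled(String flag) {
            try {
                var req = java.net.http.HttpRequest
                        .newBuilder(base.resolve("/flags/" + flag)).build();
                var res = http.send(req,
                        java.net.http.HttpResponse.BodyHandlers.ofString());
                return Boolean.parseBoolean(res.body());
            } catch (Exception e) {
                return false; // fail closed if App B is unreachable
            }
        }
    }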

~~~
twerquie
> Refactoring across the call stack is orders of magnitude easier than
> refactoring across a socket.

That's assuming that things haven't become deeply coupled within the "call
stack". Separating things across a socket often forces a separation of
concerns, which does tend to make refactoring easier.

~~~
bad_user
In my experience, communication across a socket does not force a separation
of concerns. It doesn't even encourage it.

Heck, because asynchronous communication in a non-deterministic environment is
so hard to deal with, my experience is that such communications encourage
shortcuts to be taken, so yes, I think it encourages tight coupling.

The only thing that somewhat encourages a separation is having different
people responsible for different modules: because people are selfish, they'll
fight for their components to have fewer responsibilities, not more. So it
becomes a territorial thing. But this happens only if you have seniors who
know what they are doing; otherwise rookies or less competent folks end up
cooperating to "get things done".

~~~
AnonJ
Weird reasoning, that. I hardly see how a selfish, "territorial" and
noncooperative approach could benefit anybody. Sure, people might cooperate,
but that doesn't mean they will take shortcuts and mess things up.

------
krisdol
The companies/stacks Fowler encounters are generally in a problematic state.
ThoughtWorks, like other consulting companies, is generally hired when things
are already going wrong. If they are hired to assess a broken monolith stack,
the refactoring to microservices is naturally going to yield positive results,
and it's easy to come to the conclusion that microservices are an improvement
when refactoring from a monolith stack. When they are hired to assess a broken
microservices stack, it's easy to come to the conclusion that starting with
microservices is broken. What they don't usually see is the hundreds of
instances of microservices (and monoliths) working as intended, as there is
typically no need to call in a very expensive consultant in those cases. How
many times in the case of badly-organized microservices was the solution to
migrate to a single monolith? I imagine the solution is more often to
reorganize the services. If that's true, then one cannot say that
microservices-first fails -- just that badly-organized microservices-first
fails.

Starting with microservices first, especially in a small team of developers
(but more than 1), helps things move much quicker than they would with
everyone sharing responsibility for the same codebase. Organizational
structure tends to reflect its products' structure. Microservices require
that the split of responsibilities between teams or developers roughly
matches the split of responsibilities among services. Otherwise, you're just
working on micro-monoliths.

An actual study needs to be performed before deciding on a "Right Approach"
the way this piece does.

~~~
room271
This. The key thing missing from Fowler's blog post is context. If you are a
small team/company, then microservices are probably not sensible unless you
are highly skilled - the overhead is too high and the benefits are smaller
too.

If you are a bigger organisation though, with lots of teams, microservices are
required to decouple teams and enable agility/exploration in products and
services.

Lastly, it's easy to advocate the monolith ('only to start') when you're a
consultant, as you've left by the time it becomes a problem. Or, you get
called in down the line when it's all gone pete tong because the monolith
'prototype' has become a monster.

Note, I've spent the last two years working on microservices (which has
involved a big learning curve but is now yielding benefits) and also old
monoliths (that have sucked up so much time it's unbelievable).

~~~
alxndr
OT: what does "it's all gone pete tong" mean?

~~~
DanBC
"It's all gone a bit wrong", but normally in a dramatic way.

[http://en.wikipedia.org/wiki/It%27s_All_Gone_Pete_Tong](http://en.wikipedia.org/wiki/It%27s_All_Gone_Pete_Tong)

------
mpweiher
I had great success converting a "microservices" architecture to a monolith at
the BBC back in 2003/2004. The result was ~100-1000 times faster (speed was
an issue with the original), had a fraction of the code, used 1 machine
instead of a dozen, was more maintainable, had effectively zero failures over
several years versus several a day, was trivial to install (copy this jar over
here) etc.

~~~
k__
Wasn't Plenty of Fish run on two machines? And GitHub Pages?

If you're building an MVP, you're probably better off with the monolithic
approach. It's easier to develop, and 99% of businesses can probably scale
enough with bigger boxes.

~~~
tim333
Plenty of Fish ran for the first 8 months on the guy's home computer.

~~~
tim333
Also impressively in 2008 they were serving '30+ million hits per day, 500-600
per second' on 5 machines with one engineer. They were the '#13 website in the
United States' for hits at the time. I think their machine / engineer
situation has expanded substantially since - fancier matching algorithms and
instant messaging complicated things.

[http://blog.codinghorror.com/my-scaling-hero/](http://blog.codinghorror.com/my-scaling-hero/)

------
lmm
I once worked for a company that was sunk by a microservice-first
architecture. The "architect" in charge was a big Fowler fan who would quote
him to justify every decision. Every developer knew the architecture was wrong
(and many said so - but the architect had the authority to overrule them,
which is maybe the real problem).

I guess it's good to finally see some acknowledgement, but this is too late
for that particular organization. Beware of hyped architecture bandwagons?

~~~
ubercore
Any specifics you can share about why the architecture was so obviously wrong?
This is a really interesting topic for me!

~~~
lmm
It really slowed down development because it made the feedback loops really
long - you might have to start up five different JVMs, each of which would
take a while to register itself with Hazelcast before you could start the
next one that depended on it, all before you could test your change. The
service boundaries we had initially come up with weren't always correct (as
you'd expect, really), but it was a massive effort to refactor something
across a service boundary, because each component was released and versioned
separately.

Maybe there are ways to work in such an environment, but if so we didn't find
them and no-one had the experience to know them.

~~~
ecoffey
> because each component was released and versioned separately.

For SOA a mono-repo with synchronized releases is a huge help.

In one commit you can refactor a shared module and update all apps that
depend on it. And you know when you deploy to staging or prod that all
machines got the latest code.

~~~
nostrademons
At what scale? At Google we had to version everything _within_ a commit,
i.e. code that was pushed to different binaries had to stay backwards- and
forwards-compatible with the previous version. Your code had to work even if
it was talking to another server that was stuck on the previous commit, or
even several releases back.

The reason is that when you have 1000s of machines, the push _will_ fail for
some. Either the machine will be offline and out of service when the new
version is released, or its network connection may be down, or a cosmic ray
may flip a bit and make the process crash when installing, triggering a
rollback. Particularly when handling user data, you need to code defensively
around these and not assume that the server you're talking to has the same
code you just wrote in your commit.

Obviously these problems don't manifest when you're pushing to 1-3 machines,
but if your deployment is that small, why not run it all in-process with a
monolithic app anyway?
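
To sketch the kind of defensive reading this implies (my illustration; the
field names are invented): treat every field from a peer as possibly absent,
because the server answering you may be several releases behind.

    import java.util.Map;
    import java.util.Optional;

    // A record parsed from a peer that may run older code than ours.
    final class UserRecord {
        private final Map<String, String> fields;

        UserRecord(Map<String, String> fields) { this.fields = fields; }

        // Older servers never send "display_name"; fall back to the
        // field every version has sent since the beginning.
        String displayName() {
            return Optional.ofNullable(fields.get("display_name"))
                    .orElseGet(() -> fields.getOrDefault("username", "unknown"));
        }
    }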

~~~
ecoffey
Great point. Different scales need different things, for sure. Even at the
scale where I've found this successful you still have to consider
backward/forward compatibility, albeit at a much coarser grain.

------
derefr
Alternately: design your application as a bunch of Service objects with clear
APIs that make (the moral equivalent of) RPC requests to one another. Do this
all in the same process. Whenever a Service turns out to need separate
scaling, hoist the code for it out (no need to change it) and replace the
"null modem" RPC layer between it and the rest of your code with a real
socket-based gateway.

Hey! You've just invented Erlang!
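
A minimal sketch of that "null modem" (mine, with invented names): calls
already round-trip through an encode/decode hop, so replacing the in-process
loopback with a socket later is invisible to callers.

    import java.util.function.UnaryOperator;

    // The service API; callers never see the implementation directly.
    interface Greeter {
        String greet(String name);
    }

    // The "null modem": requests still pass through a wire-format
    // encode/decode step, but the transport is an in-process loopback.
    // Swapping `transport` for a socket round-trip changes nothing here.
    final class GreeterClient implements Greeter {
        private final UnaryOperator<String> transport;

        GreeterClient(Greeter impl) {
            this.transport = request -> impl.greet(request); // loopback
        }

        @Override
        public String greet(String name) {
            String request = name;                     // encode (trivial here)
            String response = transport.apply(request);
            return response;                           // decode (trivial here)
        }
    }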

~~~
taeric
That "with clear APIs" is where you are going to fall on your face. Hard. And
is the point. At the beginning, you often don't know where the actual useful
boundaries will be. You probably know where you want them, but ultimately that
is just an aspirational point.

~~~
tel
This is where types come in handy. With types you are _constantly_ declaring
APIs, publicly and with confidence, so it's easier when one starts to bear
real load.

~~~
taeric
Oddly, this is just another place this same problem can hit you. Sometimes,
cleanly defining your types is not easy at the outset.

~~~
dllthomas
_" Oddly, this is just another place this same problem can hit you. Sometimes,
cleanly defining your types is not easy at the outset."_

The problem ("you don't entirely understand the space your solution will
inhabit; some of what you build will be wrong") will hit you regardless of how
coupled or decoupled, typed or untyped. The question is, "do your tools help
you adapt", and having defined your interfaces with types your tooling will
quickly tell you - when you change the interface - everything you broke.
Without types, this is tremendously more painful (feeling this in Python
presently).
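
A tiny illustration of the point (names invented by me): once callers bind to
a typed interface, changing it turns every stale call site into a compile
error, so the tooling enumerates exactly what broke.

    // Change this signature (say, long -> a UserId value type) and the
    // compiler immediately lists every caller that needs updating.
    interface AccountLookup {
        Account find(long userId);
    }

    record Account(long userId, String email) {}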

~~~
taeric
This only applies if the only tool you have is your type system. It is not
inconceivable to have a system that verifies you do not pass anything to a
function that cannot satisfy the methods that you call on it - for all
functions you have in your system.

Is this close to 100% inferred types? Yeah. But that is ultimately the point.
Explicitly typing your code can be error prone. And, yes, so can leaving out
all types. As far as I am aware, nobody has found a silver bullet yet.

~~~
dllthomas
Your comment is incoherent.

First, it applies so long as a type system _is an effective tool_ at telling
you this, and so long as that tool is not completely (or at least
substantially) _supplanted_ by other tools. In no way does it have to be "the
only tool" - there are lots of tools I find useful to tell me when things are
going wrong in my code: types, tests, assertions, linting. Even with the
other three, types cover meaningful territory.

Second, as you half note, the system you describe _would be a type system_
(and sounds somewhat similar to core.typed in clojure, though my understanding
thereof is tremendously superficial).

The rest of your comment just seems not to follow.

Type systems aren't silver bullets for keeping agility in the face of changes,
in the same way that hammers aren't silver bullets for driving nails. You can
use them wrong, you can hurt yourself, but you sure want one (... but probably
not the weird shaped one with a loose head over there labeled "Java"...).

~~~
taeric
I was specifically referring to where you said "having defined your interfaces
with types your tooling..." The implication being that types are required for
this to work. I question that implication.

~~~
dllthomas
People are pounding nails with rocks. You say "hammers are just another thing
to smash your fingers." I point out that hammers help you smash your fingers
_less often_ , and you question my implication that hammers are the only way
of reducing finger smashing.

This is idiotic; I'm done with this thread.

------
noenzyme
Experience Report: We just went through the decision to build as a monolith or
via microservices. The original decision was to go with microservices as the
rest of our systems are designed that way.

As the time pressure mounted, the microservices that communicated with each
other naturally combined. The driving force was just the cycle time. Testing
and deploying microservices took longer. Mind you, not minutes vs hours. Just
a few extra minutes makes a difference if you do it often enough.

One decision that gave us confidence we will be able to split the system
back out again was to use the stuartsierra/component library. By using DI we
can be fairly confident we don't build dependencies we aren't aware of. We
simply substitute a client that talks over the network for the one that does
the calculation locally.
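
The library mentioned is Clojure's stuartsierra/component; in Java terms the
shape of the swap looks something like this (my sketch, invented names). The
wiring lives in one place, and nothing else knows which implementation it got.

    // The component boundary; callers depend on this, never on a class.
    interface PriceCalculator {
        long priceInCents(String sku);
    }

    // Today the calculation runs in-process.
    final class LocalPriceCalculator implements PriceCalculator {
        @Override
        public long priceInCents(String sku) {
            return 100L * sku.length(); // placeholder calculation
        }
    }

    // Swapping in a network-backed PriceCalculator later touches only
    // the constructor call that builds this object.
    final class Checkout {
        private final PriceCalculator prices;

        Checkout(PriceCalculator prices) { this.prices = prices; }

        long total(String sku, int qty) {
            return prices.priceInCents(sku) * qty;
        }
    }

    // new Checkout(new LocalPriceCalculator());      today
    // new Checkout(new RemotePriceCalculator(uri));  after the split (hypothetical)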

We are still in the stabilization zone but have already started to split out
services. Code velocity is the driving force for the splitting. Certain
components are well understood and fairly robust; others are still young and
poorly understood. We want to limit our ability to accidentally screw up
something we have already gotten right. So the components that haven't
changed in a while get spun out.

~~~
fmstephe
It must be a sign that I am getting old, but your story of considered
pragmatic compromise warms my heart. :)

------
meesterdude
I've wondered about how to grow a Rails app. Really, you can get pretty far
on one server. I think people go SOA too soon, instead of just trying to
throw more resources at it (which is easier). I mean, Basecamp isn't SOA and
they handle plenty of traffic. Not that everyone is them, of course; but most
aren't Amazon, either.

Really, I have no qualms with a monolith, but having 100 models to look at /
understand is not easy. Chunking them up in some way, even if only by name
(like product_sku, product_image), can really go a long way in understanding
how an app works; that's really what I found attractive about SOA, but I
could certainly do without the socket in between.

~~~
gterrill
I'm in the same situation. About to try the 'component based architecture'
approach: [http://teotti.com/component-based-rails-architecture-primer/](http://teotti.com/component-based-rails-architecture-primer/)

~~~
meesterdude
That looks interesting - it does look like it takes some effort, and does
lead to additional maintenance, but that tradeoff might be worth it for the
right app.

Still, that's a stack extraction, which would work for something like an
admin section, but maybe not so much for things more entwined? And then you
have to adjust the component anytime you want to make a change...

I could see it working out. I would be interested to hear of your results
once you have an assessment of it as an approach.

------
tel
There are two orthogonal concerns with microservices. First is the scaling
aspect. A microservice architecture has many potential scale points, as each
service can be horizontally scaled independently. Unfortunately, while
achieving this is possible in a microservice architecture, it's an enormous
added layer of complexity today.

The second aspect is, I think, more obviously compelling in that microservices
force large scale modularity boundaries into your application. These are at
some level entirely semantic boundaries, but the nature of microservice
isolation forces them to be complete boundaries involving isolation,
serialization, dirty checking, published APIs/interfaces, etc.

This, I feel, is unambiguously fantastic.

The trick is of course that this can be achieved in a "monolith" just the
same. It's merely often not, because people take advantage of _too many_
features of monolithic development: shared memory and shared effect space,
guaranteed communication causing you to weaken interfaces, fast response time
causing you to never be public about the interfaces to a particular
submodule. Together these lead toward spaghetti code and the interlocking
danger of monolithic design.

So, avoid them and make a monolith. Breaking it apart later will be easy if
there are already logical cuts in your design. Don't rely on shared effects,
shared state, or "hidden" API layers. You can use REST even within a single
system.
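
One way to keep those cuts honest inside a monolith (a sketch of mine,
invented names): give each module a single public facade, keep its internals
package-private, and pass data across the boundary by value rather than
through shared state.

    // File: billing/Invoicing.java -- the module's only public entry point.
    package billing;

    public final class Invoicing {
        // Callers get plain values back, never handles to internal state.
        public String invoice(String customerId, long amountCents) {
            Ledger.record(customerId, amountCents);
            return customerId + ":" + amountCents;
        }
    }

    // Package-private: invisible outside `billing`, so no hidden
    // coupling can form against it.
    final class Ledger {
        private Ledger() {}

        static void record(String customerId, long amountCents) {
            // Persist somewhere; the point is that only this module
            // ever touches the ledger's storage.
        }
    }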

------
spydum
I think a lot of people tend to over-estimate how "scalable" they need their
platform to be. Or worse, they spend so much effort on what they think will
be a bottleneck, only to find they spent it optimizing something that could
have run on a single core.

I tend to think monolith first makes sense unless you thoroughly understand
the problem space... even then, a monolithic POC might be worth it just to
confirm things.

~~~
MrBuddyCasino
very related: [http://yourdatafitsinram.com/](http://yourdatafitsinram.com/)

------
vendakka
Conway's Law comes into play a little here. Microservices provide organization
level flexibility at the cost of operational and development complexity. For
small teams and especially at the start of a project, organizational
flexibility is usually not needed. Paying the extra ops and dev cost is
unnecessary and even dangerous.

In the above, when I say microservices incur ops and dev cost, here's what
I'm talking about:

* Debugging tools need to function across machine boundaries.

* Deployments potentially need to be scheduled in multiple stages based on the service dependency graph.

* Developer and test environments become more complex.

* The number of failure modes increases due to the network.

All this means we need more code, tools and processes. This investment is
worth it and even required for large organizations. They usually have the
infrastructure in place and the financial resources to invest. When I was at
Google, building a microservices-based system was so much easier with Borg,
Dapper, etc. This kind of tooling is only now emerging in a usable form in
the open source world.

EDIT: formatting, grammar

------
ianbicking
I think one of the essential reasons to start with a monolith is cultural.
When you have an existing project/product and you are making incremental
improvements to that project, there's a lot of shared understanding (everyone
knows what the product already does), and so a small group can go off and work
in isolation and get some efficiency from that.

When the product doesn't even exist, there is no shared understanding. It
takes constant communication to prioritize correctly, to understand the
purpose of all the pieces, and to detect cases where an implementation is
diverging from the purpose. In a kind of inverted Conway's Law, the
architecture of the application will affect the communication structures of
the team. You don't want the team to communicate like microservices
communicate. You want the team to communicate like a monolith, where everyone
is always in everyone else's business, where conflict is frequent, but
conflict resolution cannot be avoided.

I'm in the middle of a greenfield project where there are lots of little
pieces communicating with each other over message channels. It's not an
architectural preference, it's just how the environment works. Every
single-page-app-style website has at least two services with a message
channel. It's interesting to look at how this can be technically reminiscent
of microservices, but culturally completely different. Those message channels
don't make my project any less of a monolith (just a kind of annoying-to-debug
monolith). But that's because they are deployed together and developed
together, no individual has a responsibility that ends at one of those
boundaries, there's no contract across those boundaries, and there are few
principles applied to the design of those communication channels
(experiential and intuitive principles, I suppose). That team structure, and
that shared relation to the code, feels absolutely right for a new project.

------
EdSharkey
Seems like reality is settling in.

I mean, it makes sense to me: a team should keep its code well-factored but in
a single codebase that's integrated and ready to deploy (as one or more
deployable units) for as long as is tolerable/possible.

I think a trigger for splitting a module off from a monolithic codebase would
be when its value or resource utilization has become disparate enough from the
rest that it deserves its own infrastructure and/or its own maintenance crew.

~~~
artgon
The deciding factor probably isn't technological but rather human. If you want
to keep your teams to 5-10 people and you want to maintain autonomy for teams
as you scale, you'll need to split up your monolith.

~~~
EdSharkey
That seems reasonable. I can think of many human/organizational/rule reasons
that would force the breakup of a monolith representing many
separate-but-related concerns.

------
lobster_johnson
The article misses an important aspect of microservices: They're reusable!

Almost all of the swathe of microservices we've developed internally are
general-purpose. We've built a dozen or more user-facing apps on top of them.
If I wanted to build a new app today, I would typically sit down and write a
Node + React app with no backend code needed because I can just call our
existing services.

For example, we have a microservice dedicated to storing documents. With this
I can create a todo list, a blog app, a Reddit-type link aggregator with
comments, etc. If I need login, there's a microservice that mediates between
an identity data model and an OAuth account registry. If I need an
upvote/downvote system, we have a microservice optimized just for that. We
have services for sending notifications across different transports (email,
SMS); "followings" and sending digests about updates to things you're
following; organization trees; verifying email addresses and mobile phone
numbers and other verification sources; processing photos, audio and video;
collecting events for aggregation in an analytics store; etc.

This ability to "pick and mix" functionality you need is the real beauty of
microservices, in my opinion. It's a huge time saver. We just whipped up a new
site recently where 95% of the work was purely on the UI, since all the
backend parts already existed; the remaining 5% was just code to get data I to
the system from a third-party source.

This does require that you plan every microservices to be flexible and
multitenant from day one. It's a challenge, but not a big one.

~~~
sinzone
Out of curiosity, what management tool do you use to orchestrate all these
microservices?

~~~
lobster_johnson
Right now, a combination of Puppet and a simple custom-built tool written in
Ruby. The tool inspects the metadata from Puppet about which nodes should run
what apps, and the tool uses SSH connections to clone from Git, write config
files, restart daemons, etc.

At the same time, developers run a local Vagrant VM that is configured
identically to the production/staging clusters, and can run every single app
(controlled using the same deployment tool). We also made it simple to mount
an app you're working on, with code hotloading automatically taken care of. If
you want to work on a microservice or an app, you just modify the code on your
local machine and reload the page (or do an API request).

It works really well. It's not perfect, though. Since we're deploying directly
to the servers, every node has to install packages, compile modules, package
assets, etc., and everything has to wait on the slowest node; we'd like to
transition to a system where we build a "release tarball" (or even a Docker
image) once, and then push it to the cluster. Also, we're looking into using
something like Mesos to better orchestrate apps and move away from role-based
nodes; Puppet is way too rigid and too "opsy"; we'd like to use Puppet for
controlling the base OS, and let the microservice/apps world be more dynamic.

~~~
sinzone
Makes sense! Have you considered Ansible?

~~~
lobster_johnson
Ansible doesn't give us any benefits over our current system. (Salt would be a
better choice. Not a fan of YAML templating, to be honest, nor the tight
integration with Python.)

Configuration management systems like Ansible and Salt and Puppet are fairly
rigid; they are basically modular recipes for placing files and starting
services on remote servers. What you get from Mesos (combined with its
frameworks) is something that can run your services, keep them running, and
handle the several state transitions that affect any running application:
Transitioning from one version to another, for example, or restarting a
failing app.

We've tried using supervisord as a stopgap solution, but it, frankly, sucks.
It doesn't follow forks (no cgroups), isn't capable of cleanly reloading its
config, can't do master/worker replacement, and doesn't support syslog in any
meaningful way.

~~~
sinzone
All those help with the infrastructure part; take a look at
[http://github.com/mashape/kong](http://github.com/mashape/kong) for help
with the code.

~~~
lobster_johnson
I don't know Kong very well, but having look at the documentation, feels a
little too much like a framework to me.

The nice thing about Mesos etc. is that it makes almost no assumptions about
what your system is. You pick and mix the functionality you need. For example,
Marathon is basically just a process manager. You wire it up the way you want;
the application can be anything, from a web server to a one-off script.

Kong seems to be "Rails for microservices". Which is probably useful to
someone, of course.

------
bad_user
I do agree with this article. We are working on a project based on a
microservices architecture and my experience matches.

First of all, splitting functionality into multiple services is really hard,
and the first version of your architecture is probably flawed. It's also
really hard to establish the responsibilities of each component - the
boundaries - whereas it is really easy to take shortcuts that invalidate the
modularity or the re-usability of those components.

Of course, these wouldn't be such a big deal, except that refactoring
becomes really difficult, because refactoring now often involves changes in
how these services communicate and moving responsibilities around. Also, we
are often talking about teams of more than two or three people, since two or
three people will almost always choose to execute a monolith first - so in
such teams the responsibilities are often divided between people, with people
having an incomplete view of the whole system, so refactoring across the
whole stack becomes a real bitch - next to impossible actually if the
management or the clients are not acquainted with how software development
works, as the development of new features is always preferred over dealing
with technical debt (non-software folks do not understand technical debt).

Therefore I agree wholeheartedly with what is being said. It's not that
microservices don't work; rather, you need very senior people who know how to
design such systems, and you still have to throw away the first version of
the entire system. And if you're one of those people who think they get it
but have never had a failure, then have some patience, as you'll get there
:-)

------
ShirsenduK
Microservices architecture seems to be the new hotness. I feel it to be yet
another case of premature optimisation. :( For me the best way has been
writing a monolithic Rails app and then writing Rack/Sinatra apps to break
it apart depending upon production bottlenecks. This has helped manage
performance as well as code.

~~~
mdpopescu
I don't understand why this is considered new; Roger Sessions described
something extremely similar in his "Software Fortresses" book back in 2003 -
[http://smile.amazon.com/Software-Fortresses-Modeling-Enterpr...](http://smile.amazon.com/Software-Fortresses-Modeling-Enterprise-Architectures/dp/0321166086/ref=sr_1_1?ie=UTF8&qid=1433350843&sr=8-1&keywords=software+fortresses)

~~~
crdoconnor
Clothing fashions are almost never truly "new" either.

------
rbanffy
Premature optimization is the root of all evil.

If you were able to accurately predict the future so you could know what your
pain points would be as you grew, you would be wasting your talent writing
software. You should be playing the lottery.

~~~
dpark
If you were utterly unable to predict any of the potential pain points in the
things you build, I would seriously question your expertise. The fact that you
cannot predict the future perfectly does not mean that the future is wholly
unknowable.

~~~
rbanffy
It's not that pain points are impossible to predict. It's that you'll get some
predictions wrong. False positives will cost you from the start and false
negatives will bring you problems when they surprise you.

Instead of planning for future predicted problems, plan for change. Embrace
the fact you know little about the future and that both the problems and the
way you tackle them will always be mutating.

~~~
dpark
What you're saying is basically "the only constant is change", which is fine
and dandy, but also kind of a useless truism. You are attempting to predict
the future by building a product/project. You are assuming that the thing you
build will continue to be useful and valuable into the future. By the very act
of building, you are asserting some knowledge about the future.

Sure, you shouldn't go crazy and build a complex and expensive solution you
don't need today and may never need, but it's disingenuous to claim that a
lack of perfect clarity into the future justifies a complete lack of planning
for the future.

------
jcromartie
Evolutionary design doesn't just work, it's the only kind of design there is.
Design is not done in a vacuum. We fool ourselves when we write software,
thinking that we're creating something ex-nihilo, plopping it down in Eden and
declaring that it is good.

~~~
vinceguidry
You're getting at the difference between top-down and bottom-up. Evolution is
bottom up, starting with what exists and looking to improve. Top-down is
starting with what you want to exist and moving things in that direction.

You need both. If all you do is evolutionary, then you lose the sense of why
you're even there in the first place. If all you do is top-down, then you'll
get deep into yak-shaving territory before you know it.

------
sinzone
This is so true in our case. We started building Mashape, the marketplace
for APIs, in 2011. Two years later it had become 100k+ LOC of spaghetti Java
code. We started a big re-write/de-coupling phase in 2014, which also opened
new business opportunities since we were able to spin off some features as
standalone products... and this was not expected, but it definitely created
advantages not only on the code/productivity side but on the business side
too.

We have embraced Nginx and built KONG[1] as the main API Gateway for managing
our microservices. It made our transition much faster and easier since we were
able to orchestrate common functionalities across services, such as logging
and authentication, in a few lines of code.

[1] A month ago we released Kong open source:
[https://github.com/Mashape/kong](https://github.com/Mashape/kong)

------
nijiko
Do whatever feels best for your company at the moment.

Monolith - Easier and more straightforward: less thought on architecting the
puzzle, more on solving the issue at hand. Later down the line this will
cause pain points; if you can justify that pain with momentary momentum, then
this is the option for you.

Multi-tier - A decoupled monolithic application; generally happens after the
second iteration of a monolithic application.

Microservices - Modular, and requires more thought about the interactions and
architecture of the system. More thought must also be put into the deployment
and scaling of the system. Eventually you will have to do this. It is very
obvious this is the natural progression of things as something grows.

------
swanson
There was an interesting discussion on a recent Bikeshed Podcast episode about
Monolith vs Microservices, featuring DHH:
[http://bikeshed.fm/14](http://bikeshed.fm/14) \- worth a listen (you can skip
the bits about ActionCable if you aren't a Rails user!)

~~~
cdnsteve
Great listen!

------
perplexes
This fits in very neatly with Casey Muratori's "Compression Oriented
Programming", which is essentially: write your usage code first, keep YAGNI
in mind, then refactor.

I was once asked in an interview whether, on a new project, I would start with
a monolithic app or some sort of SOA. ("Microservices Architecture" is the new
SOA)

I answered that I would start with a monolith because usually you're trying to
find product/market fit as quickly as possible, and having an SOA would likely
slow you down due to its upfront cost. (How many services? Any redundancy?
What do they do? How many databases? How do you keep them up? How do you
diagnose when they're failing? etc)

They responded that they always do SOA, that the benefits are clear, that
monolithic apps are idiotic, etc.

I didn't want the job... but I was confused about our differing opinions.

What I've learned since then is basically: are you constructing a building, or
making an art installation?

Construction is thousands of years old. Contractors have huge tables of how
long each part of the process takes, in what order, down to the quarter-hour
(in some cases) and are fairly accurate in their estimations. (They're still
hilariously wrong on occasion either in time or budget, building _anything_ is
pretty difficult)

In this case, architecting SOA would work out since you've done it before. You
know about how long it takes, what the pitfalls are, what support
infrastructure you need, etc.

When you're making art, making something new, with new materials, without a
manual, with only some best practices in mind, time becomes essentially
unbounded.

Upfront architecture in this case would be poorly suited to the situation -
you'll probably end up changing it a lot, and each time you introduce a bit of
"rigor" to the system it becomes a bit more difficult to change. Especially
over socket boundaries and different API versions.

I also feel like bad code has a survivorship bias - you only hear about it
because the company took off. To get the company to take off, the code perhaps
was necessarily rushed just so the company could stay around long enough to
make money.

"Ah, but if we could have done it right in the first place!"

You don't hear about the companies who die, no matter the quality of their
code.

------
tallerholler
As someone who is starting a new project and thinking about microservices
first (and for the first time in general), this is interesting. I'm wondering
if there are any success stories so far for this case? I like the idea of
having just a few coarse services (e.g. users, content, gateway, message
queue, web/client).

Another interesting thing is how to handle microservices orchestration,
development, and deployment early on without a significant investment of
time. We've been looking at docker/docker-compose and it seems like it should
handle it, but it also seems more geared towards multi-container
single-service apps. I'm wondering if anyone else is using the same
technology and has input? Maybe as things develop it will handle
building/managing/orchestrating multiple decoupled services.

------
zefei
I really hate people advocating microservices/libraries because they just
migrated and "everything got much better". No, everything got much better
because the known/actual problem domain changed and the system was
re-adjusted accordingly. When you start with very little knowledge of the
problem domain, any fine-grained architecture is premature optimization, and
what you really want is to rapidly expand your understanding of the problem.

Projects can fail in many ways; not trying to understand the problem better,
and not re-adjusting afterwards, are typical pitfalls. Migrating from
monolith to microservices is just a natural transition between SOME stages,
and it shouldn't happen until you hit those stages. You may hit those stages
very early, or sometimes never.

------
sago
I think there's an economic angle here.

"Bad" code (like a monolithic system - allow me to beg the question a little)
can be cheaper to write, in many cases, than good code. But it is more
expensive to scale, maintain, extend and debug.

But it means there's a lower investment of time to get a product out.

So I'd expect monolithic systems to have a higher success rate being converted
post-hoc, because the systems that get that far have proven their worth.

Investing more to build a system 'right' isn't necessarily a good move, if
you're not sure of the return.

Or, put another way, investing 5X in 5 cheap-hacky products, and then spending
Y >> X to make the one that works un-hacky is often a better strategy than
spending that Y up front.

------
joslin01
This is pretty much in line with my philosophy of functionality first _then_
infrastructure. The problems that inevitably sprout while building the
functionality will influence future infrastructure / plumbing decisions. Of
course, some care has to be taken to not entangle everything, but this
shouldn't be too hard if you take a simple services-oriented architecture or
even forego services and just store all your functionality in model classes.
Regardless of the approach, the first thing that should be written are the
tests that validate the functionality. Getting those passing is the highest
priority; _how_ they pass comes after.

------
andrewstuart2
I sometimes wonder if cloud computing would exist as it exists today without
relatively inefficient monoliths that _had_ to scale. Once they'd scaled, the
developers could afford to start peeling off the layers, thus only scaling
pieces that needed to, leaving them with spare hardware.

"Hey, let's rent this crap," said some guy. And cloud computing was born.

Obviously I have no evidence of this at all, but I do still wonder, since at
least two of the common vendors (Amazon and Google) were tech companies before
they offered hosting.

~~~
ksenzee
It would be a cool story if it were true, but in fact AWS existed before
Amazon was using it internally. They were selling their ability to run large
datacenters, not their existing spare capacity.

~~~
andreyf
Source? I'd also heard the "AWS was Amazon's extra capacity since they only
use it all on Black Friday" story.

~~~
signifiers
It's a common misconception. Chris Pinkham and Ben Black created it, and it
never had anything to do with excess capacity.
[http://blog.b3k.us/2009/01/25/ec2-origins.html](http://blog.b3k.us/2009/01/25/ec2-origins.html)

------
cmaggard
We experienced something similar at our company. When my coworker and I first
arrived, the prior engineers had built the system as a set of microservices
but it was completely overarchitected. Our first act was to pull all the parts
together into one application.

Now that it's grown, we're starting to look at the microservice approach
again, but it's been almost four years since we pulled everything together so
it makes much more sense given the load/functionality we have now relative to
then.

------
leighmcculloch
ModularFirst: defer the decision to create a monolith or a microservice
architecture. Start with a single application and focus on keeping your
software design modular. One-way dependencies, single responsibility, and
simple interfaces make a big difference here. Breaking out microservices will
be simple if you need to, and if you stick with a single app you'll have an
app that will grow well as a monolith.

------
rcoder
I've been part of three teams that attempted monolith->microservices
transitions. Two succeeded (though at wildly different costs in terms of
engineering time and delay); the third was abandoned after person-years of
effort.

The common aspect of the successful migrations was their incremental nature:
rather than "killing" the monolith all at once, there was a careful and
gradual migration of performance-critical sections into services running atop
dedicated machines/storage/etc.

Neither of the successful moves happened all at once, or indeed ever 100%
replaced the monolith.

It wasn't just a question of planning, either. The failed migration had a team
of three engineers spend ~six months writing detailed component specs,
migration plans, etc. The business simply couldn't stop (or even maintain the
status quo) to let them build the shiny new V2, so it kept getting pushed out
and restarted long enough that the plans and specs bit-rotted and the whole
thing got scrapped.

~~~
pm
Indeed, the mistake that gets made (and I've made it enough times) is to
think that any kind of restructuring of code has to be done as one gigantic
monolithic effort.

------
abecedarius
Refactoring a system built of microservices is slow and costly, according to
the article; the recommendation follows from this. Why can't it be fast and
easy? Is it essence or accident? (Like, do skilled Erlang programmers agree?)
How does refactoring happen in the systems he's talking about?

------
merrua
This also sounds like the idea of "throw away the first one; you will
anyway." The experience of building it as a monolith shows you what
microservices you need. Or maybe it's about avoiding optimization before you
need it.

------
spullara
I've seen many startups that begin as microservices and have no problem as
they scale up. Martin's company Typesafe probably only works with companies
that find themselves in trouble.

~~~
saryant
Different Martin.

This article is from Martin _Fowler_ who works for ThoughtWorks. Martin
_Odersky_ is with Typesafe.

~~~
spullara
Ah ok, but I'm not sure that changes what I said — perhaps makes the bias even
stronger towards projects and companies in trouble. Thanks for the correction.

------
jaunkst
What about testing? And working with large teams? I imagine large software and
teams would be abstracted into separate interfaces. Is communication between
teams just more difficult to manage? Is reality different from theory?
Shouldn't each segment be testable and debuggable? How do you effectively
execute a large project? Is mashing it all together more of a proof of concept
than the final product? As an investor am I on the hook for more than I
bargained for? Is technical debt a non-issue?

------
mwcampbell
Taking the monolith-first idea even further, what percentage of web apps would
run just fine on SQLite, MySQL in embedded mode, or (for JVM-based projects)
H2?

------
shinzui
A hybrid approach is a better strategy once you have product/market fit. You
should build your core domain as a monolith but have auxiliary infrastructure
services built as microservices. The hybrid approach has the advantage of
safely investing in microservices architecture that would later allow you to
refactor your monolith once you truly understand your bounded contexts.

------
jasim
If you're a web developer working with Rails and have large monolithic
projects that have gone unwieldy (or are tending that way), please give
'Growing Rails Applications in Practice' a try:
[https://leanpub.com/growing-rails](https://leanpub.com/growing-rails)

(I'm not affiliated with the authors)

------
edpichler
I agree with Fowler, and this also fits perfectly with the central idea of
the Lean Startup movement we've seen in recent years.

------
jebblue
I finally read something that Fowler wrote that I can agree with and that
isn't abstract (or overly abstract).

~~~
djhworld
Ah I see you're talking about the OverlyAbstract pattern.

------
jaunkst
Macro and micro are relative. A monolith is a kitchen sink, and expensive.
Take a look at the API space for successful SaaS products, especially ones
oriented toward business software. It's hard to pivot a monolith, or even to
monetize it, when it ignores the ecosystem it exists in. When it communicates
well with other microservices, it has natural discovery as a solution to a
problem the customer is looking for. Trello, Harvest, Basecamp, Pivotal, and
tons more are all successful because they communicate well outside of their
problem space and solve for the problem in their own scope. I do agree that
you shouldn't be overly aggressive with abstractions at the start, but you
should also consider the players in your space and ask whether a segment of
your application is solving something of value to others, or whether you're
recreating a service that you shouldn't compete with but cooperate with.

------
alrs
Yes, mostly.

If you are putting up an API that needs to be available 24/7 you need to have
the system sufficiently decoupled so that you can go read-only and make schema
and infrastructure changes without needing to stop the world.
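
A tiny ingredient of that decoupling, sketched (hypothetical names): gate
every write path behind a switch you can flip while schema or infrastructure
changes run, while reads keep flowing.

    import java.util.concurrent.atomic.AtomicBoolean;

    // Flipped (e.g. via an admin endpoint or config push) before schema
    // or infrastructure changes; reads continue to be served throughout.
    final class WriteGate {
        private static final AtomicBoolean readOnly = new AtomicBoolean(false);

        static void enterReadOnly() { readOnly.set(true); }
        static void exitReadOnly()  { readOnly.set(false); }

        // Every mutating handler checks the gate before touching storage.
        static void checkWritable() {
            if (readOnly.get()) {
                throw new IllegalStateException(
                        "API is temporarily read-only; retry later");
            }
        }
    }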

------
wellpast
> 1. Almost all the successful microservice stories have started with a
> monolith that got too big and was broken up.

This is pure survivor bias. The majority -- by far -- of 'failure stories' I
know of in the software industry are ones in which a company tried to take
their monolithic code base and modularize or microservice-ize it.

> 2. Almost all the cases where I've heard of a system that was built as a
> microservice system from scratch, it has ended up in serious trouble.

This is not because microservices-first is inherently flawed. This is purely a
skill set issue. Microservices demand a greater skill set. A 'monolithic'
approach puts much less of a demand on the engineer's skill set.

For a simplistic but illuminating example: if all you know how to do is
build systems that operate on global mutable state (most newbie programmers),
then of course you will run a lot further in a monolith than if you are
trying to do this in a microservices paradigm.

This is a simplistic example because effective use of microservices requires
much more of a skill set than not using global variables. The truth is that
effective use of microservices (knowing how to build architecturally sound
software) is way beyond the reach of MOST smart, senior level engineers that
I've met in my career. If this sounds arrogant, it's not; there's a definitive
architectural skill set that CAN BE LEARNED but that is missing from most
software practitioners. (Our industry/academies need to learn how to teach
it.)

This is the elephant in the room in our industry--and why we spin so many
wheels talking about everything else. All of the energy we spend talking
about patterns, processes, languages, etc. does so much less for our industry
than if we trained our practitioners in how to construct architecturally
sound software systems.

~~~
duggan
> This is not because microservices-first is inherently flawed. This is purely
> a skill set issue. Microservices demand a greater skill set. A 'monolithic'
> approach puts much less of a demand on the engineer's skill set.

> The truth is that effective use of microservices (knowing how to build
> architecturally sound software) is way beyond the reach of MOST smart,
> senior level engineers that I've met in my career. If this sounds arrogant,
> it's not; there's a definitive architectural skill set that CAN BE LEARNED
> but that is missing from most software practitioners.

If this skill set exists, and almost nobody can or does possess it, then your
argument is purely semantic.

Who cares if there's an ivory tower of geniuses out there spinning out
perfectly formed microservice architectures in novel domains? We're talking
about rules of thumb for practitioners here, not laws of physics.

~~~
wellpast
I never said genius. And "perfectly formed" is not the objective. Tractability
is. Go visit companies with medium-sized codebases and find me one that isn't
lamenting how difficult it is to grow and maintain it.

I wouldn't consider myself that smart. But I have spent 20 years in this
industry with what some might call a sick obsession for understanding how to
compose and evolve systems and I've acquired some skills that I believe (1)
very much matter to our productivity; and (2) can be mentored and taught, but
(3) are not being taught.

My guess is that academia can't teach it because they don't have enough
real-world experience behind their pedagogy. And industry can't teach it
because it takes quite a while to learn and master -- and what business can
afford to do that with its juniors, who are just going to leave in a few
years anyway :)

~~~
duggan
I've certainly never worked for a company that did not have a laundry list of
complaints about its software, all reasonably successful ones too.

And it usually only takes a couple of beers to draw the same laments from
people working in the sort of companies a lot of developers aspire to.

If you think you've acquired a teachable, repeatable set of skills that aren't
being taught then it sounds like you've got a gold mine on your hands :) As a
relative industry junior (about 10 years) I haven't been introduced to a
codebase that wasn't lousy. Certainly all were lousy to one degree or another
when I left them, but they served customers all while drawing down the ire of
developers (including myself).

~~~
wellpast
> I've certainly never worked for a company that did not have a laundry list
> of complaints about its software, all reasonably successful ones too.

Either this is a necessary problem (nothing to be done) or this is because of
a lack of tools and skills (i.e., our industry is still immature). I believe
very strongly in the latter.

> If you think you've acquired a teachable, repeatable set of skills that
> aren't being taught then it sounds like you've got a gold mine on your
> hands.

I might some day figure out how to teach this at a larger (non-mentoring)
scale.

What the skill set looks like and how to acquire it is actually the lesser
problem. The harder part is proving to people, quantitatively, the _value_ of
the skill set. I know and I think we all know (on some level) the extreme cost
of architecturally-unsound software. But it takes a lot of work to learn &
master the skill set, so most people bail on doing the work it takes to learn
it. (The industry does not reward the growth or reward the mastery enough.)

Only when industry can connect the value of the skill set (and also be able to
hire to the skill set) will more people put in the work to acquire it. (The
value to industry is enormous but I think it would take some work, research,
creative thinking to make the value explicit & tangible and give industry a
way to test for the skill set.)

If someone builds me a house I can walk around and test the beams and come up
w/ a general idea how well they did and how well the house will withstand the
weather. In today's software world, two people with drastically different
skill sets can produce a code base -- and it's really hard for the people with
the money to see what they've had built and how sound it is.

~~~
nostrademons
In my experience, the reason all companies have shitty codebases is because of
risk-compensation. When a company does _not_ have a shitty code base, it
quickly throws all of its resources into improving the product and user
experience for its customers, which increases its market share, revenue, and
moat against competitors. This continues until all the new complexity makes
the code shitty again, where they stage a cleanup & architectural fixit to be
able to make forward progress. If the codebase is _not_ shitty, it means that
they're leaving money on the table, and will continue to get shittier until
code quality starts costing them money again.

Companies like Google, Microsoft, and Facebook actually do adopt (and often
discover) many of the industry best practices. Their codebase is still shitty;
the reason is that any cleanliness and rationality in their architecture is
quickly eaten up by new features and products that drive the industry forward.
As consumers, we see the benefits; as employees, we just muddle along as best
as we can.

The only way to break this cycle is for engineers to pay companies instead of
getting paid by them. I don't think this is what you had in mind. (Although,
some engineers do make this bargain. This is why Haskell/Erlang/Ocaml/Lisp
salaries are often lower than they would be for equivalently-skilled Java and
C++ engineers: you take a pay cut to work in a language that's actually
enjoyable to program in.)

Barring that, the way out is to learn some negotiating skills and at least get
paid for putting up with a sucky codebase. If you frame it as "employing my
skills will let your other employees implement the features you've been asking
for for a year, which will make you $XM in revenue", you can make quite a
pretty penny. I know some senior engineers that regularly get paid a couple
million in restricted stock for a couple years worth of work.

------
anonyfox
Try writing your app in Elixir/Phoenix in the first place: easily
modularized code like in a monolith, and scalable like a bunch of small
separate services. Best of both worlds, I'd say.

------
bsbechtel
Is this not the same as 1) make your code work, 2) refactor? The only
difference here is the author is talking about architecture instead of
application code?

------
elmin
I have a very different perspective. With tools like Heroku, building
systems as microservices is no more time consuming than building a monolithic
system. And it's much easier to iterate on and improve. Conversely, pulling
apart a monolithic system into services is not a fun task.

~~~
dasil003
What do you do when the interface between those components was wrong and you
have to refactor multiple services at once?

------
eshem
Key takeaway: Although the evidence is sparse, I feel that you shouldn't
start with microservices unless you have reasonable experience of building a
microservices system in the team.

------
sailistices
tl;dr;

Microservices first is premature optimization.

------
oxalo
Microservices: here be dragons.

------
dreamfactory2
If my understanding of bounded contexts is correct, a bounded context
represents the smallest level of granularity when it comes to a service
component e.g. a 'customer' in one domain is not the same thing as in another
(and there is therefore no universal reusable 'customer' service, but instead
a much richer service representing a sales or support model in his example).

So going by the article, shouldn't the direction of travel therefore be from
monolith to bounded context (as each domain boundary emerges) - which could be
described more accurately as a macroservices architecture?

------
fleitz
Yup, get to market, incur technical debt, pay it off with cash.

------
jessaustin
I want to give him the benefit of the doubt and say the "_a:link,_" in the
following CSS is a mistake:

    
    
        a:link, a:visited {
          color: #94388e;
          text-decoration: none; }
    

...but I'm pretty sure it's not, and this guy really does want to break how
unvisited links are displayed.

~~~
reagency
He just loves purple (look at his logo) and fashion over function.

------
kraig911
My main problems with Twitter and its stalled new user base:

1. The onboarding social experience is just hard. Finding the relevant
information I want is difficult. I wish there were a pane of tweets grouped
by interest, e.g. gaming, politics, etc. Everything in one thread is mind
numbing once you follow too many people.

2. Context - finding anything is difficult. I don't know what a trending
hashtag is, and the ones presented to me usually are gossip in nature for
some reason?

3. Some people get so many @'s that they simply drop off the earth.

4. What's the point of favoriting a tweet? I still don't get it.

5. Why is it so difficult to use the API since those changes in, what, 2012?

6. 140 characters is just so dang hard for me. I can understand a limit, but
just 140? I want just a little more space :(

~~~
cdelsolar
What?

