
Distributed big balls of mud (2014) - joeyespo
http://www.codingthearchitecture.com/2014/07/06/distributed_big_balls_of_mud.html
======
ryanelfman
Sometimes I feel like microservices are pushed by hosting providers to get more
money from all the additional deployments...

~~~
inthewoods
I'm not a computer programmer or scientist, but I work with a group of them.
The argument I've heard them make - and I may well be misrepresenting it here -
is that microservices are often used as a way to avoid getting better at
parallel processing/programming. They're working on a huge amount of data
processing using Go if that provides any context. I'd be curious what others
think of this idea.

~~~
felixgallo
I think it's likely that the recent faddishness of Go, which upon actual use
turns out to be difficult to use for anything /except/ microservices, itself
is causing microservice adoption.

~~~
patio11
If I can rephrase this without it being a criticism: Go encourages particular
patterns of use, much like any technology.

Rails, for example, strongly encourages programmers to keep a company's entire
operations in a single application and memory space. Many Rails shops
eventually discover that this is suboptimal for their needs, for example when
one _particular_ part of all of their operations needs to be scaled up
substantially but scaling a "monorail" requires memory proportionate to the
_total size_ of all operations times the highest desired throughput of _any_
piece of the system. I'm aware of several Rails shops which needed to
retroactively decompose a monorail, and many of them rewrote the performance-
intensive part in Go, as Go is bugs-in-your-teeth fast for many common
workloads.

Just like Rails "wants" to be a monorail, Go feels to me like it wants to be a
collection of small, X00 to ~2k line programs, talking to each other via JSON
messages passed either over HTTP or a queueing system. (Use NSQ! It's
fantastic!)
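A service in the mold patio11 describes is small enough to sketch in full. The program below is one such self-contained Go program exposing a single JSON-over-HTTP endpoint; the service name, message fields, and the price constant are all invented for illustration:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Hypothetical message types for a single-purpose pricing service.
type PriceRequest struct {
	SKU string `json:"sku"`
	Qty int    `json:"qty"`
}

type PriceResponse struct {
	SKU        string `json:"sku"`
	TotalCents int    `json:"total_cents"`
}

// quote is a stand-in for the service's one piece of business logic.
func quote(req PriceRequest) PriceResponse {
	const unitCents = 1250 // assumed flat unit price
	return PriceResponse{SKU: req.SKU, TotalCents: req.Qty * unitCents}
}

// quoteHandler decodes a JSON request, runs the business logic,
// and encodes a JSON response.
func quoteHandler(w http.ResponseWriter, r *http.Request) {
	var req PriceRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(quote(req))
}

func main() {
	http.HandleFunc("/quote", quoteHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Deployment is the part patio11 highlights: `go build` emits one static binary you copy to the host, with no runtime or gem bundle to install alongside it.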

Partly this is due to affordances in Go's design for e.g. deploying systems.
If you want to re-deploy, just compile (for free) and copy the binary
everywhere. Partly it is due to Golang not yet having much in the way of
community norms for building really big systems. Dependency management is a
very unsolved problem and gets worse the larger the individual pieces of your
system get. Golang also isn't very opinionated about project structure in the
way Rails is, which counsels keeping parts of your system bite-sized as a way
of imposing structure on top of it. (By comparison, you can drop any
intermediate Rails programmer into virtually any Rails program and say "Find
the login page. Find the $FOO business logic." and they'll be able to do it in
a few seconds.)

------
rtpg
> but the design thinking and decomposition strategy required to create a good
> microservices architecture are the same as those needed to create a well
> structured monolith

I don't think that this quite captures the utility of something like
microservices.

For example, if you split your frontend and backend into separate projects,
then the barrier for entanglement becomes so much higher. Instead of just
importing some frontend class for the backend, you're loading in stuff from a
totally different project and making 2 coordinated patches for them. Doesn't
that feel wrong somehow?

The appeal of microservices (much like functional programming) comes from
imposing restrictions that make dangerous things harder. Reducing the
possibility of breaking your own rules on clean programming.

(Disclaimer: I don't think microservices are worth it for smaller projects
though)

~~~
dietrichepp
> The appeal of microservices (much like functional programming) comes from
> imposing restrictions that make dangerous things harder.

But, don't microservices make safe things harder too? What's the point in
simultaneously making both safe and dangerous things harder?

------
threeseed
a) If the time and complexity to deploy 3, 10, or 100 microservices as opposed
to 1 monolith is significant then your build pipeline is flawed. Simple as that.

b) Maybe the commenter has only worked on small applications but I've worked
on enough million line code monstrosities to know that one change can have far
reaching consequences. It is simply too easy to couple components together,
e.g. by reusing a shared utility or service class, and your code coverage will
always be lacking in some way.

c) Microservices enable far more resiliency and scalability simply because you
can deploy more of the ones that are critical or non-performant. A sensible
service discovery strategy can make this a one button, trivial exercise. Doing
so on a complex monolith is often far from trivial.
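The mechanics behind (c) can be sketched with a toy registry. This is not any particular discovery tool (Consul, etcd, or DNS SRV records are the real-world versions); the `Registry` type and its round-robin `Pick` are invented to show how "deploy more of the hot service" reduces to registering more instances:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Registry is a toy stand-in for a service-discovery backend:
// it maps a service name to the addresses of its running instances.
type Registry struct {
	instances map[string][]string
	counter   map[string]*uint64
}

func NewRegistry() *Registry {
	return &Registry{
		instances: map[string][]string{},
		counter:   map[string]*uint64{},
	}
}

// Register adds one more instance of a service; scaling a hot
// service is just registering more copies of it.
func (r *Registry) Register(service, addr string) {
	r.instances[service] = append(r.instances[service], addr)
	if r.counter[service] == nil {
		r.counter[service] = new(uint64)
	}
}

// Pick returns the next instance round-robin, so callers spread
// load across however many copies happen to be deployed.
func (r *Registry) Pick(service string) (string, bool) {
	addrs := r.instances[service]
	if len(addrs) == 0 {
		return "", false
	}
	n := atomic.AddUint64(r.counter[service], 1)
	return addrs[(n-1)%uint64(len(addrs))], true
}

func main() {
	reg := NewRegistry()
	// The hot "pricing" service gets three instances; "reports" gets one.
	reg.Register("pricing", "10.0.0.1:8080")
	reg.Register("pricing", "10.0.0.2:8080")
	reg.Register("pricing", "10.0.0.3:8080")
	reg.Register("reports", "10.0.1.1:8080")

	for i := 0; i < 4; i++ {
		addr, _ := reg.Pick("pricing")
		fmt.Println(addr)
	}
}
```

Doing the equivalent for one hot code path inside a monolith means scaling the whole monolith, which is the memory-proportional-to-everything problem patio11 describes upthread.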

d) This idea that your entire system goes down if a microservice goes down
indicates your architecture is flawed. You should be able to still operate in
a degraded state or at least provide some functionality.

e) If you can't test your service independently then your testing strategy is
flawed. You should be able to trivially mock/stub out any dependencies and if
you can't then you need to rethink your API design and the nature of your
coupling.

~~~
zobzu
The reality of things is, though, that at many (most?!) companies the pipeline
IS indeed flawed BUT must work.

In other words for various reasons it's been built flawed. Speed, idiocy,
politics, whatever. At some point, you have various choices to pay your
technical debt:

- actually pay it (roughly 50% of the work force full time on this, forever -
hey, it works long term, but only short-term gains mean bonus paychecks, and
making things square isn't as fun as building new stuff)

- patch shit up so it kinda works and hope to leave before it's too horrible
(what most do)

- dissolve the company/product (eventually happens, almost inevitable over a
long enough period of time - especially when people get tired of the shit and
rebuild everything anew - then the cycle starts again)

This blog post uses the 2nd solution (the most common).

------
aryehof
To me, microservices address for many the continuing need for structures that
hide complexity behind small, understandable interfaces. Black boxes if you
will. Unlike components, which have no standard for connecting to and among
each other, microservices do - in a relatively standard and easy to understand
way. That's why they are so attractive.

I find myself very much agreeing with the author, but being involved in
modeling complex business problems into object-oriented code, I'm more
frustrated that in a problem domain, we have no effective modular concept for
groups of collaborating objects. We can't walk up to an object graph and
immediately understand the functionality it performs and the services it
provides.

~~~
crdoconnor
>To me, microservices address for many the continuing need for structures that
>hide complexity behind small, understandable interfaces.

You can have loosely coupled software with or without micro-services, and you
can have tightly coupled (ball of mud) software with or without
micro-services.

All microservices do is multiply the problems caused by creating that ball of
mud.

If you want to hide complexity behind small, understandable interfaces, that's
a laudable goal, but it is not one that distributing your application across a
network is going to help you with.

~~~
DropkickM16
I think your first point is obvious, but I disagree with your second, at least
'at-scale'.

In the case where you have loose coupling but are representing multiple
entities that scale in different ways, microservices allow you to separate
concerns and scale those concerns independently, relative to their requirements
in terms of memory/CPU/disk/network/etc. The best-factored code running in a
single horizontally-scaled layer will be inefficient if 90% of requests are
manipulating entity A, while entities B, C, and D have a lot of intricate
business logic but are rarely touched (they are better off separated and
scaled individually).

The overhead you allude to is definitely something to take into account. If
you're a 5-20 person startup without a serious need to scale up or lacking
people who have built the tools that make microservices easy, you should avoid
the issue for now. But ultimately, decoupling services so they can
horizontally scale independently is a huge win.

~~~
crdoconnor
True, but if you prematurely divide your services up based upon what you think
their performance requirements might be, _you will be wrong_. That's premature
optimization, which is, as we all know, the root of all evil.

If you've loosely coupled your services up to the point where it becomes
obvious that two parts of the code have markedly different performance
requirements, and _then_ you decide to split them into two separate services,
then yes, that could work, provided you understand the trade-off you're
making.

I don't think that's typically what people mean by 'microservices' however.

There's a good chance it'll still be wasted effort, too. Hardware is cheap.
Developers are not. That applies to large businesses and small.

~~~
mfjordvald
I always saw the point of microservices as not having to figure out the
scaling part yet.

If you focus on grouping by purpose rather than by what resources they might
use, then you can keep them on small instances until you better understand
what kind of resources they require.

Once you learn their usage patterns you can adapt more quickly (if scale is
needed at all) without having to first split up the code.

------
thrownaway2424
The "headline" of this submission is actually the final phrase of a comment on
a year-old blog post, way down at the bottom.

~~~
dang
Ok, we changed the title from '“We're leaving the idea of independently
deployable services and not looking back”' and added a 2014. It's not entirely
off limits to link to an interesting comment rather than the root article, but
in this case perhaps it doesn't add enough over the article itself.

There's certainly nothing wrong with an article being a year old if it's good.

