
Microservices and the migrating Unix philosophy - dsberkholz
http://redmonk.com/dberkholz/2014/05/20/microservices-and-the-migrating-unix-philosophy/
======
hibikir
Just like in Unix, we see the limits of composability: you can build some very
basic, generic tools that are composable and easy to use — the greps and finds
of the world. But there are relatively few of them, surrounded by a bunch of
ugly, hard-to-read, hard-to-reuse glue. This happens naturally, because each
piece of glue is used only a handful of times, and its task isn't all that
easy to define.
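As a sketch of that split (the data here is made up): the first pipeline uses only the generic, composable core, while the trailing `awk` one-liner is the kind of one-off glue that exists solely to adapt one tool's output to some downstream consumer.

```shell
# Composable core: count occurrences with generic, reusable tools.
printf 'a\nb\na\nc\n' | sort | uniq -c

# One-off glue: reshape that output into CSV for a hypothetical
# downstream consumer -- written once, specific to this pipeline.
printf 'a\nb\na\nc\n' | sort | uniq -c | awk '{print $2 "," $1}'
```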

So, taking this to microservices: while we can build a bunch of little
services that will be reused everywhere, the large majority of a system will
be this kind of glue. You can't make every single piece of code you write as
well defined as grep just by giving it a service interface. And while you will
be able to track what the little pieces do, the bigger command-and-control
pieces will always present a problem. We can't wish complexity away, no matter
how hard we try.

So designing a bunch of microservices and hoping most of your problems will be
solved is like trying to build something on Unix without Perl and shell
scripts. But I see companies, today, that think it's a silver bullet. They've
not read Brooks enough.

~~~
pjmlp
> They've not read Brooks enough.

They never do. It is incredible how those mistakes keep being repeated.

~~~
joe_the_user
I don't think you can quite call these patterns of failure mistakes.

They are an effective, reliable way to get certain things up and running in a
given time frame and in a decentralized fashion. If you have limited resources
and need things working in that time frame, decentralized services can be the
right decision even if they cause you problems later.

Further, being a fairly reliable way to do things, they have appeal even when
your time frame is far enough out to see the problems coming.

~~~
pjmlp
I did not explain myself well; I was thinking in a broader sense, including
the related project-management issues.

------
DanielBMarkham
I blogged about this recently.
[http://www.whattofix.com/blog/archives/2014/04/f-mono-agile-...](http://www.whattofix.com/blog/archives/2014/04/f-mono-agile-architecture-devops.php)

Coming from a .NET background, I had an interesting path. I started with DOS-
based imperative programming, then databases, then OOP/OOAD, then finally
functional programming with F#.

Once I truly got on the functional programming bandwagon, I started asking
myself what all this scaffolding was for. Why didn't I just build composable
functions that passed formatted files around?

This is 180 degrees from the way I used to code, but damn, I like it. A lot. I
can use the OS as an integration tool, and the entire deploy/monitor/change
cycle is a million times easier.
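A minimal sketch of what "the OS as an integration tool" can look like — each stage is a pure transform that reads one file and writes another, with the shell and filesystem as the integration layer. The stages and file names here are made up for illustration:

```shell
# Hypothetical three-stage "app": each stage reads a file, writes a
# file; the shell composes them. No framework scaffolding needed.
work=$(mktemp -d)
printf '3\n1\n2\n' > "$work/input.txt"          # raw data
sort -n "$work/input.txt" > "$work/sorted.txt"  # stage 1: sort numerically
head -n 1 "$work/sorted.txt" > "$work/min.txt"  # stage 2: take the minimum
cat "$work/min.txt"                             # prints 1
rm -r "$work"
```

Because every stage's interface is "formatted file in, formatted file out", any stage can be swapped, rerun, or monitored independently — which is the deploy/monitor/change win described above.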

I wonder how many other OOP guys are going to end up in my shoes in another
10-20 years or so?

Note: I see other commenters are talking about how you can't solve your
problems simply by using micro-services. I'd agree with that, with one caveat:
if you've coded your solution in pure FP, you've solved your problem in a way
that's by definition composable. You can certainly decompose that solution
into microservices. I think the question is whether or not you have to "re-
compose" them into one app in order to make changes.

~~~
lifeisstillgood
This is my intuition too — we can go an awful lot further than might be
supposed with reusable components if we are sensible about the interfaces and
the streams/lists.

~~~
DanielBMarkham
I think the missing piece here is pure FP.

If you're writing pure transforms, you're already creating the micro-services.
It's just a matter of where they live. But if you start to play fast and loose
with imperative programming, sure, you're going to need some industrial-
strength glue. Even then it's going to be a mess.

It'd be interesting to have a pure FP language where you could either compile
the entire code as one piece, or automatically split it up into chunks and
deploy separately. You could keep the code in one place and the only thing
you'd need to tweak would be the chunking. (You could also layer in some
DevOps on top of that where certain pieces would talk to other pieces on a
schedule, or across a wire, and that could be specified in the code. You could
even meld this into a puppet/ansible-style system where not only do you code
the solution, but you code the deployment as well. Neat idea. Somebody go make
that.)

~~~
lifeisstillgood
Erlang's OTP is getting close ...

------
programminggeek
The compensation, or sometimes overcompensation, for our failures leads to
interesting outcomes. Perhaps the answer lies not in monolithic vs. micro
approaches so much as in knowing what the evolution between the two looks
like over time.

------
bch
> ...composability does not and can not exist everywhere simultaneously. It
> just won’t scale. Although the flexibility that a composable infrastructure
> provides is vital during times of rapid innovation, such that pieces can be
> mixed and matched as desired, it also sticks users with a heavy burden when
> it’s unneeded.

This is a good reminder. There is a lifecycle. Know when and where it starts,
and when it ends.

------
repsilat
> `cat file | sed | tail`

Quite beside the point, but:

- You don't want to use `cat`, and

- You probably want to pipe `tail` into `sed`, not the other way around.

This will be _substantially_ faster if `file` is large, because it lets `tail`
be clever about how it finds the last ten lines.
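Concretely (the file here is generated just for illustration): on a regular file, `tail` can seek to the end instead of reading from the start, so putting it first means `sed` only ever sees a handful of lines.

```shell
# Fast: tail seeks near the end of the file; sed processes 3 lines.
f=$(mktemp)
seq 1000000 > "$f"
tail -n 3 "$f" | sed 's/^/line /'

# Slow: sed must process all million lines before tail discards them.
# sed 's/^/line /' "$f" | tail -n 3
rm "$f"
```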

~~~
lgas
If you're going to nitpick a strawman this hard, you have to say the whole
command should just be replaced with `tail file`. But presumably the author
intended it as an example where, in real life, options would be supplied to
sed. Those options might change the number of lines, necessitating that the
tail come after the sed, not before it.
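For example (toy data, made up here): if the sed options delete lines, the two orderings give different answers, so the order is forced by correctness, not just speed.

```shell
# sed deletes lines matching "b"; we then want the last 2 survivors.
# Correct: filter first, then take the tail of what remains.
printf 'a1\nb1\na2\nb2\na3\n' | sed '/b/d' | tail -n 2   # a2, a3

# Wrong order: tail grabs b2 and a3, then sed deletes b2 -- one line.
printf 'a1\nb1\na2\nb2\na3\n' | tail -n 2 | sed '/b/d'   # a3
```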

~~~
repsilat
God, I wasn't nitpicking -- I even said my comment was off-topic. I was
throwing advice into the void, for the author if they read it or the
commenters here.

And it's fairly important stuff for me on a regular basis. At work we generate
hundreds of gigs of logs daily, and doing things in the right order with tail,
grep, etc. is often the difference between a script working or not, or between
it taking seconds and taking minutes.

