
Microservices, the Unix Philosophy, and the Richardson Maturity Model - nkurz
https://medium.com/@chrstphrhrt/microservices-the-unix-philosophy-and-the-richardson-maturity-model-425abed44826
======
joesmo
Any microservices discussion that does not take into account both
infrastructure complexity and team size is incomplete. For small teams and
small/not complex infrastructures, it's always a bad idea. The premium that is
paid for switching to microservices is great indeed, and it's paid in __much__
slower development and deployment times. This happens as soon as you start
breaking an application apart, so even with only a couple of "mini" services,
much of the price has been paid.

Since the author doesn't mention either, I can't conclude anything, but it's
very possible that the team he was advising didn't follow his advice because
the advice was bad and didn't take the above two factors into account.

See obligatory reading:

[http://martinfowler.com/bliki/MonolithFirst.html](http://martinfowler.com/bliki/MonolithFirst.html)

[http://martinfowler.com/bliki/MicroservicePremium.html](http://martinfowler.com/bliki/MicroservicePremium.html)

EDIT: Spacing

~~~
qyv
The part I love about microservices, and something that I think doesn't get
enough attention, is that it strongly enforces separation of concerns across
micro-projects. API boundaries are very clear and very rigid; it takes very
little effort or oversight to prevent leaking across them. Essentially, you
are moving some of the software 'best practices' up a level, from modules to
applications themselves. This means that you don't need to spend a lot of
effort writing great, best-practices-following, future-proof software upfront
for fear of collecting horrible technical debt down the road. Got a service
that was rushed out the door and isn't maintainable anymore? Rewrite it! Two
weeks later you have erased a bunch of technical debt, and since you kept the
same API(s) in place, you have not affected the rest of the services that make
up your application.

And that is the other thing that I think microservices are great at: they are
pragmatically easier to change. Delving into a huge, old, debt-riddled
monolithic codebase to make changes is hard and scary, and developers are less
likely to want to make changes to monoliths because of that. Microservices are
easy to understand because they are small, discrete systems with well-
understood interface contracts. Making changes to them without breaking the
entire app is not scary because it is easier to thoroughly understand the
implications of what is being changed.
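The "keep the contract, swap the implementation" idea can be sketched in-process (all names here are hypothetical, and a real microservice boundary would be HTTP rather than a Python interface, but the reasoning is the same):

```python
from abc import ABC, abstractmethod


class JobService(ABC):
    """Hypothetical interface contract for a 'jobs' service."""

    @abstractmethod
    def get_job(self, job_id: int) -> dict:
        """Return a job as {'id': ..., 'title': ...}."""


class LegacyJobService(JobService):
    """The rushed-out-the-door version, imagined as a hard-coded lookup."""

    def get_job(self, job_id: int) -> dict:
        return {"id": job_id, "title": "Plumber"}


class RewrittenJobService(JobService):
    """A clean rewrite -- same contract, so callers are unaffected."""

    def __init__(self):
        self._jobs = {1: "Plumber"}

    def get_job(self, job_id: int) -> dict:
        return {"id": job_id, "title": self._jobs[job_id]}


def client_code(service: JobService) -> str:
    # Callers depend only on the contract, never the implementation,
    # so either service can be dropped in without breaking them.
    return service.get_job(1)["title"]
```

As long as the rewrite honours the contract, `client_code` cannot tell the difference between the two implementations.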

~~~
elliotlarson
I'm sort of playing devil's advocate here, but:

> Got a service that was rushed out the door and isn't maintainable anymore?
> Rewrite it! 2 weeks later you have erased a bunch of technical debt and
> since you kept the same API(s) in place, you have not affected the rest of
> the services that make up your application.

But that's also true of a monolith. Got a piece of your system that isn't
maintainable any more? Rewrite it! Two weeks later you have erased... you get
the point.

I like the idea of microservices. I like the idea of the clean boundaries. As
a developer, the separation seems correct to me. But, unless you're actually
feeling the pain point of the monolith, it just seems like more work that
falls into the category of YAGNI.

You also said this:

> Delving into a huge, old, debt-riddled monolithic codebase to make changes
> is hard and scary...

I mean, it's hard and scary if you don't have a high level of test coverage.
If you have great coverage in the old monolith, you should be able to change
pieces of it with confidence, no?

~~~
qyv
Yes, you can always rewrite parts of a monolith; what I am saying is that it
is just way easier to do with the hard boundaries and well-defined interface
contracts of microservices. You know that as long as you live up to the
interface contracts, any changes you make to the microservice app cannot
adversely affect other microservices. Just think about how much of software
development patterns and best practices are devoted to creating proper
encapsulation and avoiding tight coupling; all of that is simply built in when
you use microservices.

Yes, of course if you have great test coverage making changes to a monolith
can be easier, but good test coverage and debt-riddled don't often apply to
the same codebase! Let's be honest, if you are happy with your monolith
codebase then there is no reason to rewrite for microservices. It is for those
projects where you are not happy with the monolith code, where maintenance of
the code IS scary, that the conversion makes sense.

------
fmstephe
Can anyone add practical experience with the API spec tools? These are new
names for me.

I had a look at the websites for each of the three tools the article
specifically mentions

    http://raml.org/
    http://swagger.io/
    https://apiblueprint.org/

I get the impression that these are code generating tools built from a
specification file.

I have real-world experience with two tools which fit that rough description:
SOAP and Google App Engine endpoints.

I would unambiguously describe both as awful. Really horrible experiences with
both. They tie your systems to large and difficult-to-understand toolsets.
SOAP wasn't obviously buggy, but App Engine endpoints are unpredictable and
the generated code produces some serious awkwardness.

I don't want to slap a label on these tools and dismiss them off-hand.

Anyone used them 'for the win'?

~~~
abraae
We've tried various ad-hoc approaches to documenting our APIs in the past.

When we switched to RAML (and json schema) for our API definitions, everything
got a lot better.

RAML defines the resources (uris, parameters, media types, etc.) while json
schema is used for more detailed information (what's the exact structure of
the application/json response to a call to GET /jobs).

While there are theoretical gains in generating code from the RAML, we don't
seek these out, as it's no big deal to hand-code Spring controllers, even a
lot of them.

The real value for us is in having a single, well-documented source of truth
about our APIs. Customers using the APIs, developers implementing them,
testers testing them, trainers learning about them, all go to one place.

And RAML has some handy tools (like raml2html) to create human-readable
documentation from the RAML.

We've also added our own tools to parse sample json responses against the json
schema which have greatly aided us in finding discrepancies.
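The idea of checking sample responses against a schema can be sketched with a toy validator (the schema, resource, and field names here are hypothetical, and only a small subset of JSON Schema keywords — `type`, `required`, `properties`, `items` — is handled; a real project would use a full JSON Schema validator library):

```python
import json

# Hypothetical JSON Schema for the response to GET /jobs.
JOBS_SCHEMA = {
    "type": "array",
    "items": {
        "type": "object",
        "required": ["id", "title"],
        "properties": {
            "id": {"type": "integer"},
            "title": {"type": "string"},
        },
    },
}

# Map JSON Schema type names to Python types.
TYPES = {"object": dict, "array": list, "string": str,
         "integer": int, "number": (int, float), "boolean": bool}


def check(instance, schema, path="$"):
    """Return a list of discrepancies between instance and schema."""
    errors = []
    expected = schema.get("type")
    if expected and not isinstance(instance, TYPES[expected]):
        return [f"{path}: expected {expected}, got {type(instance).__name__}"]
    if expected == "object":
        for key in schema.get("required", []):
            if key not in instance:
                errors.append(f"{path}: missing required key '{key}'")
        for key, subschema in schema.get("properties", {}).items():
            if key in instance:
                errors += check(instance[key], subschema, f"{path}.{key}")
    elif expected == "array":
        for i, item in enumerate(instance):
            errors += check(item, schema.get("items", {}), f"{path}[{i}]")
    return errors


# A sample response with two discrepancies: item 1 has a string id
# and is missing the required "title" key.
sample = json.loads('[{"id": 1, "title": "Plumber"}, {"id": "2"}]')
```

Running `check(sample, JOBS_SCHEMA)` surfaces both discrepancies with their JSON paths, which is essentially the kind of report such tooling produces.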

In short, something like RAML (or probably Blueprint or Swagger too) is, I
feel, essential if you have a lot of APIs and a lot of people, and if you have
aspirations of an increasingly automated toolchain.

FWIW, there's a line of thought that RAML is at odds with hypermedia and
HATEOAS. After all, why would anyone need a detailed API document, when
clients should be getting information from the hypermedia responses of
previous API calls? I don't agree with that, if only because internally your
development team still needs to build out all those APIs.

------
mavelikara
OT: More than 10 years ago, Jim Waldo [1] gave a great talk on SOA. I remember
listening to an MP3 version of it then. I can find many references to the
talk online now (e.g. [2]), but the talk itself seems to have vanished from
the web. Does anyone have the MP3 of the talk?

[1]:
[http://www.eecs.harvard.edu/~waldo/](http://www.eecs.harvard.edu/~waldo/) .
Also see a great paper he co-authored at
[http://www.eecs.harvard.edu/~waldo/Readings/waldo-94.pdf](http://www.eecs.harvard.edu/~waldo/Readings/waldo-94.pdf)

[2]: [http://www.gettingagile.com/2005/09/17/jim-waldos-talk-on-so...](http://www.gettingagile.com/2005/09/17/jim-waldos-talk-on-soa/)

~~~
mavelikara
Here are the slides:
[https://web.archive.org/web/20051029050650/http://www.jini.o...](https://web.archive.org/web/20051029050650/http://www.jini.org/events/0505NYSIG/WaldoNYCJUG.pdf)

------
vemv
_Becoming encumbered by a monolithic centralized API that is at risk of
reaching its maximum performance and hosting limits._

Comically wrong. You can just fire up more instances (aka horizontal scaling).

Microservices have their merits, but they're often the (misguided) Dev answer
to Ops problems.

~~~
Florin_Andrei
> _You can just fire up more instances (aka horizontal scaling)._

(sigh) I wish that were always doable.

~~~
vemv
Obviously no technology will give you infinite horizontal scaling for any
sufficiently complex app.

There is more than one way to achieve scalability. Microservices are just one
of the possible choices. It annoys me that developers increasingly see them
as the only one.

~~~
dllthomas
There is more than one way to achieve scalability, but as far as I'm aware,
all of them demand something more than _just_ firing up more instances.

~~~
vemv
Designing an infrastructure which allows you to solve problems by firing up
more instances is non-trivial work, beyond the skillset of many (otherwise
apt) developers.

~~~
dllthomas
I'm not sure I disagree; but that seems to precisely refute your initial, bold
claim in this thread.

~~~
vemv
My original wording was too concise, sorry.

------
jessaustin
I like stuff like TFA that pulls in concepts from numerous sources even if all
the pieces don't fit together perfectly. For instance, I doubt Leonard
Richardson would be thrilled with the " _Ramses_ Maturity Model" (we couldn't
even coin a different initialism?) and the "Level 2.5" idea. Eschewing links
runs counter to Richardson's (and Fielding's) vision, period. If one must know
the form of representations ahead of time, one cannot consume them in a
completely maintainable way. One wouldn't notice this within the organization,
but rather at the edges where it must work with other organizations.

That said, Ramses and what it's described as doing is impressive.

~~~
chrstphrhrt
Thanks for the great feedback, you're completely right. I have switched up the
language a bit.

Regarding hypermedia: I really, really want to work with someone to add
first-class support for it. I don't think the spec is a replacement for
hypermedia but rather a complement to it. The difference in benefit seems to
be static vs. dynamic client generation, and I fully agree that dynamic ought
to be the future. Right now static is easy to implement and still quite
powerful, despite tradeoffs like needing to version releases of client libs,
whereas in the dynamic scenario they would be more durable.

------
gioele
> Then there’s Level 3. It adds hypermedia links to the mix so that clients
> can dynamically browse resources à la HTML in a browser. This is where
> Ramses departs from all the awesome work that’s being done in the world of
> hypermedia.

This is the good ol' HATEOAS principle of REST:
[http://stackoverflow.com/a/9194545/449288](http://stackoverflow.com/a/9194545/449288)
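For illustration, a Level 3 (HATEOAS) response embeds the legal next steps as links, so clients discover URIs from the representation instead of hard-coding them. A minimal sketch, assuming a hypothetical `/orders` resource and HAL-style `_links`:

```python
def order_representation(order_id: int, status: str) -> dict:
    """Build a hypothetical hypermedia response for an order.

    Which links appear depends on state: an open order can be paid or
    cancelled; a paid one can only be tracked. Clients browse these
    links dynamically, a la HTML in a browser -- the HATEOAS principle.
    """
    links = {"self": {"href": f"/orders/{order_id}"}}
    if status == "open":
        links["payment"] = {"href": f"/orders/{order_id}/payment"}
        links["cancel"] = {"href": f"/orders/{order_id}"}  # via DELETE
    elif status == "paid":
        links["track"] = {"href": f"/orders/{order_id}/tracking"}
    return {"id": order_id, "status": status, "_links": links}
```

Because the server decides which transitions to advertise, clients need no out-of-band knowledge of the URI structure beyond the entry point.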

