
Microservices? Please Don't - kellet
https://dzone.com/articles/microservices-please-dont?oid=hn
======
dsmithatx
I'd like to hear some background from the author on his situation(s).
Personally, my current company has over 100,000 employees. We have a few
hundred developers and many stacks, web apps, mobile apps, etc. Our API is
essential, and it's currently a giant monolith written by devs who no longer
work here.

Splitting API code into repos based on endpoints gives us small, easy chunks
to deal with. It also allows us to drop those into containers and monitor and
scale them much more easily. It also means that instead of one API team, other
devs can easily pick up a small code base, make changes for their stuff, and
submit a PR to the API team.

This article seems a bit elementary to have been voted so high on HN, to be
honest. It fails to acknowledge the true merits of doing microservices
properly and the huge gains. Not all code bases are meant for microservices.
However, huge monolithic APIs in general are a very good example of where
they're a perfect fit.

~~~
StavrosK
Why can't you split up your code into libraries? API doesn't have to mean
"HTTP API"; you're free to pick a different IPC layer.

~~~
eej71
I think the popular counterpoint has been: microservices _force_
consideration of a good API because you are now on the network, even if it's
just localhost.

~~~
falcolas
I'd counter that they also force a lot of additional networked API
boilerplate, and all new code is a potential source of bugs.

Perhaps it's better to break it into libraries with well-defined APIs, and
then break those libraries out into services when necessary. Jumping straight
to the latter creates more room for problems than it solves.
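
To illustrate the library-first idea, here's a minimal sketch (the names, such
as `Billing` and `InProcessBilling`, are hypothetical): callers program
against a plain interface, so the in-process implementation could later be
swapped for a networked client without touching them.

    # Hypothetical sketch: define the boundary as a library interface first, so
    # a remote client can replace the in-process implementation later if that
    # is ever actually needed.
    from abc import ABC, abstractmethod

    class Billing(ABC):
        @abstractmethod
        def charge(self, customer_id: str, cents: int) -> str:
            """Charge the customer and return a payment id."""

    class InProcessBilling(Billing):
        def charge(self, customer_id: str, cents: int) -> str:
            # A plain function call today, with the same signature a remote
            # client would expose tomorrow.
            return f"payment-{customer_id}-{cents}"

    def checkout(billing: Billing, customer_id: str) -> str:
        # Callers depend on the interface, not on whether it's local or remote.
        return billing.charge(customer_id, 4999)

    print(checkout(InProcessBilling(), "cust-42"))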

~~~
dsmithatx
Well, we are rebuilding from scratch into microservices, as we have the
manpower and money to do it. Why not just go with libraries? Here are the top
five reasons off the top of my head.

The current monolith requires Vagrant and doesn't mimic production nearly as
well as smaller containers.

This design will be easier to understand, and with dev turnover that means
bringing new devs up to speed faster. This is especially important when
working with outside development firms, which we do.

Deployments and scaling will be much faster. If one endpoint goes down, it
won't affect others. We are working to ensure one endpoint is not dependent on
another.

When change is required in a certain part of the application, only the related
service can be modified and redeployed.

No long-term commitment to a single technology stack

~~~
acdha
It sounds like most of the benefits you're describing come simply from
redesigning the system with the benefit of experience and the much deeper
knowledge about the problem which you gained building it the first time.
Almost every point on your list could be restated in the opposite direction:

> The current monolith requires Vagrant and doesn't mimic production nearly
> as well as smaller containers.

The decision to require Vagrant has nothing to do with microservice vs.
monolith, and you'd get the same consistency benefits if you deployed a large
single application in a container as well; plus you'd have the added
simplicity and debugging benefits of not needing to run additional
applications for things like orchestration or service discovery.

> This design will be easier to understand, and with dev turnover that means
> bringing new devs up to speed faster. This is especially important when
> working with outside development firms, which we do.

Easier to understand is an aesthetic choice, but it generally means things
like a clean design, organization, good documentation, etc., which can be done
in a monolith as well. Smaller services can encourage that but they don't
necessarily – I've seen plenty of gnarly interdependencies and
under-documented schemas – and they have a hard up-front cost: all of your
new developers now need to be able to start, reload, and debug many separate
services and understand any interdependencies. If your culture and tooling can
manage that well, it almost certainly could maintain a clean larger codebase
following the same principles.

> Deployments and scaling will be much faster. If one endpoint goes down it
> won't affect others. We are working to ensure one endpoint is not dependent
> on another.

This is certainly the appeal but many places find that the scaling benefits
are less than predicted because the business logic requires things which are
harder than expected to separate or scale independently, and there's a cost to
separation if that means that you're now forced to implement something like
distributed transactions or locking on top of your own custom services.

Again, I think a team with the experience and support which can do this well
in one model is also likely to have similar results with the other.

> When change is required in a certain part of the application, only the
> related service can be modified and redeployed.

If you have a well-defined automated deployment process, this is a relatively
minor benefit in either case. Assuming you weren't just running a single
instance of the previous app, you need a way to handle things like blue-green
deployment, managing schema updates, etc. no matter which design philosophy
you pick.

> No long-term commitment to a single technology stack

This is one of the strongest arguments, but there is also a cost involved to
having to learn and support multiple stacks. Again, I think that's more a
function of culture and resources — I've seen places which were too
conservative until they couldn't find developers at any price as well as
places which had severe “Oooh, shiny” problems where every module was written
in the latest thing the "lead" developer had seen that week.

Again, I'm not saying one approach is right or wrong but rather that it's too
narrow a view to look at one practice in isolation. The microservice idea has
been around for decades but it hasn't become universal because quality of
implementation matters a lot and it's not optimal for all problems — I'm
pretty sure the team at Oracle which built the deeply horrible business
application I had to support about a decade ago used many of the same
arguments to pitch the benefit of their enterprise Java web service
architecture, which turned simple database queries into thousands of calls to
services on a dozen servers. It's not that the concept was right or wrong but
that it wasn't a silver bullet.

------
jasode
Here's an exchange between jedberg[1] and author Sean Kelly about a previous
incarnation of this microservices article (September 14, 2016):

[https://news.ycombinator.com/item?id=12509066](https://news.ycombinator.com/item?id=12509066)

And in that same thread, in another response, SK writes:

 _> "I do wish I had titled it better, as most people have (rightfully) dinged
me on this one ;)"_ \--
[https://news.ycombinator.com/item?id=12508941](https://news.ycombinator.com/item?id=12508941)

... and yet, here's another new article about microservices -- with another
clickbait title. The subtitle then has the breathless _"Continue reading to
find out why."_

Serious non-snarky question to HN audience: do more than 50% really prefer
this style of writing?

[1] I singled out jedberg's comment as significant because he discussed the
costs and tradeoffs of microservices at two well-known companies (Reddit: no;
Netflix: yes). The interview transcript:
[https://scaleyourcode.com/interviews/interview/11](https://scaleyourcode.com/interviews/interview/11)

~~~
lgas
No one cares whether people prefer this style of writing; they only care
whether it's effective.

~~~
StabbyCutyou
The accuracy of this statement is as depressing as it is apt.

------
nhumrich
The author never discusses the most important pieces of microservices. The
biggest one, in my mind, is organizational scalability. He talks about
scalability from a performance viewpoint but not from an org standpoint. Same
with speed: yes, microservices add overhead and can be slower than a monolith,
but they can give you autonomy and business speed. Business speed is so much
more important than performance.

~~~
djsumdog
The author didn't discuss queues once. I work at a company that uses
microservices and queues. Right now we use Rabbit, which is terrible for some
reasons I won't get into, but the one good thing about it is that each message
has to be acked. You take a message, you process it, and then you acknowledge
that you've processed it.

Things don't get lost. You can requeue messages that end up being
dead-lettered due to bugs. You can scale the number of services that read from
that queue. You can bring services up independently if they just read and
write from a system that says, "here's a chunk of data to process."
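
As a rough illustration of that ack-based flow, here's a minimal consumer
sketch using the pika client; the queue name (`transcode-jobs`) and the
processing function are hypothetical, and dead-lettering assumes a dead-letter
exchange is configured on the queue.

    # Minimal sketch of an acked RabbitMQ consumer (pika 1.x). Queue name and
    # processing logic are hypothetical.
    import json
    import pika

    def process(job):
        print("processing", job)

    def handle(ch, method, properties, body):
        try:
            process(json.loads(body))
            ch.basic_ack(delivery_tag=method.delivery_tag)  # "I really processed it"
        except Exception:
            # Unacked/nacked messages aren't lost: they're redelivered, or routed
            # to a dead-letter queue (if one is configured) for later requeueing.
            ch.basic_nack(delivery_tag=method.delivery_tag, requeue=False)

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="transcode-jobs", durable=True)
    channel.basic_qos(prefetch_count=1)  # one unacked message per worker at a time
    channel.basic_consume(queue="transcode-jobs", on_message_callback=handle)
    channel.start_consuming()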

You can lose data with Rabbit, and if you're starting from the ground up you
might want to look at something like Kafka instead. If you don't care about
message loss and just need throughput (like if you're doing real-time logs or
sensor data), something like ZeroMQ might be more appropriate.

The idea behind microservices is that you gain stability at the expense of
some redundancy.

~~~
UK-AL
We use queues + microservices. It's a good combo.

We also split the messages into different types: commands and events. A
command performs a specific action and is normally aimed at a specific
service. An event alerts any subscribed service that an action has happened.

So you send a CreateOrder command. After processing, it fires off a
CreatedOrder event, which any subscribed service can pick up (such as an
EmailService sending a confirmation email). This allows you to build workflows
and monitoring systems quite easily.
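
A tiny in-memory sketch of that command/event split (the message types and the
in-process dispatcher here are hypothetical stand-ins for whatever the broker
actually carries):

    # Hypothetical sketch: a command is aimed at one service, an event fans out
    # to whichever services subscribed.
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class CreateOrder:      # command: "do this", handled by the order service
        order_id: str
        email: str

    @dataclass
    class CreatedOrder:     # event: "this happened", anyone may react
        order_id: str
        email: str

    subscribers = defaultdict(list)

    def subscribe(event_type, handler):
        subscribers[event_type].append(handler)

    def publish(event):
        for handler in subscribers[type(event)]:
            handler(event)

    def handle_create_order(cmd):
        # ... persist the order, then announce it ...
        publish(CreatedOrder(cmd.order_id, cmd.email))

    # e.g. an email service subscribes to the event
    subscribe(CreatedOrder, lambda e: print("confirmation email to", e.email))
    handle_create_order(CreateOrder("o-1", "a@example.com"))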

I'm interested in your experience with RabbitMQ.

~~~
djsumdog
So you're using event sourcing, then? We're doing the same thing using
akka-persistence.

I'm not directly on the team that maintains our RMQ instances, so I'm not the
best person to comment on it. Our dev-ops team has a complicated set of
shovels and federation to deal with / minimize data loss, but I haven't had to
delve into those scripts too much yet.

------
StabbyCutyou
Hello all you nice folks on HN. Original author of the article here. A few
things I'd like to add, since this is essentially the feedback I get every
time this makes the rounds:

1. I don't think microservices are bad, or that they don't work. Quite the
contrary. My argument is simply that there is a lot of additional complexity
in using this approach to your architecture, and that you should consider
whether you're able to handle that before diving in.

2. This was adapted from a lightning talk I gave almost two years ago, and is
almost by design light on hard details or deep dives. It was meant to just
introduce the problems by talking about fallacies (which many think are straw
men, but they're arguments that I've heard from people in my own discussions
on the subject).

3. Even if you disagree with my points, I appreciate folks taking the time to
read and provide feedback.

Thanks!

~~~
ohstopitu
I would like to respectfully disagree with you on most of your points (in the
article).

1. While network boundaries are not strictly necessary, and in general add a
little more overhead (and more boilerplate), they're a necessary evil. They
help you think about each bit of software separately and, in a way, help you
find errors when they occur. I have worked with monolithic projects, and while
finding bugs is not hard, if it were a serverless microservice it'd be a lot
easier.

3. Adding discipline as an abstract afterthought is a lot harder than just
forcing discipline with microservice projects.

4. A "not my problem" approach might not be something to desire, but in my
opinion, it helps in two major ways:

i) people are a lot more responsible for their code

ii) they don't make assumptions or take things for granted (a lot of error
handling, for example)

5. Scaling microservices (especially with AWS Lambda et al.) is insanely easy
and an afterthought. That is not the case for a monolith, where you either
have to keep worrying about scaling or rewrite code.

I personally feel microservices are the way to go. However, microservices in
general need a bit more guidance - best practices and so on - and we have a
lot more work to do before we get there.

------
sparrish
Fallacy #4: Simple for Engineers "leads to 'not my problem' syndrome."

I've found the exact opposite happens with microservices. With well-defined
interfaces and APIs, I know exactly where things are breaking and can pinpoint
the service and code more quickly.

~~~
sp332
Right, but the problem is often pinpointed to someone else's team, and then
you have to go communicate with them and convince them to make it a priority.

~~~
coldcode
At my company this is the most crucial problem: people whose VP's priorities
differ from your VP's priorities cause arguments that never end with anything
being fixed. So you have to work around the problem, which leads to hack upon
hack.

~~~
dasil003
At this scale, though, you need service isolation just to operate with any
efficacy. Think how much worse this case would be if those two rank-and-file
developers under distant VP hierarchies were deploying the same codebase to
the same production machines. Basically, it would be impossible to ship
without independent QA and Ops arms and heavy policy on what is required to
deploy _anything_. Now you're moving at about 1% of the speed of a Facebook or
Amazon.

------
beat
I have this thing I say sometimes: "Which kneecap do you want the bullet in?"
Microservices vs. monoliths is a _tradeoff_, and a partisan holy war of "my
way is right and their way is wrong" fails to recognize that point.

Monolithic services benefit from integration and suffer from complexity.
Microservices benefit from simplicity and suffer from isolation. That's it.
That's what matters. If you build a monolith, sooner or later you wind up with
a big ball of mud that nobody fully understands, with careful consideration
required before any new functionality or refactoring. Likewise, if you build
microservices, you wind up not knowing exactly who or what depends on whatever
mistakes you made in the design in the first place, and needing a lot of
inter-team negotiation to make significant changes in how the microservices
behave.

What I find interesting is the evolution. If you have a ball-of-mud monolith,
it can be very difficult to refactor due to complex internal dependencies.
Pulling a chunk of functionality out into a microservice is a way of pulling
off a major refactoring without finding yourself up to your ass in alligators.

Likewise, if you find integration points and inter-team communication and
duplicated structures are killing you, you start merging microservices
together, building something more complex to solve complexities that exist
outside of any single service.

Software evolves. Deal with it.

------
Animats
We did a self-driving vehicle for the 2005 Grand Challenge as microservices.
This was under QNX, a hard real-time OS with really good interprocess
communication. Same-CPU overhead for interprocess calls is very low, and calls
over the network are also efficient.

That architecture was chosen mostly because there were lots of things running
asynchronously that were not in sync with each other. The sensors (LIDAR,
radar, GPS, INS, wheels, etc.) all had their own clocks and schedules. An
interprocess call was either "do this on your next cycle" or "what was the
situation at time T?" A request for "where were we at time T" returned an
interpolated value from
GPS/INS fusion; T had to be at least 50ms or so in the past, so there were
fixes available before and after the requested time. This allowed asking for
position info more frequently than some sensors could respond. On each LIDAR
line scan, the vehicle position was interpolated and used for map updating.
Planning always ran a little in the past, so we could use interpolated rather
than extrapolated data.
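
A rough sketch of that "where were we at time T" interpolation idea (the fix
format and the numbers here are made up; the real system fused GPS/INS):

    # Hedged sketch: T lies far enough in the past that fixes exist on both
    # sides, so the answer is an interpolation rather than an extrapolation.
    from bisect import bisect_left

    # (timestamp_seconds, x_metres, y_metres) fixes from pose estimation
    fixes = [(0.00, 0.0, 0.0), (0.05, 1.0, 0.2), (0.10, 2.1, 0.5), (0.15, 3.0, 0.9)]

    def position_at(t):
        times = [f[0] for f in fixes]
        i = bisect_left(times, t)
        if i == 0 or i == len(fixes):
            raise ValueError("T must lie between two recorded fixes")
        (t0, x0, y0), (t1, x1, y1) = fixes[i - 1], fixes[i]
        a = (t - t0) / (t1 - t0)  # linear blend between the bracketing fixes
        return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))

    print(position_at(0.07))  # e.g. the vehicle pose for one LIDAR scan line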

This sort of thing is common in robotics, but may not be that helpful in web
services.

------
api
Microservices always struck me as a reinvention of service-oriented
architecture from the 90s and early 2000s. That was done using RPC protocols
like Java RMI or CORBA, while microservices are normally done with HTTP(S) or
event/pubsub systems.

Basically, you're re-inventing linking and weak/lazy binding across a system
or container boundary. That's it. This is not a new idea. It has some
advantages in areas like scaling and modularity, but it also has the
disadvantages of increased latency, complexity, and all the weird edge-case
failure modes introduced by the fact that _the network is not reliable._

One of the fun things about being developer-old (over 25) is seeing the same
things get reinvented over and over and over again by every new generation of
developers.

~~~
jamestenglish
I am an older dev who did a lot of "SOA" work back in the 2000s, and I
disagree that microservices are a "reinvention". In my opinion they are a
further refinement of those SOA ideas (which is a good thing, as it keeps the
industry moving forward).

A lot of the SOA work I did was putting REST/SOAP interfaces on various
monolithic APIs. This was generally a good step forward and decoupled things
(especially the front end), but it was far from perfect. You still had the
monolith, and a lot of the code smell that gets built into a monolith:
spaghetti dependencies making it hard to write new features and move forward.

Now the argument is always "well, if you and your team were just better coders
and more disciplined, that wouldn't happen," but in the real world it has
proven incredibly difficult to actually keep a whole team that disciplined,
and microservices give you a real, hard barrier to enforce that discipline.

I very rarely see things that are truly reinvented, and I think that claim
carries some serious ego bias, essentially saying: back in my day we did it
right and you dummies are just figuring it out.

Instead I see every new generation of developers refining good ideas over and
over again which improves our industry and products.

~~~
lmm
> Now the argument is always "well, if you and your team were just better
> coders and more disciplined, that wouldn't happen," but in the real world it
> has proven incredibly difficult to actually keep a whole team that
> disciplined, and microservices give you a real, hard barrier to enforce that
> discipline.

I agree that you need enforcement. But with a multi-module project in a good
build system you can enforce decoupling at build time, without the overhead of
a network boundary.
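
One cheap way to get that kind of build-time enforcement, sketched here with a
hypothetical rule table and package layout (a real multi-module build, or a
dedicated tool like import-linter, would do the same job more thoroughly):

    # Sketch: fail the build if a package imports another it isn't allowed to.
    # Package names, the rule table, and the "src" layout are hypothetical.
    import ast
    import pathlib

    FORBIDDEN = {"billing": {"shipping"}, "shipping": {"billing"}}

    def imports_of(path):
        names = set()
        for node in ast.walk(ast.parse(path.read_text())):
            if isinstance(node, ast.Import):
                names.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                names.add(node.module.split(".")[0])
        return names

    def check(src_root="src"):
        violations = []
        for pkg, banned in FORBIDDEN.items():
            for py in pathlib.Path(src_root, pkg).rglob("*.py"):
                bad = imports_of(py) & banned
                if bad:
                    violations.append(f"{py} imports {sorted(bad)}")
        return violations

    if __name__ == "__main__":
        problems = check()
        if problems:
            # Breaking the build here plays the role a module boundary would.
            raise SystemExit("\n".join(problems))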

~~~
jamestenglish
Very true, but decoupling is only one of the benefits of microservices.
Polyglot environments, scalability, etc. also come with microservices vs.
modules.

As with everything, it's about the right tool for the job, and a lot of
organizations are more than happy to trade network overhead for the other
bullet points microservices offer.

~~~
lmm
Polyglot I've concluded is more trouble than it's worth - you always end up
wanting to reuse code between modules, so a good general-purpose language for
everything is much better than multiple languages. Scaling is a nice problem
to have - it makes sense to split out services when you need to do so to
scale, but it's not worth the overhead until you need it (particularly if you
have enforced decoupling already, because that ensures that you will be able
to cleanly factor out services when the time comes).

The more experience I get the more I think "right tool for the job" is bad
advice. It makes it sound like tool choice is always 50-50, whereas actually
in a lot of cases it's more like 95-5. I've worked at a number of
organizations using various architectures, but the one that was most gung-ho
for microservices was by far the least successful, I think as a direct result
of their approach to microservices.

------
lawik
Seems to me that every point is "well, it depends" and I didn't find it
particularly convincing.

I work in a microservice codebase with the realization that it might have
been premature for what we needed, but it does roughly live up to our
aspirations: products sharing infrastructure and common functionality
(accounts, file storage/management, notifications, email), solid boundaries
for easier outsourcing, and support for using different languages in different
parts if necessary.

I've found gains, I've found costs. The article is right in that microservices
are not a silver bullet, I think few people really believe they are.

~~~
amelius
How do you handle distributed transactions?

~~~
UK-AL
You don't particularly need distributed transactions if you design it right.

Generally, try to avoid them if you can. They have massive performance
overheads.

~~~
amelius
The problem is that more often than not, you don't know the requirements
beforehand.

------
oppositelock
Where did the author get those fallacies to refute? They're more like straw
men.

There is no infrastructure panacea for crappy software engineering practices
or bad unscalable designs.

We've been running microservices in production for years. It is more difficult
to partition behavior into little chunks than to make a monolithic app. It can
be slower if you don't handle inter-service RPC properly, and as your services
evolve, you have to stay on top of merging or splitting them as needed to
maintain clean internal RPCs. It's a whole lot of work, actually.

Why do it? Because there are also benefits. In a large system, microservices
which are tested at the RPC boundary can be deployed independently of each
other, which is HUGE. If you have a bunch of people building a monolithic app,
you can only rev it as fast as the slowest of your teams can build and qualify
it - this has been the biggest win for us. It's also easier to test these
things, because you can mock at RPC boundaries instead of mocking code; that's
a more realistic scenario, and it helps you find issues like inter-RPC auth
problems which you wouldn't find in code-mocked unit tests.
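
As a rough sketch of "mock at the RPC boundary": stand up a stub of the
downstream service and point real client code at it, so serialization,
headers, and status handling are exercised too. The endpoint and payload here
are hypothetical.

    # Hedged sketch: a stub HTTP service plays the downstream dependency, and
    # the real client code is tested against it over actual HTTP.
    import json
    import threading
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class StubInventory(BaseHTTPRequestHandler):
        def do_GET(self):
            body = json.dumps({"sku": "abc", "in_stock": 3}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):  # keep test output quiet
            pass

    def in_stock(base_url, sku):
        # Real client code under test; nothing internal is patched out.
        with urllib.request.urlopen(f"{base_url}/stock/{sku}") as resp:
            return json.load(resp)["in_stock"] > 0

    server = HTTPServer(("127.0.0.1", 0), StubInventory)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    assert in_stock(f"http://127.0.0.1:{server.server_address[1]}", "abc")
    server.shutdown()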

I think of it this way - it's a whole lot of extra work which you have to do,
which buys you flexibility in deployment and testing. That's a big deal.

------
xg15
_You don’t need to introduce a network boundary as an excuse to write better
code._

This obviously _should_ be true (and for many programming languages it is),
but I feel that, thanks to the perfect storm of Java class resolution,
gratuitous use of reflection, and Maven dependency rules, this isn't actually
the case anymore in the Java world.

If you work on a large Java project with lots of 3rd party dependencies, you
can experience some odd interactions of libraries that you'd assume should
know nothing of each other - e.g. because they pull in conflicting transitive
dependencies or perform classpath scanning to "simplify" configuration.

In cases like this, it's a huge help if you can keep the interactions in
bounds by splitting an application into several different VMs or WARs.

~~~
lmm
Java is the one ecosystem that I think has this sorted, because Maven gets
releases/versioning right and makes multi-module projects easy. You need to
align yourself on consistent versions of your transitive dependencies, but how
many of your modules would not use most of your dependency libraries
(especially the all-encompassing frameworks that tend to have transitive
dependency issues) anyway? And of course all your modules have to depend on
the same versions of things, but that's a good idea anyway - even in a
microservice world using two different versions of a framework for different
modules will lead to a confusing debugging experience. Classpath scanning
would be an issue if done blindly (e.g. not scoped to specific packages), but
who's actually doing that? And if you absolutely have to have multiple
different versions of a dependency, OSGi, while a massive pain, is more
lightweight than an actual network boundary.

~~~
lnanek2
Not only that, but big iron level Java app servers like OracleAS have long
been able to have "hidden" dependencies. So your Hibernate dependency can
depend on one version of, say, commons-lang, and your Spring dependency can
depend on a different version. For many years this had to be done with tools
that change the package names, but in modern app servers you can just specify
this using configuration and they implement it with special class loaders.
Only really causes a problem if your dependency returns instances of its
dependency to callers and then your app needs a different version...

------
siliconc0w
I think the current best practice is 'monolith-first', but try to use a
framework that allows you to isolate domains/logic (like Rails engines or
Django apps). Then, once you grow to a size where it makes sense, break it up.

My experience at a smaller shop was that it was kind of hard to automate all
the stuff you do when starting a new app. A new app may need a GitHub repo,
permissions, an Airbrake API key, a Docker Hub repo, to be added to the CI
tool and the static analysis tool, alerting/monitoring config, etc.

You also need a way to enforce global org requirements like "all apps need to
log in JSON so it can be easily parsed later for analysis," or "all apps
should expose a health check at /health-check," or "all apps need to use
version X of dependency Y because of a security issue."
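
For instance, the JSON-logging rule might be enforced with a small shared
formatter that every app imports, rather than a per-app convention; here's a
minimal stdlib sketch with hypothetical field names.

    # Sketch of an org-wide "log JSON" rule via a shared logging formatter.
    import json
    import logging

    class JsonFormatter(logging.Formatter):
        def format(self, record):
            return json.dumps({
                "ts": self.formatTime(record),
                "level": record.levelname,
                "app": record.name,
                "msg": record.getMessage(),
            })

    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    log = logging.getLogger("orders")
    log.addHandler(handler)
    log.setLevel(logging.INFO)
    log.info("order %s created", "o-1")  # emits one parseable JSON line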

Without this automation your apps begin to drift in standards and practices.
They accumulate old libraries and it becomes a lot harder to, say, move from
one CI tool to another.

------
KaiserPro
Like all things in computing sometimes it works, sometimes it does not.

An example of when it's a stupid idea is this:

A set of microservices for generating transcriptions for videos. It consists
of the following:

o a wrapper for ffmpeg that takes a video from S3, splits out the audio, and
puts it in a new S3 bucket

o a service that takes the audio URL, uploads it to Google STT, and spits out
text

Now, there is no requirement for keeping the audio around, so this is
basically an unreliable single-step service with poor logging and monitoring.

It doesn't help that, to make it all microservice-y, all the videos are
shuttled about via S3, which means they've re-created a really shit file
system on top of S3.

All it needed was a hot folder and a command-line program triggered by
inotify.

But that wasn't new enough, despite it working and being a fucktonne faster
and easier to debug.
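
For what it's worth, the hot-folder alternative is about this much code; a
hedged sketch that polls instead of using inotify so it stays stdlib-only
(paths and the ffmpeg flags are illustrative):

    # Sketch of the hot-folder approach: watch a directory, split audio with
    # ffmpeg, hand the result to whatever uploads to the STT service.
    import subprocess
    import time
    from pathlib import Path

    INBOX, DONE = Path("incoming"), Path("processed")

    def extract_audio(video):
        audio = DONE / (video.stem + ".wav")
        subprocess.run(["ffmpeg", "-y", "-i", str(video), "-vn", str(audio)], check=True)
        return audio

    def watch():
        seen = set()
        while True:
            for video in INBOX.glob("*.mp4"):
                if video not in seen:
                    seen.add(video)
                    print("ready for STT upload:", extract_audio(video))
            time.sleep(2)

    if __name__ == "__main__":
        DONE.mkdir(exist_ok=True)
        watch()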

------
the_arun
Not sure I agree with the article. In a nutshell, microservices enable an
organization with rusty engineering practices to roll forward. Initially it
sounds like hype, but people will reap the benefits once they start building
them.

~~~
eej71
Care to elaborate? Did you think the fallacies were straw-man arguments? Was
the logic flawed in some of them? Did any of the fallacies ring true for you?

~~~
the_arun
IMHO, some of the fallacies listed there are correct. Having said that, we
cannot say something is bad just because of those fallacies. Here are some of
the advantages of microservices:

1. Increased isolation / reduced dependencies, which results in reduced code
rollouts.

2. Easier software upgrades, e.g. moving the JDK from one version to another.

3. Smaller blast radius.

4. A service can be managed by a smaller team adhering to a specific service
contract.

5. Forces the team to do test-driven development around the service contract.

Of course, every engineering pattern comes with trade-offs, and they may not
serve all use cases. We need to pick and choose what works for the problem and
the team.

------
hliyan
I've worked with microservices over the last year and a half, and I partially
agree with the author. My current approach is that of "logical microservices"
-- running within the same process/runtime, but with a language-independent
API (pure JSON, for example).
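
A minimal sketch of what such a "logical microservice" boundary can look like
(the names and request shape are hypothetical): only JSON crosses the
boundary, so the service could later move behind a real transport without
changing its contract.

    # Same process, but callers only pass JSON strings across the boundary.
    import json

    def accounts_service(request_json):
        # Nothing but JSON crosses the boundary -- never live objects.
        req = json.loads(request_json)
        if req["op"] == "get_user":
            return json.dumps({"id": req["id"], "name": "Ada"})
        return json.dumps({"error": "unknown op"})

    # An in-process "call" today; an HTTP POST of the same payload tomorrow.
    resp = accounts_service(json.dumps({"op": "get_user", "id": 7}))
    print(json.loads(resp)["name"])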

------
peterwwillis
"When should I use microservices?"

When you _need to_.

There is a long history of applications being implemented poorly and then
switching their design in order to fix the implementation. _Stop that!_ Shitty
legacy codebases are a fact of life, but just changing the design without
considering why it _needs_ to be that way won't result in anything more than a
new shitty implementation.

At a previous employer, there was a mandate handed down that all new services
had to be 100% API-driven. Of course, very few people followed this. But even
if they had, this would not have resulted in every single team's tools
performing better or more reliably. Some head honcho probably just wanted the
_potential_ for all of the benefits of services with APIs, but completely
ignored the fact that there are cases where you just don't need an API, or
where it would be effectively meaningless to the tool's function.

You can design monolithically now and split out microservices later, or vice
versa. The former allows you to work faster and get shit done now. The latter
gives you the _idea_ that in the future you can tie together new services with
a minimum of effort, but not a promise. Add to that that it's the equivalent
of wrapping every system call in a shellac of IPC calls and you get into some
really kludgy stuff. And heaven forbid you ever need to call one of those
services 10,000 times more than you used to and suddenly wonder why there's a
big bottleneck in your system that didn't exist when the function was
monolithic.

------
cle
This article explains the caveats and pitfalls of microservices, which are
extremely important to understand.

In fact, when making any technical decision, there are ALWAYS pros and cons to
every choice. It's infuriating to me when projects, libraries, frameworks,
articles, etc. only communicate their advantages without describing their
disadvantages and drawbacks. I can't (and won't) pick a technology until I
understand what it doesn't do and when it doesn't work.

So yeah, this is a nice antidote to the microservices hype. In my experience,
these are some of the most important drawbacks of using a service-oriented
architecture.

------
eikenberry
A problem I see with many microservices-vs-monolith discussions is the
assumption that it is a binary relationship instead of an analog one. There
exist all sorts of architectures between the microservice model and the
monolith model, with different trade-offs: programs like Postfix, which is a
collection of small applications working together via sockets; queue-based
separation like Sidekiq; RPC systems like Protocol Buffers or Thrift; etc. You
don't have to use an HTTP/REST API to get many of the benefits of
microservices while controlling the complexity.

------
hipsterrific
We split our monolith into microservices. We did so to make it easier to
reason about, but moreover we did it so that we could apply different data
persistence strategies to each entity. A monolith would have bound us to a
single persistence type (e.g., SQL), but microservices free us, which makes
reasoning about the application easier. The monolith didn't work for us, so we
moved to microservices; but we didn't start with microservices, we started
with the monolith and grew and evolved it as we needed to.

------
newscracker
This article and the points made in it looked very familiar to me.

On a related note, Martin Fowler's talk on this topic [1] has a good enough
description (meaning vague, because that's what this is, practically),
covering the advantages, disadvantages, and other nuances.

The key takeaway for me was that the team and organization have to be ready
for the microservices approach.

[1]:
[http://www.youtube.com/watch?v=wgdBVIX9ifA](http://www.youtube.com/watch?v=wgdBVIX9ifA)

------
brightball
Rather than responding to each of these points, which are valid points, here's
a blog post I wrote for Codeship a while back on microservices with Heroku
that dives into a lot of the tradeoffs from an organizational and
infrastructure standpoint.

[https://blog.codeship.com/exploring-microservices-architecture-on-heroku/](https://blog.codeship.com/exploring-microservices-architecture-on-heroku/)

------
kefka
We have guidelines from long ago. (Edit: link here:
[http://homepage.cs.uri.edu/~thenry/resources/unix_art/ch01s06.html](http://homepage.cs.uri.edu/~thenry/resources/unix_art/ch01s06.html))

    Rule of Modularity: Write simple parts connected by clean interfaces.

What's simpler than a container that has a clean API in and out?

    Rule of Clarity: Clarity is better than cleverness.

Container Foo does foo's action, not Bar's action.

    Rule of Composition: Design programs to be connected to other programs.

If everything's an API call, calling it is trivial.

    Rule of Separation: Separate policy from mechanism; separate interfaces from engines.

The interfaces can be their own container. Now updating the interface can't
bring down the whole system, and the people in charge of the interface needn't
have permission to change other containers (unless they're on that team).

    Rule of Simplicity: Design for simplicity; add complexity only where you must.

I know that a container does its job and does it well. Any complexity is
contained, and cannot pollute other systems.

    Rule of Parsimony: Write a big program only when it is clear by demonstration that nothing else will do.

Or... the combined system that you run _is_ the big program, because a big
problem is almost always lots of small problems that look big.

    Rule of Transparency: Design for visibility to make inspection and debugging easier.

Hence the APIs. The only thing I can think of adding is a debug flag that
allows logging to a logging server if DEBUG is on. Then it doesn't matter what
server the data is climbing through; you can see it. I personally like Apache
NiFi for the provenance; I've found nothing else like it.

    Rule of Robustness: Robustness is the child of transparency and simplicity.

A container makes it evident when it is the result of bad programming: you'll
see good data go in and bad data come out.

    Rule of Representation: Fold knowledge into data so program logic can be stupid and robust.

Because you never know what the next dev will think about your code. That
clever method fails if the method is deleted...

    Rule of Least Surprise: In interface design, always do the least surprising thing.

That's arguable, if you're trying to scam users with dark patterns.

    Rule of Silence: When a program has nothing surprising to say, it should say nothing.
    Rule of Repair: When you must fail, fail noisily and as soon as possible.

Absolutely. I've dealt with way too many things that fail silently, or fail
with "OK", or are chatterboxes on good data.

    Rule of Economy: Programmer time is expensive; conserve it in preference to machine time.

This plays right into containerization. Compute/storage is cheap.

    Rule of Generation: Avoid hand-hacking; write programs to write programs when you can.

Yeah, Lisp and FP still haven't taken off.

    Rule of Optimization: Prototype before polishing. Get it working before you optimize it.
    Rule of Diversity: Distrust all claims for “one true way”.
    Rule of Extensibility: Design for the future, because it will be here sooner than you think.

Hence containers. They're a ball of privilege separation, data segregation,
and clean interfaces, and they're easily maintainable.

~~~
lmm
> What's simpler than a container that has clean API in and out?

The problem is that network APIs still suck. The boundary between components
is the place where strong typing is most vital, but microservices are still
largely based around stringly-typed JSON.

> If everything's an API call, calling to it is trivial.

Calling a library API is a lot easier than calling a web API

> The interfaces can be their own container.

No they can't. That doesn't make any sense. What do you even mean?

> I know that a container does its job and does it well.

Maybe. I'm never clear on what exactly the job of the "container" is supposed
to be - it seems to involve process isolation (I'd think ordinary processes
should do that?), and maybe doing funky network namespacing and PID
namespacing and a few other things like that, only then other people are
saying you shouldn't do that?

> A container makes it evident if it is the result of bad programming. You'll
> see good data go in, and bad come out.

Only if you can tell good data from bad data. Which is much easier to do
within the language, where you have a type system to help you.

> This plays right in to containerization. Compute/storage is cheap.

Compute/storage is cheap, so you probably won't ever have to make your system
distributed. The biggest problem with microservices is that they slow
development by making it much harder to run the whole system. Anything that
makes your development feedback loop slow is a big problem.

> Yeah, LISP and FP still hasn't taken off.

FP is taking off. It's a lot bigger than it was 5 or 10 years ago.

> Hence containers. It's a ball of privilege separation, data segregation,
> clean interfaces, and easily maintainable.

On the contrary, containers are being pushed as the one true solution to
everything, and seem to aggregate a bunch of unrelated concerns.

~~~
clintonb
> The problem is that network APIs still suck. The boundary between components
> is the place where strong typing is most vital, but microservices are still
> largely based around stringly-typed JSON.

You don't have to use a RESTful API and JSON. If you feel that configuration
is lacking you can use a binary format and a non-RESTful API.

~~~
lmm
> You don't have to use a RESTful API and JSON. If you feel that configuration
> is lacking you can use a binary format and a non-RESTful API.

Sure, and that improves things a lot. I do use Thrift (via Scrooge) sometimes.
I get the sense that isn't what the microservice advocates are talking about
though.

But even then, why introduce a network boundary when you don't need to? All it
does is make your system slower, add overhead converting back and forth from
DTOs, and give more opportunity for calls to fail.

------
cs02rm0
Like virtually everything else in software, it's a tool. Become familiar with
it, know its advantages and disadvantages and use it when you think
appropriate; always and never are probably not optimal.

------
chaohaus
Microservices, IMO, are more about scaling engineering organizations than
they are about scaling actual code or producing better code. Treating each
part of the product as a self-contained service (a product in itself), with
well-defined customers and defined contracts with those consumers, will often
allow different teams within an organization to operate with more autonomy.

------
jamestenglish
Some counterpoints, especially on fallacy #1:

 _> Fallacy #1: Cleaner Code_

 _> “You don’t need to introduce a network boundary as an excuse to write
better code.”_

I see this all the time as the argument against microservices: "but if you
are just disciplined you don't need them." It seems pretty clear to me that a
large body of programming history has shown it is incredibly difficult for a
whole team to stay disciplined all the time. If everyone were a perfect
driver, we wouldn't need seat belts. Some artificial "barriers" can be very
helpful for people working in the real world.

 _> Fallacy #2: It’s Easier_

 _> “Distributed transactions are never easy.”_

This argument I'll buy, but there is a lot of really good work going into
orchestration tools like Kubernetes which makes many of these concerns much
smaller.

 _> Fallacy #3: It’s Faster_

 _> “You could gain a lot of performance in a monolith by simply applying a
little extra discipline.”_

Again with the discipline... But another point he makes really misses the
mark:

 _> Additionally, many of these stories about performance gains are actually
touting the benefits of a new language or technology stack entirely_

Microservices allow you to use the best tool for the job for each individual
service. If you have a Java monolith and you want to add new functionality
that, say, Go would be better suited for, chances are you are still going to
develop it in Java because it has to fit with the rest of the monolith. His
example counters his claim.

 _> Fallacy #4: Simple for Engineers_

 _> “A bunch of engineers working in isolated codebases leads to ‘not my
problem’ syndrome.”_

This is pretty subjective, and I really think it depends on culture and
buy-in. Almost all of the same things could be said about a well-architected
SOA monolith.

 _> Fallacy #5: Better for Scalability_

 _> “You can scale a microservice outward just as easily as you can scale a
monolith.”_

Again, his example seems to counter his claim. Yes, you could segregate the
API calls to your monolith, but you are still going to have the overhead of
those unused APIs. Unless you are doing some really clever dynamic loading,
that code is still going to be loaded into your executable package, at the
very least redundantly increasing RAM load and making for wasted resource
utilization (which converts to real dollars and cents in the VM world).
Additionally, the author completely ignores any potential start-up cost of
needing to load a larger monolith rather than a smaller microservice.

~~~
usaar333
> This argument I'll buy, but there is a lot of really good work going into
> orchestration tools like Kubernetes which makes many of these concerns much
> smaller.

Can you explain more about how Kubernetes helps with distributed
transactions? I was aware it is more of a management system for microservices,
not something that handles DB transactions.

I've found splitting a monolith into microservices very painful because it
prevents you from easily having all-or-nothing db commit semantics. It's a lot
easier to wrap your entire RPC/API endpoint in a single db transaction than it
is to issue multiple commits (e.g. an RPC to a microservice that writes) and
worry about the large number of states that can result if your endpoint
crashes mid-way.
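
To make the contrast concrete, here's a minimal sketch of the all-or-nothing
case being described, using a local SQLite transaction and hypothetical table
names; once the second write lives in another service's database, this is
exactly the guarantee you lose.

    # In the monolith, both writes share one local transaction: commit both or
    # neither. Table names are hypothetical.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE refunds (order_id TEXT);
        CREATE TABLE subscriptions (order_id TEXT, status TEXT);
        INSERT INTO subscriptions VALUES ('o-1', 'active');
    """)

    with db:  # one transaction around the whole endpoint's work
        db.execute("INSERT INTO refunds VALUES (?)", ("o-1",))
        # If this second statement were an RPC into another service's database,
        # a crash right here would leave a refund recorded but the subscription
        # still active -- one of the "large number of states" mentioned above.
        db.execute("UPDATE subscriptions SET status = 'cancelled' WHERE order_id = ?", ("o-1",))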

~~~
jamestenglish
This was poorly worded on my part; Kubernetes does not in any way solve this
problem.

Instead, I meant to use Kubernetes as an example of the work being done to
standardize and abstract these _sorts_ of microservice issues. Kubernetes
solves a lot of orchestration issues, and I expect other projects will do
further work on solving other problems, like distributed transactions, in the
future.

The current thinking on microservices is that you should really re-evaluate
how critical strong transactional guarantees are to your application.

~~~
usaar333
> The current thinking on microservices is that you should really re-evaluate
> how critical strong transactional guarantees are to your application.

Are there some examples of what else one can do?

I'm generally fine with service 2 eventually being called if service 1 is, but
then I need that to happen.

e.g. I have a payment service and an internal "subscriptions service". If a
user requests a refund, I have to both issue the refund via the payment
service and cancel the internal subscription.
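
One common pattern for that "service 2 must eventually be called" requirement
is an outbox: record the pending call in the same local transaction as service
1's write, then let a worker retry it until it succeeds (at-least-once rather
than atomic). A hedged sketch with hypothetical names:

    # Outbox sketch: the refund and the pending "cancel subscription" call are
    # committed together locally; a separate drain step retries until done.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE refunds (order_id TEXT);
        CREATE TABLE outbox (id INTEGER PRIMARY KEY, task TEXT, done INTEGER DEFAULT 0);
    """)

    def request_refund(order_id):
        with db:  # both rows land atomically, or neither does
            db.execute("INSERT INTO refunds VALUES (?)", (order_id,))
            db.execute("INSERT INTO outbox (task) VALUES (?)", ("cancel-subscription:" + order_id,))

    def drain_outbox(call_service):
        # Run repeatedly (cron, queue consumer, ...); rows stay until the call succeeds.
        for row_id, task in list(db.execute("SELECT id, task FROM outbox WHERE done = 0")):
            try:
                call_service(task)
                with db:
                    db.execute("UPDATE outbox SET done = 1 WHERE id = ?", (row_id,))
            except Exception:
                pass  # leave the row; a later drain retries it

    request_refund("o-1")
    drain_outbox(lambda task: print("calling subscriptions service:", task))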

------
dragthor
I can hear the Jet.com folks screaming right now.

~~~
Something1234
Why are they screaming? Did I miss some post about them?

