
Microservices - otoolep
http://basho.com/posts/technical/microservices-please-dont/
======
rdsubhas
You need to be this tall to use [micro] services:

* Basic Monitoring, instrumentation, health checks

* Distributed logging, tracing

* Ready to isolate not just code, but the whole build+test+package+promote pipeline for every service

* Can define upstream/downstream/compile-time/runtime dependencies clearly for each service

* Know how to build, expose and maintain good APIs and contracts

* Ready to honor backward and forward compatibility, even if you're the same person consuming this service on the other side

* Good unit testing skills and readiness to do more (as you add more microservices it gets harder to bring everything up, hence development becomes more unit/contract/API-test driven and less e2e driven)

* Aware of [micro] service vs modules vs libraries, distributed monolith, coordinated releases, database-driven integration, etc

* Know infrastructure automation (you'll need more of it)

* Have working CI/CD infrastructure

* Have or ready to invest in development tooling, shared libraries, internal artifact registries, etc

* Have engineering methodologies and process tools to break features down and develop/track/release them across multiple services (XP, Pivotal, Scrum, etc.)

* A lot more that doesn't come to mind immediately
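The first bullet alone is a meaningful bar. A basic health check can start as small as a function that aggregates dependency probes; here is a minimal hypothetical sketch (the check names are made up, and a real service would probe its actual dependencies):

```python
def check_database():
    # Stand-in for a real connectivity probe (e.g. SELECT 1 against the DB).
    return True

def check_queue():
    # Stand-in for a real broker ping.
    return True

def health():
    # Aggregate dependency probes into a /healthz-style payload that a
    # monitoring system or load balancer can poll.
    checks = {"database": check_database(), "queue": check_queue()}
    status = "ok" if all(checks.values()) else "degraded"
    return {"status": status, "checks": checks}
```

Exposing `health()` over HTTP and wiring real probes behind it is where the actual engineering effort goes.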

Thing is - these are all generally good engineering practices.

But with monoliths, you can get away without doing them. There is the
"login to the server, clone, run some commands, start a stupid nohup daemon and
run ps/top/tail to monitor" way. But with microservices, your average
engineering standards have to be really high. It's not enough to have good
developers. You need great engineers.

~~~
StavrosK
That's pretty spot on. I once made an abbreviated flowchart of the above for
my microservices article:

[https://www.stavros.io/posts/microservices-cargo-cult/](https://www.stavros.io/posts/microservices-cargo-cult/)

I urge everyone to use it to decide whether they really need microservices or
not.

~~~
cmdkeen
I know it is a bit tongue-in-cheek, but if you're working somewhere where data
segregation can be solved by "a simple cascading delete", you aren't operating
anywhere close to where any service architecture is relevant.

------
throwaway13337
Microservices, like NoSQL databases and complex deployment systems (Docker),
are very important solutions to problems that a very small percentage of the
development community has.

It just so happens that that portion of the community is the one most looked up
to by the rest, so a sort of cargo-cult mentality forms around these tools.

A differentiator in your productivity as a non-huge-company could well be in
not using these tools. There are exceptions, of course, where the problem does
call for huge-company solutions, but they're rarer than most people expect.

~~~
pbreit
My company was built from the ground up with a micro-architecture and it is an
unmitigated disaster. Totally unnecessary: mind-numbing problems unrelated to
end-user features, unpredictability at every step, a huge coordination tax,
overly complex deployments, a system that is borderline impossible to re-create,
>50% of energy devoted to "infrastructure", dozens of repos, etc.

The whole thing could be trivially built as a monolith on
Rails/Django/Express. But that's not exciting.

~~~
johns
We were also built from the ground up with microservices and had the exact
opposite experience: faster shipping (more value to end users), more
predictability (APIs were designed and behaved similarly across functions
despite polyglot tech), and much less coordination overhead (we deployed dozens
of times per day with a < 10 dev team, pre-releasing backends well in advance
of the user-facing parts). We had to invest a lot in infrastructure, but that
was worth it for many other reasons as well. Dozens of repos is annoying, but
not for a technical reason (a lot of SaaS like Bugsnag and GitHub used to
charge by project).

The biggest downside is it makes shipping an on-prem version nearly
impossible. The infrastructure and the software are so inextricably linked
that it is not portable in the least bit.

~~~
parasubvert
One of the ways we managed to get a microservices system on-prem (vSphere or
OpenStack) was to configure it with a system that can handle the whole
provision/build/configure/deploy cycle as a single unit:
[http://bosh.io](http://bosh.io) .. though the learning curve was steep.

~~~
ukoki
BOSH has a famously steep learning curve:
[http://i.imgur.com/4UpbgJm.png](http://i.imgur.com/4UpbgJm.png)

------
phs318u
I'm a little long in the tooth, so I'm not as up to date with every newfangled
technique to land in IT. Some of you may find this anecdote interesting and
somewhat pertinent. _Many_ years ago, the electric utility I worked at had a
home-grown set of batch-run Pro*C and PL/SQL programs that ran various
metrology operations on large volumes of meter data. These things were
interdependent, ran single-threaded, and created a real "peak-CPU-demand"
problem for our compute hardware (the irony was not lost on us). Our industry
was facing an explosion in data due to the switch to smart metering. What to do?

Our apps all depended on an Oracle DB, and Oracle had recently introduced
Advanced Queuing (AQ). So I figured I'd de-batch and decouple these things
using AQ. Every program (C++) was broken into "atomic", _stateless_ business
tasks. Every task was fed by a "task queue". Tasks would take a work item off a
queue, do their thing, and depending on the outcome, look up a destination
queue (destinations could only be "business state" queues, and task queues
could only subscribe to state queues, i.e. topics), dropping the task outcome
onto the state queue. Being stateless and callback-driven by AQ, we could run
these things together and ramp them up and down as demand required.
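The task/state-queue mechanism can be sketched with in-memory queues standing in for Oracle AQ (all names here are illustrative, not from the original system):

```python
from queue import Queue

# "Business state" queues; tasks route their outcomes here.
state_queues = {"validated": Queue(), "rejected": Queue()}

def validate_reading(item):
    # An "atomic", stateless business task: inspect a work item and
    # name the business-state outcome.
    return "validated" if item.get("kwh", 0) >= 0 else "rejected"

def run_once(task_queue, task):
    # Take a work item off the task queue, do the work, and drop the
    # outcome onto the corresponding business-state queue. Because the
    # task holds no state, any number of copies can run concurrently.
    item = task_queue.get()
    state_queues[task(item)].put(item)

task_queue = Queue()
task_queue.put({"meter": "M1", "kwh": 42.0})
run_once(task_queue, validate_reading)
```

Scaling up or down then amounts to running more or fewer copies of `run_once` against the same queues.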

The overall structure and dependency of the various tasks was externalised
through the data-driven queue network. The resulting solution was far more
maintainable, provided "free" user exits (by virtue of being able to plumb new
tasks into existing "business state" queues), and was eminently horizontally
scalable. In hindsight this was definitely not state of the art. But we were a
pretty conservative business with a bunch of pretty unworldly C and PL/SQL
programmers; none of us had used Java at that point. But with this approach we
were able to cope with a massive increase in data volume and make use of _all_
our expensive Sun cores most of the time.

No Java, no REST, no HTML, no SOAP. But we called these queue micro services
:-)

~~~
roman_sf
What would happen to the system when Oracle rolled out a new version or patch?

~~~
phs318u
Haha. We _wished_ we got patches. This was back in the day when the prevailing
mantra was, "Patches? We don't need no stinking patches!" (Though we did
convince the DBAs to apply a necessary AQ related patch. Rare.) Having said
that, Oracle were pretty good with backwards compatibility regarding their
DBs. There was talk of this thing called RAC in the next version. What a
dream! That's what I would have gone with to achieve "zero" downtime upgrades.
Never got the chance. We used very small patch windows where all boundary
processes would stop.

------
mey
From personal experience, microservices enforce a clear interface and isolation
pattern. This is achievable in many ways, but having discretely deployed code
makes the boundaries very hard to violate, rather than relying on discipline alone.

Licensing costs can go up drastically, as most modern licensing is node/core
based, and deployment procedures can get more complicated.

I would love to understand how this article believes that the modules in a
monolithic system can be scaled horizontally if they are actually a single
code base in a single system. Either the system isn't monolithic, or they
have never really done it. Sticking a load balancer in front of a micro
service and scaling based on measured load requires tools and technologies,
but is very scalable. It also allows you to do rolling deployments (drain,
rotate out, update, rotate in) that get you near-zero planned downtime.

Distributed transactions are the devil, but you don't need to do them in a
microservice design. It requires design work on the front end to clarify what
the system of record is, but if each service has a domain it controls, and all
other services treat it as the truth, it's rather simple. I say this having
researched doing payment transactions across geographically diverse colos, and
we treated that as a sharding/replication/routing issue very successfully.

Ninja edit: starting with a microservice design is most likely overkill for a
lot of systems, but either way, clear interfaces/boundaries in your system are
good and healthy.

~~~
lgunsch
Why would you not be able to scale a monolith? You can apply the same
principles to it: "[Stick] a load balancer in front of a micro service and
scaling based on measured load"?

Microservices allow you to scale up very particular components of an
architecture, but there is nothing stopping a monolith from being horizontally
scaled in just the same way. In AWS, I would deploy the monolith with an AMI
in an auto-scaling group with a load balancer in front.

~~~
sbov
Yeah, scaling app code is generally a solved problem: shared nothing with some
load balancers. It doesn't work for every problem, but the above has been
standard in my circles for at least a decade by now.

Databases are trickier though.

~~~
robotresearcher
Multiple instances with shared nothing is the opposite of a monolith, by
definition.

~~~
anamexis
Not if they're all running the same code.

~~~
robotresearcher
OK, I looked up the definition of monolithic application in a couple of
places, and the meaning is not quite what I thought it was. I thought it was
'one app does everything for everybody' but it's more like 'one app does
everything for somebody', so the horizontal scaling applies, and my previous
comment is wrong.

It's a shame that a 'monolith' application doesn't just mean a genuine
singleton, though, as that would be the perfect name for it. A bank of load-
balanced monoliths should be a polylith.

~~~
sbov
I'm not sure if it's a useful distinction, because when you think you're
building a 'monolithic' app you might actually be building a 'polylithic' app.
Beyond that, in a lot of instances you have to make very few tradeoffs to go
from 'monolith' to 'polylith'.

By your definition, most rails/django apps are probably polyliths.

------
blackoil
My approach is to design like microservices and develop like a monolith.
Thinking about microservices will force you to define modules, their boundaries,
and their interfaces. A monolith will simplify deployment and refactoring. Once
your code matures, you'll know if any microservice has to be taken out and
deployed separately.

~~~
hliyan
My conclusion as well. In my mind, I call them 'air-gapped modules'.

[https://www.linkedin.com/pulse/maintainable-software-architectures-microservices-vs-air-liyanage](https://www.linkedin.com/pulse/maintainable-software-architectures-microservices-vs-air-liyanage)

------
oelmekki
Instead of microservices, I split my projects into tons of libraries and think
of them as products, enforcing a well-thought-out and consistent API (a usage
API, not an HTTP one). I call that an atomized monolith.

I get the cool things about microservices: properly isolated functionality, the
ability to assign a team to each one, simplicity of code, and treating each
feature as important, not just "that thing in the codebase".

But it also has all the good parts of a monolith: easy deployment and local
setup, aggregation made easy, and the ability to run integration tests.

For my Rails projects, Geminabox was of great use in achieving this, as it
allowed me to host private gems. Lately, I've done a lot of Go, and was
surprised to see how natural a pattern this is with Go packages.

The only painful part for Ruby projects is keeping dependencies up to date in
all those libs (since they all have their own test suites, I at least have to
update their test dependencies). To solve this, I've built some tooling that
updates all my projects automatically and creates merge requests for them,
running from a cron task.

~~~
naasking
> Instead of microservices, I split my projects into tons of libraries and
> think of them as products, enforcing a well-thought-out and consistent API
> (a usage API, not an HTTP one). I call that an atomized monolith.

There's already a term for that: modularity.

~~~
oelmekki
A word which carries none of the lessons learned from microservices vs
monoliths. But using that name is fine by me; I'm not trying to define other
people's world, just sharing mine.

~~~
naasking
I think it does carry those lessons, but most languages are incredibly bad at
enforcing modularity. Monoliths make it far too easy to cheat, and dynamically
typed languages are typically worse here.

Microservices then just forces on you the modularity your language should have
already given you.

------
sr228822
An interesting article with some good points. I think the important takeaway is
understanding that monoliths are probably better for smaller companies, with
less total code and fewer total engineers. At small scales, the "costs" of
microservices (network overhead, distributed transaction management, RPC
complexity, dev-environment complexity) outweigh any benefits. A monolith lets
you develop quickly, pivot, easily build cross-domain features, and is more
efficient up to a point.

That said, I believe there is a point where monoliths begin to break down.

First, it is tough to keep code well structured in a monolith, and eventually
things bleed between domains. That means, as mentioned, engineers must
understand the entire codebase. This isn't practical for 100k+ LOC codebases.
Strict boundaries, in the form of interfaces, limit the scope of code that
every engineer must understand. You probably still need gurus who can fathom
the entire ecosystem, but a new engineer can jump into one service and make
changes.

Second, deployment is a mess with any more than a few hundred engineers on a
given code base.

Third, it becomes increasingly difficult to incrementally upgrade any part of
your tech stack in a monolith. Large monoliths have a tendency to run on
3-year-old releases of everything. This has performance and security
implications. It also becomes difficult to change components within your
monolith without versioned interfaces.

Fourth, failure isolation is much harder in a monolith. If any portion of code
is re-used between components, that's a single point of failure. If your
monolith shares DBs or hardware between components, those are also points of
common failure. Circuit-breaking or rate-limiting is less intuitive inside a
monolith than between services.

TL;DR: start with a monolith; migrate to microservices when it becomes too
painful.

------
fishtoaster
There are some good points here and some I disagree with. One area, though,
where I think he misses the point, is:

> Additionally, many of these stories about performance gains are actually
> touting the benefits of a new language or technology stack entirely, and not
> just the concept of building out code to live in a microservice. Rewriting
> an old Ruby on Rails, or Django, or NodeJS app into a language like Scala or
> Go (two popular choices for a microservice architecture) is going to have a
> lot of performance improvements inherent to the choice of technology itself.

Languages and tech stacks generally have tradeoffs. Comparing Rails and Go, you
could take the (massively over-simplified) tradeoff to be that Rails is better
for prototyping and iterating quickly, while Go is better for performance. In
an ideal world, you'd write your webapp in Rails, but put the
performance-intensive stuff in Go. You'd need to communicate between the two
by, say, HTTP. Suddenly you have services.

The performance gains of using a new stack aren't orthogonal to services;
they're actually one of the key selling points of services: you can use
whatever stack is most appropriate for the task at hand without needing to
commit the entire project to it. You can use Postgres for the 99% of your app
that's CRUDy and, I dunno, Cassandra for the 1% where it makes sense. It's
difficult (although not impossible) to do that cleanly within a monolith.

~~~
StabbyCutyou
One of the downsides of the blog post is that I adapted it from a lightning
talk, so it was meant to be a little content-light, intended to put ideas in
people's minds about how to think about the tradeoffs.

For example, your point about Go vs Rails is an apt one. I would only add
that I made that comparison because:

A: It was originally a golang meetup where I gave the talk.

B: Go is increasingly becoming popular as a choice people move to off of Rails,
for performance-sensitive code (Scala being the other popular choice I see), and
also for building "microservices" themselves.

I could have, and maybe should have, gone a little more in depth at that part,
but the idea wasn't to be fully exhaustive (for better or worse).

But the main takeaway about the performance gains was the idea that putting
the word "micro" in front of something magically makes it more performant,
without appreciating why. It's a response to folks simply parroting
information without understanding it.

Thanks for the feedback.

~~~
aikah
> Go is increasingly becoming popular as a choice people move to off of Rails

If they moved from Rails to Go, these people didn't need Rails in the first
place, given how bare-bones Go is. That's the same issue as with microservices:
choosing a tech or architecture because of hype instead of understanding
requirements. Microservices should be an exception, yet they are pushed as a
rule by many influential developers, who won't be there to clean up the mess
when it becomes obvious it wasn't the right choice.

------
jedberg
This could be titled "If you do things wrong it won't be good".

A lot of his examples are of people doing things poorly or incorrectly. I
could make the same arguments about object-oriented programming by saying it's
bad because someone makes every function public.

For example, microservices are absolutely more scalable if done correctly,
with bulkheading, proper fallbacks and backoffs, and proper monitoring,
alerting, and scaling.

But those things are hard to do and hard to get right.

~~~
StabbyCutyou
Hi, author here - thanks for taking the time to read it.

You're not wrong in that this article is meant to point out the pitfalls of
the approach, and to advocate for understanding before diving into a
particular architecture.

It's meant to give people things to consider before deciding that breaking
things into "microservices" is the right thing for their engineering org at
that time.

I attempted to note several times that my intention was not to say
"Microservices are bad", but rather "Please don't dive in before you consider
the trade offs". It's not as simple as some folks might have you believe, so I
felt it was valuable to have a "lessons learned" type retrospective coming
from someone who has been involved in both approaches.

Thanks for the feedback.

~~~
jedberg
Got it, thanks for clarifying. My suggestion would be to make that a bit more
explicit -- I didn't get that impression reading the article.

------
BillinghamJ
Raises some good points, but I think the title isn't really correct. It's not
"don't use microservices" - it's more about making sure you understand the
implications of having a microservice architecture, and making sure it's not
an excuse for not writing a monolith (or SOA) properly.

~~~
StabbyCutyou
I do wish I had titled it better, as most people have (rightfully) dinged me
on this one ;)

~~~
johnnydoebk
I'll go against the majority (or vocal minority?) and say: I like your title.
It's good because it attracts attention (I wouldn't have read the post if it
were "understand the implications of having a microservice architecture" or
whatever others recommend).

------
TickleSteve
“You don’t need to introduce a network boundary as an excuse to write better
code”

Absolutely this!

microservices is just decoupling by another name.... and you do not need a
network-boundary to enforce this.

Monolithic code can be nicely decoupled too.

~~~
dragonwriter
> microservices is just decoupling by another name.... and you do not need a
> network-boundary to enforce this.

If code is decoupled enough that it can be separated into independent
processes communicating over a network, that creates additional freedom in
how the components can be deployed to (real or virtual) hardware, which is
itself a kind of decoupling.

If you have processes communicating by local-only IPC methods or, even more so,
components operating within the same process, there is a form of tighter
coupling than exists when the components are separate networked components.

~~~
naasking
> If code is decoupled enough that it can be separated into independent
> processes communicating over a network, that creates additional freedom in
> how the components can be deployed to (real or virtual) hardware, which is
> itself a kind of decoupling.

It also introduces additional failure modes.

------
ivan_gammel
I cannot agree more with this, based on my experience at a small startup.
Let's say you want to develop a mobile app and a REST API for it, hosted
somewhere in the cloud. There's so much hype that you want to do it "right"
(it is right indeed, but for some distant future until which your startup
needs to survive). So the obvious solution is to take some common stack, like
Spring Cloud, and build a number of microservices with service discovery, a
config server, OAuth, and an API gateway.

It appears it's not so easy:

1. First, the documentation, as always, is not the best, and you'll have to
spend time figuring out how to wire together the different parts of the system
and build various configurations of it for local development, CI builds, and
production.

2. Then there's the debugging issue. Once you've figured out how to work with
Docker (good news: it's really easy today), you may want to do some debugging
in the IDE, but it becomes really painful to launch everything correctly with
an attached debugger if the services interact with each other.

3. Finally, there's the production deployment setup and the associated costs.
Besides the complexity of deployment, do you really want to pay for 14-20 EC2
instances at the time of the launch of your service and burn the money on 0%
CPU activity? It will take months, probably years, to get a user base
sufficient to utilize this power.

The better approach is to develop a single server app with future scalability
in mind. You can still have separate components for each part of the domain;
you just wire them together at packaging time. This server app can still scale
in the cloud, with a correctly set up load balancer and a database shared
between nodes.
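The "separate components wired together at packaging time" idea can be sketched as ordinary in-process composition (component and method names here are hypothetical):

```python
class UserComponent:
    # One domain component behind a small interface.
    def get_user(self, user_id):
        return {"id": user_id, "name": "demo"}

class BillingComponent:
    # Another component; its dependency is an in-process object,
    # not an HTTP call to a separate service.
    def __init__(self, users):
        self.users = users

    def invoice(self, user_id):
        user = self.users.get_user(user_id)
        return f"invoice for {user['name']}"

def build_app():
    # The only place that knows how components fit together. Extracting
    # a component into a networked service later means changing this
    # wiring, not the components themselves.
    users = UserComponent()
    return BillingComponent(users)
```

The wiring function is the seam: if `UserComponent` ever needs independent scaling, a network client with the same interface can be substituted here.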

Fortunately, we did not spend much time on building microservices (about 1m/w
to figure out the costs and benefits) and were able to refactor the code to a
simpler design. But many developers should not care about microservices at all
in the early days of their company.

------
reacharavindh
I, for one, feel the same way when someone tells me they are building
"microservices" for a small application that they don't ever plan to scale to
that level. IMO there is a widespread issue among us of "Here's the new cool
thing - my application/system has to use it". The other day, a friend was
talking on and on about setting up a Hadoop cluster for what I saw as a
one-time-use batch script.

~~~
fapjacks
And? Did you try to talk your friend out of it?

~~~
reacharavindh
Yes, I sent him this to read: [http://aadrake.com/command-line-tools-can-be-235x-faster-than-your-hadoop-cluster.html](http://aadrake.com/command-line-tools-can-be-235x-faster-than-your-hadoop-cluster.html)

and he changed his mind :-)

------
mahyarm
You use microservices when your project expands beyond the monkeysphere number
where everyone knows everyone else.

They allow teams to work in their own world without having to coordinate as
much with other teams or people.

Microservices are good for large companies. If you're small, you don't need
them.

~~~
eternalban
> You use microservices when your project expands beyond the monkeysphere
> number where everyone knows everyone else.

A layered architecture can give you the same.

Microservices, IMO, address organizational/industry deficiencies in the design
and evolution of domain models. You're basically trading analytical pain for
operational pain. As the top comment in this thread (with the excellent list)
concludes, you will need "engineers".

> Microservices are good for large companies.

And this has nothing to do with the number of developers. It has to do with
the inherent complexity of a unified domain model for large organizations. As
an analogy, microservices are to layered architectures what scripting
languages are to compiled languages.

~~~
mahyarm
Microservices also enforce boundaries significantly more strongly. A layered
monolith can still easily have random people cut across boundaries without you
knowing, because there are hundreds of engineers all working in the same
system.

Large companies don't have problems throwing more engineers at a problem, but
they will always have a problem with coordination costs.

Microservices also allow you to use different tech stacks for different
purposes more easily.

Maybe use Java for one involving Hadoop or some GIS library, Erlang for a
message management service, Go for a simple API service, Node.js for a
frontend web server, etc.

Overall, the advantages of microservices come for social reasons, not for any
particular technical reason.

~~~
eternalban
Layered systems do not have to be monolithic. Note that we're both, at this
moment, using layered systems to have this conversation.

> A layered monolith can still easily have random people cut across boundaries
> without you knowing because there are hundreds of engineers all working in
> the same system.

I appreciated your final word regarding "social reasons" and I think we're in
strong agreement in that regard.

In the final analysis, it seems accurate to say that the microservices
approach permits runtime operational [micro] payments towards organizational
and analytical debt [1].

The hypothetical system (/straw man? :) you posit above is indicative of
organizational, not architectural, failure/deficiency.

[1]: in the 'technical debt' sense.

------
jsmith0295
One suggestion I would make, if you are going to use microservices, is to
consider using gRPC rather than REST. You can save yourself a lot of the
hassle involved in the communication that way AND make things quite a bit
faster.

------
jimjimjim
Like all things, one size doesn't fit all: not everything is a nail,
regardless of how shiny the hammer is. Having said that, when the situation is
right, microservices are great.

Some of us have been through this all before with SOA, or in my case with COM.
Each individual component is simpler, but the documentation between the
components becomes absolutely vital.

We ended up keeping copies of the interfaces in a central location (with
documentation of all changes per version) so that everyone would know how to
talk to all the other systems.

And don't think that the interfaces won't change. They will, and often across
many systems/components, like a ripple.

------
partycoder
The problem is defining the scope of each service. It is still possible to
create spaghetti out of how the services interact and how coupled they are
with each other.

If done poorly, it is like trading one problem for another.

~~~
lgunsch
Yes, I have definitely experienced this. At the company I work at, we have
dozens of super tiny services that could have been replaced with a single
class or module in a code base.

Each of the dozens of microservices gets its very own dedicated AWS load
balancer, RDS instance, and auto-scaling group in multiple regions. Just the
infrastructure management alone is monumental.

 _Edit: punctuation._

------
bunnymancer
As someone working with this setup right now, coming from what is fondly
referred to around here as the God-Monolith of our 1.0 version, I couldn't
disagree more...

But as always, this is an art form, writing and designing, not laying down
pavement.

There's no "right" way, and any blanket statement about anything is false.

Don't use microservices where they don't make sense; make educated decisions,
and choose the best option for your situation.

It made sense in our situation because all our services have very, very, very
specific rules and boundaries and there's no overlap anywhere.

------
deathanatos
> _Fallacy #5: Better for Scalability_

> _However, it’s incorrect to say that you can only do this with something
> like a microservice. Monolithic applications work with this approach as
> well. You can create logical clusters of your monolith which only handle a
> certain subset of your traffic. For example, inbound API requests, your
> dashboard front end, and your background jobs servers might all share the
> same codebase, but you don’t need to handle all 3 subsets of work on every
> box._

This makes little to no sense to me, and I feel like we're bending the
definition of "monolith" to mean "microservice" so that we can tick the bullet
point. How, exactly, do I achieve this when my code is all mashed together and
running as one?

I have a monolithic app today: an internal website, which is so small that it
could be served from a single machine (ignoring that this would make it a
SPoF). But it's so closely bound to the rest of the system that it is stuck
alongside the main API. So it gets deployed _everywhere_.

If it were discrete enough that I could run and scale that internal service
separately, I wouldn't be calling it a monolith. At that point, they're
separate executables, and scalable independently — that's practically the
definition of microservice. And I can't do this if (where they need to) they
don't talk over the network (one of the earlier bullet points).

~~~
happychappy
If you could separate the inbound traffic to either the website or the API,
then you could do this. You'd need something in front of the code you're
deploying, though.

My team has a 500k monolith written in Java 1.6. I don't really want to invest
in fixing it; I'm migrating stuff to the new system. So a way to keep the old
one going risk-free is to create three load balancer pools, and have Apache
send traffic to the three based on URL pattern:

* /users goes to pool one

* /dashboards goes to pool two

* everything else goes to pool three

That guarantees that /users and /dashboards can be kept to a certain level of
performance - by adding more machines, not by diving into the code and trying
to fix stuff.

The benefit is that it's the same deployable in all cases, so it's very easy to
push.
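The URL-pattern split described above can be sketched as a simple path-based pool selector (pool names follow the example; a real setup would express this as Apache or load balancer configuration):

```python
def choose_pool(path):
    # Route by URL prefix: same deployable everywhere, but /users and
    # /dashboards traffic is isolated to their own pools so they can be
    # scaled independently by adding machines.
    if path.startswith("/users"):
        return "pool-one"
    if path.startswith("/dashboards"):
        return "pool-two"
    return "pool-three"
```

For example, `choose_pool("/users/42")` selects pool one while `choose_pool("/reports")` falls through to the catch-all pool.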

------
abglassman
Stabby! Several thumbs up for point #1, that interface boundaries needn't be
coincident with service boundaries. In my experience, the benefit of breaking
out microservices is the decoupled deployment. A heuristic: if you have
fixes/features whose push to production is waiting on unrelated code to
pass/get QA'd, you've got a good candidate for a separate service.

------
yandrypozo
Nice compilation of fallacies about microservices, something we cannot ignore;
but after reading a little about Kubernetes, I think many of those problems
may be resolved using Kubernetes and some common sense.

[https://kubernetesbootcamp.github.io/kubernetes-bootcamp/index.html](https://kubernetesbootcamp.github.io/kubernetes-bootcamp/index.html)

------
mavelikara
@StabbyCutyou, how does Basho's choice of Erlang as its primary language
affect this choice? My (naive) understanding is that Erlang forces one to
build a single-process system as if it were a multi-process system from Day 1.
Does this make the monolith -> microservices switch easier for Erlang systems
than it is for others?

~~~
nivertech
Erlang doesn't do encapsulation very well. Even if I divide code into clean
OTP applications, each with its own public interface, nothing in the compiler
stops people from referencing internal modules and their exported (i.e.
"public") functions. The problem is that many OTP behaviors' internal
callbacks need to be exported, so they are exposed publicly.

I guess one needs to use the xref tool to find all references from outside
the OTP application.

------
AdieuToLogic
While this and many other writings about microservices are largely concerned
with network-based environments, there exists another microservice exemplar
specific to the JVM world:

OSGi[0][1]

I mention it mostly to assist those wanting to explore the concept of
microservices itself, as opposed to assuming a network transport is always
involved. Being JVM specific, "kicking the tires" on it naturally requires
that environment. Perhaps, though, some of the writings discussing it would be
of benefit to those using other tech stacks.

Of course, OSGi does not preclude distributed processing (and often is
employed for such).

0 - [https://www.osgi.org/](https://www.osgi.org/)

1 - [http://www.theserverside.com/news/1363825/OSGi-for-
Beginners](http://www.theserverside.com/news/1363825/OSGi-for-Beginners)

------
vhost-
It doesn't have to be messy. I've worked in monoliths that are a complete
disaster. I've worked in micro-architectures that are a complete disaster.
It's the same kinds of people and management practices making these disasters.

I will say the only clean systems I've worked in have been microservice
oriented. All monolithic systems I've worked on never scaled properly and
always had bugs with 1000-function-deep stack traces.

I've talked to people who have worked in excellent monoliths (rails and
django). I know they exist.

Moral is: do it right and have good development practices.

------
euske
I heard about microservices about a year ago, and now it's said the hype has
ended before I even noticed? Admittedly I'm not in the loop, and it's hard to
track all the trends from outside.

------
roman_sf
You need almost none of that scary list to start building microservices.
Lambda functions can be created in minutes, even in UI console. And they have
almost everything from that scary list by default.

Lots of people are still in denial regarding microservices...

------
josh_carterPDX
My thought is that this comparison between a monolithic code base and a
microservices code base is a bit subjective. If you're starting out, chances
are your code base hasn't even gotten to the level of being monolithic. So
those thinking about how they're going to architect their platform may begin
to think that a microservice setup could help for future changes to their
code. It really depends on each team, their background, and how they want to
think about their platform in the future. To list out the pros and cons of
both to draw a conclusion that one is better than the other is certainly
setting a bias that I believe to be a bit unfair. Just look at Netflix and
their container services. It's a platform adopted by a ton of companies
including Nike. So for some a microservices approach makes a lot of sense.

~~~
StabbyCutyou
You have a good point in that in the early stages, any app is likely not to be
a "monolith", but it's less about size/LOC and more about the design ethic of
the architecture itself.

If you build your codebase internally with service level abstractions in mind,
you can gain a lot of benefit without the cost of the network or the
additional errors it can introduce.
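The "service level abstractions inside a monolith" idea can be sketched in code. A minimal illustration (names like `UserService` are hypothetical, not from the original thread), assuming callers depend only on an interface so an in-process implementation could later be swapped for a network-backed one:

```python
# Sketch: a service-level boundary inside a monolith.
# Callers depend on the UserService interface, not on how it is backed,
# so today's in-process implementation could later become a remote call
# without touching the call sites.
from abc import ABC, abstractmethod


class UserService(ABC):
    @abstractmethod
    def get_user(self, user_id: int) -> dict:
        """Return the user record for user_id."""


class InProcessUserService(UserService):
    """Backed by local data; no network hop, no network failure modes."""

    def __init__(self) -> None:
        self._users = {1: {"id": 1, "name": "alice"}}

    def get_user(self, user_id: int) -> dict:
        return self._users[user_id]


def greet(svc: UserService, user_id: int) -> str:
    # Caller code is written against the interface only.
    return "hello, " + svc.get_user(user_id)["name"]


print(greet(InProcessUserService(), 1))
```

The design choice here is the one the parent comment describes: the boundary exists in the architecture, so you get the decoupling benefit without paying the network cost until (and unless) you actually need separate processes.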

Thanks for reading!

~~~
josh_carterPDX
I guess, in my head it's also eliminating a lot of code bloat using
microservices. Just my opinion though.

------
ai_ja_nai
Why would someone push for those "5 truths"? The point of microservices is to
ease Ops' life, so that deploying is less of a "big bang"-like event and more
geared toward incremental and local evolutions.

------
ironee
[http://basho.com/posts/technical/riak-quick-start-with-
docke...](http://basho.com/posts/technical/riak-quick-start-with-docker/)

------
jsnoder
If your product or project is not a service and does not have services, then
it does not need such a thing as microservices.

------
ScottAS
If this person used microservices perhaps their site wouldn't be down right
now...

------
seibelj
I swear, the HN front page algorithm is easily gamed, this gets a few points
quickly and it rises straight to the front page. I don't know if HN is
accounting for vote rings but some penalizing should be implemented.

~~~
cbernini
AFAIK the content is curated. So content with a small amount of upvotes can
show up if some admin wants it to.

~~~
striking
It's not curated. People used to be asked to repost their content if the
admins thought it was interesting, but this no longer occurs. (This turned
into a system that could automatically repost content considered interesting;
I've stopped posting stories as often, so I'm not sure if it still happens.)

Furthermore, there are protections against vote rings. If, for example,
someone votes directly on the URL for a story, or if the referrer is often the
same, those votes are discarded.

However, you're right that the algorithm has evolved over time. Visit
[https://news.ycombinator.com/classic](https://news.ycombinator.com/classic)
to see the previous algorithm in action.

~~~
cbernini
I remember some people complaining about it some time ago, thanks for pointing
that out then. :D

------
gjolund
The title should be "I've never implemented microservices properly, so you
should avoid them."

~~~
jsmith0295
I think _a lot_ of people don't implement them properly. And if you're an
individual or a small team trying to build something it's usually overkill.

~~~
exabrial
Agree 100% with both of these statements. A general rule seems to be to
divide them along transactional boundaries and business concerns.

