
Building Resilient Microservices from the Fallacies of Distributed Computing - austingunter
https://datawire.io/using-fallacies-of-distributed-computing-to-build-resilient-microservices/
======
jwatte
"Defensive Programming is a well-regarded technique in software engineering."

No.

No, it's not.

Fail early, fail fast, crash loudly, and then react. That, plus deep
validation testing, is how you build robust systems. Trying to soldier on when
things are wrong just propagates bad data and bad behaviour into a larger
surface area that needs cleanup.

Assert everything, even in production. Capture all failures and action each
one (turn a 500 crash into a 400 validation failure, etc.).
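
To make that concrete, here's a minimal sketch in plain Java (the class and
method names are made up for illustration): validation at the edge fails
loudly and is mapped to a 400, instead of letting bad input wander deeper
and surface as a 500.

    class ValidationException extends RuntimeException {
        ValidationException(String msg) { super(msg); }
    }

    class OrderEndpoint {
        // Boundary handler: validation failures are caught here and turned
        // into a client error (400) instead of surfacing later as a 500.
        int createOrder(String customerId, Integer quantity) {
            try {
                validate(customerId, quantity);   // fail fast, fail loudly
                // ... actually create the order ...
                return 201;                       // Created
            } catch (ValidationException e) {
                return 400;                       // Bad Request
            }
        }

        private void validate(String customerId, Integer quantity) {
            if (customerId == null || customerId.isEmpty())
                throw new ValidationException("customerId is required");
            if (quantity == null || quantity <= 0)
                throw new ValidationException("quantity must be a positive integer");
        }
    }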

Monitor all logs for unexpected/new anomalies.

Be vigilant around testing. Failure cases are part of the spec, too!

That's how robust systems are really built!

~~~
dsp1234
In the section that you mention, the author of the article is comparing
techniques for dealing with local function calls vs microservices over the
network, specifically with regard to the fallacy that the network is
reliable.

With respect to local functions, I think most developers would agree that
defensive programming is well-regarded (otherwise this browser would crash on
half the web pages I visit).

With respect to distributed systems, basically the rest of that section
agrees with you, concluding with:

"In fact, using a middleware or services layer that forces engineers to think
about their resilience strategies in the face of network failures is quite
valuable. After all, the engineers are the best people to decide how a system
should behave when things go wrong."

So you basically just reiterated what the author wrote.

~~~
HillRat
Also, isn't Hystrix a good example of defensive programming in a services
environment? Seems to work well for Netflix.
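
Roughly, a Hystrix command wraps the remote call and supplies a fallback for
when the call fails, times out, or the circuit is open. A minimal sketch,
assuming the Hystrix dependency is on the classpath (the service and names
here are made up, not Netflix's actual code):

    import com.netflix.hystrix.HystrixCommand;
    import com.netflix.hystrix.HystrixCommandGroupKey;

    class RecommendationsCommand extends HystrixCommand<String> {
        private final String userId;

        RecommendationsCommand(String userId) {
            super(HystrixCommandGroupKey.Factory.asKey("Recommendations"));
            this.userId = userId;
        }

        @Override
        protected String run() throws Exception {
            // The real network call to the downstream service would go here.
            return fetchFromRemoteService(userId);
        }

        @Override
        protected String getFallback() {
            // Timeout, error, or open circuit: degrade gracefully instead of
            // letting the failure cascade to the caller.
            return "default-recommendations";
        }

        private String fetchFromRemoteService(String userId) throws Exception {
            throw new Exception("pretend the network failed");
        }
    }

Calling new RecommendationsCommand("user-42").execute() returns the fallback
instead of throwing when the downstream service misbehaves.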

------
jorgecurio
I find that microservices just split a problem into many small problems
while the overhead of fixing each problem stays the same, so you end up
with huge technical debt and eventually run into dependency issues.

Distributed computing is going to be less efficient than a centralized
system, and now you have multiple vectors susceptible to attack...

I find the best architecture is detached standalone tenancy, meaning a copy
of the web application is distributed as an image running on its own server,
with assets distributed across different web host providers. 1 clone of your
app = 1 domain = 1 customer.

This way a DDoS attack requires knowing all of your customers' domains
running your web application, which dramatically increases the cost of
launching a successful and prolonged DDoS attack. Even an attacker with huge
bandwidth now has to spread it thin across hundreds of your customers'
websites.

Sure, your own website hosted on Amazon S3 could take a hit, but your
customers are still able to run their business without drama from foreign
state actors or Xbox players.

~~~
merb
Microservices are good when you have a big team. Microservices are bad when
you have a small team.

Microservices are about management, not about code.

------
doublerebel
The reality of microservices is that we are all forced to use them whether
or not the local app is a monolith. Any app dependent on a 3rd party library
(analytics, database, webhook) is typically subject to the article's listed
network issues; I've seen these issues in 3rd party libraries from each of
those categories. So if our app is already architected to defend against
errors in 3rd party libs, it's not much of a stretch to apply the same
techniques to a couple of local microservices.
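
For the shape of it without pulling in a library, here's a minimal sketch in
plain Java (the names are hypothetical): bound the call with a timeout and
fall back, whether the thing behind it is a 3rd party SDK or a local
microservice.

    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;

    class AnalyticsClient {
        private final ExecutorService pool = Executors.newFixedThreadPool(4);

        String trackEvent(String event) {
            Future<String> call = pool.submit(() -> sendToAnalytics(event));
            try {
                // The network (or the library wrapping it) may hang; don't wait forever.
                return call.get(500, TimeUnit.MILLISECONDS);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                call.cancel(true);
                return "dropped";
            } catch (ExecutionException | TimeoutException e) {
                call.cancel(true);
                return "dropped";   // degrade instead of propagating the failure
            }
        }

        private String sendToAnalytics(String event) throws Exception {
            // Stand-in for the real analytics SDK or HTTP call.
            return "ok";
        }
    }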

As another commenter mentioned, Netflix designed Hystrix to guard against this
specific scenario. There are important lessons here regardless of our local
app design.

