

Microservice Prerequisites - adamnemecek
http://martinfowler.com/bliki/MicroservicePrerequisites.html#

======
Animats
The article is totally buzzword-compliant. It's an important subject being
addressed poorly.

Facebook is internally architected something like that. Each Facebook page
display involves about a hundred machines. The internal message passing isn't
REST, though; it's a real RPC system. Many of the components are written in
PHP, and Facebook has a PHP compiler to speed things up.

Security is really tough in such systems. Exactly who's trusting whom, and who
authenticates what, is a tough issue. Vulnerabilities are usually of the form
"A cannot do X, but B can do X, and A can talk to B. Can A induce B to do X?"

More fundamentally, we're still not very good at inter-process communication.
There's REST/JSON, which means a lot of parsing. SOAP is generally considered
too clunky. Google uses "protocol buffers" internally, which requires running
message definitions through a special compiler that generates the serialization code.
Microsoft, of course, has several systems of their own.
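To make the parsing overhead concrete, here is a small sketch using only Python's standard library; the message fields are hypothetical, and `struct` with a fixed schema merely stands in for a real IDL-generated serializer like protocol buffers:

```python
import json
import struct

# Hypothetical message: (user_id: u64, item_id: u64, score: f64)
msg = {"user_id": 12345, "item_id": 67890, "score": 0.75}

def encode_json(m):
    # Text encoding: self-describing, but the receiver must parse it.
    return json.dumps(m).encode("utf-8")

def encode_binary(m):
    # Fixed schema known to both sides, like an IDL-generated serializer:
    # no field names on the wire, no text parsing on receipt.
    return struct.pack("<QQd", m["user_id"], m["item_id"], m["score"])

def decode_binary(b):
    user_id, item_id, score = struct.unpack("<QQd", b)
    return {"user_id": user_id, "item_id": item_id, "score": score}

assert json.loads(encode_json(msg)) == msg
assert decode_binary(encode_binary(msg)) == msg
# The binary payload is a fixed 24 bytes; the JSON payload is larger
# and costs a full parse per message.
```

The trade-off is the one the comment describes: the binary path needs the schema (and usually generated code) shared between sender and receiver ahead of time.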

Most OSs don't support message passing well. What you usually want is an
inter-process subroutine call. What the OS usually gives you is an I/O
operation. (QNX gets this right, but only real-time programmers care.) Message
passing came late to the UNIX/Linux world, and is decidedly an inefficient
afterthought there.

Whether this has much to do with how your development teams are organized, or
whether you go in for the "DevOps" mentality (which is usually an excuse for
not having a competent operations staff), isn't clear.

~~~
mickeyp
Well, microservices _is_ the new buzzword. Sadly most people who venture down
this path do it using technology they already know: HTTP and REST with JSON,
even though, once you have a lot of services, that adds up to a lot of
overhead.

But ignoring the protocols used for a second, another major problem I see with
a lot of microservices architectures is the complete lack of transparency to
the operations team and developers once you spin up the system.

What happens if a request gets stuck somewhere in the pipeline? Who will know
about it? If it's HTTP REST and the request times out owing to poor timeout
tuning, then you have to cascade that failure up the propagation "chain" to
the original caller -- something a lot of projects fail to do properly.
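One common mitigation (not mentioned above; a hypothetical sketch) is to propagate a single absolute deadline with the request instead of tuning independent per-hop timeouts, so every hop knows how much of the caller's budget remains and can fail fast:

```python
import time

class DeadlineExceeded(Exception):
    pass

def remaining(deadline):
    # Time budget left before the original caller gives up.
    left = deadline - time.monotonic()
    if left <= 0:
        raise DeadlineExceeded("no budget left; fail fast instead of queueing")
    return left

def call_downstream(service, deadline):
    # Each hop forwards the same absolute deadline, so a stuck request
    # fails the whole chain promptly rather than having per-hop timeouts
    # fire out of order on the way back up.
    budget = remaining(deadline)
    return service(deadline, timeout=budget)

def leaf_service(deadline, timeout):
    remaining(deadline)  # check the budget before doing any work
    return "ok"

deadline = time.monotonic() + 0.5  # original caller allows 500 ms total
assert call_downstream(leaf_service, deadline) == "ok"
```

In a real HTTP system the deadline would travel as a request header; gRPC builds this idea in as deadline propagation.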

Security, as you mentioned, is another bugbear. Sure, you log in to the "login
service", but do you pass a token around with each request? Again, a lot of
teams add this superficially, to the front-end-facing service(s) only.

Debugging and logging is really difficult to pull off well. The best I've seen
was traceable logging that carried over between microservices by fastidious
use of "log namespacing" and timestamping to ensure that server time drift
didn't screw up the ordering over time.
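The "traceable logging" idea can be sketched very simply: mint a correlation id at the edge, carry it with every hop (e.g. in a header), and emit structured, UTC-timestamped records so log lines from different services can be joined later. Field names here are assumptions, not from the original:

```python
import json
import time
import uuid

def new_request_context():
    # Correlation id minted once at the edge and forwarded on every hop.
    return {"correlation_id": str(uuid.uuid4())}

def log(ctx, service, message):
    # One JSON object per event, "namespaced" by service and keyed by the
    # correlation id; a shared UTC timestamp source limits (but does not
    # eliminate) ordering problems from server clock drift.
    record = {
        "ts": time.time(),
        "service": service,
        "correlation_id": ctx["correlation_id"],
        "msg": message,
    }
    return json.dumps(record)

ctx = new_request_context()
line_a = json.loads(log(ctx, "frontend", "received request"))
line_b = json.loads(log(ctx, "billing", "charged card"))
assert line_a["correlation_id"] == line_b["correlation_id"]
```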

Bleh. It's just hard to get right, and because it's such an easy thing to get
started with, most don't notice these things until it's too late and they're
too invested.

------
yummyfajitas
This article is very much on point, and it also emphasizes why you should
stick to a monolithic architecture for as long as possible.

------
ljosa
Does anyone have experience with using the microservices style for
development, but deploying them bundled into a larger unit (e.g., a single
Docker image or virtual machine)?

~~~
hartror
Why would you do this? It throws out one of the major advantages
microservices have over monoliths: fine-grained, small deployments.

~~~
noelwelsh
Performance. The network is the slowest part of most applications.

~~~
JoachimSchipper
It's true that a datacenter network is slower than RAM, of course, but if
you're already dealing with internet latencies, an extra roundtrip within the
data center is hard to even measure - see "Latency numbers every programmer
should know",
[http://www.eecs.berkeley.edu/~rcs/research/interactive_latency.html](http://www.eecs.berkeley.edu/~rcs/research/interactive_latency.html).

(I _really_ don't like this fact, aesthetically, but it's usually good
business to acknowledge it.)

~~~
noelwelsh
Sure, for most people it isn't an issue. But sometimes it is, e.g. if you're
hosting some kind of product that collects or inserts data into customers'
websites.

If you are really aggressive about performance you'll have data centers within
100-200ms of all your customers. If you do this, and you buy into
microservices, it doesn't take many inter-service calls to exceed the network
latency budget. On the TechEmpower benchmarks most frameworks struggle to
return a static bit of JSON in <50ms. Now imagine 10 services communicating to
fulfill a request...
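The arithmetic is worth making explicit. These numbers are illustrative assumptions, not measurements, but they show how quickly serial internal hops consume a latency budget:

```python
# Illustrative, assumed numbers; not benchmarks.
client_rtt_ms = 100   # client <-> nearest data center round trip
per_service_ms = 5    # framework + serialization cost per internal hop
serial_hops = 10      # services called one after another, not in parallel

total_ms = client_rtt_ms + serial_hops * per_service_ms
assert total_ms == 150  # internal hops add 50% on top of the network RTT
```

Parallelizing independent calls helps, but any serial chain of services pays this cost in full.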

------
borski
I think it's somewhat of a requirement to also secure your microservices.
There are a ton of ways to do it, but bidirectional TLS is often overlooked:
[https://www.tinfoilsecurity.com/blog/securing-your-microservices-via-bi-directional-tls](https://www.tinfoilsecurity.com/blog/securing-your-microservices-via-bi-directional-tls)

~~~
keithba
I agree that securing the services is important.

I don't think bidirectional TLS is enough in many cases; defense in depth is
required. You need to ensure that when services access other services, they
aren't granted wide-open privileges just because you (hopefully, still) own
them.

I would add a reasonable authentication and authorization model to this list
of prereqs.
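A minimal sketch of what "every service checks authorization itself" might look like. The token format and scope names here are hypothetical; a real system would verify cryptographically signed tokens (e.g. JWTs) rather than an in-memory table:

```python
# Hypothetical in-memory token store standing in for a real token verifier.
TOKENS = {"tok-abc": {"user": "alice", "scopes": {"billing:read"}}}

class Forbidden(Exception):
    pass

def require_scope(token, scope):
    # Every service re-checks the token and the specific scope it needs,
    # not just the edge service, so one compromised internal caller
    # doesn't inherit wide-open privileges.
    claims = TOKENS.get(token)
    if claims is None or scope not in claims["scopes"]:
        raise Forbidden(f"missing scope {scope!r}")
    return claims["user"]

assert require_scope("tok-abc", "billing:read") == "alice"
```

Combined with bidirectional TLS for transport identity, this gives two independent layers: the channel proves which service is calling, and the token proves what that call is allowed to do.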

------
michaelt
I can understand wanting fast provisioning of new services if you adopt an
architecture that requires lots of new services.

But why would this architecture require 'DevOps Culture' any more than any
other architecture?

~~~
mianos
Because you will need to deploy many services at many versions, rather than a
single monolithic application at a single version.

