

The Granularity of a Micro-Service - scresswell
http://guidesmiths.com/blog/the-granularity-of-a-micro-service/

======
chatmasta
I really like the idea of microservice architecture, but my gripe with it is
the one this post explicitly set aside: deployment and service orchestration
are a nightmare.

I recently experimented with a microservice architecture using Flask, but
after wasting time trying to figure out how to properly deploy and orchestrate
all the services (especially when running tests), I reverted to a simple
backend/frontend model. It's still technically a "microservice" architecture,
but the "services" are an API and the web app. They're ~1000 lines instead of
100.

I get the sense that this kind of architecture is over-optimization for a lot
of projects. Martin Fowler himself describes it as an "architecture for
_monolithic_ applications." I suspect that for smaller applications, where
services are not spread across multiple departments and locations, a
microservice architecture is more trouble than it's worth. The deployment is
simply too much of a headache (again, especially for testing). You're moving
all of your maintenance work from maintaining one big service to maintaining
the orchestration and deployment of dozens of small ones.

Again, I really like the idea behind this kind of architecture, but I had a
lot of trouble finding a solid deployment and orchestration mechanism. I want
to be able to run tests quickly, and that doesn't just mean unit tests for
each service: with this architecture, integration tests necessarily become far
more important.

If anyone with experience deploying microservice architectures using non-Java
components wants to write a blog post on your experience, I would love to read
it.

~~~
scresswell
Hi chatmasta,

Thanks for your comment - we agree that the deployment of micro-services is
more complicated, although we disagree that it has to be a nightmare. Not
wanting to jump the gun on my next blog post too much, the deployment solution
for Campaign Manager was:

1. A co-ordination project, capable of setting up the development
environment, starting services, running tests, building artefacts and
deploying them. This sounds grand, but it was really a set of scripts
(JavaScript) that relied on a consistent naming convention.

2. A file defining which services were deployed to which host, per environment.

3. A shared list of endpoints for service-to-service communication,
datastores and the ESB.
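To illustrate items 2 and 3, such a file might look something like the sketch below. All host names, service names, ports and URLs here are invented for illustration; the post doesn't show the real file format.

```javascript
// Hypothetical sketch of the per-environment host mapping (item 2)
// and the shared endpoint list (item 3). Every name here is invented.
module.exports = {
  deployments: {
    production: {
      'app-01.example.com': ['campaign-api', 'scheduler'],
      'app-02.example.com': ['campaign-api', 'mailer']
    },
    staging: {
      'staging-01.example.com': ['campaign-api', 'scheduler', 'mailer']
    }
  },
  endpoints: {
    'campaign-api': 'http://internal.example.com:3001',
    datastore: 'mongodb://db.example.com:27017/campaigns',
    esb: 'amqp://mq.example.com:5672'
  }
};
```

A plain module like this keeps the mapping in one place, so the deploy scripts can look up which artefacts belong on which host by name.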

The build scripts used the CI build number to version the artefacts and
created an SMF manifest. The deploy scripts (also written in JavaScript)
iterated over all hosts and performed the following steps...

1. Put the host into maintenance mode, causing the load balancer to remove it
from the pool

2. Upload the new / updated service artefacts (tar.gz)

3. Stop services no longer supposed to be running on the host

4. Install the new / updated service artefacts (i.e. unzip them)

5. Import the SMF manifest (similar to an init.d script)

6. Take the host out of maintenance mode

7. Delete old service artefacts, retaining the last 5 versions
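The per-host loop above can be sketched roughly as follows. The step names are stand-ins for the real work (ssh/scp, SMF commands), not actual code from the project:

```javascript
// Hypothetical sketch of the per-host deploy loop described above.
// Each step function is a stand-in for the real remote work.
const steps = [
  'enterMaintenanceMode',   // load balancer drops the host from the pool
  'uploadArtefacts',        // copy the new/updated tar.gz files up
  'stopRetiredServices',    // stop services no longer assigned to this host
  'installArtefacts',       // unpack the tarballs
  'importSmfManifest',      // register services with SMF (like init.d)
  'exitMaintenanceMode',    // load balancer re-adds the host
  'pruneOldVersions'        // delete old artefacts, keeping the last 5
];

function deployHost(host, actions) {
  const log = [];
  for (const step of steps) {
    actions[step](host);    // real implementation would run this over ssh
    log.push(step);
  }
  return log;
}

// Example: run the loop with no-op actions and record the order of steps.
const noop = Object.fromEntries(steps.map(s => [s, () => {}]));
console.log(deployHost('app-01.example.com', noop));
```

The key property is simply that the steps run in a fixed order per host, with the maintenance-mode bracket ensuring the load balancer never routes traffic to a host mid-upgrade.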

This is not vastly different from what I would have done if deploying a single
application to multiple hosts. We didn't have any need for orchestration, as
the significant service-to-service communication was via asynchronous
messaging. We've improved on this in other projects, e.g. deploying Docker
containers instead of tar.gz files, using AWS tags instead of a file to define
where services are deployed, and even adding a service dependency graph so
that when one service becomes unavailable the dependent services can take
appropriate action.
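As a rough sketch of the dependency-graph idea (service names invented; the post doesn't describe the actual implementation), one way to work out which services should react when another becomes unavailable is to walk the graph until no new dependents are found:

```javascript
// Hypothetical sketch: "service -> services it depends on".
// Find every service transitively affected when one goes down.
const dependsOn = {
  'campaign-api': ['datastore'],
  scheduler: ['campaign-api'],
  mailer: ['campaign-api', 'esb']
};

function affectedBy(failed, graph) {
  const affected = new Set();
  let changed = true;
  while (changed) {          // keep sweeping until the set stops growing
    changed = false;
    for (const [svc, deps] of Object.entries(graph)) {
      if (affected.has(svc)) continue;
      if (deps.some(d => d === failed || affected.has(d))) {
        affected.add(svc);
        changed = true;
      }
    }
  }
  return [...affected].sort();
}

// Example: if the datastore goes down, campaign-api is directly affected,
// and scheduler/mailer are affected transitively through campaign-api.
console.log(affectedBy('datastore', dependsOn));
```

Each affected service can then decide what "appropriate action" means for it, e.g. queueing work or returning a degraded response.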

Will write all this up in more detail and post to HN.

