
Challenges of micro-service deployments - weitzj
http://techtraits.com/microservice.html
======
manigandham
I've yet to see a big reason for this whole SOA / microservices architecture
other than in 2 specific instances:

1) massive projects with lots of developers/teams that work on well-defined
functionality

2) massive projects that need to have precise capacity at each service level
rather than the application level

Other than these situations, monoliths (both in architecture and deployment)
will likely be faster, easier, more reliable and more productive.

~~~
joshwa
#1 describes most software projects of any business significance.

~~~
pbreit
But describes near zero projects in startup world.

------
nathancahill
> A micro-services architecture does force you to be more conscientious about
> following best practices and automating workflows.

This 100x. With a monolith, you can get away with SSH'ing into boxes once in a
while to fix or debug stuff. Micro-services must be automated.

~~~
majormajor
But if ssh'ing into boxes every once in a while is all you need to fix or
debug stuff, why is it "best practice" to build out a lot of infrastructure to
replace that?

~~~
gedrap
I don't think that this is what he/she meant.

If you ssh into the box, check some logs, find a wrong configuration option,
and then roll out the corrected configuration in an automated way, that's all
good.

The problem comes when you change the configs manually over ssh and call it a
day. A few fixes and a few months later, no one really knows how to set up
some service because of those undocumented fixes (infrastructure automation
serves as documentation). Now imagine that you have a dozen services, some
people quit, some people join, a year or two of manual fixes passes... and you
have an awful mess where no one knows how to set up the services, and everyone
just hopes they won't need to set up any new servers or migrate to different
ones.

~~~
nathancahill
Nailed it. Micro-services are not necessarily always the answer (good devops
is), but they encourage doing the "right thing".

------
voodootrucker
The only good way I've found to do it is to have each service define its
logical (network) dependencies in a formal way, just like it does with its
GAV dependencies (including versions). This way the graph can be statically
analyzed, cycles and transitive conflicts can be identified, deployment and
rollback can be automated, the graph can be output to Graphviz and
visualized, and an automated tool can set up QA environments with a given
vector of versions.

All that being said, the above setup only addresses about 1/2 the problems
mentioned in the post.
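A minimal sketch of that idea in Python (the service names and the `DEPS`
mapping are invented, not from any real tool): declare each service's network
dependencies as plain data, detect cycles with a depth-first search, and emit
Graphviz DOT for visualization.

```python
# Hypothetical dependency declarations: service -> list of (service, version).
DEPS = {
    "gateway": [("auth", "1.2"), ("orders", "2.0")],
    "orders":  [("billing", "1.0")],
    "billing": [],
    "auth":    [],
}

def find_cycle(deps):
    """Return a list of services forming a cycle (first == last), or None."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / in progress / done
    color = {s: WHITE for s in deps}
    stack = []

    def visit(service):
        color[service] = GRAY
        stack.append(service)
        for dep, _version in deps.get(service, []):
            if color.get(dep, WHITE) == GRAY:
                # Found a back edge: slice out the cycle from the DFS stack.
                return stack[stack.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE and dep in deps:
                cycle = visit(dep)
                if cycle:
                    return cycle
        stack.pop()
        color[service] = BLACK
        return None

    for service in deps:
        if color[service] == WHITE:
            cycle = visit(service)
            if cycle:
                return cycle
    return None

def to_dot(deps):
    """Emit the dependency graph in Graphviz DOT format."""
    lines = ["digraph services {"]
    for service, edges in sorted(deps.items()):
        for dep, version in edges:
            lines.append(f'  "{service}" -> "{dep}" [label="{version}"];')
    lines.append("}")
    return "\n".join(lines)
```

Because the dependencies are ordinary data, the same declarations can drive
deployment ordering (a topological sort of the acyclic graph) as well as the
cycle check and the visualization.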

~~~
nathancahill
Any good resources for doing that?

~~~
voodootrucker
No, for us it was all custom tools built in-house :(

------
ascotan
> Only using optional fields and coding for missing fields helps us ensure our
> services are resilient to version mismatch.

I've seen this before and I've always been suspicious of it. So if version 1
has fields r, g, b and version 2 has fields r, g, b, a, and I use version 1 in
a version 2 stack, any data in the alpha field is ignored. OK, so you didn't
get a stack trace, but is that working software?
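A rough Python illustration of that trade-off (the message schema and field
names here are invented for the example, not taken from the post): a v1
reader that only knows r, g, b silently drops the alpha field a v2 writer
sent, so nothing crashes, but the information is gone.

```python
# Hypothetical v1 message schema: only knows about r, g, b.
V1_FIELDS = ("r", "g", "b")

def parse_v1(payload):
    """Tolerant v1 parser: missing fields default to 0, unknown fields are ignored."""
    return {field: payload.get(field, 0) for field in V1_FIELDS}

# A v2 producer sends an extra alpha field.
v2_payload = {"r": 10, "g": 20, "b": 30, "a": 128}

decoded = parse_v1(v2_payload)
# No stack trace -- but the alpha value is silently dropped, which is
# exactly the "is that working software?" question.
```

This is the behavior optional-field protocols like Protocol Buffers give you
by default: unknown fields are skipped and missing fields take defaults, so
version skew degrades quietly instead of failing loudly.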

~~~
voodootrucker
Totally agree. I think the microservices thing is actually a distraction from
the real concepts at play. As a good thought exercise, imagine if the same
thing happened within a single process.

If code was written that expected version 2, and a version 1 object was
provided, the static type checker would catch it at compile time.

But with microservices, there is no static type checker and you're essentially
coding as if you were in a dynamically typed language.

Hopefully you've at least set up integration tests where you can test the
service you're about to deploy against the others, but I think in many
microservice situations the only integration testing that happens is in
production.
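One cheap alternative to "integration testing in production" is a consumer
contract check run before deploy. A toy Python version (the `fetch_user_stub`
function and the field list are hypothetical): the consumer declares exactly
the fields it reads, and the check fails fast if the provider's response no
longer satisfies that contract.

```python
# Hypothetical pre-deploy contract check: the consumer declares the
# fields it depends on, and the check fails fast if the provider's
# response no longer satisfies that contract.

REQUIRED_FIELDS = {"id", "email"}  # what this consumer actually reads

def fetch_user_stub():
    """Stand-in for a call to the real provider in a staging environment."""
    return {"id": 42, "email": "a@example.com", "name": "Ada"}

def check_contract(response, required=REQUIRED_FIELDS):
    """Return the set of missing fields (an empty set means the contract holds)."""
    return required - response.keys()

missing = check_contract(fetch_user_stub())
assert not missing, f"provider broke the contract, missing fields: {missing}"
```

Run against a staging instance of each upstream service in CI, a check like
this catches the r,g,b-versus-r,g,b,a style of version mismatch before
anything reaches production.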

------
brightball
Totally agree. Actually went in depth on this with Heroku for an article a
while back.

[https://blog.codeship.com/exploring-microservices-
architectu...](https://blog.codeship.com/exploring-microservices-architecture-
on-heroku/)

------
tkfx
Re: Distributed Debugging / Centralized Monitoring, Logging and Alerting,
these are exactly the kinds of problems that our team at Takipi
(www.takipi.com) tackles. It's a new way to get all of the information you
need (source, stack, and state) to understand what's going on in a large
distributed deployment in production, without relying on logs.

~~~
bschwindHN
You should probably mention it's a JVM-only technology (from what I can tell).

~~~
tkfx
Correct, JVM only

------
voodootrucker
Tons of good stuff in there. Best deployment post I've ever read.

------
pbreit
Key point: don't use microservices in small teams or on v1's.

~~~
simonhorlick
Microservices can work great for v1, but you absolutely need a common rpc
framework and a solid way to deploy, test and monitor. Most teams don't have
the right building blocks. This will change with time.

~~~
voodootrucker
You can do this in a company that is already big and mature. If you're doing
this from day #1 in a startup environment, then you aren't very lean, and you
better have lots of funding and an expert team with experience doing this.

~~~
thwarted
If you want to help ensure success, having an expert team with experience on
day #1 is going to have more positive influence than having a few, or a fleet,
of inexperienced people banging on it.

~~~
voodootrucker
Much of the world lives outside of the Bay Area and isn't backed by lavish VC
funding. You gotta do what you have to in order to survive. Many exciting
innovations have come from the duct-tape-and-baling-wire community.

~~~
thwarted
I assumed you were talking about those with VC funding and in the bay area
when you referred to "startup environments". And that's exactly who I'm
ragging on: SV startups who hire a fleet of inexperienced fresh grads because
they are cheap. I agree that you're not going to end up with a solid SOA
setup, or anything really, unless you have experienced experts doing it from
day #1. I think you have a greater chance of ending up with an
impenetrable majestic monolith if a bunch of inexperienced people are working
on it.

