
Ask HN: Microservices testing best practises? - surprised_dev
Hello fellow developers, architects, tech leads,

Two years ago, my current company started migrating to microservices. The whole project happened without any plan in place: microservices were created ad hoc, whenever someone wanted to build something new or just migrated a part of the monolith API. We ended up with around 15 coupled services. The problem we are facing now is the tests. As we don't have client libraries, projects that consume e.g. the User microservice mock its methods during unit/functional tests, and those mocks live in the consuming project. Integration tests are done manually on staging. Because of that, if something in a given microservice changes, we are unable to detect the problem (unless someone spots it on staging). This happens often, as developers do not update the mocks in dependent projects. In the future we will run integration tests in Docker against real endpoints instead of mocks, but first we have to migrate our build environment to support Docker.

A few questions:

#1 If we write an integration test covering a flow that depends on e.g. 5-6 microservices, where should this test live: in the code base of the client application, or in a separate repo?

#2 Should we use client libraries that e.g. provide mocks for a given microservice, so that if there is a change, all dependent projects will fail their functional tests?

#3 How do you track releases between multiple projects, e.g. branch xxx of the website can only work with microservices on branch yyy? For example, in case you would like to roll the website back to a previous release, while a given microservice has changed a few times since.

I would appreciate any feedback you can provide on microservices at your workplace.

Thanks!
======
maramono
Crazy idea: stop using mocks completely where possible, and install a formal
process for keeping the existing ones up to date. Mocks are notorious
for diverging from the code they are mocking and should be used _very_
carefully.

It also sounds like a lack of discipline and management is the root of the
problem, and that needs to be resolved first.

------
danielbryantuk
I wrote a blog post a couple of weeks ago that may answer some of your
questions:

[https://www.specto.io/blog/recipe-for-designing-building-tes...](https://www.specto.io/blog/recipe-for-designing-building-testing-microservices.html)

In answer to your specific questions:

#1 If you are testing system-level (end-to-end) functionality, then definitely
a separate repo. If you are testing service (component) level functionality,
then store the tests in the same repo as the service and virtualise/mock
dependent services.

#2 Opinions in the industry are split here - Adrian Cockcroft often suggests
using client libraries (as does the Datawire.io team), but I don't always,
because of the developer maintenance cost of these libraries (and forget it if
you are using multiple languages in a polyglot stack).

If I understand your question correctly, I think you would benefit from
reading about Consumer-Driven Contracts, as this will help you detect
contract/API-breaking changes.
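The core idea of a consumer-driven contract can be shown in a few lines: the consumer publishes the response shape it depends on, and the provider's test suite checks every release against it. A toy Python sketch (all names invented; real tools like Pact also cover status codes, headers, and recorded interactions):

```python
# Toy consumer-driven contract check. The consumer declares the fields it
# relies on; the provider's CI verifies its responses still satisfy every
# published contract before release.

# Hypothetical contract published by the 'website' consumer for the User service.
website_contract = {
    "endpoint": "/users/{id}",
    "required_fields": {"id": int, "email": str, "display_name": str},
}

def provider_response(user_id):
    # Stand-in for the User service's actual handler.
    return {"id": user_id, "email": "a@b.com",
            "display_name": "Alice", "beta_flag": True}

def satisfies(contract, response):
    """The provider may return extra fields, but every field the consumer
    depends on must be present with the expected type."""
    return all(
        name in response and isinstance(response[name], typ)
        for name, typ in contract["required_fields"].items()
    )

assert satisfies(website_contract, provider_response(7))
```

The payoff is that renaming or dropping `display_name` on the provider side now fails a test in the *provider's* pipeline, before any consumer is broken.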

#3 You can either entrust your build pipeline to manage this (I've used the
Jenkins promotion plugin for this kind of thing), or you can use YAML config
files to specify service runtime dependency requirements (kind of like Maven
or Ruby Gems, but at runtime), or, if using REST/HTTP, you can deploy two
service versions at any given time and use the 'Accept' request header to
specify which versions of services can communicate with each other.
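The Accept-header approach boils down to a small dispatcher that picks a handler per requested version while both versions stay deployed. A minimal sketch, assuming a made-up vendor media type like "application/vnd.acme.user.v2+json":

```python
import re

def get_user_v1(user_id):
    return {"id": user_id, "name": "Alice"}

def get_user_v2(user_id):
    # v2 split 'name' into structured fields.
    return {"id": user_id, "given_name": "Alice", "family_name": "Smith"}

HANDLERS = {"v1": get_user_v1, "v2": get_user_v2}

def dispatch(accept_header, user_id):
    """Route to the handler matching the version in the Accept header;
    default to v1 so old clients keep working during the transition."""
    m = re.search(r"vnd\.acme\.user\.(v\d+)\+json", accept_header or "")
    version = m.group(1) if m else "v1"
    handler = HANDLERS.get(version)
    if handler is None:
        raise ValueError(f"unsupported version: {version}")
    return handler(user_id)
```

A client that sends `Accept: application/vnd.acme.user.v2+json` gets the v2 shape; one that sends plain `application/json` (or nothing) silently stays on v1, which is what lets you roll the website back without redeploying the service.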

Feel free to DM me if you have any further questions!

------
rdli
Have you considered a strategy of:

- engineering resilience against integration bugs directly into your
architecture

- setting your infrastructure up so developers can deploy development
versions of new services into production, in a way that only the
developers can access?

At some level, the complexity of microservices makes a traditional integration
testing strategy fairly expensive in terms of infrastructure and process.

------
salmanjamali
Not sure how I'd break my suggestions against your questions, but some of this
might help...

I like to think of these microservices as startups, where each exposes its
abilities via some endpoints and, ideally, at least one language-specific
client implementation. Next, I'd identify ownership such that there's a
team/individual responsible for the stability of the latest released client
artifact/version (I prefer semver.org). It's okay for a team/individual to own
multiple services.

The respective owners should maintain a suite of integration tests against
their clients as they are promoted from local boxes to staging servers
and finally to production environments. Promotion via CI must be blocked
if these integration tests fail. Finally, once a service team releases
a new version, they owe the consumers a key piece of information: the
new version's backward compatibility. For as long as a consumer isn't
upgrading to a newer client version, they should not have to worry about a new
release breaking their integration. Beyond that, consumers are free to
upgrade to a newer client whenever they want.
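Under semver, "backward compatible" has a mechanical reading: same major version, equal-or-higher minor/patch. A hand-rolled sketch of the check a consumer's build could run before upgrading (ignoring semver's 0.x and pre-release special cases; real projects would use a semver library):

```python
def parse(version):
    """Split a 'MAJOR.MINOR.PATCH' string into a tuple of ints."""
    major, minor, patch = (int(p) for p in version.split("."))
    return major, minor, patch

def backward_compatible(current, candidate):
    """Per semver, a candidate release is a safe drop-in upgrade iff it
    keeps the same major version and is not older than what we run now."""
    cur, cand = parse(current), parse(candidate)
    return cand[0] == cur[0] and cand >= cur

assert backward_compatible("2.3.1", "2.4.0")      # minor bump: safe
assert not backward_compatible("2.3.1", "3.0.0")  # major bump: breaking
assert not backward_compatible("2.3.1", "2.2.9")  # downgrade: not an upgrade
```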

To sum it up, the owners of a service should ALSO own its reference client
implementation along with the integration tests. Consumers should only focus
on UI tests and should not have to worry at all about the stability of the
client, except for graceful handling of a broken client/microservice.
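"Graceful handling" on the consumer side usually means wrapping client calls so one broken microservice degrades the page instead of crashing it. A bare-bones sketch (function and service names invented; a real system would also track error rates and trip a circuit breaker):

```python
def fetch_recommendations(user_id):
    # Stand-in for a generated client call that may raise on a bad
    # deploy, a timeout, or a contract break in the downstream service.
    raise ConnectionError("recommendation-service unreachable")

def recommendations_or_fallback(user_id, fallback=()):
    """Degrade to a static fallback instead of failing the whole page
    when the downstream service misbehaves."""
    try:
        return fetch_recommendations(user_id)
    except (ConnectionError, TimeoutError, ValueError):
        # Swallow the failure at the boundary and serve something useful.
        return list(fallback)
```

Calling `recommendations_or_fallback(7, fallback=["popular-item"])` returns the fallback list here, since the downstream call raises.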

------
Ryanb58
Unit-test the crap outta your application. If you are going to do any
integration tests, I suggest using Selenium.

