Similarly, the benefits they do tie explicitly to microservices, like scoped errors and tests, can also be achieved in a monolith with simple filters on which errors and tests are run, without all the work of splitting out the service.
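As a minimal sketch of the "scoped errors" idea: in Python you can restrict error reporting to one module of a monolith with a stdlib `logging` filter. The `myapp.search` logger prefix here is a hypothetical name for illustration.

```python
import logging

class ModuleFilter(logging.Filter):
    """Keep only log records emitted by loggers under a given prefix."""

    def __init__(self, prefix):
        super().__init__()
        self.prefix = prefix

    def filter(self, record):
        # e.g. "myapp.search" matches "myapp.search.api" but not "myapp.billing"
        return record.name.startswith(self.prefix)

# Attach the filter to a handler so only the search module's errors surface.
handler = logging.StreamHandler()
handler.addFilter(ModuleFilter("myapp.search"))
logging.getLogger().addHandler(handler)
```

Test scoping works the same way: most runners can select a subset (e.g. `pytest tests/search/` or `pytest -k search`) without any service boundary existing at all.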
1. Independent releases
2. Easier onboarding for new engineers
3. Scale microservices independently
The rest of the article is about how to use the breakout as an opportunity to make major changes to the code and the data structures. The goal is for users of the service to have an aha moment of "Search is so much faster after the breakout!".
If each team is given its own microservice to manage the way it chooses, things can work smoothly.
- Multiple execution environments
- Complexity in communication between services
- Managing the versioning of each service and its dependent services.
Although services are easy to scale and change independently, they are hard to reconfigure globally, which means the application as a whole is hard to change...
Often switching to microservices means reducing the complexity of communication between teams at the expense of increasing the complexity of communication between services.
There is a common release process, but with proper infra in place, that shouldn't be a problem.
Suppose you'd like to push a code change to production for your monolith: how do you know that someone else's change 1) is ready to go to production together with yours and 2) does not affect your module? Typically these questions can't be answered easily, and so the release process devolves into manual testing and scheduled (and often slow) releases.
But if those things are constantly in flux, it really isn't a good candidate for splitting into a microservice in the first place.
Otherwise it's probably the opposite: it takes a lot longer for new engineers to understand the full system. Perhaps it even reduces the bus factor, since fewer people understand all the moving parts.
But search/merchandising is a small fraction of the team. We have 2 dedicated backend engineers on the search team.
We are growing rapidly. We are aiming to double the size of the eng org in the next 8 months.
Microservices to us roughly translate to k8s/Kubernetes deployments. By splitting the monolith into separate deployments, we gained more granularity in responding to web traffic vs. inventory-processing-pipeline traffic.
In our case we did happen to have a new repository for it, but the same could have been achieved with better separation of concerns and refactoring within a monorepo/monolith.
But if you are already separating concerns to that level (e.g. only some parts of a service's code base are allowed to talk to X data store and other parts are only allowed to talk to Y data store), then why not break them out into separate services?
You get follow-on benefits from the service separation: different teams can worry about fewer moving parts and can more easily work independently. Having well-defined service boundaries implies properly versioned APIs. This is in stark contrast to the implicit APIs internal to a single service that often result from object-oriented access patterns.
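The contrast between an explicit, versioned API and an implicit internal one can be sketched in a few lines. Everything here (`search_v1`, the catalog contents) is invented for illustration; the point is that other modules import only the public entry point, never the underscored internals.

```python
# Explicit, versioned entry point: the only thing other modules should call.
def search_v1(query: str) -> list[str]:
    """Public v1 search API for this module."""
    return _rank(_match(query))

# Internal helpers. If other modules reach into these directly, they become
# an implicit API whose changes can silently break callers.
def _match(query: str) -> list[str]:
    catalog = ["red shoe", "blue shoe", "red hat"]
    return [item for item in catalog if query in item]

def _rank(items: list[str]) -> list[str]:
    return sorted(items)
```

Splitting the module into a service would force this discipline over HTTP, but nothing prevents enforcing it inside a monolith with import conventions or lint rules.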
It does seem like they had a good reason for splitting out search though: "We needed to release search updates independently and changes were frequent."
But yeah, I'm not sure why they are attributing the perf gains to the microservices. Very curious...
HTTP makes sense from the standpoint that if you want to have a lot of flexibility over how and where your services are run, they can be addressed the same way whether they're running on one box in your living room or distributed across several data centers.
A lot of the cost savings came from being able to decrease the size of our Elasticsearch cluster. This was mostly due to flattening the Elasticsearch document schema, which improved performance dramatically and also allowed us to run significantly fewer ES nodes.
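A rough sketch of the kind of flattening described above: collapsing nested sub-documents into parallel top-level arrays, which lets Elasticsearch index them as plain fields instead of separate nested documents. The field names (`variants`, `variant_color`, etc.) are assumptions for illustration, not the commenter's actual schema.

```python
def flatten_product(doc: dict) -> dict:
    """Collapse nested variant objects into parallel top-level arrays."""
    return {
        "name": doc["name"],
        "variant_color": [v["color"] for v in doc.get("variants", [])],
        "variant_size": [v["size"] for v in doc.get("variants", [])],
    }

nested = {
    "name": "shoe",
    "variants": [
        {"color": "red", "size": 9},
        {"color": "blue", "size": 10},
    ],
}
```

The classic trade-off: parallel arrays lose the correlation between fields of one variant (which color goes with which size), so this only works when queries don't need to match on combinations within a single variant.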
Let's share advice about it:
Different teams should have enough room not to step on each other's toes.
Soft boundaries tend to get overruled over time because of human laziness.
Sure, if you start over you may well get there faster, but you'll probably repeat some of the same mistakes all over again.
I am a proponent of the disposable-code philosophy though: no system should be considered "off limits", and a healthy codebase should probably be completely rewritten every couple of years. It should just be done incrementally, in a modular way in most cases rather than throwing out the whole thing and starting again.
I’d be interested to know how long this all took and how many developers were involved.
I was wondering when the author would get to the punch line, but they just kept solving problems not creating them.