Faster, cheaper, and better: A story of breaking a monolith (zepworks.com)
148 points by seperman on July 17, 2019 | 45 comments



It seems like most or all of the benefits listed in this post could have been achieved just as easily by changing code in their monolith, like the Elasticsearch optimizations and the decoupling from the relational database.

Similarly, the benefits they do tie explicitly to microservices, like scoped errors and tests, can also be achieved in a monolith with simple filters on which errors are reported and which tests are run, without all the work of splitting out the service.


I do not disagree with your comment. As I mentioned in the article, the main reasons to break into micro-services are:

1. Independent releases
2. Easier on-boarding for new engineers
3. Scaling micro-services independently

The rest of the article is about how to use the breakout as an opportunity to make major changes to the code and the data structures. The goal is for the users of the service to have an aha moment of "Search is so much faster after the breakout!".


Microservices are almost completely opposed to agile development. They require a strong architecture before implementation and are much harder to reconfigure regardless of versioning. The running system is also hard to manage.


Microservices are typically an organizational rather than technical feature.

If each team is given its own microservice to manage the way it chooses, things can work smoothly.


Not sure if I agree with you regarding the agile part. If each team owns their "micro-service", then they can have their own "sprints".


Yes, in very large systems that's true. But in other systems there are huge overheads that keep the system from changing:

- Multiple execution environments

- Complexity in communication between services

- Managing the versioning of each service and its dependent services.

Although services are easy to scale and change independently, they are hard to reconfigure globally, which means the application as a whole is hard to change...


There are pros and cons to the micro-services architecture. To your point, it is not a black-or-white solution. Our systems are fairly large and over-engineered.

Often switching to microservices means reducing the complexity of communication between teams at the expense of increasing the complexity of communication between services.


What is the difference between each team owning a microservice and owning a part of the monolith?

There is a common release process, but with proper infra in place, that shouldn't be a problem.


It all comes down to coordination and regression, which are quite complex in monoliths.

Suppose you'd like to push a code change to production for your monolith - how do you know that someone else's change 1) is ready to go to production together with yours and 2) does not affect your module? Typically these questions can't be answered easily, and so the release process turns into manual testing and scheduled (and often slow) releases.


I was just disagreeing with "Microservices are almost completely opposed to agile development". You can be agile with both a monolith architecture and micro-services. I don't see why microservices would be opposed to agile.


I think the only reason it might be hostile to agile is if your interfaces keep changing (the API other things contact it with, or what it contacts others via).

But if those things are constantly in flux, it really isn't a good candidate for splitting into a microservice in the first place.


How is it easier to on-board new engineers? You mentioned but did not explain this point.


Easier to on-board in the sense of limiting what new engineers can see, giving them access to only smaller pieces.

Otherwise it's probably the opposite: it takes a lot longer for new engineers to understand the full system. Perhaps it even reduces the bus factor, since fewer people understand all the moving parts.


Exactly. I could not have explained it better than you.


That can be done with libraries.


We do use libraries as much as possible. However we avoid putting business logic into libraries.


How many engineers do you have and what's your growth in manpower like?


Our entire product, engineering, data science, and design org is around 80-90 people. Engineering is about 50-65% of that.

But search/merchandising is a small fraction of the team. We have 2 dedicated backend engineers on the search team.

We are growing rapidly. We are aiming to double the size of the eng org in the next 8 months.


Cool, thanks for sharing. So is the micro-service architecture you described in your article for the 2 engineers or the 40? Asking because 40+ doubling to 80+ seems like a great fit for micro-services, but 2 people, eh, not so much, IMO.


I don't disagree.

Micro-services to us roughly translate to Kubernetes (k8s) deployments. By splitting the monolith into separate deployments we had more granularity in responding to web traffic vs inventory processing pipeline traffic.
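For example, the two workloads can then be scaled independently. Here is a rough sketch using the official Kubernetes Python client; the deployment names, namespace, and replica counts are made up for illustration:

    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() when running inside the cluster
    apps = client.AppsV1Api()

    # Scale the user-facing search deployment up for web traffic...
    apps.patch_namespaced_deployment_scale(
        name="search-web", namespace="search",
        body={"spec": {"replicas": 8}})

    # ...while the inventory processing pipeline keeps its own replica count.
    apps.patch_namespaced_deployment_scale(
        name="inventory-pipeline", namespace="search",
        body={"spec": {"replicas": 2}})

With a single monolith deployment, both workloads would have had to scale together.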

In our case we did happen to have a new repository for it, but the same could have been achieved with better separation of concerns and refactoring within a monorepo/monolith.

But if you are already separating concerns to that level (e.g. only some parts of a service's code base are allowed to talk to data store X and other parts are only allowed to talk to data store Y), then why not break them out into separate services?

You get follow-on benefits from the service separation, including that different teams can worry about fewer moving parts and can more easily work independently. Having well-defined service boundaries implies properly versioned APIs. This is in stark contrast to the implicit APIs internal to a single service that often result from object-oriented access patterns.
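To make the contrast concrete, a toy sketch (the names are hypothetical, and Flask is used only as an example transport):

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # Implicit "API": any code in the monolith can import and call this,
    # so its signature silently becomes a contract.
    def search_inventory(query):
        return [{"vehicle_id": 42, "query": query}]

    # Explicit, versioned API: the boundary is visible, and a /v2/search
    # can be added later without breaking existing callers.
    @app.route("/v1/search")
    def search_v1():
        return jsonify(search_inventory(request.args.get("q", "")))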


Totally.

It does seem like they had a good reason for splitting out search though: "We needed to release search updates independently and changes were frequent."

But yeah, I'm not sure why they are attributing the perf gains to the microservices. Very curious...


The performance gains of the API were not a by-product of cutting it into micro-services. Basically this article is about "We had some issues with the monolith; we cut it into micro-services and also made these other changes along the way that saved us money and gave us a performance boost. Changes that we could have made in the monolith, but they were less risky and easier to make in the micro-service than in the monolith."


It's titled faster, cheaper, better:... microservices, implying they were all due to splitting into microservices. But as usual it turns out refactoring was the saviour and microservices achieved jack-squat apart from having something nice to put on his CV.


Lol, micro-service architecture is a vehicle that can make it easier to achieve those goals. It is not a black-or-white solution. There are pros and cons.


Did they explain what was stopping them from releasing search updates quickly from a stable branch of the monolith? Not that I disagree with the approach (deployment units are the best place to put application boundaries), but it doesn't seem like a good enough reason to re-architect an application.


Looking for a buzzy headline, maybe?


So far it has worked I guess!


Why are microservices so much more popular right now than, say, distributed processing via the actor model? It feels like splitting things up with HTTP boundaries is a lot of work, less flexible, and precludes a lot of re-use. Is it just because of recent advances in tooling like Kubernetes and Docker?


That is a great question. Kubernetes, Docker, and the recent addition of a service mesh layer definitely do make things easier for micro-services. However, as another commenter mentioned, microservices are typically an organizational rather than a technical feature. We do use the pub/sub pattern widely for distributed processing. Regarding the HTTP boundaries point that you brought up, we use gRPC instead of HTTP for most services.
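For anyone unfamiliar with the pattern, here is a minimal in-memory sketch of pub/sub; a real setup would use a message broker, and the topic and field names below are invented:

    from collections import defaultdict

    class PubSub:
        """Minimal in-process publish/subscribe hub."""

        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, topic, handler):
            self._subscribers[topic].append(handler)

        def publish(self, topic, message):
            for handler in self._subscribers[topic]:
                handler(message)

    bus = PubSub()
    # The indexing worker reacts to inventory changes without the
    # publisher knowing anything about who is listening.
    bus.subscribe("inventory.updated", lambda msg: print("reindex", msg))
    bus.publish("inventory.updated", {"vehicle_id": 42})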


Conceptually, microservices are about building loosely coupled components. Client-agnostic interfaces are a part of that, but using HTTP is just an implementation detail.

HTTP makes sense from the standpoint that if you want to have a lot of flexibility over how and where your services are run, they can be addressed the same way whether they're running on one box in your living room or distributed across several data centers.


Interesting article, though I wish there were more details on why this was cheaper, and where the hundreds of thousands of dollars saved annually came from.


Mainly saved in infrastructure costs: Elasticsearch and Kubernetes resources.


Also from Fair here.

A lot of the cost savings came from being able to decrease the size of our Elasticsearch cluster. This was mostly due to the flattening of the Elasticsearch document schema, which improved performance dramatically but also allowed us to run significantly fewer ES nodes.
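For a rough idea of what that kind of flattening looks like (the field names here are made up, not Fair's actual schema):

    # If the inner objects are mapped as the "nested" type, Elasticsearch
    # indexes each one as a separate hidden document and needs nested queries.
    nested_doc = {
        "vehicle_id": 42,
        "pricing": {"monthly": 350, "currency": "USD"},
        "options": [{"name": "sunroof"}, {"name": "awd"}],
    }

    # A flattened document trades a little duplication for simple
    # top-level fields that are cheap to index, filter, and sort on.
    flat_doc = {
        "vehicle_id": 42,
        "pricing_monthly": 350,
        "pricing_currency": "USD",
        "option_names": ["sunroof", "awd"],
    }

Roughly speaking, fewer nested mappings means fewer hidden Lucene documents per ES document, which is where the index size and node savings come from.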


Gotcha, thanks for reaching back!


I went straight to searching for advice on how to avoid a distributed monolith, but no luck; it looks like most micro-services nowadays are just distributed monoliths.

Let's share advice about it: https://medium.com/unbabel/your-distributed-monoliths-are-se...


It's a stretch to call this microservices; it's just two services, and they don't fan out or have a lot of the complexity that causes the downsides: fan-outs that require tracing to debug, too many services to start up on your laptop for dev, load testing and tuning, common libraries that cause systemic failures or require world rebuilds when modified, service-to-service authentication...


This is a very simplified version of our architecture just for the purpose of this article. We have exactly 83 micro-services currently.


Writing new code is faster than understanding existing code.

Different teams should have enough room to not step on each other's toes.

Soft boundaries tend to be overruled over time, because of human laziness.


I'm not so sure; I'd rather start from what is there, understand it, and go from there.

Sure, if you start over you may well get there faster, but you'll probably repeat some of the same mistakes all over again.


I think the temptation to "rewrite the whole thing" is often strong with a large legacy project, and often doesn't pan out as well as it seems because while it is effective at eliminating a lot of the cruft and architectural missteps of the legacy codebase, it also means throwing out all of the subtle behaviors and edge-case-handling which has been built up over years of iteration.

I am a proponent of the disposable-code philosophy though: no system should be considered "off limits", and a healthy codebase should probably be completely rewritten every couple of years. It should just be done incrementally, in a modular way in most cases rather than throwing out the whole thing and starting again.


Can you view microservices in the same light as multiple companies? Just like when your monolith calls a third-party API that handles payments for you - ain't that also a microservice that happens to involve two different companies?


Hmm, not sure if I'm completely following... Yeah you can consider them separate entities if they are truly decoupled.


Good read, thanks!

I’d be interested to know how long this all took and how many developers were involved.


It took over a quarter of a year, with one engineer 100% dedicated to it and between 1 and 3 other engineers involved part-time at different stages of the project.


“Faster, cheaper, better” is a trigger phrase for Australians, given that it was the three word slogan used by the Liberal Party to whitewash their sabotage of our national Communications infrastructure.

I was wondering when the author would get to the punch line, but they just kept solving problems not creating them.



