Questions to ask before adopting microservices (medium.com)
135 points by ebineva 15 days ago | 74 comments



I recently did some reading on microservices, since I had to create a multi-service deployment.

I really don't understand how people are able to manage the complexity when they split every single small domain of an application into its own service. You have to re-invent layers that are normally provided to you by the database, you have to deal with a lot of extreme edge and failure cases and you also lose any ability to statically figure out how data flows through your application (codebase becomes harder to understand, fixing bugs will probably take longer).

In our case, we have each application as a separate service. On top of that, we have some smaller supporting services to make certain things easier, but they're limited, and their use makes sense beyond creating complexity for sport.


Microservices are for companies with 500+ engineers.

They help teams manage complex interdependencies by creating strong ownership boundaries, strong product definition, loose coupling, and allow teams to work at their own velocities and deployment cadences.

If you're doing microservices with 20 people, you're doing it wrong.

It's when you have 500 people that the average engineer doesn't need to know about the eventually consistent data model of your active-active session system. They just need your service API.


>If you're doing microservices with 20 people, you're doing it wrong.

Wrong for the project and your employer most likely, but for you personally? Your CV for the next job will massively benefit from lines about leading an initiative to reimplement a monolith as microservices, compared to lines about old, boring, reliable technologies. I'm only half joking.


CV-driven development, the worst methodology.


> If you're doing microservices with 20 people, you're doing it wrong.

Like all generalisations, this rule will be wrong sometimes. I think you're imagining a single coherent "product" that all those people (presumably engineers) are working together on.

But imagine a consultancy (which I have witnessed) where projects are just one person, or occasionally two people, and only last a few months. Project outputs usually involve building a new component and slinging it together with one or two existing ones. In practice relatively few of these components get reused in future projects, but it's very hard to predict up front which ones will turn out to be key. In this case building microservices makes a lot of sense, because the abandoned components don't take up mental bandwidth to maintain but they're always available if they do turn out to be useful later (even if they need some updating, being self-contained with a clear interface gives a head start). The multi-process aspect is certainly a pain, and in principle it could be done with carefully curated libraries instead, but then the temptation for tangled interdependencies would be there and you'd be more tied to a specific language.


I think this is a good use case for libraries.


>They help teams manage complex interdependencies by creating strong ownership boundaries, strong product definition, loose coupling, and allow teams to work at their own velocities and deployment cadences.

You just described library boundaries. You did not describe why the interface has to be a socket.

>the average engineer doesn't need to know about the eventually consistent data model of your active-active session system

People can create unnecessary connections between microservices too. It's only slightly harder than punching an extra hole through the public interface of an in-process API. Is that what we're spending so much time and money on?


Yeah. I think services are mostly needed when you need to scale things independently. If you just need to develop things independently and deploy them on different schedules, you can use a plugin architecture (hotswap dlls or jars into your big app) instead of services. That lets you avoid sockets and use local synchronous APIs for communication. I've worked on a large app with many teams building their own plugins, it was nice.
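
To make the plugin idea concrete, here's a rough sketch (the names are purely illustrative, not from any particular codebase) of what that kind of in-process boundary can look like in Java using the standard java.util.ServiceLoader. Each team ships a jar containing an implementation plus a META-INF/services entry naming it, and the host discovers it at startup:

    // Hypothetical plugin boundary: the host only ever sees this interface.
    package com.example.host;

    import java.util.ServiceLoader;

    public interface ReportPlugin {
        String name();
        String render(String customerId); // plain synchronous in-process call, no socket
    }

    // Host entry point; dropping a new plugin jar on the classpath is the whole "deployment".
    class PluginHost {
        public static void main(String[] args) {
            for (ReportPlugin plugin : ServiceLoader.load(ReportPlugin.class)) {
                System.out.println(plugin.name() + ": " + plugin.render("customer-42"));
            }
        }
    }

Each plugin jar would contain a class implementing ReportPlugin and a META-INF/services/com.example.host.ReportPlugin file naming that class.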


I like the rubric of: "If you have a people-scaling problem or a performance-scaling problem, then you maybe need a microservice." Meaning, for the former, if there are too many cooks in the codebase and it's slowing down delivery, maybe use a microservice (which this reply is effectively calling out). For the latter, if there are hot spots in the code that need to scale independently, maybe use a microservice. Any microservice can be represented as a library or dependency; design the code well and breaking things out shouldn't be terrible. See Moduliths https://github.com/odrotbohm/moduliths

What do you do if you have ~20 people and a decade old million line monolith that you're now bumping up against the limits of?


You could consider splitting certain large, distinct pieces into separate services, maybe. You don't need a dichotomy of one monolith vs. dozens of microservices. You could have a monolith plus 3 or 4 services.

Or only do that going forward; e.g. for future feature requests that would otherwise require adding another huge chunk to the monolith.


This is pretty much my thinking.


Common sense, no?


Seems the solution would be to improve the thing rather than reimplement it as microservices.


I think there are some things we can legitimately improve quite a lot under the current structure, and it seems there are other things that make certain improvements just untenable.

Upgrading a major version of an ORM, for instance. It seems there are certain key improvements that boil down to "the only way this can be improved would result in a big-bang release affecting the entirety of the system and would take a dedicated team quite a long time to do". It's just not palatable for anyone.

For the parts we carve out as microservices we are essentially able to erase that technical debt. That comes at the cost of increased ops complexity. We're not ready or capable of handling 50 microservices from an ops perspective, but we are capable of handling something on the order of ~5.


Every additional microservice adds complexity to the system, due to having to factor in many failure situations not present if everything is instead in one process.

There are other options in your case, like isolating any dependencies in a module with a well defined API instead of having it all in a separate (micro-)service.


Or split out an individual domain, particularly one you’re having pain with, into its own “micro”service.


I think this is going to be the approach we wind up taking.

In reality there are maybe 1-3 hotspots in the domain that receive many more transactions than the rest.


It's still possible, and arguably good design, to write a monolith as a cluster of separate cooperating internal logical services. That way you can keep the benefits of decoupling and ownership boundaries without all the deployment complexity.

In Java, OSGi was particularly good for this, but you could easily build it around a fairly simple DI framework in pretty much every major language.

The most important thing though is to keep your interfaces well-designed and well-communicated. They are the principal source of bugs and misunderstandings.
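
As a rough illustration of what I mean (made-up names; any DI container or plain constructors would do), the logical services inside the monolith only ever see each other's interfaces:

    // Two internal "services" inside one deployable, each owned by a different team.
    interface BillingService {
        void charge(String accountId, long amountCents);
    }

    interface NotificationService {
        void send(String accountId, String message);
    }

    // Implementation lives in its own package; nothing outside depends on it directly.
    class CardBillingService implements BillingService {
        private final NotificationService notifications;

        CardBillingService(NotificationService notifications) { // constructor injection
            this.notifications = notifications;
        }

        @Override
        public void charge(String accountId, long amountCents) {
            // ... talk to the payment provider ...
            notifications.send(accountId, "charged " + amountCents + " cents");
        }
    }

    // Wiring happens once at startup, by hand or via a DI framework.
    class Monolith {
        public static void main(String[] args) {
            NotificationService notifications =
                (accountId, message) -> System.out.println(accountId + ": " + message);
            BillingService billing = new CardBillingService(notifications);
            billing.charge("acct-1", 999);
        }
    }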


Probably keep it as a monolith.

You don't need to define rigid ownership.

Though some pieces will move slower than others (and some things will never change), a single deploy cadence is probably fine.

There's also the danger of an incomplete microservice migration where you've now got hobbled together half-service weirdware that you have to support forever.

Don't do microservices until engineers working on their thing break your thing.


> an incomplete microservice migration where you've now got hobbled together half-service weirdware that you have to support forever.

Ugh, I have to deal with this at my current job, except it's not an incomplete microservice migration, it's by design, and it comes with a weird hobbled-together event queue implemented as a table in the DB that has a known race condition that pops up every couple of weeks.
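
For what it's worth, the classic race in these table-as-queue setups is two workers claiming the same row. If the DB is PostgreSQL (or MySQL 8+), the usual fix is SELECT ... FOR UPDATE SKIP LOCKED, so concurrent workers skip rows another worker already claimed. A rough JDBC sketch with an invented jobs table:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    class QueueWorker {
        // Claims and processes at most one pending job per call, inside one transaction.
        static void processOne(Connection conn) throws SQLException {
            conn.setAutoCommit(false);
            try (Statement st = conn.createStatement()) {
                ResultSet rs = st.executeQuery(
                    "SELECT id, payload FROM jobs WHERE status = 'pending' " +
                    "ORDER BY id LIMIT 1 FOR UPDATE SKIP LOCKED");
                if (rs.next()) {
                    long id = rs.getLong("id");
                    String payload = rs.getString("payload");
                    // ... do the actual work with payload here ...
                    st.executeUpdate("UPDATE jobs SET status = 'done' WHERE id = " + id);
                }
                conn.commit();
            } catch (SQLException e) {
                conn.rollback();
                throw e;
            }
        }
    }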


I'd say you'd first need to see if the limits can be fixed in the current setup.

If you really need to build a microservice to replace a part of your monolith, you need to hire a few devs who can act as a dedicated team.


I definitely think a reasonable degree of performance could be gained under the current structure by streamlining data access and maybe getting a bit smarter with caching. That still leaves us with non-scaling related problems of being unable to upgrade some very core dependencies which are stuck on some quite outdated versions.

Perhaps the strangler application approach could work to resolve that aspect over time.


Enumerating those limits and doing a boring risk/benefit analysis of approaches to remove them. Sure, microservices could fit in, but you need to get things pretty clean to split off a chunk in the first place.


The whole monolith won't be a bottleneck. Work out what parts need to scale and factor those out.


20 people wouldn't write a million lines in a decade. That's more like the output of a few hundred people.


I work on a monolith. It was written by never more than ten developers, mostly fewer, in maybe 15 years. It contains just more than a million lines of code (1,074,010).

This figure excludes other code bases those developers worked on (for instance, the peak of ten developers includes several who were working on other, since-discontinued products part time - or even for most of their time).

It also excludes test data, schemas etc.

And there are several developers who took great pride in deleting code - I know I had more "-" lines than "+" lines over my few years.


I work on it every day. The earliest commit to the repo is 2010. The engineering team over that time has averaged circa 20 people.

So... Yeah. Thanks for denying the reality I live every day.


Haha, I love comments like this.

We've written over 10 million lines as 20-40 people in the past 15 years. I'll be sure to tell everyone we're not supposed to be moving so fast.


Funny enough, you wouldn't be able to work fast at all in a 10 MLOC codebase. Working with and navigating that amount of code is a bummer.

Imagine for a minute that there were 50 developers (ignoring managers and administrative overhead) each trying to understand 500 lines a day (a medium-sized module in most languages). That's roughly 50 × 500 × 250 working days ≈ 6 million lines a year between them, so collectively they couldn't even read the whole codebase in a year, and any individual developer would get through only a tiny fraction of it. So basically most of the codebase can't be read or maintained.


We have a 4 million LOC ~15 year old monolith made by about 10 people that would like to have a word with you.


Tell me more about it? What industry is it, what does the software do?

Does it have a dictionary of 3M words to support spell check? :D


But what are the limits that warrant even considering microservices (a bit more detail please)? (Serious question)


We need to support an order of magnitude more daily active users than we currently do. I'm not exactly sure how close the current system would actually get to that, but my gut feel is it wouldn't hold up. It does OK as is, but only OK.

It's a combination of three different problems working against us in concert.

1) The compute layer is multitenant but the databases are single tenant (so one physical DB server can hold several hundred tenant databases, with each customer having their own).

2) We're locked into some very old dependencies we cannot upgrade, because upgrading one thing cascades into needing to upgrade everything. This holds us back from leveraging some benefits of more modern tech.

3) Certain entities in the system have known limits whereby, when a customer exceeds a certain threshold, the performance of loading certain screens or reports becomes unacceptable. Most customers don't come near those limits but a few do. The few that do sometimes wind up blowing up a database server from time to time, affecting other customers.

For most of the domain stuff, to be honest I'd like to fix the performance problems and deadlocks by just making data access as efficient as possible in those spots. I think that could get us quite a bit more mileage if we took it seriously and pushed it.

For the single-tenant database situation, I don't really know how to approach fixing that. I don't see us having enough resources to ever reengineer it as is. Maybe it's possible for us as a team, maybe it's not. The thinking is that for the parts of the domain we're able to split out, we could make those datastores multitenant.

There's also a bunch of integration code stuck in the monolith that causes various noisy neighbor problems that we are trying to carve out. I think that's a legitimate thing to do and will be quite beneficial.

But yeah... It's a path we're dipping our toes into this year in an effort to address all of these problems which are too big for us to tackle one by one.


Sounds like you would do best solving the problem you have clearly identified first - the need for a multi-tenant database solution with decent performance. To limit the re-engineering work necessary, you could look to separate out a coherent functional area where query performance analysis shows the heaviest contention, create a microservice for just that, then use best practices like adding a tenant_id key and a system that shards intelligently on it, e.g. Citus.


That's reasonable. Also consider going the other way: keeping per-tenant logical databases, and splitting up some or all of the compute layer to have single tenancy or bounded tenancy. For example, if your compute layer is a web server, making multiple sets of webservers with something in front of them routing requests to a given set of servers based on a tenant identifier can chunk up your multiple-noisy-neighbors problem into at least multiple noisy "neighborhoods", with the (expensive) extreme of server-per-tenant. If your compute layer is e.g. a service bus/queue/whatnot worker, the same principles apply: multiple sets of workers deciding what to work on based on a tenant ID or per-tenant/group topics or queues. You can put the cross-cutting/weird workloads onto their own areas of hardware, as well.

I propose this because I think having database instances split up by tenant (even if multiple DBs share the same physical server) is actually a pretty good place to be, especially if you can shuffle per-tenant databases around onto new hardware and play "tetris" with the noisiest tenants' DBs. Moving back to multitenant-everything seems like a regression, and using (message|web|request) routing to break the compute layer up into per-tenant or per-domain clusters of hardware can often unlock some of the main benefits of microservices without a massive engineering effort.
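
To be concrete about the routing piece: this often lives in the load balancer or proxy config rather than application code, but the logic is simple enough to sketch (tenant IDs and pool names here are invented). Something in front of the web tier extracts a tenant identifier and picks a backend pool, pinning the noisy tenants to their own "neighborhood":

    import java.util.Map;

    class TenantRouter {
        // Noisy tenants pinned to dedicated pools; everyone else shares.
        private static final Map<String, String> PINNED = Map.of(
            "tenant-big-1", "pool-dedicated-1",
            "tenant-big-2", "pool-dedicated-2"
        );

        static String backendPoolFor(String tenantId) {
            return PINNED.getOrDefault(tenantId, "pool-shared");
        }

        public static void main(String[] args) {
            System.out.println(backendPoolFor("tenant-big-1")); // pool-dedicated-1
            System.out.println(backendPoolFor("tenant-small")); // pool-shared
        }
    }

The same lookup works for queue/service-bus workers deciding which per-tenant topics or queues to consume.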


>That's reasonable. Also consider going the other way: keeping per-tenant logical databases, and splitting up some or all of the compute layer to have single tenancy or bounded tenancy. For example, if your compute layer is a web server, making multiple sets of webservers with something in front of them routing requests to a given set of servers based on a tenant identifier can chunk up your multiple-noisy-neighbors problem into at least multiple noisy "neighborhoods", with the (expensive) extreme of server-per-tenant. If your compute layer is e.g. a service bus/queue/whatnot worker, the same principles apply: multiple sets of workers deciding what to work on based on a tenant ID or per-tenant/group topics or queues. You can put the cross-cutting/weird workloads onto their own areas of hardware, as well

This pretty much describes exactly where we are right now. We've been able to migrate the big customers to a new, less overloaded database server. We could continue to do that. I believe it's what you call a "bridge" architecture, so the compute layer is stateless and can serve any tenant. It's also got a queue/service bus to offload a lot of stuff that the web servers shouldn't be doing. That stuff is all on autoscaling but even that's not a panacea.


> I really don't understand how people are able to manage the complexity when they split every single small domain of an application into its own service.

Most places using microservices use them because of the extra complexity and management overhead, as it gives them a reason to hire more engineers to deal with it, write blog posts about how “cool” they are and how they deal with the (self-inflicted) problems and downsides of microservices, get more funding to pay said engineers, etc.

This reminds me of a company I used to work at. The product is just an e-commerce app, but somehow they now need 3 different languages, a Kubernetes cluster, and hundreds of engineers for something that used to be able to run on Heroku just a few years ago with no functional changes to the product. Yet somehow they keep getting funded despite bleeding money for over a decade, so I guess the strategy must be working, since investors are guzzling the kool-aid and are happy to pour in more money for the engineers to play around with shiny new toys.


I thought you were talking about my last company until the "hundreds of employees" part. Makes me wonder if this is a common situation.


Microservices are there because of Conway's Law.

If you have a small team that takes care of all aspects of the system, microservices will cause you plenty of headache. It is usually better to keep things simple, as a monolithic system built using technologies that all the developers are familiar with.

If you have a large, multidisciplinary team, microservices will allow you to split the team into smaller squads focussed on individual components and develop and deploy them independently from each other, choosing whichever technologies will provide the most benefit.

Like any other architectural pattern microservices are not there for their own sake; they are solving a specific, concrete problem, which might be even more organisational than technological. If you don't have that problem, don't use microservices.


Home usage example, probably not that relevant. Time span is 20 years.

I used to host everything on one machine. It got complicated quickly.

I then split everything between VMs. Still difficult to maintain.

I finally switched to docker where 70% of the containers are from Docker Hub, maintained by enthusiasts or the devs of the product. I trust them blindly, knowing they will do a better job than I do.

30% of the services are mine. Tiny images, tiny specialized code easy to maintain because there are few dependencies.

Again, this is a home setup with a tolerant family who can spend a night without lights when I am struggling to roll back "obvious improvements".


For me I’ve never been able to create micro services without running into what you’ve just said about the complexity from the tools themselves. As of right now I’m biased towards writing my sloppy code and seeing what part of it falls over. To me that’s the best way to not rabbit hole while debugging these micro service problems.


The whole myth that microservices are simpler needs to be looked at.

The way I look at it, a microservice application is basically a monolith with extra points of failure because of all the network connections between services. Each service may be simpler, but the complexity just gets pushed to the DevOps layer.


You described the distributed monolith. All the problems of a monolith and all the problems of a distributed system.

Services make sense when they map 1:1 with teams.


What are the problems of a (well designed) monolith?


I've had good luck with more of a hybrid architecture: one central monolithic-ish service and numerous supporting microservices to perform specific (usually asynchronous) tasks. The microservices tend to be things that are easily separable from the core of the system and can be turned on or off without affecting core functionality (they also tend to be long-running and/or resource-intensive tasks). This is especially helpful in instances where we might not want to even deploy some capabilities depending on the environment.

I think you get some of the benefits of both approaches here: A single monolithic service that encapsulates the functionality needed for a minimal working system, and the ability to carve off and maintain clearly non-core functionality into separate programs to avoid bloating up the main service.
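
As a sketch of what that boundary tends to look like (the MessageQueue interface and the thumbnail example are invented stand-ins for whatever broker and task you actually have): the core service just publishes a message and carries on, and the worker is a separate small process that can be deployed, scaled, or left out per environment:

    // Invented stand-in for whatever broker/client is actually in use.
    interface MessageQueue {
        void publish(String topic, String payload);
    }

    class CoreService {
        private final MessageQueue queue;

        CoreService(MessageQueue queue) { this.queue = queue; }

        void handleUpload(String documentId) {
            // Core flow finishes immediately; if the thumbnail worker isn't
            // deployed in this environment, the core system still works.
            queue.publish("thumbnails.requested", documentId);
        }
    }

    // Runs as its own small process, consuming from the queue.
    class ThumbnailWorker {
        void onMessage(String documentId) {
            System.out.println("rendering thumbnails for " + documentId);
        }
    }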


I agree with this. Microservices have their place. So do monolithic, centralized services. Using both in the same system is certainly not a sin, and can certainly be correct.

I'm currently reading through Juval Lowy's latest book, Righting Software. Juval is widely credited as one of the pioneers of microservices [1], and he states in the book that even he doesn't appreciate the term. He illustrates his point by comparing the small, eight-inch water pump in his car versus the huge, eight-foot city water pump that moves water to his home. Both perform an unarguably essential service. But just because the pump in his car is orders of magnitude smaller, does not make it a "micropump". A service is a service no matter how big or small it is.

Should you use microservices? As is the answer to any sufficiently interesting question: it depends.

[1] https://en.wikipedia.org/wiki/Microservices#History


I think that if the question you ask is: Should I use microservices or not? Then you have already failed. The question you should be asking is: Where should the application be split?

Nowhere, or into two unequally sized chunks, are both valid answers. If you can't split a component without resorting to weird coupling shenanigans, then don't split it.


Finally someone acting reasonable in this discussion. When I tell people my startup (we're two developers, a few thousand LoC, a pretty simple web service) is built as a monolith, people look at me as if I'm crazy. Many people don't seem to realize that microservices are A tool, not THE tool.


My advice is favour places with engineers who both have the knowhow and the power to make decisions based on cost/benefit analysis. There is no right or wrong without context. Beware of places where engineering is driven by hype and/or dogma.


You mean the engineers that have rolled out multiple monoliths and microservice architectures across different teams with different backgrounds, all the while keeping careful track of expenditures?

I'm sorry, these people do not exist. People are going to eyeball and wing it, based on personal preference and their gut feeling. Dress that up with good-sounding arguments and maybe even some metrics and you have all you need to convince the stakeholders - who almost certainly don't understand the issue at a deeper level, nor do they care to.

It's also made more nuanced by the fact that people talk about different things when they mean "microservice". Anything can be split up into arbitrarily small services. This is attractive from a management standpoint, because you can assign engineers in a simple manner without too much crosstalk. Most likely, nobody will ever bother to check if you're spending 10x the engineering resources to do the job.

In essence, I can wholly recommend microservices to engineering managers who want to take full advantage of Parkinson's Law.


Hah, this also reminds me of a company where we went back on a decision that had been made before I joined, and moved our basket (containing highly normalized, non-transient data) out of MongoDB and back into a SQL database. No one had any idea why Mongo was chosen for the task, but a very likely reason was that it was a period where NoSQL was very much en vogue.


> where engineering is driven by hype and/or dogma.

That's the whole industry pretty much.


"Do not apply rules dogmatically" would be a good rule... The willy-nilly breaking up of monoliths into microservices without a good reason to do so, other than 'everybody else is doing it too' or 'microservices are cool', is going to result in a lot of busywork and ultimately will not serve your business interests.

If and when the time is right, breaking out some part of a monolith into a service is probably going to give some kind of net benefit. But breaking up a monolith into > 10 services without having done a proper analysis of the downsides and the benefits of doing so will result in wasted effort.

Monoliths aren't bad per se, and neither are services or even microservices. Knowing when and why you choose each, and what the drawbacks are for any given situation, is the reason why you have CTOs and architects. Way too often the decisions about this level of granularity are taken at the wrong level, with all kinds of predictable results.

Asking these questions is a good start. An even better start would be to ask yourself whether you are the person best positioned to answer these questions, and if not, what individual or group of people is.

Most companies (SMEs, < 500 employees, < 100 devs) do not need microservices, but some of them might use them to their advantage. Some problems you won't solve without them; others are going to become needlessly complicated if you use them. In most cases either a well-architected monolith or a couple of services with a unified front end will work as well as or better than either a monolith or microservices. Some environments lend themselves particularly well to microservices (Erlang, Elixir), others are next to impossible to put together without including half of a new operating system.

Use common sense and err on the side of caution. A very good indication that you are using microservices wrong is when they start to call each other horizontally, effectively re-creating RPC or function calls across process boundaries.


I was one of two developers who built a large micro-service system from scratch. It was the correct way to do it given the requirements and limitations. The dev-ops overhead was HUGE. Handling multiple repositories, versions, version-dependencies, documenting/coordinating changing APIs between the micro-services.

Even something as simple as having 12 different repositories meant 12 different Bitbucket pipeline yaml files and there was no way for repos to share pipelines. So one change meant 12 changes.

It was good experience, and HAD to be done in that type of architecture, but keeping it all organized was a challenge.


Can you elaborate why it had to be done that way?


I'd also be interested to hear why it was still the right choice. But you hit on a good point with all of the bootstrapping which needs to be repeated. And then each subsequent change you make to your pipelines (e.g. testing frameworks) either becomes inconsistent or has to be applied in N places instead of 1.


One of the major overheads for me was writing tests. I used to get so frustrated because there are so many, so many things to mock/stub.

Do you have any suggestions for testing (Mostly service API testing) ?


I'm probably only a month ahead of you, but with my research and initial testing I've liked https://stoplight.io/open-source/prism/ so far. There are a lot of other options out there too if you are willing to make a Swagger/OpenAPI spec.


12 repositories between 2 developers? Why?


Seeding the N databases for each microservice is quite the pain, too.


Monorepo is the answer


A lot of these things do come down to manager or mid-level dev types who just want to use hot tech, either because it's a shortcut to sound like an expert, or they want to do it to gain experience in it.

The problem is, the tech is hot because google etc do it, and they're using it to solve problems light years from what yours might be.

The results are absurdly engineered monstrosities. Simple blog sites with 10 different frameworks in them, running on some absurdly complex cloud infrastructure.


My team's currently building a proof-of-concept microservicey replacement for a chunk of our company's platform, based on solid understandings of the problems the existing one has, what it'll need to do in the future etc. The architecture offers some very compelling solutions to these problems.

But we based this on evidence, we're testing everything to destruction, we're embracing tooling to help us manage everything (we could not do this without Terraform or something of similar capability)... but actually, our domain maps to microservices extremely neatly.

It's probably the first place I've ever worked where that's the case.

I like this article, because it draws attention to very real issues and problems with these architectures, and there is an overall danger and enormous risk in using an architecture "because it's good" or "because it's the way of the future" rather than the only correct reason to choose an architecture: because it's the right fit for your problem.

Oh also, because our thing is still a proof-of-concept, we may yet throw it all away and build something else. While initial results are promising, we haven't properly proved the concept yet.

But what a joy to work somewhere that lets us do this.


> based on solid understandings of the problems the existing one has

This. It strikes me that developing a clean monolith with well-separated concerns and then deciding to pull chunks off into microservices is infinitely smarter than the reverse - finding your microservices to be highly interdependent and trying to glue them back together. YAGNI should be an over-arching principle when starting out a project (and possibly, a business).


I'm mostly a solo developer now, and I'm working with microservices all the time. It's very good to focus on a single domain while developing a single part of the system, just like coding modules or classes.

That said, I'm always wondering why we are conflating so many different things when talking about microservices: deployment and scalability, multi-repo and code structure, integration tests and distributed transactions, and so on. I mean: you can definitely build your application in a monolithic system with a microservices architecture. It's just separation of concerns all the way down and a proper abstraction for the communication channel. You don't need to embed your system architecture in your repos and your code. These are "framework" problems. Just use one that abstracts all the deployment details until it's time to push the code to prod and you have to split your code in different containers. For example, I'm now settled on Moleculer (https://moleculer.services/). It just works.

(edit: grammar)


Working on a microservices project currently. It is very well designed, although I think this is not because of it being microservices; it is because of the team of skilled engineers.

Using microservices is complete overkill here. Lots of DevOps, and it's hard for new people to build the mental picture. Also, we share a lot of code between the repos; that shared code is in a separate repo. Splitting code off from one repo and putting it in another is something refactoring tools don't like that much. The quality of the code base is what matters, and that is quite alright here.

I liked this line in the article: _But you should always remember; a well designed codebase can always be switched to microservices, as and when you approach the threshold._

I will remember that when I design a new monolith: every "application" in that monolith should be able to stand on its own. Every PR will be reviewed like that.


Isn't a monolithic service just kind of a single service micro-service system?


Not really, in the same way that a single person is different from a "one-person team". Having to communicate across service boundaries is the main downside of microservices and that is not required in a monolith.


I dare say that a well-designed monolith has a lot of mini microservices built in.


Yeah I agree with that


Yep, and people have been splitting their code in all sorts of ways long before anyone called anything microservices.


We only have 8 services and we already see the pain of microservices. We are trying to merge some of them.

I think microservices are a useful technique, but only when used properly at a certain scale.

I wouldn't bother doing microservices when the team is small and the product isn't mature, because we still haven't figured out exactly what each service owns and may eventually have to duplicate some functionality.

With that being said, I think things like auth and push notifications absolutely should be their own services.


> We replaced our monolith with microservices so that every outage could be more like a murder mystery.



