I really don't understand how people are able to manage the complexity when they split every single small domain of an application into its own service. You have to re-invent layers that are normally provided to you by the database, you have to deal with a lot of extreme edge and failure cases and you also lose any ability to statically figure out how data flows through your application (codebase becomes harder to understand, fixing bugs will probably take longer).
In our case, we have each application as a separate service. On top of that, we have some smaller supporting services to make certain things easier, but they're limited and their use makes sense outside of creating complexity for sport.
They help teams manage complex interdependencies by creating strong ownership boundaries, strong product definition, loose coupling, and allow teams to work at their own velocities and deployment cadences.
If you're doing microservices with 20 people, you're doing it wrong.
It's when you have 500 people that the average engineer doesn't need to know about the eventually consistent data model of your active-active session system. They just need your service API.
Wrong for the project and your employer, most likely, but for you personally? Your CV for the next job will benefit massively from lines about leading an initiative to reimplement a monolith as microservices, compared to lines about boring old reliable technologies. I'm only half joking.
Like all generalisations, this rule will be wrong sometimes. I think you're imagining a single coherent "product" that all those people (presumably engineers) are working together on.
But imagine a consultancy (which I have witnessed) where projects are just one person, or occasionally two, and only last a few months. Project outputs usually involve building a new component and slinging it together with one or two existing ones. In practice relatively few of these components get reused in future projects, but it's very hard to predict up front which ones will turn out to be key. In this case building microservices makes a lot of sense, because the abandoned components don't take up mental bandwidth to maintain but are always available if they do turn out to be useful later (even if they need some updating, being self-contained with a clear interface gives a head start). The multi-process aspect is certainly a pain, and in principle it could be done with carefully curated libraries instead, but then the temptation for tangled interdependencies would be there and you'd be more tied to a specific language.
You just described library boundaries. You did not describe why the interface has to be a socket.
>the average engineer doesn't need to know about the eventually consistent data model of your active-active session system
People can create unnecessary connections between microservices too. It's only slightly harder than punching an extra hole through the public interface of an in-process API. Is that what we're spending so much time and money on?
Or only do that going forward; e.g. for future feature requests that would otherwise require adding another huge chunk to the monolith.
Upgrading a major version of an ORM, for instance. There seem to be certain key improvements that boil down to "the only way this can be improved would result in a big-bang release affecting the entire system and taking a dedicated team quite a long time". It's just not palatable for anyone.
For the parts we carve out as microservices we are essentially able to erase that technical debt. That comes at the cost of increased ops complexity. We're not ready or capable of having 50 microservices from an ops perspective, but we are capable of handling more on the order of ~5.
There are other options in your case, like isolating any dependencies in a module with a well defined API instead of having it all in a separate (micro-)service.
In reality there are maybe one to three hotspots in the domain that receive many more transactions than the rest.
In Java, OSGi was particularly good for this, but you could easily build it around a fairly simple DI framework in pretty much every major language.
The most important thing though is to keep your interfaces well-designed and well-communicated. They are the principal source of bugs and misunderstandings.
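As a rough sketch of what that looks like in practice (the payments example and all the names here are made up for illustration; the same shape works with OSGi services, a DI container, or plain constructor injection):

```typescript
// Hypothetical example: isolating a payments dependency behind a small,
// well-defined interface instead of splitting it into a separate service.

export interface PaymentGateway {
  // returns a provider-side charge id
  charge(customerId: string, amountCents: number): Promise<string>;
}

// One concrete implementation lives in its own module; callers never import it directly.
export class StripeGateway implements PaymentGateway {
  async charge(customerId: string, amountCents: number): Promise<string> {
    // ...call the external provider here...
    return "ch_123";
  }
}

// The rest of the monolith depends only on the interface, injected at construction time.
export class Checkout {
  constructor(private readonly payments: PaymentGateway) {}

  async placeOrder(customerId: string, totalCents: number): Promise<string> {
    return this.payments.charge(customerId, totalCents);
  }
}
```

If that piece really does need to become its own service later, only the implementation behind the interface changes; the callers and their tests stay put.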
You don't need to define rigid ownership.
Though some pieces will move slower than others (and some things will never change), a single deploy cadence is probably fine.
There's also the danger of an incomplete microservice migration where you've now got cobbled-together half-service weirdware that you have to support forever.
Don't do microservices until engineers working on their thing break your thing.
Ugh, I have to deal with this at my current job, except it's not an incomplete microservice migration; it's by design, and it comes with a weird cobbled-together event queue implemented as a table in the DB, with a known race condition that pops up every couple of weeks.
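For what it's worth, a table-backed queue doesn't have to race; on Postgres the usual trick is to claim rows with FOR UPDATE SKIP LOCKED. A rough sketch (the job_queue table, its columns, and the status values are invented, not the poster's actual schema):

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from the usual PG* env vars

async function claimNextJob(): Promise<{ id: number; payload: unknown } | null> {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    const { rows } = await client.query(
      `SELECT id, payload
         FROM job_queue
        WHERE status = 'pending'
        ORDER BY id
        LIMIT 1
        FOR UPDATE SKIP LOCKED` // other workers skip rows this transaction holds locked
    );
    if (rows.length === 0) {
      await client.query("COMMIT");
      return null;
    }
    await client.query("UPDATE job_queue SET status = 'running' WHERE id = $1", [rows[0].id]);
    await client.query("COMMIT");
    return rows[0];
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    client.release();
  }
}
```

Whether that's feasible obviously depends on the database and on how married the existing implementation is to its current polling logic.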
If you really need to build a microservice to replace a part of your monolith, you need to hire a few devs who can act as a dedicated team.
Perhaps the strangler application approach could work to resolve that aspect over time.
This figure excludes other code bases those developers worked on (for instance, the peak of ten developers includes several who were working part time, or even most of their time, on other products that were later discontinued).
It also excludes test data, schemas etc.
And several developers took great pride in deleting code; I know I had more '-' lines than '+' lines over my few years.
So... yeah. Thanks for denying the reality I live every day.
We've written over 10 million lines as 20-40 people in the past 15 years. I'll be sure to tell everyone we're not supposed to be moving so fast.
Imagine for a minute that there were 50 developers (ignoring managers and administrative overhead) each trying to understand 500 lines a day (a medium-sized module in most languages); they still wouldn't get through half the codebase in a whole year. So basically most of the codebase can't be read or maintained.
Does it have a dictionary of 3M words to support spell check? :D
It's a combination of three different problems working against us in concert.
1) The compute layer is multitenant but the databases are single tenant (so one physical DB server can hold several hundred tenant databases, with each customer having their own).
2) We're locked into some very old dependencies we cannot upgrade, because upgrading one thing cascades into needing to upgrade everything. This holds us back from leveraging some benefits of more modern tech.
3) Certain entities in the system have known limits whereby, when a customer exceeds a certain threshold, the performance of loading certain screens or reports becomes unacceptable. Most customers don't come near those limits, but a few do, and those few wind up blowing up a database server from time to time, affecting other customers.
For most of the domain stuff, to be honest I'd like to fix the performance problems and deadlocks by just making data access as efficient as possible in those spots. I think that could get us quite a bit more mileage if we took it seriously and pushed it.
For the single-tenant database situation, I don't really know how to approach fixing that. I don't see us ever having enough resources to reengineer it as it stands. Maybe it's possible for us as a team, maybe it's not. The thinking is that for the parts of the domain we're able to split out, we could make those datastores multitenant.
There's also a bunch of integration code stuck in the monolith that causes various noisy neighbor problems that we are trying to carve out. I think that's a legitimate thing to do and will be quite beneficial.
But yeah... It's a path we're dipping our toes into this year in an effort to address all of these problems which are too big for us to tackle one by one.
I propose this because I think having database instances split up by tenant (even if multiple DBs share the same physical server) is actually a pretty good place to be, especially if you can shuffle per-tenant databases around onto new hardware and play "tetris" with the noisiest tenants' DBs. Moving back to multitenant-everything seems like a regression, and using (message|web|request) routing to break the compute layer up into per-tenant or per-domain clusters of hardware can often unlock some of the main benefits of microservices without a massive engineering effort.
This pretty much describes exactly where we are right now. We've been able to migrate the big customers to a new, less overloaded database server. We could continue to do that. I believe it's what you call a "bridge" architecture, so the compute layer is stateless and can serve any tenant. It's also got a queue/service bus to offload a lot of stuff that the web servers shouldn't be doing. That stuff is all on autoscaling but even that's not a panacea.
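To make the "tetris" idea above concrete, here's a rough sketch of the kind of tenant-to-database directory a stateless compute layer can consult; every name and host in it is invented for illustration:

```typescript
import { Pool } from "pg";

// In practice this directory would live in a small shared config store,
// not hard-coded; updating it is how you shuffle tenants between servers.
const tenantDirectory: Record<string, { host: string; database: string }> = {
  "acme":    { host: "db-server-1.internal", database: "tenant_acme" },
  "bigcorp": { host: "db-server-2.internal", database: "tenant_bigcorp" }, // moved off the shared box
};

const pools = new Map<string, Pool>();

// Every request resolves its tenant to a connection pool at the edge of the compute layer.
function getTenantPool(tenantId: string): Pool {
  const entry = tenantDirectory[tenantId];
  if (!entry) throw new Error(`unknown tenant: ${tenantId}`);
  let pool = pools.get(tenantId);
  if (!pool) {
    pool = new Pool({ host: entry.host, database: entry.database });
    pools.set(tenantId, pool);
  }
  return pool;
}
```

Moving a noisy tenant then becomes a data migration plus a directory update, with no code changes in the compute layer.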
Most places using microservices use them because of the extra complexity and management overhead, as it gives them a reason to hire more engineers to deal with it, write blog posts about how “cool” they are and how they deal with the (self-inflicted) problems and downsides of microservices, get more funding to pay said engineers, etc.
This reminds me of a company I used to work at. The product is just an e-commerce app, but somehow they now need 3 different languages, a Kubernetes cluster, and hundreds of engineers for something that used to run on Heroku just a few years ago, with no functional changes to the product. Yet somehow they keep getting funded despite bleeding money for over a decade, so I guess the strategy must be working: investors are guzzling the Kool-Aid and are happy to pour in more money so the engineers can play around with shiny new toys.
If you have a small team that takes care of all aspects of the system, microservices will cause you plenty of headache. It is usually better to keep things simple, as a monolithic system built using technologies that all the developers are familiar with.
If you have a large, multidisciplinary team, microservices will allow you to split the team into smaller squads focussed on individual components and develop and deploy them independently from each other, choosing whichever technologies will provide the most benefit.
Like any other architectural pattern microservices are not there for their own sake; they are solving a specific, concrete problem, which might be even more organisational than technological. If you don't have that problem, don't use microservices.
I used to host everything on one machine. It got complicated quickly.
I then split everything between VMs. Still difficult to maintain.
I finally switched to docker where 70% of the containers are from Docker Hub, maintained by enthusiasts or the devs of the product. I trust them blindly, knowing they will do a better job than I do.
30% of the services are mine. Tiny images, tiny specialized code easy to maintain because there are few dependencies.
Again, this is a home setup with a tolerant family who can spend a night without lights when I am struggling to roll back "obvious improvements".
The way I look at it, a microservice application is basically a monolith with extra points of failure because of all the network connections between services. Each service may be simpler, but the complexity just gets pushed to the DevOps layer.
Services make sense when they map 1:1 with teams.
I think you get some of the benefits of both approaches here: A single monolithic service that encapsulates the functionality needed for a minimal working system, and the ability to carve off and maintain clearly non-core functionality into separate programs to avoid bloating up the main service.
I'm currently reading through Juval Lowy's latest book, Righting Software. Juval is widely credited as one of the pioneers of microservices, and he states in the book that even he doesn't appreciate the term. He illustrates his point by comparing the small, eight-inch water pump in his car with the huge, eight-foot city water pump that moves water to his home. Both perform an unarguably essential service, but just because the pump in his car is orders of magnitude smaller does not make it a "micropump". A service is a service, no matter how big or small it is.
Should you use microservices? As is the answer to any sufficiently interesting question: it depends.
Nowhere, or into two unequally sized chunks, are both valid answers. If you can't split a component without resorting to weird coupling shenanigans, then don't split it.
I'm sorry, these people do not exist. People are going to eyeball and wing it, based on personal preference and their gut feeling. Dress that up with good-sounding arguments and maybe even some metrics and you have all you need to convince the stakeholders - who almost certainly don't understand the issue at a deeper level, nor do they care to.
It's also made more nuanced by the fact that people talk about different things when they mean "microservice". Anything can be split up into arbitrarily small services. This is attractive from a management standpoint, because you can assign engineers in a simple manner without too much crosstalk. Most likely, nobody will ever bother to check if you're spending 10x the engineering resources to do the job.
In essence, I can wholly recommend microservices to engineering managers who want to take full advantage of Parkinson's Law.
That's the whole industry pretty much.
If and when the time is right, breaking out some part of a monolith into a service is probably going to give some kind of net benefit. But breaking up a monolith into more than 10 services without having done a proper analysis of the downsides and benefits of doing so will result in wasted effort.
Monoliths aren't bad per se, and neither are services or even microservices. Knowing when and why you choose each, and what the drawbacks are in any given situation, is the reason you have CTOs and architects. Way too often the decisions about this level of granularity are taken at the wrong level, with all kinds of predictable results.
Asking these questions is a good start. An even better start would be to ask yourself whether you are the person best positioned to answer these questions, and if not what individual or group of people is.
Most companies (SMEs, < 500 employees, < 100 devs) do not need microservices, but some of them might use them to their advantage. Some problems you won't solve without them; others become needlessly complicated if you use them. In most cases either a well-architected monolith or a couple of services with a unified front end will work as well as or better than either extreme. Some environments lend themselves particularly well to microservices (Erlang, Elixir); others make them next to impossible to put together without including half of a new operating system.
Use common sense and err on the side of caution. A very good indication that you are using microservices wrong is when they start to call each other horizontally, effectively re-creating RPC or function calls across process boundaries.
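To illustrate the smell: a handler like the hypothetical one below, where each step is a synchronous HTTP call to a sibling service, is really just a chain of function calls with network hops and extra failure modes inserted in the middle (all service names and URLs are made up):

```typescript
// In the orders service:
async function getOrderSummary(orderId: string) {
  const order = await fetch(`http://orders-db-service/orders/${orderId}`).then(r => r.json());
  // Horizontal call #1: pricing has to answer before we can continue.
  const price = await fetch(`http://pricing-service/price/${order.sku}`).then(r => r.json());
  // Horizontal call #2: which may itself call the inventory service internally...
  const eta = await fetch(`http://shipping-service/eta/${order.sku}`).then(r => r.json());
  return { ...order, price, eta };
}
```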
Even something as simple as having 12 different repositories meant 12 different Bitbucket Pipelines YAML files, and there was no way for repos to share pipelines. So one change meant 12 changes.
It was good experience, and HAD to be done in that type of architecture, but keeping it all organized was a challenge.
Do you have any suggestions for testing (mostly service API testing)?
The problem is, the tech is hot because Google etc. do it, and they're using it to solve problems light years from what yours might be.
The results are absurdly engineered monstrosities. Simple blog sites with 10 different frameworks in them, running on some absurdly complex cloud infrastructure.
But we based this on evidence, we're testing everything to destruction, we're embracing tooling to help us manage everything (we could not do this without Terraform or something of similar capability)... but actually, our domain maps to microservices extremely neatly.
It's probably the first place I've ever worked where that's the case.
I like this article, because it draws attention to very real issues and problems with these architectures, and there is an overall danger and enormous risk in using an architecture "because it's good" or "because it's the way of the future" rather than the only correct reason to choose an architecture: because it's the right fit for your problem.
Oh also, because our thing is still a proof-of-concept, we may yet throw it all away and build something else. While initial results are promising, we haven't properly proved the concept yet.
But what a joy to work somewhere that lets us do this.
This. It strikes me that developing a clean monolith with well-separated concerns and then deciding to pull off chunks into microservices is infinitely smarter than the reverse: finding your microservices to be highly interdependent and trying to glue them back together. YAGNI should be an overarching principle when starting out a project (and possibly a business).
That said, I'm always wondering why we are conflating so many different things when talking about microservices: deployment and scalability, multi-repo and code structure, integration tests and distributed transactions, and so on. I mean: you can definitely build your application as a monolith with a microservices architecture. It's just separation of concerns all the way down and a proper abstraction for the communication channel. You don't need to embed your system architecture in your repos and your code. These are "framework" problems. Just use one that abstracts all the deployment details until it's time to push the code to prod and you have to split your code into different containers. For example, I'm now settled on Moleculer (https://moleculer.services/). It just works.
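For anyone curious, a Moleculer service is roughly in the shape of its quick-start example: defined in-process and invoked through the broker, so splitting it out into its own container or transport later doesn't change the call sites. (This is a from-memory sketch of the public API, not code from my project.)

```typescript
import { ServiceBroker, Context } from "moleculer";

const broker = new ServiceBroker();

// A service defined in-process; the broker decides how calls are transported.
broker.createService({
  name: "greeter",
  actions: {
    hello(ctx: Context<{ name: string }>) {
      return `Hello, ${ctx.params.name}!`;
    },
  },
});

async function main() {
  await broker.start();
  const res = await broker.call("greeter.hello", { name: "world" });
  console.log(res); // "Hello, world!"
  await broker.stop();
}

main().catch(console.error);
```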
Using microservices is complete overkill here. Lots of DevOps, and it's hard for new people to build the mental picture. Also, we share a lot of code between the repos; that shared code is in a separate repo. Splitting code off from one repo and putting it in another is something refactoring tools don't like much. The quality of the code base is what matters, and that is quite alright here.
I liked this line in the article:
_But you should always remember; a well designed codebase can always be switched to microservices, as and when you approach the threshold._
I will remember that when I design a new monolith: every "application" in that monolith should be able to stand on its own. Every PR will be reviewed like that.
I think microservices are a useful technique, but only when used properly at a certain scale. I wouldn't bother doing microservices when the team is small and the product isn't mature, because we still haven't figured out exactly what each service owns and may eventually have to duplicate some functionality.
With that being said, I think things like auth and push notifications absolutely should be their own services.