Most of the time, I've found a push to microservices within an organization to be due to some combination of:
1) Business is pressuring tech teams to deliver faster, and they cannot, so they blame current system (derogatory name: monolith) and present microservices as solution. Note, this is the same tired argument from years ago when people would refer to legacy systems/legacy code as the reason for not being able to deliver.
2) Inexperienced developers proposing microservices because they think it sounds much more fun than working on the system as it is currently designed.
3) Technical people trying to avoid addressing the lack of communication and leadership in the organization by implementing technical solutions. This is common in the case where tech teams end up trying to "do microservices" as a way to reduce merge conflicts or other such difficulties that are ultimately a problem of human interaction and lack of leadership. Technology does not solve these problems.
4) Inexperienced developers not understanding the immense costs of coordination/operations/administration that come along with a microservices architecture.
5) Some people read about microservices on the engineering blog of one of the major tech companies, and those people are unaware that such blogs are a recruiting tool of said company. Many (most?) of those posts are specifically designed to be interesting and present the company as doing groundbreaking stuff in order to increase inbound applicant demand and fill seats. Those posts should not be construed as architectural advice or _best practices_.
In the end, it's absolutely the case that a movement to microservices is something that should be evolutionary, and in direct response to technical requirements. For nearly every company out there, a horizontally-scaled monolith will be much simpler to maintain and extend than some web of services, each of which can be horizontally scaled on its own.
I also wrote https://adamdrake.com/enough-with-the-microservices.html as a way to communicate some of this, including some thoughts on when and how to structure a codebase (monolith) and when it might make sense to start moving towards microservices, etc. There are cases where it's reasonable (even advisable) to move towards microservices, but they are rare.
Microservices are a cult, but SOA is Amazon's cornerstone. Most services are the right size to fit a team (so, not micro) and implement separation of concerns.
Some Amazon teams have multiple services of varying sizes. What Amazon and similar companies get right is (generally) insisting on good engineering/business/regulatory reasons for splitting out services.
> Business is pressuring tech teams to deliver faster, and they cannot, so they blame current system
And they are very often right about it. Delivering monoliths can require such an amount of bureaucracy and needless coordination that it slows everything down. I've seen it.
> Inexperienced developers proposing microservices because they think it sounds much more fun than working on the system as it is currently designed.
It doesn't matter who proposes something; if it's a good idea, do it. And in my experience it is indeed more fun (in addition to other benefits).
> Technology does not solve these problems.
Using microservices is a matter of organisation; it has nothing to do with technology. It is a non-technical solution to a non-technical problem. Any effect it may have on the technology is secondary to the main goal.
> In the end, it's absolutely the case that a movement to microservices is something that should be evolutionary
Nothing you said before contradicts that.
> and in direct response to technical requirements
How people are organized is not a technical requirement.
> For nearly every company out there, a horizontally-scaled monolith will be much simpler to maintain and extend than some web of services,
This sounds sanctimonious and rife with the exact same inexperience and overly brittle purist attitudes you are criticizing.
It really is true that huge monolith legacy systems might prevent dev teams focused on product growth from even being capable of doing their jobs, let alone meeting aggressive deadlines.
It doesn’t always mean microservices or heavy re-architecture is the right choice, but sometimes it absolutely is.
The places where I’ve seen the most value in pivoting away from existing monoliths have often benefited a lot from microservices.
I was part of a group that split a huge tangled mess of search-engine and image-processing services in a monorepo into separate, smaller web services. By further separating them into distinct repos per project, we could migrate things to new versions and convert some legacy Java services into Python to take advantage of machine learning tools that fundamentally do not exist in JVM languages, all in more careful, isolated ways that monorepo tooling simply doesn’t support, along with lots of other things that would not have been possible if we had tried to steadily change portions while preserving their co-integration in a single large project whose attempts at modularization were simply bad.
Your language seems to betray the fact that you personally associate the entire concept of microservices with being intrinsically dogmatic.
Typically only dogmatic people feel that way, in my experience. But either way, there’s nothing inherently dogmatic about a microservices approach.
To be fair, here is a direct quote from the parent:
»In the end, it's absolutely the case that a movement to microservices is something that should be evolutionary, and in direct response to technical requirements.«
I would argue that what you did is exactly that. Perhaps with the caveat that it should have been done earlier.
I'm not reading the parent as arguing that one should stick with a monorepo/monolith until the end of time, but rather as providing a few thoughts around what might cause a push to apply microservices incorrectly.
Isn't the argument being made that you did things exactly right? That microservices are a great architecture to migrate to when you feel the need, but aren't a great way to start a project?
I don’t see how you get that at all. The comment starts out expressly criticizing when organizations consider migrating to microservices from existing monolith projects.
The comment was arguing the use of microservices is far more common than the need for microservices. And based on your description it sounds like you guys were one of the few that had a need for microservices.
> " The comment was arguing the use of microservices is far more common than the need for microservices."
In re-reading the parent comment several times now and taking some time to reflect on it, I find that I am not able to agree with this interpretation of it.
As I understand it, the parent comment is taking issue with any type of reaction to a monolith in the direction of switching to microservices as a tactic to get rid of the blockage and tech debt. The comment does allow that some cases may support the use of microservices, but this secondary comment is so at odds with the sanctimonious tone sardonically criticizing people who want to migrate to SOA from a monolith, that I just do not find that phrasing to contribute much to my understanding of the comment. It seems clear to me that the comment means to harshly denigrate the idea of wanting to switch to SOA as a solution strategy in those cases, and the "concession" that sometimes it might be the right thing to do is tacked on, not really related to everything else.
I accept that we might just agree to disagree on the interpretation, but I still feel comfortable that my original interpretation is the most consistent with the available text of the comment and the context of it.
This is a great list. I've been reflecting on this drive for microservices in early stage companies for the last four years, and you hit all the major points.
One additional one I'll add is the marketing objectives of containerization and infrastructure companies.
And a good time to resurface Martin Fowler's Monolith First:
People can see that in a one day hackathon, the same bunch of people can produce more stuff than they do in a year otherwise. Why? Are they lazy? Did they use better tools?
My niece Shelly added address book integration to her hobby app in an afternoon, while drunk. WhyTF are we 640 man hours deep into "identity architecture coordination" meetings?!! Just do what Shelly did!
Those things don't make total sense, even to the saltiest of developers. They know to expect it, but can't understand it. Neither can I, honestly. It's not surprising this gets so many people.
A lot of the hairy, abstract rabbit holes we climb into (whether organisational, like agile, or architectural, like microservices) are an attempt to solve the "100+100=6, wtf!" problem.
I guess that's the difference between proper Engineering and hacking something together quickly under self-inflicted pressure. Your niece's app might work on her setup but not on others, and may need an almost complete rewrite on an OS/API update. The properly engineered solution, on the other hand, might "just work" for years. I think the key is to realise how much engineering is needed on what occasion. I find it disturbing when a simple app has highly sophisticated error handling unit tests, just as I would find it disturbing to find 30kLOC code bases with no unit tests, completely random spaghetti architecture and no explicit error handling.
I guess it depends on the company. But most of the time it seems bloated bureaucracy is a symptom of dysfunctional work relationships. When people know what to expect from each other and stuff "just works", then there is no need for any such thing and indeed you probably won't find it I think
Especially as I'm sitting here learning ASP.NET Core Identity on Pluralsight for hours and thinking about all the "identity architecture coordination meetings" I'm anticipating having on this next project.
> My niece Shelly added address book integration to her hobby app in an afternoon, while drunk.
We shouldn't downplay this achievement. That is impressive!
I agree with you that reality is very confusing. I think much suffering is caused by not fully embracing this fact. I don't mean to be defeatist. On the contrary, this great confusion presents a vast landscape for potential improvements.
I would attribute it to _fragile ego_. The confidence that's displayed right after reading/skimming a blog post & taking it AS-IS is baffling (not the first time I've seen it)
Mostly agree in full. But I think it's worth noting that microservices - or any technology for that matter - cannot fix a dysfunctional organization that's not clear about its current and future business needs.
Does IT fail sometimes? Sure. But more often than not projects go sideways at the leadership / team (i.e., all stakeholders, not just IT) level. Blaming IT is a convenient narrative.
You make quite a lot of assumptions with 2 and 4. I was one of those inexperienced people - not as a developer, but with microservice architecture. As a technical lead I did not have visibility into the cost and organizational effort of a microservice architecture. I just started developing an architecture for our current project domain because we needed the flexibility of deploying these components separately.
But upon pitching the idea to an Organization Solution Architect VP, he quickly stopped me and showed me the cost of this effort - and he did not have to demonstrate much, because I had gone through similar challenges within my own team, and expanding that effort to the entire organization would have been a massive undertaking.
So he did not shoot down the idea; he just wanted to take it down a notch and start with compartments, not the entire organization.
I would just like to add that I consider there to be two types of microservices: one is developer focused, which is what most people are talking about when they say the word, but the other is operations focused, which is something sysadmins have finally been embracing.
When it comes to the sysadmin version, since you may be wondering, it mostly means decoupling entangled services into separate, less centralized bins, bringing more resiliency and quicker diagnosis timeframes when problems occur.
In my experience microservices have been prescribed as a sales bullet point instead of a software architecture decision.
Every time, it's resulted in insane bottlenecks at low traffic all over the place as services chatter away, or separately need to look at the same file data and so all request a copy, etc.
Any architecture has tradeoffs and it's poor form to pick one before you've even described what the software is for.
Yeah, I definitely agree with these points. And I think quite a few of them come down to one simple thing:
There's a huge disconnect between what many developers wish they were doing and what many developers are doing. They wish they were at Google/Facebook/Amazon/whatever working on some complex greenfield project that'll change the world; instead they're working on CRUD apps for corporate clients, agencies and businesses with far less technical needs.
So their obsession with using microservices and modern JavaScript frameworks and complicated build processes and what not for everything comes down to them trying to turn B into A, even if the actual solutions they need don't actually require any of that complexity.
I don't believe it is reasonable to portray microservices as the result of incompetence and blame-shifting.
Microservices are actually a very basic and fundamental principle of software engineering: separation of concerns. If your system is extensive enough that it covers multiple independent concerns, and your team is large and already organized into teams focused on each concern, then it makes technical and organizational sense to divide the project into independent services.
They're subject to a clear limit, though. The more micro you make your services, the less they resemble operational units and the more they resemble primitives from which your actual system is built, sort of like an inner-platform effect. And then you have to debug interactions between microservices, with all the overhead that entails.
Separation of concerns is important, but how much separation do you need? Separate classes, definitely. Separate libraries...often. Completely separate services on different VMs with an api between them...if that is really what your situation requires, then sure, but I wouldn't make it the default option.
The mistake is when people think a buzzword is the new best practice without doing real analysis.
"Is large" is the key phrase here. I am seeing, many times now, that teams build more services than they have members. This only pushes separation of concerns into the network layer when it could just as well be at the module or class level.
You can tightly couple services too, you know. I would say it is a good option to have but suffers a bit from too much popularity right now. People are using it for its own sake.
I find that, ironically, they are great for all of the things that lambda and google cloud functions make difficult, e.g. image processing, open cv, heavy calculation using third party libraries, etc. What we need is a lambda-like service that uses Docker files that run for a max of 540 seconds.
> Business is pressuring tech teams to deliver faster, and they cannot, so they blame current system (derogatory name: monolith)
They are not wrong though, monoliths cannot give you fast delivery. Fast delivery implies at least expressive dynamically typed languages with some resilience to bugs, which in turn requires limiting the scope of bugs and therefore decoupling and isolating everything as much as possible. This is very different architecture from monoliths. Microservices are a first step there, but of course not a substitute for lightweight isolated processes and supervision trees. Still, monoliths are definitely bad choices in every way possible if you can split them into isolated services.
I disagree. You can have fast delivery with monoliths - IME microservices provide better separation of concerns, one benefit of which can be faster delivery.
However your end user doesn’t interact with microservices, they interact with a product. “microservices” suggested as a delivery silver bullet tend to be ways managers try and mask the fact that they are trying to hire 9 women to make a baby in one month.
I agree that microservices is the new "must have" technology but actually it isn't a great deal different from a monolith. The monolith can have separation between services and still requires interfaces to work between them.
As others have said, microservices bring a lot of baggage that you might never have seen before (i.e. big learning curve) and the myth of isolated changes is just that, a myth. Unless it is some low level thing, you cannot change it without impacting other services and this is no different than a monolith.
Like the article yesterday about OOP, the same principles exist to write a good application whatever you use to do it.
It occurred to me a couple years ago when designing my deployment architecture that the majority of the complexity was just this. Instead of letting linkers/module systems do the work, I was doing all that stuff first by hand and then basically writing my own form of linking logic to solve a problem we created ourselves. Reminded me of when I read about the old ways people used to link code by hand...
And yes we were definitely cargo-culting...microservices were totally unnecessary for us. The only positive result is that it forced clean boundaries, but those could just as well have been forced by thoughtful architectural design anyway.
If it only forces clean boundaries then I agree, you shouldn’t be using microservices. The main benefits of microservices are things like fault isolation and better security, since you can run different components with different privileges.
This can sometimes be worthwhile in a corporate environment even if technically it makes things harder, because it solves a major political / communication problem.
However, for the expense of dealing with the complications of an additional network boundary to be worthwhile, some of the following must hold:
* You need an architect who has an overview of the whole system, creates abstractions that make sense, and puts the API boundaries in the correct place, avoiding the "tightly coupled microservices" antipattern.
* You need good devops people who can track problems that span service boundaries. This is the one area where the company can't skimp (i.e. no outsourcing of this position). Without these people you get an epic political clusterfuck where everybody ends up blaming "the other" team for their problems.
* The two services need to be built by two teams which, for whatever reason, cannot be relied upon to communicate or work effectively with one another (different orgs, different country, different company maybe).
* The two teams possibly use different programming languages.
I believe these reasons are probably the reason why it worked wonders for Martin Fowler. Then startups read his blog and decided that every team needed to build 15 microservices and the whole world went crazy (thanks Martin).
"Although the evidence is sparse, I feel that you shouldn't start with microservices unless you have reasonable experience of building a microservices system in the team."
Counterpoint: if network calls weren’t flaky and slow would it make a difference in deciding whether to separate modules via a physical machine boundary?
As people already said, there is little point in talking about hypotheticals, since those properties are inherent...
But there are tools that can distribute a program over a network, and let your functions run at any node with the right capabilities "just like" (the network allowing) if it was local.
This is another point that the microservices pushers miss. It's a solved problem, and can be done in a much better way than what they push around.
It helps demonstrate where additional investment and development might be warranted. If we have hit a physical limit in networking, then that is one thing - but if we are only hitting an artificial limit because our technology and infrastructure is not sufficiently advanced - then this problem is actually showing an opportunity.
As an architect you sometimes have to ignore constraints to understand whether the final picture you would assemble makes any sense. If the final picture makes sense, then you work backwards through the limitations to find out: are these really limitations, or are they opportunities to innovate?
That's my thought as to why you would imagine they are not a limitation: to aid brainstorming and innovation, and to identify opportunities for improvement or alternative solutions you would not have seen if you simply accepted the bottleneck as a given.
Isolated changes aren't a myth. FAANG and others leverage the isolation brought by service separation every day. For very large services, a monolith makes it difficult to test services independently.
I personally don't like the word "microservices" since it implies that services have to be micro. For the last few years I have worked on service oriented systems where the individual components are sometimes pretty big - one could say almost monolithic :).
Splitting a monolith into separate services exacts an operational price. Engineers should be honest in assessing whether it's worth it. Sometimes it is, sometimes it isn't.
Having worked in a payments company, where SLAs can be very demanding, we broke our monolith server into microservices and realized we had broken it into too many pieces when our SLAs broke (every microservice you add to your use case adds a small but fixed cost). We finally decided to convert some microservices into libraries to save milliseconds and bring down our 95th percentile, so I completely agree with the premise of the article: never start with microservices. Start with a monolith with enough flexibility and built-in abstractions that you can later replace an abstraction with a microservice.
people jump into this because of shit processes like agile which push for incremental and iterative development instead of thinking things through properly. usually it's just to have easy deliverables and easy reporting to upper management about the epic progress that is made. all the while product quality is degraded to a point where customers will start to feel it badly.
always the same dumb excuse is used that tech is changing so fast. but really, it's only changing so fast because people invent retarded processes which move too fast. you don't need to keep up with every innovation if you have a good quality product. you only need to keep up with this if you require all the latest buzzwords in your product because without them you are unable to sell your piece of junk. /endrant
What drives me crazy is that what you're saying is absolutely spot on, and I share the sentiment, but this kind of criticism is usually tone-policed into oblivion.
Sometimes it is a good idea to build something as microservices, but you just have taken the wrong approach, and therefore it is a pain in the *. So slicing it a different way might still be a microservice architecture but feel much better.
Recently I thought about setting up a Firefox Sync server. The first bump was when I learnt that the sync server has a dependency on the accounts server... But the full-featured accounts server, in turn, consists of a bunch of services of its own [1]:
- fxa-content-server
- fxa-profile-server
- fxa-auth-server
- fxa-oauth-server
- browserid-verifier
- fxa-auth-db-mysql
After seeing that I decided to tackle that project another day.
For Mozilla that architecture might be perfect, but for most people who just want to run a separate server for <10 people, that architecture is just a burden.
FWIW, as a developer on the Firefox Accounts team, I strongly endorse the sentiment of this article. We've occasionally found ourselves merging microservices back together because the abstraction boundaries we designed up-front weren't working out in practice...
I have a slightly different opinion. Start with a mostly-monolith, sure, that makes sense. But also start off with just one separate microservice for something important. It's important that you establish good patterns for integrating services into your codebase early on. It's really easy to write a monolith without any thought of external service abstraction, which makes it WAY harder to do down the road if you decide you need to.
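For what it's worth, here is a minimal sketch of the kind of seam being described, with invented names: the rest of the monolith depends only on an interface, and the day-one implementation runs in-process, so a remote implementation could slot in later without touching call sites.

```python
from typing import Protocol

# A sketch with invented names: the monolith's code depends on an interface,
# and the day-one implementation simply runs in-process.

class NotificationService(Protocol):
    def send(self, user_id: str, message: str) -> None: ...

class InProcessNotifier:
    """Day-one implementation: just calls mailer code living inside the monolith."""
    def send(self, user_id: str, message: str) -> None:
        print(f"email to {user_id}: {message}")  # stand-in for the real mailer

class SignupFlow:
    """Consumers hold the interface, never the concrete mailer."""
    def __init__(self, notifier: NotificationService):
        self.notifier = notifier

    def register(self, user_id: str) -> None:
        # ...create the account in the monolith's database...
        self.notifier.send(user_id, "Welcome aboard!")

SignupFlow(InProcessNotifier()).register("u123")
```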
I think you can be selective. We started with ~5 services all built fairly ad hoc, splitting on sensible boundaries with the goal of never having a "mega" service.
This means it has been reasonably easy over time to fully replace them on an individual basis with more mature systems without changing the API design.
Now we're 3 years in with ~40 services and the approach has served us very well.
Definitely agree you shouldn't start with a ton of services, but I think you should definitely start with more than one. The jump from monolith to service-oriented thinking is a huge one. But the jump from a few services to more is much easier.
> Definitely agree you shouldn't start with a ton of services, but I think you should definitely start with more than one. The jump from monolith to service-oriented thinking is a huge one.
I can't get my head around this.
Why should you start with more than one?
What is so different about "breaking your application into services" and "breaking your application into appropriate modules / classes"?
If those need to scale then you should have an interface that you can expand to a micro-service.
As another poster said "It just replaces internal calls between services of your monolith with flaky and slower network calls. "
I've been doing enterprise software and SOA too long. I don't develop programs, I develop 'systems.' So, I tend to see a service where a normal person would see a module.
I work in defense, where software systems tend to stay in service for a very long time... It is very, very hard to keep a monolith from turning into a big ball of mud after it's handed off to O&M. Very often a different company will be awarded the contract to maintain a system you developed (and it may change hands more than once), and they will strive to do the absolute minimum possible to keep the software functioning until it is retired (which is always at least 10 years longer than anyone planned).
What is different about breaking an application into services is that because services run as different processes:
* you have an immediate natural failure domain (the process) as well as resource isolation between services,
* services can be updated independently (something that is done many thousands of times every day at companies like Amazon or Google),
* a corollary of independent updates is that services can be tested independently and new code can be "canaried" by initially deploying only a single instance of a service.
The statement that service orientation "... just replaces internal calls between services of your monolith with flaky and slower network calls" is correct in that network calls are flaky and slower but incorrect in its assessment that you gain nothing from using services.
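As a concrete (and hypothetical) illustration of the canarying point above, assuming service instances are addressed by per-version base URLs, client-side traffic splitting can be as small as this sketch:

```python
import random

# Hypothetical URLs: route a small, fixed share of traffic to the newly
# deployed instance while the rest stays on the stable build.
STABLE_URLS = ["http://search-v1-a:8080", "http://search-v1-b:8080"]
CANARY_URLS = ["http://search-v2-canary:8080"]
CANARY_WEIGHT = 0.05  # ~5% of requests exercise the new code

def pick_backend() -> str:
    pool = CANARY_URLS if random.random() < CANARY_WEIGHT else STABLE_URLS
    return random.choice(pool)
```

In practice this weighting usually lives in the load balancer or service mesh rather than in application code, and the canary is promoted or rolled back based on its error rate and latency.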
I'll give you that you can deploy services separately, but the other advantages that you give could be accomplished with automated testing - that's kind of the point of unit testing: to test components independently.
The vast majority of people aren't working at Amazon or Google's scale despite developers seeming to think that they need to work the same way.
In theory, translating an interface-based monolith to a SOA should be straightforward - drop-in some replacement classes for your services and you're good to go.
In practice, it's not as simple as that. Serialization across service boundaries requires a bit of thought - invoking a method via a local call stack can accidentally cause a blowout on an SOA service buffer. Network timeouts suddenly become a thing. Latency might be an issue.
Of course, these aren't critical obstacles, and I agree that the architecture should look very similar, no matter whether it's a monolith or microservice. But designing one from the ground up would look a bit different.
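To make the serialization and timeout point concrete, here is a sketch (all names invented, using the `requests` library) of a remote proxy standing in for a local class behind the same interface; the method signature stays the same, but payload size, latency and timeouts are now part of every call.

```python
from dataclasses import dataclass
from typing import Protocol
import requests

# Invented interface and endpoint: the remote proxy keeps the local signature,
# but serialization, latency and timeouts now come along for the ride.

@dataclass
class Invoice:
    customer_id: str
    amount_cents: int

class InvoiceService(Protocol):
    def create(self, invoice: Invoice) -> str: ...

class LocalInvoiceService:
    """In-process implementation: a plain method call, no network involved."""
    def create(self, invoice: Invoice) -> str:
        return f"inv-{invoice.customer_id}"  # placeholder for the real logic

class RemoteInvoiceService:
    """Same interface, but every call now crosses a service boundary."""
    def __init__(self, base_url: str, timeout_s: float = 2.0):
        self.base_url = base_url
        self.timeout_s = timeout_s

    def create(self, invoice: Invoice) -> str:
        resp = requests.post(
            f"{self.base_url}/invoices",
            json={"customer_id": invoice.customer_id,
                  "amount_cents": invoice.amount_cents},
            timeout=self.timeout_s,  # network timeouts are suddenly a thing
        )
        resp.raise_for_status()
        return resp.json()["id"]
```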
I am in the middle of implementing a POC/MVP of a greenfield micro services architecture that I was assigned to do at work.
My background is more monolithic and some SOA, so I have had to adapt my thinking to try to make this work.
I am an open minded architect and always willing to explore what the good and bad takeaways are from a given approach.
I think that microservice architecture gives us a chance to think about what would happen if we thought of an ecosystem of applications fully decomposed into a fabric of services.
The first and hardest thing I have encountered so far, was trying to understand the right decomposition into ideal smaller units, something that is nearly impossible without understanding the requirements in full up front. I am not sure you can easily identify your service/domain contexts and boundaries (a la DDD) perfectly enough when you are doing agile development and the microservice architecture is intended to be used by many applications.
However, there is a caveat: if you build modules to be smaller, it is easier to reason about what each one does by itself. So that part actually fits in well with Agile.
Also, if you, for a minute, imagine that network / machine boundaries didn't have implications (latency, retries, etc.) and service calls were as reliable as in-process calls, and if you imagine that we had reliable distributed two-phase commit (it can be done, but all subsystems involved have to understand transactions and someone has to coordinate it)... I at least can start to see a picture that works.
I believe microservices are simply an old idea (build in a modular, small form) in a new light, and I think they are part of us trying to evolve our system development and architecture further.
Don’t look at microservices as either a panacea or a fad. Look at the problems they raise as opportunities for improvement, and then suddenly all of this might make sense as a scaled-up architecture that can start small and scale smoothly to big in the future.
I believe it’s all part of the same journey we all have been on, developing systems that go from local, to global and maybe someday, beyond.
To add, one other problem that still needs to be solved in this picture is data federation - and if you solve the network problem and IO limitations fall away, you might actually be able to do crazy things like make joins across distributed systems work, as well as distributed transactions. We already do this with some planet-scale databases, but we are early in this - and it might need to work in the application layer outside of databases too.
We are finishing up a 2-year migration out of a data center to AWS and a complete rewrite of a Frankenstein SOA (of sorts) to a couple of monoliths. I was basically in charge of choosing the tech selection (we wanted to minimize tech stacks until we could get a handle on it) and the “how.”
A monolith made perfect sense as there was nothing salvageable, and I mean nothing. And the traffic requirements were definable, growing predictably and not that large. Pretty bag for a medium-sized enterprise.
Picture a company that has no tests, little documentation, 4500-line stored procedures (2000 sprocs in total containing all the business logic), one data center and no DR, and they would deploy once every 8 days...in 2016! Oh, while making 300M a year with 600 employees.
We are weeks away from turning off the data centers, have great test coverage and CI/CD; we deploy hundreds of times a week, and the site is much faster. We were able to combine our front-end React tech to give 99% code reuse across desktop/mobile web and native mobile.
The company makes more money than ever and we have made huge conversion wins by getting our shit together and doing normal, smart product things, while redoing the culture, software and infrastructure.
I hated fighting with these “do-nothing” people that had read articles about SOA/uServices/message passing arch/etc. The worst are the ones that can’t actually do anything. They are usually the loudest.
In reality we use three different architectures, but the core business and logic is in one single backend DB and rails backend and it’s beautiful.
We have about 100 developers. We had a couple of issues with people stepping on each other at first, but with some structural changes to our app and some automated process (oh and letting 40ish people go while hiring new people) we solved it.
I can’t wait to burn the old servers to the ground. I’m leaving out so much detail. One day I’m going to write the whole story along with my two other partners who really spearheaded the change.
* front-end monolith talking to a backend over REST APIs. We have internal monolithic applications to help us get our job done. And we have a message passing system called Wormhole and a single uService. Simple...
Bottom line is that it's all a case by case basis. However I'll always warn against slicing microservices too thin. Each slice is a moving part outside of the machine and therefore brings in additional risk. I recently refactored/rewrote a monolithic project into a few fairly chunky services. Each one is sliced by a broad context. The only reason I did it was because the monolithic app required separate physical deployments per client due to data sharing restrictions and the service model (I won't call it micro) allows sharing of the data I'm able (and should) share between them. Perhaps we need a name for the model in between monolithic and "micro"-services. Maybe Macro Services?
Nope. Nothing new here. I remember SOA very well as the buzz word du jour back then. It sounds less cool than micro services though. At least we got our buzz words through the PR & Marketing departments in this decade.
IMHO, the monoliths vs microservices debate is akin to monorepos vs multi-repos: they are both strategies used to share work when your organization grows. Both can work well, depending on your tooling and organization.
But do not forget that those abstractions layers you add, while very useful (say, for release velocity), might also be a direct application of Conway's Law:
https://en.wikipedia.org/wiki/Conway%27s_law
Which means that refactoring some code might sometimes require refactoring your organization, so if you lack the ability to do that incrementally, you might converge to an ossified system that stops evolving.
I once worked with a team that wanted to use Microservices, because they felt like a Monolith approach for their webapp wouldn't support their user base long term.
They had 400 users, serving roughly 40 requests per second. Their database was small enough to fit into RAM on my cell phone.
--
I'm sure there are 100% valid reasons for billion dollar corporations to use microservices. But most of the uses of it that I've experienced personally were not really warranted for any technical reason, and were usually some combination of non-tech/non-product problems bleeding into the codebase. (Whether that's inexperience, mismanagement, communication issues, lack of leadership, political strife, tacit permission to silo oneself off, developer boredom, or whatever.)
The load per user can be widely different in different applications. We don't have a great measure for that; one could use a combination of requests per user per day and the sum of CPU-seconds used to serve them. The sum of CPU-seconds should include the periodic (e.g. cron-like, not directly request-answering) tasks too.
At one extreme, you have applications that need horizontal scaling from day one (Scientific computing). At the other, a monolith serving 10^6 users from a single app instance.
Yes, that is a reasonable response and I should have preempted that.
For context: the product at my day job does something likely at an equivalent level of complexity to Shopify. It's just another SaaS business that handles payments, invoicing, etc. If anything, Shopify is more complex.
On the last few teams I've worked with, "microservices" has come up multiple times, and in at least 2 cases parties indicated that this was something they wanted to learn and that it was part of the motivation. It wasn't part of the stated justification for microservices - "scaling" and "uptime" and "robust" were thrown around - but some of the private justification was "I want to learn this stuff". Not saying everyone has that motivation, but it's contributed to my lack of enthusiasm. That, and the relatively huge amount of cognitive overhead required for what usually turns out to be relatively little benefit.
Thanks for the clarification, but it doesn't negate my point. It supports the idea that "scaling" is a lousy motivation for my team to adopt microservices.
Why not? I mean, we don't use micro-services for what they are supposed to do. The application would work perfectly well as a monolith. We do it just because it lets us experiment with newer technologies in not-so-important services at a smaller scale, without undergoing major rework, since when the change is big everyone in the management layer understandably becomes conservative.
I worked at a company with this disease before. The system was an abomination of vastly different technologies over the years stitched together loosely.
Development was slowed substantially by having such a mess and the company couldn't move fast enough to compete so the startup died. Usually tech isn't the reason for a startup's death. In this case, it was.
Sure. That's one of the responsibilities of a team lead: to help team members work and gradually build up their CVs. I want my team to experiment with new stuff and learn while working. But I also want to limit the risk boundary. The whole reason why younger people are leaving dinosaur companies is that no one in the management layer lets juniors experiment and fail. At the end of the day, junior devs also want to improve and develop.
I'm pretty sure the reason "younger people" leave "dinosaur companies" is because they aren't happy. What causes that unhappiness is a wide spectrum. It could be poor team leads who let the young person's peers waste time while they carry the weight. It could be compensation at large companies is based on past performance rather than future potential growth. It could be because it sounds more impressive to a young person to say they worked at X new hotness tech company, etc.
Your job as team lead, however, is pretty clear. You lead a team to create value for your organization. Professional development is an obvious tool in that toolbox. Finding ways to limit or restrict exploratory development in order to reduce risk is another.
As a lead you have to care about retention as well. People leave jobs when they feel like they are professionally stagnating (or worse, they stay), because for a software career, stagnation is death and everyone knows it. And high turnover will wreck your ability to deliver.
So, you have to strike a balance between getting stuff done and taking care of your people in terms of professional development and growth.
It's ok, most organizations fail at it. The "90% of everything is crap" rule applies to managers as well.
In that case you are going to find that you quickly lose your best devs. I'm not saying that resume building should be even within the top 5 priorities, but you will want to at least keep it in the back of your mind. It's a balancing act.
Otherwise the solution is to crunch-time people into oblivion and quickly replace them when they burn out. Not exactly sustainable.
Your job is whatever you agreed to when accepting that job.
Some companies require managers to aid in the technical development of employees, some don't. Some provide a lot of latitude in how that's done, some don't.
I'm not sure why you think that striving only to achieve business objectives could be considered to be a moral duty. Are you perhaps thinking of fiduciary duty?
If a company decides that technical development of engineers is good for retaining engineers and you as a manager refuse to do that, then no moral argument is going to help you when you get dinged in your performance review.
edit: Upon rereading the thread, I suspect that we may agree more than we disagree. My comment was directed at asknthrow's comment and I wanted to make the point (which other posters have more eloquently made in the meantime) that if technical development is part of your job as manager, you don't have a choice in the matter and your job is not just "to lead the team in the most effective direction in order to fulfil business objectives" (to quote asknthrow).
For sure. To be clear, I mean that regardless of what your bosses say you have an obligation to the people you manage. Obligations do not only run upward. (This is something bad managers often do not understand.)
NOPE. First and foremost, you work for yourself. Employers come and go. If you are sacrificing your resume to meet your employer's "business objectives", you could end up with a dead worthless skillset.
A balance has to be achieved. Obviously we can't sit around all day rewriting simple things in our pet language of the week...but employers need to understand that a good developer will not let their resume atrophy.
The days of the twenty-five year stint followed by a gold watch and a pension are over...you simply cannot put your employer's needs ahead of your own anymore.
Microservices, test driven development, Agile, 4GL and so on. It's all the same thing: a technique that is applicable some of the time but not all of the time that ends up getting a bad rap because the people promoting them tend to come from the theoretical side of the street, and they see the subject matter as their new revenue stream. They will then promote it to be used even when it isn't applicable.
A web based service can be as messy or as clean as you want no matter whether the underlying architecture is a monolith or a bunch of microservices. I don't like the 'micro' in microservices to begin with; to break up a large and complex problem into multiple smaller problems that are each simple to solve is a core principle of programming. If you take that to an extreme you end up with services that do almost nothing and then you have a communications problem (or at least, you will have one in most environments you are likely to encounter). If you glue everything together in one giant hairball you don't have the comms overhead but you have a cognitive overhead in trying to understand it all.
Like with everything else, there is a happy medium: services that are easy to understand because they do not have horizontal ties to other parts of the larger whole, enough isolation to help you with debugging, but not so much isolation that you end up doing remote requests for data that should have been nearby.
Everything in moderation.
As an illustration of 'microservices' done well: I worked - the last time I had an honest job+ - as a programmer on a message switch for KVSA, a company that brokers shipping capacity. Super interesting job, even more interesting architecture. Right from day one (contrary to the article title!) it was decided the system was too complex to tackle as a monolith. The reliability demands and the latency requirements led to the base system being built on top of QNX, a soft real-time Unix-like operating system with a microkernel. Since in a microkernel environment message passing and service oriented architectures go hand-in-hand, the technique percolated through to the application level, which ended up being a series of queues and 'admins' (QNX parlance for a daemon or a service) handling the inputs from these queues and effecting transformations on those inputs, resulting in new outputs or side effects (such as a fax or a telex being sent). The system worked flawlessly, had a very high degree of redundancy built in, and it most likely would have never made it to production if it weren't designed like this from day #1. For that particular use case it was ideal.
+ in 1993, if you're wondering whether microservices are something new you have your answer.
>It's all the same thing: a technique that is applicable some of the time but not all of the time that ends up getting a bad rap because the people promoting them tend to come from the theoretical side of the street, and they see the subject matter as their new revenue stream.
Also because we've created an industry where everyone must stay up-to-date, so if anything gets traction, suddenly people start getting worried that those things are not on their resumes.
I grew out of microservices because it felt like I was doing the same boilerplate REST service over and over.
So I made a modular REST API service that could load plugins. The plugins can contain anything from simple endpoints to database schemas with SQLAlchemy. All this is loaded into the main app at runtime.
So the main app can handle authentication against LDAP for example while all the various deployed microservices can have their own roles.
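A rough sketch of that shape, assuming Flask and invented plugin module names: each plugin package exposes a Blueprint (and could also register its SQLAlchemy models), while the main app discovers and mounts them at startup and handles cross-cutting concerns such as authentication itself.

```python
import importlib
from flask import Blueprint, Flask

# A sketch; the plugin module names are invented, and each plugin is assumed
# to expose a Flask Blueprint attribute named `blueprint`.

PLUGINS = ["plugins.users", "plugins.reports"]  # could come from config instead

def create_app() -> Flask:
    app = Flask(__name__)

    @app.before_request
    def check_auth():
        # central place for shared concerns such as LDAP authentication
        pass

    for name in PLUGINS:
        module = importlib.import_module(name)   # load the plugin at runtime
        bp: Blueprint = module.blueprint          # the plugin's endpoints
        app.register_blueprint(bp, url_prefix=f"/{bp.name}")
    return app
```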
What I love about microservices is isolation and forcing you to do things well from the beginning. Monoliths tend to become horrible to maintain after a few years. By contrast, changing one small 50-line microservice is a lot less risky!
What I love about monoliths is coherence and forcing you to do things well from the beginning. Microservice meshes tend to become horrible to maintain after a few years when no one really measures which components will be impacted by a single change. By contrast, change one small 50-line function and your IDE will happily show you all the calls to that specific function.
So the consensus is to do things well from the start. IMHO both have their merits but strongly depend on the usecase. There is no one size fits all. A lot of thought has to go into both options to make them really effective. Otherwise it's just garbage in garbage out.
Depends how you do it. Having a sane communication model/interface between the services allows you to test it in two ways:
1) You test the internal structure with unit tests
2) You test changes to the API contract with E2E tests
Both are fairly straightforward once you set it up properly.
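As an illustration of those two layers (endpoint, URL and field names are made up): the unit test never touches the network, while the contract test pins down only the parts of the response that consumers depend on.

```python
import requests

# 1) Unit test: exercise internal logic directly, no network involved.
def total_cents(items: list[dict]) -> int:
    return sum(i["price_cents"] * i["qty"] for i in items)

def test_total_cents():
    assert total_cents([{"price_cents": 250, "qty": 2}]) == 500

# 2) Contract/E2E test: hit the running service and pin down the API shape.
def test_order_contract():
    resp = requests.post(
        "http://orders.test.local/orders",           # assumed test environment
        json={"items": [{"sku": "ABC", "qty": 2}]},
        timeout=5,
    )
    assert resp.status_code == 201
    body = resp.json()
    assert {"id", "total_cents"} <= body.keys()      # only what consumers rely on
```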
Same goes for the monolith: assuming you do your work following something like the SOLID principles, it is easy to test piece by piece. The only thing here is that the interface contracts are internal to the codebase vs external to other codebases. (You might argue the last point actually forces you to write a more coherent API contract in both situations.)
Though keep in mind that in the microservice world you suddenly have a dozen problems you would never face in the monolith world. You are forced to implement good orchestration and deployment, segregate data between services, and other stuff. It's all fun, but it takes time to dive into that and build the right process. For sure it is nice and sound once it's built, but you might simply lose track of your primary product if you're a solo project dev :)
And then another downstream microservice fails when you deploy your change into a running environment, because you lack the integration testing capabilities of a monolith.
It's true you achieve higher isolation with microservices, but you also lose points in other areas. And you can get lost in your haystack of microservices just as easily as in your typical monolith ;)
You can just as easily create an unmaintainable spaghetti mess of microservices. Isolation is defined by good architecture, microservice or monolith is just how it gets executed.
A simple syntax error in a monolith means the whole monolith crashes. A bug anywhere in the monolith could mean side effects anywhere in the rest of the code.
With microservices you guarantee isolation, so a bug in one microservice won't have an impact on the rest of the API.
You're blurring the lines a bit between the true definition of microservices (eg: no shared schema, data structures, etc) and just a sensible deployment architecture. If their runtime state is independent then there is nothing stopping you taking your monolithic code base and deploying it N times, for as many independent end points as you want. That way service A crashing is never going to bring down service B, but they can still share code and data structures.
The real question is, do they have shared runtime state? If they don't, you can do the above, but if they do, moving to microservices won't make that go away, it might even make it harder to deal with.
A syntax error should never make it to production if you have even half-decent infrastructure... in a monolith or a microservice.
And if your microservice gets to production with a syntax error, do you really think the whole ecosystem is somehow more healthy? Not unless you wrote a ton of horrible defensive code with retries and HTTP error/timeout catching every time this service is called...
Typically you make an abstraction for all those timeouts, retries and errors. It can look just like a normal function call and have optional arguments for timeouts.
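A minimal sketch of such a wrapper (function and URL names are hypothetical), using `requests`: the call site reads like an ordinary function call, while the timeout, retry and backoff policy lives in one place.

```python
import time
import requests

def call_service(url: str, payload: dict, timeout_s: float = 2.0,
                 retries: int = 3, backoff_s: float = 0.2) -> dict:
    """Make a remote call look like a plain function call."""
    last_err = None
    for attempt in range(retries):
        try:
            resp = requests.post(url, json=payload, timeout=timeout_s)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException as err:
            last_err = err
            time.sleep(backoff_s * (2 ** attempt))  # simple exponential backoff
    raise RuntimeError(f"service call failed after {retries} attempts") from last_err

# A call site then reads like any local call:
# user = call_service("http://users.internal/lookup", {"id": 42})
```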
I don't like the phrase "microservice" because it now has a certain amount of baggage, similar to "service oriented architecture" which it was meant to be somewhat of a counter-point to. Whereas the issue with SOA was its association with maligned/feared technologies (SOAP, WSDLs, CORBA), the issue with microservices is the implied granularity. SOA was a good idea, but people (particularly in start-ups) don't want to say they're doing SOA because it has old-school, corporate connotations. On the other hand, the granularity of microservices seems too extreme for what most products would actually need but the concept is associated with more modern technologies which are attractive to developers, like gRPC or Avro or Kubernetes (or even something as simple as HTTP). So I would say the most pragmatic approach for a greenfield web product (rather than a corporate IT integration) is have a fairly standard core (probably a REST API, a server-side MVC framework, or a GraphQL backend if you're nasty) and factor out services that make sense to the team (maybe a service that handles push notifications, or a service that does image processing, or a service that is the secret sauce of your product) because they need to scale independently or handle async/computational tasks that should have dedicated resources or they pull from a data source that is orthogonal to the rest of the system. You need to strike a balance between "micro" and "service".
There is this idea that you either have microservices or you have a monolith, while its really more of a gradient. I guess what I'm advocating for is "modern service-oriented architecture" or "chunky services" vs "microservices"; reasonably sized, well-considered services that use modern technologies for inter-service communication.
There are trade-offs. They are using a fairly complex CI, while with micro-services you release each service individually. It's hard to turn a monolith architecture into an architecture with micro-services. So it all depends on what works best for you. The idea with micro-services is that with a much smaller service, development is faster and cheaper; you can, for example, rewrite the entire service, use different software stacks (the best tool for the job), etc. Whereas a total rewrite is not feasible in a monolith.
Why do you believe development would be faster and cheaper? Because the individual services are smaller?
Come on. The system is as large and as complex as will be necessary. Separating components with network calls doesn't make them any less interdependent.
There's a saying that if it takes one man one year to build a wall, you hire 20,000 workers and the wall gets built in a couple of seconds. That's the monolith thinking. Now if we make several small teams, and each team builds a very tiny wall anywhere they like, each team can iterate faster and quickly rebuild that tiny wall if needed - that's the thinking of micro-services. The latter plan is of course not so smart if the wall is intended to stop people from entering, so not all systems are a perfect match for micro-services.
It's hard to turn a monolith architecture into an architecture with micro-services.
It’s only hard if your monolith wasn’t designed properly. In C# parlance...
1. From day one create your monolith with different domain specific projects where the functionality is exposed as an interface.
2. All consumers of each service use a dependency injection framework to map the interface to the service - not http service, in process domain service/module/namespace.
3. When you need to separate out a module to a separate service, it’s easy to split that specific module into a separate service by putting an http front end on it. If you integrate Swagger into your API, there are tools to automatically create proxy classes for your client.
4. Your proxy client can implement the same interface from step 1. Just change your DI appropriately.
5. If you have modules that are shared between the monolith and new microservice, create a package and a private package repo.
The newish argument for microservices is that they enable compositionality, so wouldn't that same hypothetical apply? I.e. some 10xer is short on time and glues a bunch of microservices together and now you have the same problem but worse because there's no IDE allowing you to trace the code?
I agree if you have no idea what you're doing - which seems to be the case for many developers starting new projects stuck in the paradox of choice.
But like anything else, if you understand the toolset, ecosystem, and have the experience, it can take far less time than esoteric documentation and conversation would have you believe.
I always begin with this pattern, but it's because I've acquired so much experience and know-how with it that it's a quick upstart. However, this didn't come with any ease. Most want to just dive into really myopic course work or tutorials that are "just examples" and "shouldn't be used in production." The WORK to understand it is getting each nuance under your belt. It's just like any other skill set - it takes patience and deliberate effort. Documentation spelunking and trial / error experiments.
That being said, under the fire of a manager, timeline of capital, or just the raw impatience inherent to humans we wind up falling back to what's safe, what has plentiful easy-to-learn patterns, and listen to all the other folk who get 50% through, stop, and then just spin up terribly organized monoliths.
This article and articles like it presume some magical workforce you can hire that will transform or organically evolve your application into a microservice ecosystem. But the fact is that people who know how to execute a microservice architecture, much less who are willing to work on your particular budding application, are few and far between. So if you don't know how to build microservices, it seems really hard to start with microservices by refactoring a live running application while you are also simultaneously trying to keep that application up, functioning and developing features in order to monetize it. Wow, does that sound difficult.
So it makes more sense to me to build out from microservices - they can be embedded in one JVM or whatever - so you know how to organically evolve and can focus on monetizing your application.
You really can get all the benefits of popular architectures without going off the rails!
Don't start out with a DI framework, use poor man's DI (ie 'passing stuff in').
Don't start out with microservices, use (poor man's) DI, ban all public static/global variables, and segregate code into separate processes with public 'interfaces' (but don't use actual interfaces until you actually need them! 1 interface per class is an antipattern!). These can all run in an async process pool you build, which can monitor bottlenecks when you get to that point. You can then (years later) easily break an internal service into a microservice when the trade off makes sense.
I'm obviously static typing/oop focused, but there's a version of this which applies to any paradigm.
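For the "poor man's DI" point, a tiny sketch with made-up names: no container and no globals, just constructors that take their collaborators, and a single main() that knows how everything is wired, so a piece can later be swapped for a remote implementation if it ever earns it.

```python
# All names are invented: no DI container, no globals, just explicit wiring.

class OrderRepo:
    def __init__(self, conn_string: str):
        self.conn_string = conn_string

    def save(self, order: dict) -> None:
        print(f"saving {order} via {self.conn_string}")  # stand-in for persistence

class OrderService:
    def __init__(self, repo: OrderRepo):   # dependency passed in, not looked up
        self.repo = repo

    def place(self, order: dict) -> None:
        self.repo.save(order)

def main() -> None:
    # the only spot that knows how everything is wired together
    service = OrderService(OrderRepo("postgresql://localhost/shop"))
    service.place({"sku": "ABC", "qty": 1})

if __name__ == "__main__":
    main()
```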
You should always start with domain specific microservices. Those “microservices” shouldn’t be out of process services necessarily. They can just as easily be in process “domain services” in a monolith that are only accessed via an interface where each module is treated as a black box.
I mostly agree with the article except the part where the author plays down the architectural considerations during project inception.
> Right now it’s just me working on the project, and you can be sure I just cracked open my code editor and started writing code on day 1.
It's definitely fun to start with writing code, but in my view it may be more efficient to pause for a moment, understand the problem and find the right solution for it. Then start writing PoC code, which can be refactored at a later stage. That's just pragmatism - lots of code will go to the bin anyway, but at least we give ourselves a chance to have a longer and happier run with it before that happens.
When making generic rules, one should provide a lot of context. I have found that a fairly large rearchitecture is needed to move from a monolithic product to something with microservices; it is not something you gradually grow into.
And the technical reasons to move will be mostly for fault tolerance and resiliency (you don’t want your whole service to go down because a small widget failed somewhere). Of course this does not come for free.
Basically it’s not that one is better than the other for all cases. This is a case where people should consider many things including non-technical aspects before making sweeping statements or decisions.
Can't you have both, in a way? A monolith that's glued together by an "interface language" (e.g. GraphQL), and that can later be broken up by "ejecting" pieces, i.e. removing the glue code?
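Something like that can work. Here's a rough sketch using graphql-java (the schema, the module classes, and the wiring are all hypothetical): the modules stay in one process and GraphQL is just the glue, so "ejecting" a module later means serving its slice of the schema from a separate service instead of from these resolvers.

```java
import graphql.GraphQL;
import graphql.schema.GraphQLSchema;
import graphql.schema.idl.RuntimeWiring;
import graphql.schema.idl.SchemaGenerator;
import graphql.schema.idl.SchemaParser;
import graphql.schema.idl.TypeDefinitionRegistry;

import java.util.Map;

public class GlueSketch {
    // Two in-process "modules" hiding behind the schema (hypothetical stand-ins).
    static class UserModule {
        Map<String, Object> find(String id) { return Map.of("id", id, "name", "Ada"); }
    }
    static class OrderModule {
        Map<String, Object> find(String id) { return Map.of("id", id, "total", 9.99); }
    }

    public static void main(String[] args) {
        String sdl = """
            type Query { user(id: ID!): User  order(id: ID!): Order }
            type User  { id: ID!  name: String }
            type Order { id: ID!  total: Float }
            """;

        UserModule users = new UserModule();
        OrderModule orders = new OrderModule();

        // The "glue": GraphQL wiring that delegates to whichever module owns the data.
        TypeDefinitionRegistry types = new SchemaParser().parse(sdl);
        RuntimeWiring wiring = RuntimeWiring.newRuntimeWiring()
            .type("Query", b -> b
                .dataFetcher("user",  env -> users.find(env.getArgument("id")))
                .dataFetcher("order", env -> orders.find(env.getArgument("id"))))
            .build();
        GraphQLSchema schema = new SchemaGenerator().makeExecutableSchema(types, wiring);
        GraphQL graphql = GraphQL.newGraphQL(schema).build();

        System.out.println(graphql.execute("{ user(id: \"1\") { name } }").getData());
    }
}
```

If the order module is later "ejected" into its own service, only the data fetchers (the glue) change, e.g. to call that service over the network; the queries clients send stay the same.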
This is an interesting read, and thanks for sharing. While keeping things monolithic is an efficient, low-worry, and simple approach to architecting a product to maturity, what would you advise if it will take as much work or more to break it down into microservices:
1. To continue anyways or
2. Find a mid-point in the life cycle to break things down or
3. Have a deep thought about the future of the project at the beginning (then decide)?
I agree that writing a lot of code is the way to write better code, but the part about "absolute and total intent to replace almost everything you write with better code once you start experiencing real problems first hand" strikes me as not possible in every environment.
If you are writing your own monolith from a basement and are a single developer, sure. Once you have founded your company and made your millions you can decide what, where, and when to change in the code. However, for the vast majority of people who do professional development it is just not plausible to suggest that you create every project as a throwaway.
Because in corporate development, once a project works (even a low percentage of the time or with major problems) it can and often will continue its life forever. Greenfield development has a different process in most places than maintenance / sustaining, and most people will find that making any reasonable structural changes to a legacy / monolith / inherited code base will take years of mostly political arguments because management will be unable or unwilling to recognize the writing on the wall.
Analysis paralysis is obviously a problem as well that exists on the extreme end of the other side. However, I believe actually prototyping and testing early in the cycle is the best of both worlds: you get both the ability to respond to problems early in the cycle because you're exercising the code already, and the process will not cripple you from making those changes.
I agree that writing a lot of code is the cure, but please, for the love of all that is programming, stop insisting that every early prototype makes it into production with its awful duct tape and bubble gum patches intact.
Break the problem down early, learn some of the finicky bits of the technologies you've chosen, and be pragmatic...but insisting on taking your first (often terrible) crack at the problem directly into production where you'll be stuck on it for possibly a decade is pretty bad advice in most environments I have developed in my professional career.
It's a recipe that'll often get you stuck troubleshooting irritating design-induced problems for years to come or hopping to a different company.
There is a middle ground between no design and spending years on whiteboards and blogs before writing a single line of code...that middle ground is what needs to be mined instead of constantly taking an extremist stance.
But hey, this is corporate development...so the loudest voices and most extreme opinions always seem to win out.
1. Sharing models - the models can be moved out to another repository or a NuGet package, but guess what happens when you have to modify them? Inevitably, devs duplicate models.
2. Debugging across five different code bases - have fun changing all the environment variables to point to your local every time, or running five different applications at the same time for local development.
3. Docker and Kubernetes add a LOT of overhead.
4. Multiple front-end apps combined into one "coherent" site always leads to routing problems...and token management problems.
5. Web Components cause bloat by pulling in web component scripts, and each component needs to fit the style of the whole site. Since the shadow DOM is isolated, each component pulls in those styles again, which is slow. Again, debugging and checking in web component code is a pain.
6. Finally, siloing is inevitable.
Imo, this doesn't make sense at all for a smaller web app.
But Docker adding overhead? Everywhere I introduced Docker to devs, productivity went up, not down, once a good way of working was presented to them. No more ten devs on different versions of the same database engine; no more rogue Gmail accounts for 'testing purposes' once you show them MailHog; updating the backend service became a 'git pull' and 'docker-compose up' for the frontend devs, instead of (best case) killing their Vagrant VM and reinstalling it or (worst case) following a 3-page installation/configuration document on a fresh VM; and the list goes on. Sure, there is some overhead involved: people need to learn a new tool and get a bit more of a feel for how software is deployed, but from an infra point of view that's a good thing.
Kubernetes? Yes, that adds a ton of overhead, certainly initially. Few really need it, but if you move from a monolithic app to a more microservice-based architecture for scalability reasons, something like k8s is a godsend. What I do notice, however, is that once teams are accustomed to a workflow involving it after building a large application, they actually enjoy it and start using it for smaller ones too. Architecture-wise it's easy to go overboard with the microservices, splitting things up simply because they're 'cleaner', but that's something you should resist.
But as you say, for smaller web apps, microservices make no sense...
It's a good question. Going to let others answer. My tl;dr opinion is: not really, but code duplication at any time, and especially across repos, can lead to hard-to-troubleshoot errors.