Fast forward to six months or a year later, the monolith is still around, features are piling up, 20 microservices have been released, and no one has a flipping clue what does what, what to work on or who to blame. The person who originally sold the microservice concept has left the company for greener pastures ("I architected and deployed microservices at my last job!"), and everyone else is floating their resumes under the crushing weight of staying the course.
Proceed with caution.
As the big monolith got too unwieldy, we'd refactor major pieces out into another service. Over time, we'd keep adding pieces of functionality into the new service or, rarely, create another service.
The idea wasn’t to proliferate a million services to monitor but rather to use it as an opportunity to stand up a new Citadel service that encapsulated some related functionality.
It’s worked well but it requires planning and discipline.
At another level of scale, some functionality got rewritten a second time into yet another citadel.
What do you mean by citadel in these contexts? I've never encountered that terminology before.
Been a distracting couple of days.
This has been my experience too. We got rid of monolithic VMs running generic Linux systems (that were well understood and easy to reason about and fix by the entire team) and replaced them with hundreds of Lambda functions written in JavaScript. The complexity, cost and vendor lock-in are insane. The tech world has become irrational and emotion-driven and needs an intervention.
Instead we opted for "macroservices". It's still services, they're still a lot smaller units than our monoliths, but they focus on business level "units of work".
And yet, I will contradict myself to say that actually, microservices have the potential to be that original vision of object-oriented programming from the 1960s. The fundamental idea is very cheap computational nodes that only interact by passing messages around, on the theory that a recursive design inspired by cells and biology will be much more successful than traditional programming, where you try to separate data structures from the procedures that operate on them. If your microservice has setters and getters, you're missing the point of this original vision of OOP. You could certainly build a system with 40,000 of these live, but probably you would not have 40,000 different chunks of source code behind them.
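(A minimal Python sketch of that message-passing style, purely illustrative and not from the comment above; the "node" owns its state and is only driven by messages, no getters or setters. All names are invented.)

```python
import queue
import threading

class CounterActor:
    """A tiny 'computational node': private state, driven only by messages."""
    def __init__(self):
        self._mailbox = queue.Queue()
        self._count = 0
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message, reply_to=None):
        # The only public operation: drop a message in the mailbox.
        self._mailbox.put((message, reply_to))

    def _run(self):
        while True:
            message, reply_to = self._mailbox.get()
            if message == "increment":
                self._count += 1
            elif message == "report" and reply_to is not None:
                # State never leaks via a getter; the actor sends it onward.
                reply_to.put(self._count)

# Interact with the actor purely by passing messages.
counter = CounterActor()
counter.send("increment")
counter.send("increment")
replies = queue.Queue()
counter.send("report", reply_to=replies)
print(replies.get())  # -> 2
```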
Lovely description of the actor model in erlang/elixir and akka :)
We're "replacing" an old system that has lived on a mainframe for the past 60 years. In a strict microservice scenario we would indeed need 40000+ microservices to replace our core business.
The thing about (our) mainframe integration is that it is essentially made up of microservices, only they're small chunks of COBOL being called like pearls on a string, each producing output for the next in the chain.
Sounds like the UNIX way, to be honest.
SOA has ESBs. But both are built around bounded contexts.
Calling them "micro" was a bit of a mistake.
If you require transactional consistency across multiple microservices, they are too small.
If you don't require transactional consistency across microservices, that's OK; they can then live in separate databases and messaging can be used.
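A common way to get "separate databases plus messaging" without a cross-service transaction is an outbox-style write: commit the business row and the event in one local transaction, then let a relay publish the event. A rough sketch, assuming Python's built-in sqlite3; the table, event, and broker are invented stand-ins:

```python
import json
import sqlite3

# Hypothetical "orders" service with its own database. Other services get
# the news via messaging instead of joining a distributed transaction.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT);
    CREATE TABLE outbox (id INTEGER PRIMARY KEY, payload TEXT, published INTEGER DEFAULT 0);
""")

def place_order(item):
    # One LOCAL transaction: the order and its outbox event commit together.
    with db:
        cur = db.execute("INSERT INTO orders (item) VALUES (?)", (item,))
        event = {"type": "OrderPlaced", "order_id": cur.lastrowid, "item": item}
        db.execute("INSERT INTO outbox (payload) VALUES (?)", (json.dumps(event),))

def publish_pending(publish):
    # Relay step: hand unpublished events to the broker, then mark them sent.
    pending = list(db.execute("SELECT id, payload FROM outbox WHERE published = 0"))
    for row_id, payload in pending:
        publish(json.loads(payload))
        db.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
    db.commit()

place_order("keyboard")
publish_pending(lambda event: print("publishing", event))
```

A real setup would use a proper broker and handle at-least-once delivery on the consumer side, but the point stands: consistency inside a service, messages between services.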
The complexity added by all the services for deploy, bug tracing, and dear god, security auditing, made for a living hellscape that crippled our ability to ship software for some time. Not to mention the bifurcation of resources to keep the monolith happening for customers using it while the microservice mess is being created.
There is almost a zealot-like brainwashing that has happened to folks. I made case after case to our engineering team: "This doesn't solve our business use cases. We're prematurely scaling. We're unable to move code into production efficiently. This is very hard to understand for outsiders joining our team." --- All fell on deaf ears since I "didn't get it." When I put a hard stop on adding any more microservices without a concrete case for why we needed that scale, I was called "toxic."
In the end we fired the whole team since they wouldn't buy into destroying their microservice dream world for something practical and put everything back in the monolith except for one service.
Our amazon bill is 1/8th what it was. Security auditing/upkeep is 1/100th what it once was. Deploys are done without fanfare more than 1x/week. Our average response time is down from 500ms to less than 100ms since we aren't hop-scotching services all over God's green earth.
Note: This isn't a tiny project. 200k users, 700-1000 requests/minute during peak times, lots of data moving through this.
I once had to work on a giant C# codebase with a frontend written in a wild mix of Angular, React (multiple versions) and Knockout and found it was pretty good because there was clear separation and code standards were very high.
2 in our case.
One customer I work with managed to create 10+ container-based services in order to do a database lookup, render a notification template and send the notification to one of three notification services. Because it's Java-based, there's now also a pretty large memory overhead, as each service needs its own memory allocation for the JVM. On the plus side, they have become very aware that this is too many microservices and are refactoring and combining a few of the services.
Having too few large services seems much easier to work with and fix than having too many small ones.
I'm still glad we let it evolve that way instead of starting out separately.
Every binary in your /usr/bin/ is a microservice. Just type `watch date` and enjoy two microservices running, no need for containers/kubernetes :)
Microservices are about organization, not about requests being sent on wires throughout some abomination of a cloud infrastructure. Developing a UserService class and having the audacity to just directly inject it into your application for usage is probably one of the most rebellious things you could do in 2020. Extra jail time if you also decide to use SQLite where appropriate.
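For what it's worth, the "rebellious" version looks roughly like this; a minimal sketch using Python's built-in sqlite3, with everything beyond the UserService name invented:

```python
import sqlite3

class UserService:
    """A plain in-process 'service': a class you inject, not a network hop."""
    def __init__(self, conn):
        self.conn = conn

    def create_user(self, name):
        with self.conn:  # commit on success, roll back on error
            cur = self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        return cur.lastrowid

    def find_user(self, user_id):
        row = self.conn.execute(
            "SELECT id, name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return {"id": row[0], "name": row[1]} if row else None

# The audacious wiring: inject the dependency directly, SQLite and all.
conn = sqlite3.connect("app.db")
conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
users = UserService(conn)
print(users.find_user(users.create_user("ada")))
```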
I wish those were the only indicators of bad code, but illegible aesthetics, over-engineered complexity, under-engineered fragile modules, etc. are to be found everywhere.
Sometimes I’d rather go farming
And then you noticed that scaling an Amazon RDS database to a larger instance requires downtime...
(Expressions mapping nicely to lines, ymmv).
You know what, http3 gonna be sick!
When you're all alone and feeling insane,
you know you lack the sweet sweet 2.0 Webscale.
But seems they just get angry instead.
Why oh why don't people just see,
the true meaning of rhyme, truth and the DDD.
I’ve definitely had enough internet for a while.
Although both words have the same last two letters, "le", the third-to-last letters are different. This produces a different syllable when pronounced.
Recently, however, I had the chance to work with a quite extensive codebase that powers a monolith for a fintech company. The code is written in Scala and extensively uses Akka-streams to neatly separate concerns. In my opinion this approach is the sweet spot as 1) the devops burden is low as you only have one binary to run / deploy, 2) the shapes of the various Akka-streams subgraphs are statically type-checked (unless you opt out by doing stupid things), 3) it makes it much easier to reason about the data flow, and 4) testability is really high as you don't have to mock services but only the upstream subgraphs.
The downsides are that 1) the learning curve is very steep at the beginning as Akka in general is very complex to use effectively, and 2) squeezing the maximum performance can be hard as you don't have the ability to horizontally scale only some microservices.
I fell in love with the approach and I'm migrating some personal projects to it.
What kind of process do you use to on-board new engineers to the point that they can make good design decisions within your architecture?
Microservices trade higher devops complexity for lower software application complexity. A really terrible deal IMO.
I've also talked to junior engineers who want to make every function call a pubsub message.
I've heard principals from Amazon promote a model where one service is responsible for one entity.
What I've decided is that the services in your company should follow Conway's law. Most of the problems with a monolith come when multiple teams with differing release cycles and requirements are making changes in a shared codebase and they are having trouble keeping their tree green. You should generally have one to a few services per team. Scoping a service to a team ensures that people can have true ownership.
For SREs microservices are harder, but they give SREs the control plane they need to do a good job. If communication happens between services rather than function calls, it's easier to instrument all services in a common way and build dashboards. It's simpler to spin up different instances connected to different datasources.
I also think this applies much more broadly than just microservices vs monoliths. I recently moved ~40 repositories into just a few. What I've found is that anything that releases together (by teams and timeframe) should stay together. This helps ease modification of related components in an agile way, simplifies tagging components, and simplifies the CI workflow (no multi-project pipelines).
Anything that breaks with this principle should have a concrete reason for it. If you need to combine the results of several teams into one large release, it may be easier to develop tooling for handling it all in one repository rather than developing tooling for handling many repositories. That's really the monorepo tradeoff.
Similarly, there are concrete reasons for breaking a service into smaller parts. Perhaps you want to horizontally scale a part of the service. Perhaps you need a part of the service to have a different lifecycle. But you're paying with increased deployment complexity, so you'd better get something worthwhile in return.
Obviously it's possible to overdo it. Generally it seems that splitting out services as appropriate is more intelligent than just sitting down with the thought "we're going to build a microservice architecture." Goes back to the idea that gets bandied around a lot that you should start with something as simple as possible, and if you get to the scale where you need to rewrite, that's a good sign for your business.
Never mind that everyone building micro services just goes “fuck transactions and eventual consistency, I’ll go with maybe/probably my data gets corrupted over time” whoop.
Bounded contexts DO NOT need network partitions to be enforced, BTW. For example, I'm pretty sure Google has all their source code in a single repo (or at least a LOT of it); how do they, with a million developers, stop people from intertwining everything? My guess is code reviews, hiring good people and tooling.
EDIT: sorry to the person who liked it I've rewritten this comment for clarity, and removed lots of words...
As others pointed out, it works for companies who operate and scale engineering teams. Good luck maintaining complex applications across tens to hundreds of developers.
For very large companies doing mergers etc things are always going to be in flux in ways that the startup world tends to ignore.
I feel the problem is a software problem that should be solved by better development tools/languages rather than throwing up hands and pushing the problem into the operations domain.
It could increase or decrease software complexity. It could also increase or decrease devops complexity.
I never reveal my true thoughts because I don’t think he would understand the heresy of the unbeliever.
Never tell anyone to their face that you don’t believe their god is real.
The position of the monolith camp is "you should have one thing". Well, that's obviously wrong if you're doing anything even slightly complex.
The position of the microservice people is "you should have more than one thing", but it gets pretty fuzzy after that. It's so poorly defined it's not useful.
How about having enough things such that all your codebases remain at a size where you don't dread digging into even the one that your most prolifically incompetent coworker has gone to town on? Enough things that when not-very-critical things fail, it doesn't matter very much.
But only that many things. If you need to update more than one thing when you want to add a simple feature, if small (to medium) changes propagate across multiple codebases, well, ya done messed up.
If you're one of the people believing monoliths are The Way, you're making a bizarre bet, because there's N potential pieces you can have to create a complex system, and you're saying the most optimal is N == 1. What are the odds of that? Sometimes, maybe. But mostly N will be like 7 or something. Occasionally 1000. Occasionally 2. But usually 7. Or something.
This seems really obvious to me.
"Pieces" is doing some heavy lifting here. You're assuming that isolated parts of a system need to be seperately developed and deployed systems, which absolutely doesn't need to be true. Seperate parts of a system can be modules, namespaces, libraries, or any number of different solutions to decouple code and create domain contexts and boundaries.
I've never met anyone who prefers to use monoliths that would also say "just let everything call everything else, you don't need any structure". That doesn't necessarily mean that the only acceptable boundary is an HTTP interface.
It's very much like the case with unit tests.
What is a unit? How small is micro? These two questions on their own are subject of debates of religious proportions.
How to split them has never really been a problem; you tend to develop an intuition, and the monolith kind of splits itself.
Eg, currently we have user profiles (user id > the users names, email etc) in one microservice, relationships between users (user id > other user ids) in another.
Each has its own datastore, deployment, scaling etc. It works great and while there is a small overhead cost to pay for splitting things it's vastly preferable to the monolith we used to have, which contained lots of things that had very little to do with each other, took ages to test and deploy, if it went down it was game over etc etc etc.
Did that ... create business value for your company?
> Eg, currently we have user profiles (user id > the users names, email etc) in one microservice, relationships between users (user id > other user ids) in another.
This sounds like a parody of microservices.
* our enterprise db was bursting at the seams containing Literally Everything. Now, every part of the split-up monolith has its own self-contained data store tailored to what is appropriate for that particular thing. (some use MariaDB, others Redis etc etc)
* developing, building, testing and deploying took ages. Eg if I only needed to capture some new detail about a business partner user (eg their mfa preference, app vs sms) I would still have to deal with the unwieldy monolith. Now, I can do it in the dedicated business partner user service, which is much easier and faster.
* the whole monolith, including business partner facing operations, could go down because of issues to do with completely unrelated, non critical things like eg internal staff vacation hours.
I could go on.
As for the different services I described, if most of the callers did need both pieces of data it would have made sense to combine them into a single service. But, overwhelmingly, callers are interested in one piece of data or the other, and the load profile, tolerance for staleness in caching etc etc for each of the two services are vastly, vastly different. This is why we chose to split the two into different services.
The few callers that do need to obtain both pieces of data just make concurrent calls to both and then zip them into a single result.
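Not the poster's actual code, but the caller-side "zip" they describe looks roughly like this; the two service calls are mocked with sleeps, and all names are invented:

```python
import asyncio

async def fetch_profile(user_id):
    # Stand-in for an HTTP call to the user-profile service.
    await asyncio.sleep(0.05)
    return {"id": user_id, "name": "Ada", "email": "ada@example.com"}

async def fetch_relationships(user_id):
    # Stand-in for an HTTP call to the relationships service.
    await asyncio.sleep(0.05)
    return {"id": user_id, "follows": [7, 42]}

async def get_user_view(user_id):
    # Concurrent calls to both services, zipped into one result.
    profile, relationships = await asyncio.gather(
        fetch_profile(user_id), fetch_relationships(user_id)
    )
    return {**profile, "follows": relationships["follows"]}

print(asyncio.run(get_user_view(1)))
```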
Why do you consider an enterprise DB "bursting at the seams" to be a bad thing? Isn't that what enterprise DBs are built for? Seems like you traded having everything in one large database to having everything scattered in different databases. You probably sacrificed some referential integrity in the process.
> * developing, building, testing and deploying took ages. Eg if I only needed to capture some new detail about a business partner user (eg their mfa preference, app vs sms) I would still have to deal with the unwieldy monolith. Now, I can do it in the dedicated business partner user service, which is much easier and faster.
You traded a clean codebase with a solid toolchain for, probably, a template repository that you hope your users use, or everyone reinventing some kind of linting/testing/deployment toolchain for every microservice.
> * the whole monolith, including business partner facing operations, could go down because of issues to do with completely unrelated, non critical things like eg internal staff vacation hours.
This could apply to any software. Sure, a monolith can have a large blast radius, but I can guarantee one of your microservices is critical path and would cause the same outage if it goes offline.
> The few callers that do need to obtain both pieces of data just make concurrent calls to both and then zip them into a single result.
Almost like a database join?
It's obvious nothing will convince you, but I maintain that eg internal vacation hours should not live in the same codebase as partner facing business critical things.
There might be security issues but good DBs offer security grants that are granular enough to deal with this.
Data sovereignty might be an issue forcing you to split, that depends on the domain and application requirements.
But in general two things being unrelated isn’t a reason to split the DB and splitting the DB isn’t a reason to split into services.
I’m not for or against microservices. I’m definitely for good reasoning though!
You didn't. They're just trying to force the issue for some reason by straw manning your decision.
When the monolith goes down, it's hundreds of thousands, sometimes millions of dollars of revenue lost both during and as a result of an outage.
You really, really don't want a problem with eg internal staff vacation hours to do that.
It sounds like you're describing a monolith that actually contained multiple completely independent applications - which I don't think anyone would disagree with being a good case for splitting.
In most cases I've seen, the decision to split into microservices usually involves a fairly high amount of dependency between different services to achieve a common business goal (hence the concerns about things like referential integrity)
> Did that ... create business value for your company?
Not OP, but the big value of splitting into microservices is isolation.
In production, this isolation offers a limited blast radius in the case of an errant service. Also, independent scaling. Business value: improved reliability.
In code, isolation lets development teams have a smaller and more focused domain / set of concerns to reason about (vs. the entirety of the monolith). Business value: Increased dev velocity.
To add to that, when you have hundreds of services running around and something goes wrong, it ends up being a lot harder to track down exactly what's happening. So when you do get that critical error, oftentimes the downtime is worsened.
As for dev velocity, I find the claims of the microservice gospel a little bit exaggerated. Your layers of nested services all talk to each other, and any of them could be a point of failure. This isn't really all that different from calling another function in your monolithic app - you've just distributed that function call across a network boundary. You still need to know the callee's API, and you'll still spend a decent amount of time trying to understand the ways that the callee might fail. But you've also created a huge amount of additional developer work whenever you need to do something that spans the boundaries of existing services.
I think microservices certainly have their advantages but a lot of the simplistic claims made by their biggest proponents only hold up prima facie.
It's a huge difference because if some core critical service starts causing problems it's almost certainly because the last binary push was bad, and you roll it back. You only have to roll back that particular service and everything starts behaving correctly again. Moreover, you probably detected the problem in the first place when the rollout of that service began by replacing a single instance of the updated service with the new binary. Monitoring picks up a spike in errors/latency/database-load/whatever and the push is stopped and rolled back.
Monoliths have inventive ways to address this problem without having to roll the entire binary back, like pushing patches or using feature flags, but few would argue that the microservice approach to handling bad pushes isn't superior.
> To me that sounds like poorly written software regardless of architecture. I can't imagine a scenario in any of the recent codebases I've worked on (microservices and monoliths both) where errors in what sounds like an internal CRUD tool would cause an entire production application to crash. I find it even harder to imagine if the application has a halfway decent test suite.
Easy enough with a sufficiently large codebase in C or C++. Somebody's parser encounters an input that was supposed to never happen and now it's off clobbering the memory of who-knows-what with garbage.
Also, I'm not sure what kind of internal CRUD tools you're writing, but "malicious input" doesn't really seem likely to come from your coworkers.
It also lets you choose which parts to pay closer attention to - the microservice that's depended on by everything gets the extra operational attention.
Why, in this example, would only 1 instance of the monolith go down but all instances of the crucial microservice go down?
Imagine what a piece of crap the Linux kernel would be if it was developed with this mindset. :-)
EDIT: The context is that there's constant code splitting and code moving being done inside Linux.
In my experience, monoliths are simpler and often faster to ship a working v1.0. When that matters (which seems like it would be “quite often”), they are a short-term winner.
You have to survive the short-term in order to face the problem of the long-term.
I just imagined a different way. Monoliths can do transactionally consistent stuff.
This is how you can, e.g., split a bank transaction service from a bank balance-telling service, with different requirements for scaling, latency, SLAs, deployment schedules, etc. You can deploy a change to the balance-telling service and then roll it back if it exhibits a problem, all without touching transaction processing in any way.
These abilities are worth something if you're big enough. But this is not a starting point for a new side project. You start with a monolith.
I think that is the HN syndrome; people start projects like they will be larger than facebook even though the project will probably never launch and if it does, get more than 10 users. Yet it runs all services aws has to offer and has 100 microservices and the little company(‘s investors) are paying through the nose in both dev time and hosting while a raw php script on a $1/mo vps would’ve been sufficient to validate the idea and get to ramen profitability and (far) beyond. Like someone said in another thread; focus on your market and acquisition channels first, build great stuff (much) later.
An architecture that fits well for a small company is inadequate for a large company, and vice versa.
Instead of a balance of a checking account, think about the karma balance of a reddit post, where this approach more emphatically belongs.
So can microservices, both internal to the service (via simple transactions) and between services (via distributed transactions, e.g., 2PC), though a good service design minimizes the need for the latter and is mostly guided by consistency boundaries as to where to draw service boundaries.
(It's true that there are naive microservices architectures pursued that are essentially normalized relational designs with a one-service-per-table rule, but that's just bad—and usually cargo-cult—design.)
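For anyone unfamiliar with the 2PC mentioned above, a toy sketch of the idea (collect prepare votes, then commit everywhere or abort everywhere); purely illustrative, with none of the durability or failure handling a real coordinator needs:

```python
class Participant:
    """A resource manager (e.g. one service's database) in the protocol."""
    def __init__(self, name):
        self.name = name
        self.staged = None
        self.committed = []

    def prepare(self, work):
        # Phase 1: validate and stage the work, then vote yes/no.
        if work is None:
            return False
        self.staged = work
        return True

    def commit(self):
        # Phase 2a: make the staged work permanent.
        self.committed.append(self.staged)
        self.staged = None

    def abort(self):
        # Phase 2b: throw the staged work away.
        self.staged = None

def two_phase_commit(participants, work):
    votes = [p.prepare(work) for p in participants]
    if all(votes):
        for p in participants:
            p.commit()
        return "committed"
    for p in participants:
        p.abort()
    return "aborted"

orders, payments = Participant("orders-db"), Participant("payments-db")
print(two_phase_commit([orders, payments], {"debit": 100}))  # -> committed
```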
I suppose in the end it depends on if you want to trade the hard problem of managing a monolith for possibly a much harder problem.
I find that for these discussions, the definition of a microservice is too nebulous. Personally I think that if you have "state" that crosses service boundaries (ie. a fault in either service causes the loss of that aggregate state) you have instead built a distributed monolith.
Eg, in our case, there's simply no reason why we should have a single monolith service that has both our b2b partners user profiles and, I don't know, internal staff vacation hours?
They have nothing to do with each other, have vastly different load profiles, tolerance for failure etc etc.
In fact, I think it's quite common.
What do you do when you need to query data across these boundaries?
OTOH you can often export the data in different, more appropriate ways to make your joins more efficient.
Many proponents of microservices though request that you split your software into small independent services even when they manage the same business process, and this is where the complications come.
I don't know the career histories of these kinds of people (I'm sure it's varied), but I just can't imagine them working in very large public-facing dynamic sites that update multiple times a day with monoliths, thinking "this sure can't be improved! We have reached the peak of computing!"
I've seen microservices go really badly of course - I worked in a place where the devs insisted that two microservices need to go out "at the same time," as if such a concept existed, because they depended on each other. At the end of me working there, there were around 40 microservices, all Java so they needed at least 2GB of memory each, some 8GB, and at least 3 replicas for high availability. Cost a small fortune in servers for what really could have been like, 5 microservices, written in Go or Node and run on a handful of normal servers.
But microservices, like Kubernetes, are not _hype_, they are not a flash in the pan and they're here to stay because they are a good idea conceptually, even if they often aren't executed very well.
Do you run DR tests? I bet $TEXAS you will have orphaned relationships as your data stores will be restored from different points in time.
On the other hand, a 3 person team can much more productively contribute to a monolith. From my experience in the industry so far, taking into account the current quality of tooling, I would say it starts to make sense for teams above the 10 person mark to own their own service.
I have interviewed at a few places over the last year and not one interviewer has given me a hundred percent guarantee that we will never have to allow access to the backing store (database) that isn't through our service.
Of course, microservices will fail in such environments. That's not a fault of microservices. That's a defect in management.
> Each has its own datastore
How do you maintain referential integrity between datastores?
For example, when a user is deleted, how do you update the relationships between users in a concurrency-safe way?
This article has meaningful advice but I’m not sure that it will be often applied.
This strikes me as the crux here, along with this line from the article:
> There were three main reasons for the initial success of microservices as an architectural pattern for software: [...] 3) an excuse for having no architecture.
Microservices are a little bit too much of a "just so" story to me. It's a cozy non-answer to the hard problems of system design.
It's like spreading out components with larger copper traces and separating them. If your circuit is wrong, no amount of things you can do on the PCB level will fix it. Fix the schematic (circuit) first.
One, for microservices created by the companies themselves, those basically grow out of their org chart. It is well known that organizations ship their org chart, and this is no exception. Person A wants to become a manager to make more money and ascend the societal hierarchy. Person A's manager wants to grow their fiefdom and have more people under them (which is the only true measure of one's worth as a manager, as far as longer-term career is concerned), so person A proposes to their manager that they use "best practices" and carve off a "microservice". Their manager happily obliges, because they don't know anything about distributed systems and consistency. Person A gets promoted. Now person B wants to become a manager to make more money and have their fiefdom... The end result is you have a nightmarish maze of microservices where a single simple monolithic binary would do a better job at one quarter the dev cost. You also have a very deep and branchy org chart, which is the preferred state for the management, since it lets them justify (and increase) their pay.
Two, for cloud microservices, they offer a simple way to create recurring, extremely sticky revenue.
That's not to say that microservices are useless - they are useful sometimes. It's just that "nobody ever got fired for moving to AWS", and when the incentive structure does not encourage more robust engineering and/or cost savings, your money won't be spent wisely, because nobody gives a shit about that.
As with any people problem, the only way to push back against this is by making the desired state the lowest-energy state. This can be done several different ways, none of which have anything to do with engineering arguments, at least not if you want it to be effective against such very human things as greed and desire for social status.
But what is actually happening in some companies is just so far beyond this. There are developers I've talked to at companies where the micro-services outnumber the developers by 5:1 or more. That is insane, and I imagine it did start with the kind of empire building you are talking about.
Yes. I'm old enough to see nuance. I do not assert that these are the only causes. I only assert that they are the primary causes most of the time.
As a side note, there's something deeply wrong if CI takes 30 minutes. This usually indicates that all tests are re-run every time, which is something you can easily avoid by e.g. using a modern build system such as Bazel, which will only re-run the affected subset of tests when something changes, because it is able to track all changes to the transitive closure, including data.
Also 80 people on a team sounds like a nightmare. I don't think I've ever seen a monolithic team this large in my 25 years in the industry, some of which was spent at Microsoft, of all places. The best size is that of a large family - 5-7 engineers. That way communication overhead does not dominate, and you can still do very sophisticated and substantial things. Beyond this magic number the productivity growth is usually negative.
Some random thoughts:
* What build toolchains are suited to these arbitrary DAG arrangements? Bazel? Perhaps I missed a reference in the article, but I'd be interested in the author's take on this, and of course any thoughts from the community here.
* The "testing monolith" is a pattern that I've used in a less well-named fashion, and it's great for cases where lots of code-services need to execute a business process that might span weeks or months; building a test rig to mock time across an ensemble of microservices sounds like an interesting challenge; mocking time inside a single process running the combined logic of all the services is much more palatable. (This isn't really possible if you use multiple languages though, so it only gets you so far).
* DDD bounded contexts as service boundary -- this is a good starting point, especially if you take a loose definition of "service". Under the DDD definition of "Service" you can actually have multiple deployables running; for example a typical Django/Rails monolith will have a DB (SQL) and an async worker (Celery/Sidekiq via Redis/RMQ) and perhaps a cache (Redis) so it's really a bunch of different deployables, even if we refer to it as a "monolith". Likewise with smaller services. If you think of the Service as being a constellation of processes with an external API, then you can start splitting out parts into separate deployables without the outside world caring, say to scale a particular workload independently of the rest of the Service logic. This is kind of the direction that Uber ended up moving in with their "Domain Oriented Architecture" (https://eng.uber.com/microservice-architecture/). This is actually how Django monoliths already work; you use the same codebase to specify your sync API workers, and your async Celery tasks, and you'll deploy them as separate deployables from the same repo.
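A tiny sketch of that "one codebase, several deployables" shape, assuming Celery with a Redis broker; the module and function names are hypothetical. The same repo defines both the synchronous web handler and the task that runs on the separate worker deployable.

```python
# tasks.py -- shared by both deployables
from celery import Celery

app = Celery("shop", broker="redis://localhost:6379/0")

@app.task
def send_receipt(order_id):
    # Executes on the async worker deployable ("celery -A tasks worker").
    print(f"emailing receipt for order {order_id}")

# views.py -- the synchronous web deployable, same codebase and repo
def checkout_view(order_id):
    # ... persist the order synchronously ...
    send_receipt.delay(order_id)  # enqueue work for the worker deployable
    return {"status": "ok", "order_id": order_id}
```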
> * What build toolchains are suited to these arbitrary DAG arrangements?
In general any CI/CD tool that allows for easy composition of jobs/pipelines, where versioned artifacts are a first-class citizen and can be the outputs and the inputs of jobs. Preferably one where the graph is emergent, i.e. just a consequence of declaring which "jobs" (taken loosely) depend on which artifacts.
I've had good experiences with GoCD https://gocd.org and Concourse https://concourse-ci.org, the latter being a fresher, younger take on these concepts. I haven't surveyed the landscape recently so there may be other tools that work well.
1. "I have this application, but I need to process batch jobs / run heavy workflows / do ML etc..." -- definitely split into multiple services.
2. "I have this application, but this one feature I want to add is implemented really well as a library/app that's not easy to integrate" -- same spirit as 1 -- also a good candidate for SOA
3. "I have an application for end users, and various internal tools that differ in their quality/security/privacy requirements" -- probably a good reason to build those separately (but not a necessity)
4. "I have an application and it doesn't scale well with my growth" -- this is one of the most common reason behind implementing microservice architecture, but I think it requires a lot more thought than just 'yup, let's obviously do horizontally scalable microservices'
5' "Our monolith is slow to compile, run and test" -- careful with this one: it's probably easier in the majority of cases to fix your tests, builds, and runtime speeds, than start splitting up your (probably already very complex) application into services
6. "Our org is split into multiple functional groups and we want to move independently" -- not a good reason for SOA/microservices: you're increasing eng complexity and reducing org collaboration and risking a lot of work overlap.
7. "Microservices have been successful in [Big Company]" -- not a good reason. The benefits/trade-offs are usually fairly unique to each organization and require careful "inward" thinking instead of following trends
8. "One microservice per business function is a good pattern and we're going to mandate it" -- terrible idea. Top-down eng culture mandates prevent better solutions from even being considered. Don't do this at your company.
I enjoyed reading it, several times.
Most of the time, things start off for good reason and represent good practices. We just butcher the hell out of them when we start selling them to one another. (Same thing happens in reverse; once enough people destroy any goodness left in a buzzword, people oversell how bad the idea is when they start pitching their plan to get rid of it)
To be clear, I don't disagree with the premise that lots of microservices implementations are a clusterfuck of monumental proportions. It's just that ranting and raving about how devs and dev shops screw stuff up isn't exactly news to anybody. That's the natural state of affairs no matter what your flavor-of-the-week.
We get close to something useful near the end, "...focus on the right criteria for splitting a service instead of on its size, and apply those criteria more thoughtfully..."
The rest of this looks like a rehash of generally-accepted architectural principles, most of which were misapplied and resulted in us getting here in the first place. I'm not going to line-by-line critique this. There are far too many points to counter and I can't imagine the discussion keeping any sort of reasonable cohesion (hardy har har) if it spreads out that wide. Oddly enough, I find this discussion of why monoliths might be a preferable default state too monolithic to split up into reasonable chunks to analyze. Much irony there.
We got to microservices (or I might rather say we _returned_ to microservices) by following good code organization principles. Instead of starting with the resulting implementation (such as the DAGs and various models in this essay) and trying to argue first principles, it's better to start with first principles and then come up with criteria for evaluating various results.
I feel I would be neglectful if I didn't add this: if your code is right, whether you're deploying as a microservice or a monolith shouldn't matter. That's a deployment decision. If it's not an easy deployment decision, if you can't change it by flipping a bit somewhere, then either your first principles are off somewhere or you made a mistake in implementing them. The way you code should all be about solving a real problem for your users. How you chunk your code and where those chunks go are hardly ever one of those problems.
I vote the next flavor of the week paradigm should be centered around this. Devs and dev shops seem to mostly only be able to clusterfuck all the things eventually. What we need is software that is Developer Proof.
I'm going to write the Developer Proof Manifesto.
I'm telling you, it's all Conway's Law. We literally just don't want to think about the design in a complex way, so we make tiny little apps and then hand-wave the complexity away. I've watched software architects get red in the face when you ask them how they're managing dependencies and testing for 100s of interdependent services changing all the time, because they literally don't want to stop and figure it out. Microservices are just a giant cop-out so somebody can push some shit into production without thinking of the 80% maintenance cost.
If one series LLC gets ensnared in a legal dispute, the others can continue operating as usual.
You can probably guess where I'm going with this. The idea is a tech company can use this new structure to put each microservice into its own LLC and basically operate as its own company that communicates with others through a rich API and formal company communications.
What's useful about this is you can use it to get around GDPR and other 21st-century issues. Microservice LLCs could basically launder data among themselves, buying and selling it, and using creative Hollywood-style accounting. When a service gets sued for some privacy violation and threatened with a fine, it could shut down and go out of business and be replaced by a new LLC that rises mysteriously from nowhere. It then becomes increasingly expensive to pursue litigation. Every time you get close to levying a fine or getting justice, the target evaporates and is replaced by entirely new companies with a whole new corporate structure.
If they would transition to this setup, there's nothing for regulators to point at for antitrust enforcement.
So to me, microservices made it hard to put coupling between parts of the system, so the system overall was easier to maintain, with less tech debt.
Though I think microservices didn't actually make it hard to put coupling between parts of the system.
It's the same old story of how everyone is doing Agile wrong. The cargo-cult implementations fail to deliver on the promises because they don't actually follow the principles.
"Microservices" often means little more than "not the monolith". It's a pretty low bar. The loosely-coupled property is what is important. If that doesn't hold, then you've almost certainly made things worse. If it holds then you've almost certainly created a path towards making things better.
As a corporate contractor who had to use CORBA back in the iInterface days of Win95, just the mention of it spins me into a semi-PTSD fit.
Please...REST is a solid, simple, and powerful transport method and let's maybe do without CORBA moving forward.
> REST is a solid, simple, and powerful transport method and let's maybe do without CORBA moving forward.
Chill. :-) The world agrees with you.
It's clear that the article is heavily shaped by the outdated Java way of building software, where everything is injected via DI and AOP, so even a small change in a core component is a big deal.
No it's not. As long as a microservice does what's expected and passes its unit tests, I don't care, even if I change the language or db or network layer.
Report 1: Milk is amazing for you.
Report 2: We were wrong, milk is terrible for you
Report 3: Milk may have some benefits
Report n: Whew, milk is good within bounds
Repeat for chocolate, microservices, meditation, religion, wine, nuclear power
He called it: Thesis, Antithesis and Synthesis.
Essentially the organisation wants to hire more engineers because that makes it all go faster, right? But then you can't have all those engineers in one team, so you create lots of small teams. This means you'll get lots of small systems. You then want to make them "autonomous" so they all get their own repo and CI/CD. And then before you know it you're in microservice "nirvana".
There is also that org where one guy read a blog post about Netflix once and then...
Aside from PDF generation (which is always a mess), I can't see why anyone would think a more granular approach would be better for basically anything that's not Amazon or eBay.
Nobody anywhere says microservices have to be single-function-sized services.
Can't remember seeing that in practice, other than many libraries combined into a "monolith". Anyone have other examples?
Is there a type-safe way of communicating across microservices without duplicating verification logic, writing extra layers for the protocol overhead and conceding to more cognitive load during programming?
This approach only works well if most of your services are using the same language (and sometimes framework).
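When the services do share a language, one usual answer is to publish the message types, and their validation, as a shared library that both sides depend on. A hedged Python sketch with invented names:

```python
# shared_schema.py -- a package both services depend on, so request/response
# types and their validation are written exactly once.
import dataclasses
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class CreateUserRequest:
    name: str
    email: str

    def __post_init__(self):
        if "@" not in self.email:
            raise ValueError("invalid email")

@dataclass(frozen=True)
class CreateUserResponse:
    user_id: int
    name: str

# Caller (service A): build the typed request, serialize, send over the wire.
request = CreateUserRequest(name="Ada", email="ada@example.com")
payload = json.dumps(dataclasses.asdict(request))

# Callee (service B): deserialize into the same shared type, so the same
# validation runs again without being re-implemented.
received = CreateUserRequest(**json.loads(payload))
response = CreateUserResponse(user_id=1, name=received.name)
print(response)
```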
> There is no substitute to the effortful application of cognitive power to a problem.