Microservices – architecture nihilism in minimalism's clothes (vlfig.me)
306 points by zdw on Nov 2, 2020 | 204 comments



In my opinion, microservices are all the rage because they're an easily digestible way to do rewrites. Everyone hates their legacy monolith written in Java, .NET, Ruby, Python, or PHP, and wants to rewrite it in whatever flavor of the month it is. They get buy-in by saying it'll be an incremental rewrite using microservices.

Fast forward six months or a year later: the monolith is still around, features are piling up, 20 microservices have been released, and no one has a flipping clue what does what, what to work on or who to blame. The person who originally sold the microservice concept has left the company for greener pastures ("I architected and deployed microservices at my last job!"), and everyone else is floating their resumes under the crushing weight of staying the course.

Proceed with caution.


I actually worked in an org that took the Tick/Tock model and applied it to rewrites.

As a big monolith got too unwieldy, we’d refactor major pieces out into another service. Over time, we’d keep adding pieces of functionality to the new service or, rarely, create another service.

The idea wasn’t to proliferate a million services to monitor but rather to use it as an opportunity to stand up a new Citadel service that encapsulated some related functionality.

It’s worked well but it requires planning and discipline.

At another level of scale, some functionality got rewritten a second time into yet another citadel.


> a new Citadel service

> yet another citadel

What do you mean by citadel in these contexts? I've never encountered that terminology before.



Yes exactly. Sorry, didn’t realize I’d used it without providing context.

Been a distracting couple of days.


"and no one has a flipping clue what does what"

This has been my experience too. We got rid of monolithic VMs running generic Linux systems (that were well understood and easy to reason about and fix by the entire team) and replaced them with hundreds of Lambda functions written in JavaScript. The complexity, cost and vendor lock-in are insane. The tech world has become irrational and emotion-driven and needs an intervention.


Alan Kay has been calling The Tech World a pop culture for a long time.


We looked very hard at microservices some 5-6 years ago and estimated we'd need ~40,000 of them to replace our monoliths. At a scale like that, who'd keep track of what does what?

Instead we opted for "macroservices". It's still services, they're still a lot smaller units than our monoliths, but they focus on business level "units of work".


That is more or less the approach I would recommend for modern microservices. I would advise someone to read about domain-driven design and to pay very careful attention to the notion of bounded contexts, the idea of a namespace for business-level words. If you partition your microservices at the bounded-context level, and you vehemently resist the idea of a remote procedure call ("under no circumstances are you giving me orders; you may be telling me about a state change that has happened to you, but don't you dare tell me what to do and when"), those two principles can guide a decent architecture. Have a message broker. Do "event storming" with non-technical users of the system to get a better model of the domain. Stuff like that.
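
To make the "tell me what happened, don't tell me what to do" distinction concrete, here is a minimal Python sketch; the event, the contexts and the in-memory broker are all made up, standing in for whatever real broker and bounded contexts you'd actually have:

    from dataclasses import dataclass
    from collections import defaultdict

    # A domain event: a statement of fact about something that already happened.
    @dataclass(frozen=True)
    class OrderPlaced:
        order_id: str
        customer_id: str
        total_cents: int

    class InMemoryBroker:
        """Stand-in for a real message broker (Kafka, RabbitMQ, ...)."""
        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, event_type, handler):
            self._subscribers[event_type].append(handler)

        def publish(self, event):
            for handler in self._subscribers[type(event)]:
                handler(event)

    broker = InMemoryBroker()

    # The billing context reacts to facts; the ordering context never calls it directly.
    def billing_on_order_placed(event: OrderPlaced):
        print(f"billing: invoice customer {event.customer_id} for {event.total_cents}")

    broker.subscribe(OrderPlaced, billing_on_order_placed)

    # The ordering context announces what happened; it issues no commands.
    broker.publish(OrderPlaced(order_id="o-1", customer_id="c-42", total_cents=1999))

The publisher never knows who is listening, which is the decoupling the bounded-context split is supposed to buy you; an RPC-style "createInvoice(...)" call in the other direction would couple the two contexts right back together.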

And yet, I will contradict myself to say that actually, microservices have the potential to be that original vision of object-oriented programming from the 1960s. The fundamental idea is very cheap computational nodes that only interact by passing messages around, with the idea that a recursive design inspired by cells and biology will be much more successful than traditional programming where you try to separate data structures from the procedures that operate on them. If your microservice has setters and getters, you're missing the point of this original vision of OOP. You could certainly build a system with 40,000 of these live, but probably you would not have 40,000 different chunks of source code behind them.


> And yet, I will contradict myself to say that actually, microservices have the potential to be that original vision of object-oriented programming from the 1960s. The fundamental idea is very cheap computational nodes that only interact by passing messages around, with the idea that a recursive design inspired by cells and biology will be much more successful than traditional programming where you try to separate data structures from the procedures that operate on them. If your microservice has setters and getters, you're missing the point of this original vision of OOP. You could certainly build a system with 40,000 of these live, but probably you would not have 40,000 different chunks of source code behind them.

Lovely description of the actor model in Erlang/Elixir and Akka :)
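
Roughly the shape of it, in a throwaway Python sketch rather than Erlang/Akka (just a mailbox and a loop, with no getters or setters exposed; the Counter example is invented):

    import threading
    import queue

    class Counter:
        """A tiny 'actor': state stays private, the only interface is messages."""
        def __init__(self):
            self._mailbox = queue.Queue()
            self._count = 0  # never read or written from outside
            threading.Thread(target=self._run, daemon=True).start()

        def send(self, message):
            self._mailbox.put(message)

        def _run(self):
            while True:
                kind, reply_to = self._mailbox.get()
                if kind == "increment":
                    self._count += 1
                elif kind == "report":
                    reply_to.put(self._count)  # reply with a message, not a getter

    # All interaction happens by passing messages around.
    counter = Counter()
    replies = queue.Queue()
    counter.send(("increment", None))
    counter.send(("increment", None))
    counter.send(("report", replies))
    print(replies.get())  # -> 2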


> You could certainly build a system with 40,000 of these live, but probably you would not have 40,000 different chunks of source code behind them.

We're "replacing" an old system that has lived on a mainframe for the past 60 years. In a strict microservice scenario we would indeed need 40000+ microservices to replace our core business.

The thing about (our) mainframe integration is that it is essentially made up of microservices, only they're small chunks of COBOL being called like pearls on a string, each producing output for the next in the chain.


> small chunks of COBOL being called like pearls on a string, each producing output for the next in the chain.

Sounds like the UNIX way, to be honest.


Dwarven techniques are not uncommon in Mordor's forges too. :P


That's microservices.


That's services. And Service Oriented Architecture, circa 2000.


Indeed. According to Sam Newman, the main difference is smart pipes vs. dumb pipes.

SOA has ESBs. But both are built around bounded contexts.

Calling them "micro" was a bit of a mistake.


Are we just adding prefixes to words for fun?


Pretty much yeah.


Naïve microservices are separated per data store/DB, because DBs are usually the bottleneck and we want to scale them separately by usage. Looks like they combined some to keep it manageable, while not utilizing resources to the max.


The requirement of transactional consistency is one of the ways you identify a bounded domain in DDD. Well, aggregates at least.

If you require transactional consistency across multiple microservices, they are too small.

If you don't require transactional consistency across microservices, that's OK. They can then live in separate databases anyway, and messaging can be used.
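
A hand-wavy Python/sqlite sketch of that rule (the orders schema and the event list are invented for illustration): the writes that must be atomic stay inside one service behind one transaction, and other services only hear about the result afterwards as a message:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE orders (id TEXT PRIMARY KEY, status TEXT);
        CREATE TABLE order_lines (order_id TEXT, sku TEXT, qty INTEGER);
    """)

    published_events = []  # stand-in for a real broker

    def place_order(order_id, lines):
        # Everything inside the aggregate commits or rolls back together,
        # so it belongs to one service with one database.
        with conn:  # sqlite3 connection as a context manager = one transaction
            conn.execute("INSERT INTO orders VALUES (?, ?)", (order_id, "placed"))
            conn.executemany(
                "INSERT INTO order_lines VALUES (?, ?, ?)",
                [(order_id, sku, qty) for sku, qty in lines],
            )
        # Other bounded contexts (billing, shipping) are told about the fact
        # afterwards and catch up eventually; no cross-service transaction.
        published_events.append(("OrderPlaced", order_id))

    place_order("o-1", [("widget", 2), ("gadget", 1)])
    print(published_events)  # [('OrderPlaced', 'o-1')]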


They seem more like systems, like SCS https://scs-architecture.org/


I logged in just to say this is spot on.

The complexity added by all the services for deploy, bug tracing, and dear god, security auditing, made for a living hellscape that crippled our ability to ship software for some time. Not to mention the bifurcation of resources to keep the monolith running for customers using it while the microservice mess was being created.

There is almost a zealot-like brainwashing that has happened to folks. I made case after case to our engineering team: "This doesn't solve our business use cases. We're prematurely scaling. We're unable to move code into production efficiently. This is very hard to understand for outsiders joining our team." --- All fell on deaf ears since I "didn't get it." When I put a hard stop on adding any more microservices without a use case for why we needed that scale, I was called "toxic."

In the end we fired the whole team since they wouldn't buy into destroying their microservice dream world for something practical and put everything back in the monolith except for one service.

Our Amazon bill is 1/8th what it was. Security auditing/upkeep is 1/100th what it once was. Deploys are done without fanfare more than 1x/week. Our average response time is down from 500ms to less than 100ms since we aren't hop-scotching services all over God's green earth.

Note: This isn't a tiny project. 200k users, 700-1000 requests/minute during peak times, lots of data moving through this.


This is astonishing! You said exactly what we are living through right now except we are at 4 years and counting.


In my experience if the monolith has standards and is well-written then it's not so bad.

I once had to work on a giant C# codebase with a frontend written in a wild mix of Angular, React (multiple versions) and Knockout and found it was pretty good because there was clear separation and code standards were very high.


Two years and counting


>20 microservices have been released

2 in our case.


It seems like those who have success with microservices and advocate their use often have a much lower number of services than those critical of microservice architecture.

One customer I work with managed to create 10+ container-based services in order to do a database lookup, render a notification template and send the notification to one of three notification services. Because it’s Java-based there’s now also a pretty large memory overhead, as each service needs its own memory allocation for the JVM. On the plus side they have become very aware that this is too many microservices and are refactoring and combining a few of the services.


Yeah, that's a recipe for disaster. We've had quite decent success by just splitting along boundaries when something becomes too large or has too many responsibilities.

Having too few large services seems much easier to work with and fix than having too many small ones.


Did you guys split along the equivalent of a bounded context? That would seem to make sense to me in terms of keeping things loosely coupled but still cohesive.


I'd say so. What we did is: if you write a description of what a service does, like "this service ingests data X and stores it in a database and makes it available as an API and aggregates it into business events and makes predictions based on it", we cut the services on the "and"s. This example now becomes 3 services: DB + API, event generator and predictor.

I'm still glad we let it evolve that way instead of starting out separately.


I couldn’t have put this a better way. I was on the receiving end of the microservices chaos.


Programmers have always partitioned their code in, roughly, the following way:

   - lines
   - functions
   - objects
   - modules
   - binaries
The languages/terms differed, but that's how software has been constructed since approximately forever, and we've always had debates on optimal size/length of lines/functions/objects/modules, etc. We've also had numerous reincarnations of binaries talking to each other over some form of RPC. What happened here was a marketing knife-war between companies in the container management space. Then someone (we'll never remember who) tried to differentiate by coining this term, which basically means "binaries with RPC".

Every binary in your /usr/bin/ is a microservice. Just type `watch date` and enjoy two microservices running, no need for containers/kubernetes :)


I have found that this applies all the way up to the loftiest levels of abstraction. For instance, we partition our problem space in terms of logical business process once you get to a certain altitude in the architecture. But, the most obvious way to represent these discrete process domains was with... objects. So yeah. It's objects all the way up. The only difference between good and bad code is how the developers handle namespacing, drawing abstraction hierarchies and modeling business facts.

Microservices are about organization, not about requests being sent on wires throughout some abomination of a cloud infrastructure. Developing a UserService class and having the audacity to just directly inject it into your application for usage is probably one of the most rebellious things you could do in 2020. Extra jail time if you also decide to use SQLite where appropriate.
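
In the spirit of that rebellion, a deliberately boring Python sketch (the names and schema are hypothetical; sqlite3 is just the standard library) of the "service" being a class you inject directly instead of something behind a network hop:

    import sqlite3

    class UserService:
        """The whole "user service": a class, not a deployment."""
        def __init__(self, conn):
            self._conn = conn
            self._conn.execute(
                "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, email TEXT)"
            )

        def register(self, email):
            cur = self._conn.execute("INSERT INTO users (email) VALUES (?)", (email,))
            self._conn.commit()
            return cur.lastrowid

        def find(self, user_id):
            row = self._conn.execute(
                "SELECT id, email FROM users WHERE id = ?", (user_id,)
            ).fetchone()
            return {"id": row[0], "email": row[1]} if row else None

    class Application:
        def __init__(self, users: UserService):
            self.users = users  # "dependency injection", 1998 edition

        def signup(self, email):
            return self.users.find(self.users.register(email))

    app = Application(UserService(sqlite3.connect("app.db")))
    print(app.signup("ada@example.com"))

Same interface discipline, same separation of concerns, zero network calls.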


> The only difference between good and bad code is how the developers handle namespacing, drawing abstraction hierarchies and modeling business facts.

I wish those were the only indicators of bad code, but illegible aesthetics, over-engineered complexity, under-engineered fragile modules, etc. are to be found everywhere.


I’d also add “not a shred of documentation” to the list. Not a comment, no issue-tracker IDs in the commit comments; just a confabulating pile of generically named “things doing things”.

Sometimes I’d rather go farming


Yes, programming keeps me straddling the line between futurism and going full luddite. Come on people, nobody is a mind reader, document your intentions in comments at the very least!


Writing straight SQL will land you in prison.


Things were so much easier 15 years ago. I see very little gain for all the complexity we have added since then. Ok we can scale easily on demand, but the vast majority of places won't need to.


Actually, we still can't scale easily on demand. Let's say you get 2x the traffic. Sure, your front-end instances in the cloud can scale to 2x the capacity, but that just moves the bottleneck over to your database.

And then you notice that scaling an Amazon RDS database to a larger instance requires downtime...


I don't think that definition of microservice is useful and I think promoting it will add to the confusion around the issue. You can comment on the similarities in the way modern microservices and watch/date are architected without calling the latter microservices.


One additional level of granularity, above “lines”: tokens.

(Expressions mapping nicely to lines, ymmv).


Every couple of years Sun RPC/DCE gets re-invented, basically.


Exactly this. Well, hype moves sales when people buy into that marketing crap.


Throw IT at wall, see what sticks.

You know what, http3 gonna be sick!

When you're all alone and feeling insane,

you know you lack the sweet sweet 2.0 Webscale.


As a non-native English speaker, I'm always puzzled by Americans rhyming. How does this work? And why does wormhole monocle not?


As a native English speaker, I have no idea what this person was trying to communicate :)


Only the first two lines rhyme.


It's supposed to make people think ahead.

But seems they just get angry instead.

Why oh why don't people just see,

the true meaning of rhyme, truth and the DDD.


This is the first poem I’ve read which mentions DDD.

I’ve definitely had enough internet for a while.


none of the lines rhyme.


Regarding "wormhole" and "monocle", they form a "slant rhyme" or "half rhyme": https://www.thefreedictionary.com/slant+rhyme


wormh_o_le mono_c_le

Although both words have the same last two letters, "le", the third-last letters are different. This produces a different syllable when pronounced.


I like it; not sure why it's buried. Guess it rings too close to home for some.


I love my service based architecture. It's about 10 years old and I can't imagine a better way of separating concerns and keeping a clean, maintainable, and resilient e-commerce operation up and running. I don't have hundreds of services, there are maybe 30. Each service is responsible for a particular domain, and if another service wants access to that domain it must go through the appropriate service. I can't imagine having a single process "monolith" being responsible for so many things, it would be a nightmare. I also can't imagine not enforcing separation of concerns via services, code would be duplicated everywhere. I was thinking the other day about the cost of serialization, which isn't too bad. If it is too bad then you're probably sending too much data over the wire, and should do more processing on the remote end of the wire. I think some people go overboard with the micro part of microservices. If there isn't a clear separation of domain, then there should probably just be one service. You can still modularize the code as much as you want within the service.


I'd like to challenge the "I can't imagine a better way of separating" bit. I agree that microservices, if done right, are awesome.

Recently, however, I had the chance to work with a quite extensive codebase that powers a monolith for a fintech company. The code is written in Scala and extensively uses Akka Streams to neatly separate concerns. In my opinion this approach is the sweet spot, as 1) the DevOps burden is low, since you only have one binary to run/deploy, 2) the shapes of the various Akka Streams subgraphs are statically type-checked (unless you opt out by doing stupid things), 3) it makes it much easier to reason about the data flow, and 4) testability is really high, as you don't have to mock services but only the upstream subgraphs.

The downsides are that 1) the learning curve is very steep at the beginning as Akka in general is very complex to use effectively, and 2) squeezing the maximum performance can be hard as you don't have the ability to horizontally scale only some microservices.

I fell in love with the approach and I'm migrating some personal projects to it.


I've seen very few posts here mention horizontal scalability like you did. In my mind, it's the first and foremost reason to consider a distributed rewrite over a monolith. I would bitch long and hard if someone was suggesting the transition from monolith to micro-services without the metrics to prove horizontal scalability was becoming an issue.


Fully agree. Another point worth considering is cost. Cloud providers now let you use ephemeral VMs, which are amazing in terms of value. If you can identify some parts of your system that use too much in terms of resources, you can get some nice cost savings by wrapping them in a microservice and deploying it on said cheap VMs.


How big is your team?

What kind of process do you use to on-board new engineers to the point that they can make good design decisions within your architecture?


We do it the same way: a main monolith to do the basics (create and update), but other parts of the system are handled by separate servers. A lot of it is queue-consumption based, e.g. a server for sending emails, another server for file exporting. Our main system also shares a module for connecting to the DB, which makes it easier to handle database upgrades or changes.


Sounds like you've hit the SOA sweet-spot.


Do you get the benefits of separation by using separate data stores as well, or are most fed from a common database?


Most data is stored in the same database server instance. There is a database access service with separate modules for each database; by convention each of these modules has exclusive access to its tables, with the exception of a special module that performs cross-database joins, needed for performance in some reporting tasks. Other services don't talk directly to the database, they go through the database service. Some services use their own data stores in a variety of formats.


I prefer lower devops complexity and higher software application complexity. Any day.

Microservices is higher devops complexity in exchange for lower software application complexity. A really terrible deal IMO.


I've seen both extremes. For instance I'm told github largely has a rails monolith and that they have to run headless instances of rails to do database leader election (although this statement implies they are trying to break things out).

I've also talked to junior engineers who want to make every function call a pubsub message.

I've heard principals from Amazon promote a model where one service is responsible for one entity.

What I've decided is that the services in your company should follow Conway's law. Most of the problems with a monolith come when multiple teams with differing release cycles and requirements are making changes in a shared codebase and they are having trouble keeping their tree green. You should generally have one to a few services per team. Scoping a service to a team ensures that people can have true ownership.

For SREs microservices are harder, but they give SREs the control plane they need to do a good job. If communication happens between services rather than function calls, it's easier to instrument all services in a common way and build dashboards. It's simpler to spin up different instances connected to different datasources.


I agree wholeheartedly that Conway's law is a very useful guiding principle for making "architectural" decisions.

I also think this applies much more broadly than just microservices vs monoliths. I recently moved ~40 repositories into just a few. What I've found is that anything that releases together (by teams and timeframe) should stay together. This helps ease modification of related components in an agile way, simplifies tagging components, and simplifies the CI workflow (no multi-project pipelines).

Anything that breaks with this principle should have a concrete reason for it. If you need to combine the results of several teams into one large release, it may be easier to develop tooling for handling it all in one repository rather than developing tooling for handling many repositories. That's really the monorepo tradeoff.

Similarly, there are concrete reasons for breaking a service into smaller parts. Perhaps you want to horizontally scale a part of the service. Perhaps you need a part of the service to have a different lifecycle. But you're paying with increased deployment complexity, so you'd better get something worthwhile in return.


Hard agree. As an SRE, half the time my current company splits something off into its own service it’s for performance reasons driven by us. It’s just as often us working with the devs because we need X service off the primary database or Y service the ability to scale on its own as it is them having created a separate service of their own accord. Plus, as an SRE, it’s a lot easier to wrap my head around what each service does on its own and what responsibilities are broken when it’s down than it is to understand the full workings of a monolith, and building monitoring around the smaller chunks is easier.

Obviously it’s possible to overdo it. Generally it seems that splitting out services as appropriate is more intelligent than just sitting down with the thought “we’re going to build a microservice architecture.” Goes back to the idea that gets bandied around a lot that you should start with something as simple as possible, and if you get into a situation where you are at the scale to need a rewrite then that’s a good sign for your business.


This is the first time I've heard of Conway's law used in a positive or at least non-negative way.


I don't think I ever read mention of Conway's law as having a positive or negative connotation.


The law itself is fine, it’s just that the organisation of most companies is abysmal, so your software is too.


In case of rewriting the codebase it actually makes sense. Your organisation already has a codebase and complementary org structure. Any microservices rewrite should be tailored to the existing boundaries of teams for maximum effectiveness.


But the idea that interacting services can be built by different teams isn’t just devops complexity; it’s insanely complex managing that stuff because it involves humans.

Never mind that everyone building microservices just goes “fuck transactions and eventual consistency, I’ll go with maybe/probably my data gets corrupted over time” whoop.


It doesn't matter if it's multiple services or one monolith, once you have multiple teams on one product the complexity is already there. The argument is that microservices force it to be visible and dealt with while monoliths hide it until you blow your feet off.


Particularly, IMO, for internal business apps, microservices make it more likely that you can align products, business owners, and teams, whereas monoliths force complicated governance as well as multiteam products. And, in practice, the development teams and business owners aren't aligned, so you get a many-to-many web of requirements and approvals communication.


Who said anything about one monolith? I believe much more in starting with one service and splitting it when it becomes absolute torture to work with... Make as few services as you can possibly handle and make absolute guarantees between them in terms of data consistency. This is basically what the article suggests in detail, but I assume you disagree with it?

Bounded contexts DO NOT need network partitions to be enforced BTW. For example, I'm pretty sure Google has all their source code in a single repo (or at least a LOT), how do they with a million developers stop people from intertwining everything? My guess is code reviews, hiring good people and tooling.

EDIT: sorry to the person who liked it I've rewritten this comment for clarity, and removed lots of words...


Very very few companies are actually like Google to the point where I'd say making an argument that assumes you are like Google is a fallacy. When you have a near infinite stream of ad money to pay developers million dollar salaries then amazing things are possible.


I actually think the opposite: Google haven’t launched anything apart from Mail and Maps, and those were skunkworks projects...


1000 people working on the same code base is human complexity too.


So is 1000 people trying to coordinate package versions and interface changes across 100 systems (throw in loosely typed languages for extra fun).


That assumes all services talk to all other services directly. a) they probably don't, and b) message busses can help a lot.


APIs shouldn't change in backward-incompatible ways. That's sorta the bedrock of a service oriented architecture. If teams have to communicate with each other through more than API documentation and can push responsibilities onto each other by making backwards-incompatible changes then you've kneecapped the entire benefit of a service oriented architecture from the get-go.
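
A tiny Python sketch of what "don't break the contract" tends to mean in practice (field names are invented): the producer only ever adds optional fields, and consumers read tolerantly, ignoring keys they don't know:

    from dataclasses import dataclass
    from typing import Optional

    # v1 consumers rely on id and email; later versions may ADD fields but never
    # remove or rename them, so old callers keep working without coordination.
    @dataclass
    class UserResponse:
        id: str
        email: str
        display_name: Optional[str] = None  # added later, optional with a default

    def parse_user(payload: dict) -> UserResponse:
        # Tolerant reader: take what you know, ignore unknown keys.
        return UserResponse(
            id=payload["id"],
            email=payload["email"],
            display_name=payload.get("display_name"),
        )

    # A newer producer sending an extra "plan" field doesn't break this consumer.
    print(parse_user({"id": "u-1", "email": "a@example.com", "plan": "pro"}))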


I'm not talking about backwards compatibility so much as adding features in parallel that span many services, which happens when you're closing a lot of sales deals when the salespeople don't say no to things :p


Sure, if you are independently building your service, or working a tight feedback loop with others on the product.

As others pointed out, it works for companies who operate and scale engineering teams. Good luck maintaining complex applications across tens to hundreds of developers.


Tens to low hundreds possibly, but microservices can make things much worse as you scale to thousands of developers. The ultimate limit of any design is how much any one person can understand, from both a complexity standpoint and a rate-of-change standpoint. It’s the same issue that pushed people from goto > functions > libraries > ... Eventually you need another layer of abstraction.

For very large companies doing mergers etc things are always going to be in flux in ways that the startup world tends to ignore.


You are correct and I agree with you. I have not worked in big projects, only small, so that’s where my beliefs come from.

I feel the problem is a software problem that should be solved by better development tools/languages rather than throwing up hands and pushing the problem into the operations domain.


I think whether that's the case is largely dependent on implementation.

It could increase or decrease software complexity. It could also increase or decrease devops complexity.


My manager at work is a big believer in microservices.

I never reveal my true thoughts because I don’t think he would understand the heresy of the unbeliever.

Never tell anyone to their face that you don’t believe their god is real.


This is a debate I will never understand.

The position of the monolithics is "you should have one thing". Well, that's obviously wrong, if you're doing anything even slightly complex.

The position of the microservice people is "you should have more than one thing", but it gets pretty fuzzy after that. It's so poorly defined it's not useful.

How about have enough things such that all your codebases remain at a size where you don't dread digging into even the one that your most prolifically incompetent coworker has gone to town on? Enough things that when not very critical things fail, it doesn't matter very much.

But only that many things. If you need to update more than one thing when you want to add a simple feature, if small (to medium) changes propagate across multiple codebases, well, ya done messed up.

If you're one of the people believing monoliths are The Way, you're making a bizarre bet, because there's N potential pieces you can have to create a complex system, and you're saying the most optimal is N == 1. What are the odds of that? Sometimes, maybe. But mostly N will be like 7 or something. Occasionally 1000. Occasionally 2. But usually 7. Or something.

This seems really obvious to me.


> If you're one of the people believing monoliths are The Way, you're making a bizarre bet, because there's N potential pieces you can have to create a complex system, and you're saying the most optimal is N == 1. What are the odds of that? Sometimes, maybe. But mostly N will be like 7 or something. Occasionally 1000. Occasionally 2. But usually 7. Or something.

"Pieces" is doing some heavy lifting here. You're assuming that isolated parts of a system need to be seperately developed and deployed systems, which absolutely doesn't need to be true. Seperate parts of a system can be modules, namespaces, libraries, or any number of different solutions to decouple code and create domain contexts and boundaries.

I've never met anyone who prefers to use monoliths that would also say "just let everything call everything else, you don't need any structure". That doesn't necessarily mean that that the only acceptable boundary is an HTTP interface.


I find it bizarre how many arguments on the topic are just of the type "microservices are the cure" / "microservices are cancer" - no reflection on specific domains and context.

It's very much like the case with unit tests.

What is a unit? How small is micro? These two questions on their own are subject of debates of religious proportions.


It's not actually saying that. It's saying that the N is not really relevant enough technologically to be worth evaluating every damn time. It may be relevant organizationally, to fit your org chart. Also, putting a barrier to creating a new service is a feature of a monolith. Other types of partitioning are still there. They just might require some insight into the existing arch.


I refer to these as “reasonably sized services.” I feel like a lot of people that “do microservices” spent too much time in CRUD systems and haven’t done a lot of domain-driven design.


I've never understood all this drama. In my experience, monoliths are universally bad in every imaginable way. A huge part of my career has been splitting monoliths into cohesive smaller codebases which could be called microservices.

How to split them has never really been a problem, you tend to develop an intuition and the monolith kind of splits itself.

Eg, currently we have user profiles (user id > the users names, email etc) in one microservice, relationships between users (user id > other user ids) in another.

Each has its own datastore, deployment, scaling etc. It works great and while there is a small overhead cost to pay for splitting things it's vastly preferable to the monolith we used to have, which contained lots of things that had very little to do with each other, took ages to test and deploy, if it went down it was game over etc etc etc.


> A huge part of my career has been splitting monoliths into cohesive smaller codebases

Did that ... create business value for your company?

> Eg, currently we have user profiles (user id > the users names, email etc) in one microservice, relationships between users (user id > other user ids) in another.

This sounds like a parody of microservices.


Yes, it did create business value.

* our enterprise db was bursting at the seams containing Literally Everything. Now, every part of the split-up monolith has its own self-contained data store tailored to what is appropriate for that particular thing. (some use MariaDB, others Redis etc etc)

* developing, building, testing and deploying took ages. Eg if I only needed to capture some new detail about a business partner user (eg their mfa preference app vs sms) I would still have to deal with the unwieldy monolith. Now, I can do it in the dedicated business partner user service, which is much easier and faster.

* the whole monolith, including business partner facing operations, could go down because of issues to do with completely unrelated, non critical things like eg internal staff vacation hours.

I could go on.

As for the different services I described, if most of the callers did need both pieces of data it would have made sense to combine them into a single service. But, overwhelmingly, callers are interested in one piece of data or the other, and the load profile, tolerance for staleness in caching etc etc for each of the two services are vastly, vastly different. This is why we chose to split the two into different services.

The few callers that do need to obtain both pieces of data just make concurrent calls to both and then zip them into a single result.


> * our enterprise db was bursting at the seams containing Literally Everything. Now, every part of the split-up monolith has its own self-contained data store tailored to what is appropriate for that particular thing. (some use MariaDB, others Redis etc etc)

Why do you consider an enterprise DB "bursting at the seams" to be a bad thing? Isn't that what enterprise DBs are built for? Seems like you traded having everything in one large database for having everything scattered in different databases. You probably sacrificed some referential integrity in the process.

> * developing, building, testing and deploying took ages. Eg if I only needed to capture some new detail about a business partner user (eg their mfa preference app vs sms) I would still have to deal with the unwieldy monolith. Now, I can do it in the dedicated business partner user service, which is much easier and faster.

You traded a clean codebase with a solid toolchain for, probably, a template repository that you hope your users use, or everyone reinventing some kind of linting/testing/deployment toolchain for every microservice.

> * the whole monolith, including business partner facing operations, could go down because of issues to do with completely unrelated, non critical things like eg internal staff vacation hours.

This could apply to any software. Sure, a monolith can have a large blast radius, but I can guarantee one of your microservices is on the critical path and would cause the same outage if it goes offline.

> The few callers that do need to obtain both pieces of data just make concurrent calls to both and then zip them into a single result.

Almost like a database join?


Who says the monolith we had was a clean codebase with a solid toolchain? It was a giant piece of software that had existed since the 90s, with hundreds of developers working on it at any given moment, thousands of people coming through it over the years, full of bizarre things that would make anyone scream and tear their hair out. (each of which may have been the right thing to do at the time, but combined over the years wasn't optimal to say the least)

It's obvious nothing will convince you, but I maintain that eg internal vacation hours should not live in the same codebase as partner facing business critical things.


You can very well have a bad monolith and still think microservices are a bad idea, and his concerns, especially around referential integrity, are perfectly valid and valuable. Maybe all you needed was to fix the existing monolith incrementally?


Of course microservices should be sliced with referential integrity in mind. But a monolith by definition has everything, including utterly unrelated things that have absolutely nothing to do with each other, bundled up into a single, giant, well, monolith. What referential integrity could there possibly be between eg business partner user profiles and internal staff vacation hours?


There is nothing inherently bad about storing unrelated things in the same db, in general.

There might be security issues but good DBs offer security grants that are granular enough to deal with this.

Data sovereignty might be an issue forcing you to split, that depends on the domain and application requirements.

But in general two things being unrelated isn’t a reason to split the DB and splitting the DB isn’t a reason to split into services.

I’m not for or against microservices. I’m definitely for good reasoning though!


> Who says the monolith we had was a clean codebase with a solid toolchain?

You didn't. They're just trying to force the issue for some reason by straw manning your decision.


No offense, but someone coming in and splitting out a database into multiple bespoke data stores because the primary database is "bursting at the seams" is basically my nightmare. If your description of your reasoning is accurate, it's a completely broken mental model.


Well, that's not what happened. The whole org from devs to the board were highly aware of the problems and knew and agreed what had to be done. I'm just one of the many people who architected and implemented the change.


Could you be specific about the size of your company and the size of the data you're processing? And could you also be more specific about the business value it created? Are you an investor in the business? Is it your own money on the line?


Publicly listed, ~13 billion USD revenue, 22k employees, millions of b2b partners and users.

When the monolith goes down, it's hundreds of thousands, sometimes millions of dollars of revenue lost both during and as a result of an outage.

You really, really don't want a problem with eg internal staff vacation hours to do that.


Ok, interesting. That kind of scale is not exactly the norm though, so I don't agree with your implication that monoliths are always a bad idea.


I really don't know why anyone who wasn't so resource strapped as to have no alternative would want their $$$ services sharing a build-test-deploy-rollback lifecycle or database with the internal staff vacation hours tracker.


It seems like the real solution here is to outsource vacation hour tracking to one of the HR apps for the purpose (see Workday) instead of having the domain expert engineers split the company’s application apart into micro services so they can have a custom vacation tracker.


It was just one of the worst examples. A more obvious case would be the different lines of business. They have very little to do with each other and shouldn't share code or data. Another is business partners vs end users (regular people) - there's a lot of reasons, including legal ones, why they shouldn't share code or data.


That makes sense, but is nowhere near universally applicable. For folks working in most startups, talking about different lines of business is non-sensical - there's generally one product.

It sounds like you're describing a monolith that actually contained multiple completely independent applications - which I don't think anyone would disagree with being a good case for splitting.

In most cases I've seen, the decision to split into microservices usually involves a fairly high amount of dependency between different services to achieve a common business goal (hence the concerns about things like referential integrity)


> > A huge part of my career has been splitting monoliths into cohesive smaller codebases

> Did that ... create business value for your company?

Not OP, but the big value of splitting into microservices is isolation.

In production, this isolation offers a limited blast radius in the case of an errant service. Also, independent scaling. Business value: improved reliability.

In code, isolation lets development teams have a smaller and more focused domain / set of concerns to reason about (vs. the entirety of the monolith). Business value: Increased dev velocity.


In my experience, it's easy enough to have services which have a large blast radius themselves and can become points of failure for your entire ecosystem. I don't find this to be a huge point of difference from a monolith, although of course it depends a lot on what you're working on. To pick on the OP a bit here (sorry!), they said that their entire legacy system "could go down because of issues to do with completely unrelated, non critical things like eg internal staff vacation hours." To me that sounds like poorly written software regardless of architecture. I can't imagine a scenario in any of the recent codebases I've worked on (microservices and monoliths both) where errors in what sounds like an internal CRUD tool would cause an entire production application to crash. I find it even harder to imagine if the application has a halfway decent test suite.

To add to that, when you have hundreds of services running around and something goes wrong, it ends up being a lot harder to track down exactly what's happening. So when you do get that critical error, oftentimes the downtime is worsened.

As for dev velocity, I find the claims of the microservice gospel a little bit exaggerated. Your layers of nested services all talk to each other, and any of them could be a point of failure. This isn't really all that different from calling another function in your monolithic app - you've just distributed that function call across a network boundary. You still need to know the callee's API, and you'll still spend a decent amount of time trying to understand the ways that the callee might fail. But you've also created a huge amount of additional developer work whenever you need to do something that spans the boundaries of existing services.

I think microservices certainly have their advantages but a lot of the simplistic claims made by their biggest proponents only hold up prima facie.


> In my experience, it's easy enough to have services which have a large blast radius themselves and can become points of failure for your entire ecosystem. I don't find this to be a huge point of difference from a monolith, although of course it depends a lot on what you're working on.

It's a huge difference, because if some core critical service starts causing problems it's almost certainly because the last binary push was bad, and you roll it back. You only have to roll back that particular service and everything starts behaving correctly again. Moreover, you probably detected the problem in the first place when the rollout of that service began by replacing a single instance of the updated service with the new binary. Monitoring picks up a spike in errors/latency/database-load/whatever and the push is stopped and rolled back.

Monoliths have inventive ways to address this problem without having to roll the entire binary back, like pushing patches or using feature flags, but few would argue that the microservice approach to handling bad pushes isn't superior.

> To me that sounds like poorly written software regardless of architecture. I can't imagine a scenario in any of the recent codebases I've worked on (microservices and monoliths both) where errors in what sounds like an internal CRUD tool would cause an entire production application to crash. I find it even harder to imagine if the application has a halfway decent test suite.

Easy enough with a sufficiently large codebase in C or C++. Somebody's parser encounters an input that was supposed to never happen and now it's off clobbering the memory of who-knows-what with garbage.


If you find yourself routinely pushing a bad binary that has (1) passed your test suite; and (2) passed whatever manual QA process you have on sandbox/staging deployments, then I would again suggest the problem is the process and not the architecture. Not to mention that if the service is indeed critical, you're not rolling out a deployment to every production server at once (or you're low enough scale that it doesn't matter). Either way, easy enough to roll back a bad deploy before things get hairy regardless of architecture.

Also, I'm not sure what kind of internal CRUD tools you're writing, but "malicious input" doesn't really seem likely to come from your coworkers.


Input doesn’t have to be malicious to cause problems, just unexpected. Separating memory-unsafe code into the finest granularity memory spaces possible is just good practice regardless of whether it’s microservices or just process isolation in a monolith.


When you're dealing with a giant piece of software that has existed since the 90s, with hundreds of developers working on it at any given moment, what you're describing doesn't work. You can read about a famous example here https://news.ycombinator.com/item?id=18442941


Well, you began by saying that "monoliths are universally bad in every imaginable way." You're now saying that a select few poorly-maintained, 25-year old monoliths are bad, which is an entirely different claim. That doesn't really prove that microservices are any better (are 25-year old poorly maintained clusters of services really going to be an improvement?). Let alone "better in every imaginable way."


How is a blast radius limited in the case where a bunch of things depend on that microservice? It seems a microservice can have an arbitrarily large blast radius.


The things that don't depend on it don't go down. E.g. your email system doesn't go down because the building's elevator had a bug.

It also lets you choose which parts to pay closer attention to - the microservice that's depended on by everything gets the extra operational attention.


If you have a high availability system, then one whole monolith instance going down is going to have less effect than one crucial micro-service going down.


That seems like a conveniently contrived example.

Why, in this example, would only 1 instance of the monolith go down but all instances of the crucial microservice go down?


Most of the time it's one dodgy parameter or variable (or one dodgy combination of variables) that causes something to crash. The majority of bugs I fix take a bit of effort to reproduce.


Exactly this!


Agreed, isolation is a good reason to opt for microservices. One app goes offline, others are still accessible, and you can do isolated maintenance. That, and features that aren't directly related can be super lightweight little projects (quick to build and run, low on memory and CPU).


> Did that ... create business value for your company?

Imagine what a piece of crap the Linux kernel would be if it was developed with this mindset. :-)

EDIT: The context is that there's constant code splitting and code moving being done inside Linux.


> monoliths are universally bad in every imaginable way

In my experience, monoliths are simpler and often faster to ship a working v1.0. When that matters (which seems like it would be “quite often”), they are a short-term winner.

You have to survive the short-term in order to face the problem of the long-term.


> In my experience, monoliths are universally bad in every imaginable way.

I just imagined a different way. Monoliths can do transactionally consistent stuff.


Now you have a good rule of thumb for splitting services: stuff that must belong to a single transaction should live in the same service.

This is how you e.g. can split a bank transaction service from a bank balance-telling service, with different requirements for scaling, latency, SLAs, deployment schedules, etc. You can deploy a change to the balance-telling service and then roll it back if it exhibits a problem, all without touching transaction proceeding in any way.

These abilities are worth something if you're big enough. But this is not a starting point for a new side project. You start with a monolith.


> You start with a monolith.

I think that is the HN syndrome; people start projects like they will be larger than Facebook, even though the project will probably never launch and, if it does, will never get more than 10 users. Yet it runs all the services AWS has to offer and has 100 microservices, and the little company(’s investors) are paying through the nose in both dev time and hosting, while a raw PHP script on a $1/mo VPS would’ve been sufficient to validate the idea and get to ramen profitability and (far) beyond. Like someone said in another thread: focus on your market and acquisition channels first, build great stuff (much) later.


I remember that eBay seriously rewrote their backend three times — not because they kept doing it wrong, but because they grew.

An architecture that fits well for a small company is inadequate for a large company, and vice versa.


I bet there are orders of magnitude more cases where a ton of time and effort has been put into scalable architecture when it wasn't needed. Ebay is a serious outlier when it comes to scale of software, most people aren't going to serve anywhere near as many customers.


I've been through a couple of iterations of "MONOLITH BAD!" where none of the issues were monolith related but instead database design and usage. Also an iteration of "MICROSERVICES GOOD!" where there are more microservices than entire company employees (not just devs) and yeah, it's a bad idea at this scale.


I know it's a simple example, but wouldn't the balance of an account be critically important to be part of a transaction? Otherwise, how could you manage two simultaneous withdrawals if the transaction processing service isn't able to put a lock on reading the balance until it's complete?


The teller can show a slightly stale, eventually consistent balance figure, depending on its use. This would allow for massively better read performance, which is important when reads occur many times more often than writes.

Instead of a balance of a checking account, think about the karma balance of a reddit post, where this approach more emphatically belongs.


> Monoliths can do transactionally consistent stuff.

So can microservices, both internal to the service (via simple transactions) and between services (via distributed transactions, e.g., 2PC), though a good service design minimizes the need for the latter and is mostly guided by consistency boundaries as to where to draw service boundaries.

(It's true that there are naive microservices architectures pursued that are essentially normalized relational designs with a one-service-per-table rule, but that's just bad—and usually cargo-cult—design.)


Sure you can do transactions across services. Do you really want to deal with the problem of distributed consensus though? If software especially built to deal with it often gets it wrong (as I think Jepsen tests have frequently demonstrated), what are the chances your team is going to succeed and not build a distributed monolith instead?

I suppose in the end it depends on if you want to trade the hard problem of managing a monolith for possibly a much harder problem.

I find that for these discussions, the definition of a microservice is too nebulous. Personally I think that if you have "state" that crosses service boundaries (ie. a fault in either service causes the loss of that aggregate state) you have instead built a distributed monolith.


For sure, that's a factor that should be taken into account when deciding what to split and how. But monoliths always have completely unrelated things bunched up in a single codebase, storage etc etc etc.

Eg, in our case, there's simply no reason why we should have a single monolith service that has both our b2b partners user profiles and, I don't know, internal staff vacation hours?

They have nothing to do with each other, have vastly different load profiles, tolerance for failure etc etc.


I've seen monoliths screw this up. All it takes is some abstractions, maybe an ORM. Suddenly the novice programmer is too far away to notice they're running multiple queries outside a transaction without a simple way to fix it.

In fact, I think it's quite common.
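
A minimal illustration of that failure mode, in plain Python/sqlite rather than an ORM (the accounts table is made up): two related writes with nothing tying them together, versus the same writes done atomically:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER);
        INSERT INTO accounts VALUES ('a', 100), ('b', 0);
    """)

    def transfer_unsafe(amount, fail_in_the_middle=False):
        # Two statements, each committed on its own: if anything goes wrong in
        # between, the money has already left 'a' and nothing puts it back.
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = 'a'", (amount,))
        conn.commit()
        if fail_in_the_middle:
            raise RuntimeError("crash between the two writes")
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = 'b'", (amount,))
        conn.commit()

    def transfer_safe(amount):
        # One transaction: either both rows change or neither does.
        with conn:
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = 'a'", (amount,))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = 'b'", (amount,))

    try:
        transfer_unsafe(10, fail_in_the_middle=True)
    except RuntimeError:
        pass
    print(conn.execute("SELECT * FROM accounts ORDER BY id").fetchall())  # [('a', 90), ('b', 0)] - 10 lost
    transfer_safe(10)
    print(conn.execute("SELECT * FROM accounts ORDER BY id").fetchall())  # [('a', 80), ('b', 10)]

An ORM's session/unit-of-work hides exactly this boundary, which is how the partial write sneaks past the novice.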


microservices can too, just most databases don't support this without an additional service layer. fwiw Spanner does.


> currently we have user profiles (user id > the users names, email etc) in one microservice, relationships between users (user id > other user ids) in another.

What do you do when you need to query data across these boundaries?


If most of the callers did need both pieces of data it would have made sense to combine them into a single service. But, overwhelmingly, callers are interested in one piece of data or the other, and the load profile, tolerance for staleness in caching etc etc for each of the two services are vastly, vastly different. This is why we chose to split the two into different services.

The few callers that do need to obtain both pieces of data just make concurrent calls to both and then zip them into a single result.
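
For what it's worth, the "call both and zip" part is only a few lines. A hypothetical asyncio sketch, where fetch_profile and fetch_relationships stand in for real clients of the two services:

    import asyncio

    # Stand-ins for HTTP/RPC clients of the two services.
    async def fetch_profile(user_id):
        await asyncio.sleep(0.05)  # pretend network latency
        return {"user_id": user_id, "name": "Ada", "email": "ada@example.com"}

    async def fetch_relationships(user_id):
        await asyncio.sleep(0.05)
        return {"user_id": user_id, "follows": ["u-2", "u-3"]}

    async def get_user_view(user_id):
        # Fire both calls concurrently, then zip the results into one response.
        profile, relationships = await asyncio.gather(
            fetch_profile(user_id), fetch_relationships(user_id)
        )
        return {**profile, "follows": relationships["follows"]}

    print(asyncio.run(get_user_view("u-1")))

The latency cost is roughly the slower of the two calls rather than their sum, which is why the split stays tolerable for the few callers that need both.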


Sounds like you have a simple system, in which case the separation has few drawbacks. It's not uncommon for a customer to ask for a report that, to be generated, needs to span dozens of tables. Or to save a DTO object that will trigger multiple microservices (and a lot of validations, transactions, rollbacks, etc). Business rules can be very complex and entangled in some shops.


It's not a simple system, we have all those things and more. But when we make a service that contains partner user profile data we optimize for that (= millions of partners accessing their profiles, has to respond immediately and scale infinitely, has to be easy and quick to capture new details like eg mfa preference app vs sms etc), not for peripheral things like reports etc - such things have to live with being harder than they would be if we had a monolith.


Good thing if your data fit into a single database; sometimes they do not, because they are too big.

OTOH you can often export the data in different, more appropriate ways to make your joins more efficient.


I'd say it sounds like they have a distributed monolith :)


From your following comments I see that you consider a monolith to be software that does "everything", from managing staff holidays to giving services to customers. But this isn't what monolithic architecture vs microservices is about; if you have independent business processes, it's been completely normal forever to have independent "monoliths" to manage them.

Many proponents of microservices, though, request that you split your software into small independent services even when they manage the same business process, and this is where the complications come in.


If it were up to Hacker News commenters, no software created after 1998 would be used - everything would run on a single monolith, use PostgreSQL, run with System V, and be written in, I don't know, Perl with CGI. It worked fine back then, why do we need to change it and make it all fancy and complicated?

I don't know the career histories of these kinds of people, I'm sure it's varied, but I just can't imagine them working on very large public-facing dynamic sites that update multiple times a day with monoliths, thinking "this sure can't be improved! We have reached the peak of computing!"

I've seen microservices go really badly of course - I worked in a place where the devs insisted that two microservices need to go out "at the same time," as if such a concept existed, because they depended on each other. At the end of me working there, there were around 40 microservices, all Java so they needed at least 2GB of memory each, some 8GB, and at least 3 replicas for high availability. Cost a small fortune in servers for what really could have been like, 5 microservices, written in Go or Node and run on a handful of normal servers.

But microservices, like Kubernetes, are not _hype_, they are not a flash in the pan and they're here to stay because they are a good idea conceptually, even if they often aren't executed very well.


Having worked on both types of systems, the old-school way was far simpler, and bugs were faster to fix. Sure there are advantages to Kubernetes, it will scale easily, but that is a problem that I have seen solved far more times than it has actually been needed.


If you're like Facebook, it makes sense to split user profiles and relationships into two microservices, but if you handle a few tens of thousands of users it does not, so you're doing yourself a disservice by using a poor example to drive the point.


You’ve given up on data consistency entirely, which is a significant problem with micro-services in general.

Do you run DR tests? I bet $TEXAS you will have orphaned relationships as your data stores will be restored from different points in time.


The issue (for me at least) is not the idea of microservices itself, it is the granularity at which services get defined/split off. What usually happens is that companies go all-in on the microservices idea, and force it upon all teams regardless of their size/head count. It is far easier for a 10 person team to maintain a microservice than it is for a 3 person team. Microservices impart some fixed costs to every team managing a service and thus make very little sense for small teams.

On the other hand, a 3 person team can much more productively contribute to a monolith. From my experience in the industry so far, taking into account the current quality of tooling, I would say it starts to make sense for teams above the 10 person mark to own their own service.


Maybe I don't understand microservices but what is to stop a team from declaring the whole monolith to be just one microservice?

I have interviewed at a few places over the last year, and not one interviewer has given me a hundred percent guarantee that we will never have to allow access to the backing store (database) other than through our service.

Of course, microservices will fail in such environments. That's not the fault of microservices; that's a defect in management.


> we have user profiles (user id > the users names, email etc) in one microservice, relationships between users (user id > other user ids) in another.

> Each has its own datastore

How do you maintain referential integrity between datastores?

For example, when a user is deleted, how do you update the relationships between users in a concurrency-safe way?


I’ve always seen microservices primarily used as a trendy way to impose basic engineering discipline on projects where there isn’t the will to define interfaces between components explicitly (DNS being a crude service locator pattern, different VMs providing basic encapsulation). Unfortunately this is only a marginal improvement, as the team remains devoid of engineering discipline. :/

This article has meaningful advice, but I’m not sure that it will often be applied.


> Unfortunately this is only a marginal improvement, as the team remains devoid of engineering discipline. :/

This strikes me as the crux here, along with this line from the article:

> There were three main reasons for the initial success of microservices as an architectural pattern for software: [...] 3) an excuse for having no architecture.

Microservices are a little bit too much of a "just so" story to me. It's a cozy non-answer to the hard problems of system design.


You nailed it so perfectly. The difference is encapsulation of dependencies and independent scalability of each service. That aside, whether we have a monolith or microservices, you cannot fix bad architecture.

It’s like spreading out components with larger copper traces and separating them. If your circuit is wrong, nothing you do at the PCB level will fix it. Fix the schematic (circuit) first.


IMO it's not really about the build at all. It's about people and business.

One, for microservices created by the companies themselves, those basically grow out of their org chart. It is well known that organizations ship their org chart, and this is no exception. Person A wants to become a manager to make more money and ascend the societal hierarchy. Person A's manager wants to grow their fiefdom and have more people under them (which is the only true measure of one's worth as a manager, as far as longer-term careers are concerned), so person A proposes to their manager that they use "best practices" and carve off a "microservice". Their manager happily obliges, because they don't know anything about distributed systems and consistency. Person A gets promoted. Now person B wants to become a manager to make more money and have their own fiefdom...

The end result is that you have a nightmarish maze of microservices where a single simple monolithic binary would do a better job at one quarter the dev cost. You also have a very deep and branchy org chart, which is the preferred state for management, since it lets them justify (and increase) their pay.

Two, for cloud microservices, they offer a simple way to create recurring, extremely sticky revenue.

That's not to say that microservices are useless - they are useful sometimes. It's just that "nobody ever got fired for moving to AWS", and when the incentive structure does not encourage more robust engineering and/or cost savings, your money won't be spent wisely, because nobody gives a shit about that.

As with any people problem, the only way to push back against this is by making the desired state the lowest energy state. This can be done several different ways, none of which have anything to do with engineering arguments, at least not if you want it to be effective against such very human things as greed and desire for social status.


I think it's kind of both. When you have a lot of people working on a monolithic code base, one of the places where you feel the most friction is the build process. If CI takes 30 minutes, for example, and you have 80 developers working on the code base (roughly the stats from my previous job), then you can see how just coordinating the build/merge/rebase/build/merge cycle can be an issue without good tooling and practices. At some point service boundaries are inevitable, and shipping the org chart is something we've always done.

But what is actually happening in some companies is just so far beyond this. There are developers I've talked to at companies where the micro-services outnumber the developers by 5:1 or more. That is insane, and I imagine it did start with the kind of empire building you are talking about.


> I think its kind of both.

Yes. I'm old enough to see nuance. I do not assert that these are the only causes. I only assert that they are the primary causes most of the time.

As a side note, there's something deeply wrong if CI takes 30 minutes. This usually indicates that all tests are re-run every time, which is something you can easily avoid by e.g. using a modern build system such as Bazel, which will only re-run the affected subset of tests when something changes, because it is able to track all changes to the transitive closure, including data.
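A toy sketch of that caching idea (not Bazel itself, and the file/test names are invented): hash a test's declared inputs and skip the run when nothing in them has changed since the last green run.

    import hashlib, json, subprocess, sys
    from pathlib import Path

    CACHE = Path(".test_cache.json")

    def inputs_digest(paths):
        # Content hash over the test's declared inputs (a stand-in for tracking
        # the full transitive closure the way a real build system does).
        h = hashlib.sha256()
        for p in sorted(paths):
            h.update(p.encode())
            h.update(Path(p).read_bytes())
        return h.hexdigest()

    def run_if_changed(name, inputs, cmd):
        cache = json.loads(CACHE.read_text()) if CACHE.exists() else {}
        digest = inputs_digest(inputs)
        if cache.get(name) == digest:
            print(f"{name}: inputs unchanged, skipping")
            return
        subprocess.run(cmd, check=True)   # only re-run the affected tests
        cache[name] = digest
        CACHE.write_text(json.dumps(cache))

    # e.g. run_if_changed("billing_tests", ["billing.py", "test_billing.py"],
    #                     [sys.executable, "-m", "pytest", "test_billing.py"])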

Also 80 people on a team sounds like a nightmare. I don't think I've ever seen a monolithic team this large in my 25 years in the industry, some of which was spent at Microsoft, of all places. The best size is that of a large family - 5-7 engineers. That way communication overhead does not dominate, and you can still do very sophisticated and substantial things. Beyond this magic number the productivity growth is usually negative.


I like this way of thinking about services; decoupling the "project model" from the "deployment model" is a useful framing.

Some random thoughts:

* What build toolchains are suited to these arbitrary DAG arrangements? Bazel? Perhaps I missed a reference in the article, but I'd be interested in the author's take on this, and of course any thoughts from the community here.

* The "testing monolith" is a pattern that I've used in a less well-named fashion, and it's great for cases where lots of code-services need to execute a business process that might span weeks or months; building a test rig to mock time across an ensemble of microservices sounds like an interesting challenge; mocking time inside a single process running the combined logic of all the services is much more palatable. (This isn't really possible if you use multiple languages though, so it only gets you so far).

* DDD bounded contexts as service boundary -- this is a good starting point, especially if you take a loose definition of "service". Under the DDD definition of "Service" you can actually have multiple deployables running; for example a typical Django/Rails monolith will have a DB (SQL) and an async worker (Celery/Sidekiq via Redis/RMQ) and perhaps a cache (Redis) so it's really a bunch of different deployables, even if we refer to it as a "monolith". Likewise with smaller services. If you think of the Service as being a constellation of processes with an external API, then you can start splitting out parts into separate deployables without the outside world caring, say to scale a particular workload independently of the rest of the Service logic. This is kind of the direction that Uber ended up moving in with their "Domain Oriented Architecture" (https://eng.uber.com/microservice-architecture/). This is actually how Django monoliths already work; you use the same codebase to specify your sync API workers, and your async Celery tasks, and you'll deploy them as separate deployables from the same repo.
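Rough sketch of the single-process fake-clock idea mentioned above (all names invented): the code of two "services" is imported into one test process and driven by one injectable clock, so a weeks-long business process can be exercised in milliseconds.

    from datetime import datetime, timedelta

    class FakeClock:
        def __init__(self, start):
            self.now = start
        def advance(self, **kwargs):
            self.now += timedelta(**kwargs)

    class SubscriptionService:          # "service" A, imported as plain code
        def __init__(self, clock):
            self.clock, self.expires = clock, None
        def start_trial(self):
            self.expires = self.clock.now + timedelta(days=30)

    class BillingService:               # "service" B, imported as plain code
        def __init__(self, clock):
            self.clock = clock
        def should_invoice(self, subscription):
            return self.clock.now >= subscription.expires

    clock = FakeClock(datetime(2020, 11, 1))
    sub, billing = SubscriptionService(clock), BillingService(clock)
    sub.start_trial()
    assert not billing.should_invoice(sub)
    clock.advance(days=31)              # "a month passes" instantly
    assert billing.should_invoice(sub)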


Author here. Thanks for reading.

> * What build toolchains are suited to these arbitrary DAG arrangements?

In general any CI/CD tool that allows for easy composition of jobs/pipelines, where versioned artifacts are a first-class citizen and can be the outputs and the inputs of jobs. Preferably one where the graph is emergent, i.e. just a consequence of declaring which "jobs" (taken loosely) depend on which artifacts.

I've had good experiences with GoCD https://gocd.org and Concourse https://concourse-ci.org, the latter being a fresher, younger take on these concepts. I haven't surveyed the landscape recently so there may be other tools that work well.
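To make the "emergent graph" idea concrete, here is a toy sketch (job and artifact names invented): jobs declare only which versioned artifacts they consume and produce, and the DAG plus a valid execution order fall out of that.

    from graphlib import TopologicalSorter  # Python 3.9+

    jobs = {
        "build-lib":   {"in": [],                           "out": ["lib-1.4.2"]},
        "build-app":   {"in": ["lib-1.4.2"],                "out": ["app-7.0.1"]},
        "integration": {"in": ["app-7.0.1"],                "out": ["test-report"]},
        "deploy":      {"in": ["app-7.0.1", "test-report"], "out": []},
    }

    # Which job produces each artifact, and therefore which job depends on which.
    producers = {a: name for name, job in jobs.items() for a in job["out"]}
    graph = {name: {producers[a] for a in job["in"]} for name, job in jobs.items()}

    print(list(TopologicalSorter(graph).static_order()))
    # -> ['build-lib', 'build-app', 'integration', 'deploy']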


I think we need to go back and think about what “micro service” really means. In my company the terms “Web API”, “Service” and “Microservice” are pretty much the same. The only difference is that “Microservice” is cool now.


Absolutely, it's applied to everything from tens of thousands of RPCable routines to webapps backed by a handful of 'polyliths'.


Real life Microservices documentary https://www.youtube.com/watch?v=y8OnoxKotPQ


My take:

1. "I have this application, but I need to process batch jobs / run heavy workflows / do ML etc..." -- definitely split into multiple services.

2. "I have this application, but this one feature I want to add is implemented really well as a library/app that's not easy to integrate" -- same spirit as 1 -- also a good candidate for SOA

3. "I have an application for end users, and various internal tools that differ in their quality/security/privacy requirements" -- probably a good reason to build those separately (but not a necessity)

4. "I have an application and it doesn't scale well with my growth" -- this is one of the most common reason behind implementing microservice architecture, but I think it requires a lot more thought than just 'yup, let's obviously do horizontally scalable microservices'

5' "Our monolith is slow to compile, run and test" -- careful with this one: it's probably easier in the majority of cases to fix your tests, builds, and runtime speeds, than start splitting up your (probably already very complex) application into services

6. "Our org is split into multiple functional groups and we want to move independently" -- not a good reason for SOA/microservices: you're increasing eng complexity and reducing org collaboration and risking a lot of work overlap.

7. "Microservices have been successful in [Big Company]" -- not a good reason. The benefits/trade-offs are usually fairly unique to each organization and require careful "inward" thinking instead of following trends

8. "One microservice per business function is a good pattern and we're going to mandate it" -- terrible idea. Top-down eng culture mandates prevent better solutions from even being considered. Don't do this at your company.


Honestly, a confusing article to read. I think it's difficult to put into abstract words when microservices should be used without bringing up anecdotes of when they shouldn't have been, or... to say it's simply 'start with one, split with reason,' which is so simple it's hard to follow.

I enjoyed reading it, several times.


Overstated. Makes the usual "I hate blah blah" essay mistake of confusing the way something is understood and implemented in the general dev community with what it actually is.

Most of the time, things start off for good reason and represent good practices. We just butcher the hell out of them when we start selling them to one another. (Same thing happens in reverse; once enough people destroy any goodness left in a buzzword, people oversell how bad the idea is when they start pitching their plan to get rid of it)

To be clear, I don't disagree with the premise that lots of microservices implementations are a clusterfuck of monumental proportions. It's just that ranting and raving about how devs and dev shops screw stuff up isn't exactly news to anybody. That's the natural state of affairs no matter what your flavor-of-the-week.

We get close to something useful near the end, "...focus on the right criteria for splitting a service instead of on its size, and apply those criteria more thoughtfully..."

The rest of this looks like a rehash of generally-accepted architectural principles, most of which were misapplied and resulted in us getting here in the first place. I'm not going to line-by-line critique this. There are far too many points to counter and I can't imagine the discussion keeping any sort of reasonable cohesion (hardy har har) if it spreads out that wide. Oddly enough, I find this discussion of why monoliths might be a preferable default state too monolithic to split up into reasonable chunks to analyze. Much irony there.

We got to microservices (or I might rather say we _returned_ to microservices) by following good code organization principles. Instead of starting with the resulting implementation (such as the DAGs and various models in this essay) and trying to argue first principles, it's better to start with first principles and then come up with criteria for evaluating various results.

I feel I would be neglectful if I didn't add this: if your code is right, whether you're deploying as a microservice or a monolith shouldn't matter. That's a deployment decision. If it's not an easy deployment decision, if you can't change it by flipping a bit somewhere, then either your first principles are off somewhere or you made a mistake in implementing them. The way you code should all be about solving a real problem for your users. How you chunk your code and where those chunks go are hardly ever one of those problems.
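One hedged sketch of the "flip a bit" idea (all names and the INVENTORY_URL variable are invented): business code depends on an interface, and configuration decides whether the implementation is an in-process call or a remote one.

    import os

    class InventoryPort:
        def stock_level(self, sku: str) -> int:
            raise NotImplementedError

    class LocalInventory(InventoryPort):        # monolith deployment
        def __init__(self, table):
            self.table = table
        def stock_level(self, sku):
            return self.table.get(sku, 0)

    class RemoteInventory(InventoryPort):       # split-out-service deployment
        def __init__(self, base_url):
            self.base_url = base_url
        def stock_level(self, sku):
            import requests                     # only needed in remote mode
            resp = requests.get(f"{self.base_url}/stock/{sku}", timeout=2)
            resp.raise_for_status()
            return resp.json()["level"]

    def make_inventory() -> InventoryPort:
        url = os.environ.get("INVENTORY_URL")   # the "bit" being flipped
        return RemoteInventory(url) if url else LocalInventory({"sku-1": 3})

    print(make_inventory().stock_level("sku-1"))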


>...dev shops screw up stuff isn't exactly news to anybody. That's the natural state of affairs no matter what your flavor-of-the-week.

I vote the next flavor of the week paradigm should be centered around this. Devs and dev shops seem to mostly only be able to clusterfuck all the things eventually. What we need is software that is Developer Proof.

I'm going to write the Developer Proof Manifesto.


Microservices are popular because managing large teams is a pain in the ass and creating a small team to spin off some new business case is really easy to manage. You get budget, you create the new team, if it sucks, reorganize or fire the team (and offload the services to other teams).

I'm telling you, it's all Conway's Law. We literally just don't want to think about the design in a complex way, so we make tiny little apps and then hand-wave the complexity away. I've watched software architects get red in the face when you ask them how they're managing dependencies and testing for 100s of interdependent services changing all the time, because they literally don't want to stop and figure it out. Microservices are just a giant cop-out so somebody can push some shit into production without thinking of the 80% maintenance cost.


Last August Delaware rolled out the laws for new Protected Series LLCs. After founding one main LLC for $300, you can then create an unlimited number of LLCs under it, each with their own debts, liabilities and obligations which cannot be enforced against another series, or the original founding LLC as a whole.

If one series LLC gets ensnared in a legal dispute, the others can continue operating as usual.

You can probably guess where I'm going with this. The idea is a tech company can use this new structure to put each microservice into its own LLC and basically operate as its own company that communicates with others through a rich API and formal company communications.

What's useful about this is you can use it to get around GDPR and other 21st century issues. Microservice LLCs could basically launder data among themselves, buying and selling it, and using creative Hollywood-style accounting. When a service gets sued for some privacy violation and threatened with a fine, it could shut down, go out of business, and be replaced by a new LLC that rises mysteriously from nowhere. It then becomes increasingly expensive to pursue litigation. Every time you get close to levying a fine or getting justice, the target evaporates and is replaced by entirely new companies with a whole new corporate structure.


This has to be the craziest thing I've read in 2020. So "sudo llc init" is going to be a thing soon?


Absolutely, it would be trivial to create some kind of program that could spin up LLCs on the fly once you have founded the main series.


Speak for yourself, pet lover! I’m setting mine up from a Terraform file.


Oh wow, AFAIK this is basically how AMZN already operates.

If they transitioned to this setup, there would be nothing for regulators to point at for antitrust enforcement.


Monoliths are not good or bad. It's just that monolithic architecture makes it easy to add coupling between dependencies, and that is the real problem of software engineering. Loose coupling is the goal.

So to me, microservices make it hard to introduce coupling between parts of the system, which makes the system overall easier to maintain, with less tech debt.


I agree with your notions of what the problems are, and what the goals ought to be.

Though I think microservices didn't actually make it hard to introduce coupling between parts of the system.

It's the same old story of how everyone is doing Agile wrong. The cargo-cult implementations fail to deliver on the promises because they don't actually follow the principles.

"Microservices" often means little more than "not the monolith". It's a pretty low bar. The loosely-coupled property is what is important. If that doesn't hold, then you've almost certainly made things worse. If it holds then you've almost certainly created a path towards making things better.


What are the problems caused by tight coupling?


Queues of work that block on all the dependencies during every part of the SDLC, which increases lead times, decreases deployment frequency, and can increase the blast radius of failures, leading to longer and more severe outages.


Did you seriously mention CORBA as an API interface method in 2020?

As a corporate contractor who had to use CORBA back in the iInterface days of Win95, just the mention of it spins me into a semi-PTSD fit.

Please...REST is a solid, simple, and powerful transport method and let's maybe do without CORBA moving forward.


Hah! Not seriously, no.

> REST is a solid, simple, and powerful transport method and let's maybe do without CORBA moving forward.

Chill. :-) The world agrees with you.


I remember some fun with CORBA as recently as a few years ago. I wholeheartedly agree.


Versioning and dependencies are just tools to achieve the expected behavior of an API. You still need to change versions in a centralized way; the only difference is that a monolith will not let you do that partially. Example: there’s a vulnerability in a package, but only the gateway is exposed externally. As long as the gateway still produces valid JSON/gRPC etc. requests, it doesn’t really matter; you change the lib version there without having an impact on all the other services that might be affected by a library version change.

It’s clear that the article is heavily affected by the outdated Java way of building software, where everything is injected via DI and AOP, so even a small change in a core component is a big deal.

No, it’s not. As long as the microservice does what's expected and passes its unit tests, I don’t care, even if I change the language, the DB, or the network layer.


If I may join in this game of hubris, I have discovered a pattern as well.

Report 1: Milk is amazing for you.
Report 2: We were wrong, milk is terrible for you.
Report 3: Milk may have some benefits.
Report n: Whew, milk is good within bounds.

Repeat for chocolate, microservices, meditation, religion, wine, nuclear power


This is called the dialectic, after the German philosopher Hegel.

He called it: Thesis, Antithesis and Synthesis.

https://en.wikipedia.org/wiki/Thesis,_antithesis,_synthesis


counter-examples: heroin, opiates, rat poison, etc.


Also, wine. No amount of alcohol over any time period has been found to be beneficial; it was just feel-good crap wineries/breweries told us.


Alcohol can be beneficial as a "social glue", whether you personally believe it or not. Within reason it's fine and won't make you an alcoholic. So it doesn't fit the counter-examples.


Of course there are happiness benefits to it in social settings, not disputing that. Just saying that, in a vacuum, any amount of alcohol is harmful to the human body. The "1 glass of red wine per day" is BS.


Possible, but the generally held belief is that a bit of red wine is good for you.


Those have all reached the "good within bounds" stage, no?


How about smoking?
Report 1: Smoking is good because it makes you more masculine.
Report 2: Smoking is bad because it causes lung cancer.
Report N: Smoking is still bad.


It's simply Conway's law more explicitly materialised.

Essentially the organisation wants to hire more engineers because that makes it all go faster, right? But then you can't have all those engineers in one team, so you create lots of small teams. This means you'll get lots of small systems. You then want to make them "autonomous", so they all get their own repo and CI/CD. And then, before you know it, you're in microservice "nirvana".

There is also that org where one guy read a blog post about Netflix once and then...


This is the most correct answer here. "Your products will mirror your organizational structure." See "Building Microservices: Designing Fine-Grained Systems" by Sam Newman, chapter 10


Can someone explain where a system would fit which breaks off large chunks (auth, profile, shopping cart, PDF generation) but doesn’t go as granular as per-function services?

Aside from PDF generation (which is always a mess), I can’t see why anyone would think a more granular approach would be better in basically anything that’s not Amazon or eBay.


That would be microservices. Like you said there’s rarely a need to break apart services beyond that, and that’s fine.


> but doesn’t go as granular as per-function services

Nobody anywhere says microservices have to be single-function-sized services.


The idea that the "Project model" and "Deployment model" could look totally different is an interesting one.

Can't remember seeing that in practice, other than many libraries combined into a "monolith". Anyone have other examples?


Honest question:

Is there a type-safe way of communicating across microservices without duplicating verification logic, writing extra layers for the protocol overhead and conceding to more cognitive load during programming?


If I go the microservices route, I usually implement a service-client library for any service I expect to be called internally often. This way service A, which needs to communicate with service B, can just pull service-client-B into the project and call its methods. Most verification/error handling/transport is abstracted away in the service-client library. This library usually also includes model/DTO classes.

This approach only works well if most of your services are using the same language (and sometimes framework).
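A minimal sketch of such a service-client library, assuming a hypothetical "service B" with a /profiles/{id} endpoint (names, DTO fields, and error handling are illustrative only):

    from dataclasses import dataclass
    import requests

    class ServiceBError(Exception):
        """Any transport or validation failure while talking to service B."""

    @dataclass
    class UserProfile:                  # DTO shipped with the client library
        user_id: str
        email: str

    class ServiceBClient:
        def __init__(self, base_url: str, timeout: float = 2.0):
            self.base_url = base_url.rstrip("/")
            self.timeout = timeout

        def get_profile(self, user_id: str) -> UserProfile:
            try:
                resp = requests.get(f"{self.base_url}/profiles/{user_id}",
                                    timeout=self.timeout)
                resp.raise_for_status()
                data = resp.json()
                return UserProfile(user_id=data["user_id"], email=data["email"])
            except (requests.RequestException, KeyError) as exc:
                raise ServiceBError(str(exc)) from exc

    # In service A:
    #   profile = ServiceBClient("http://service-b.internal").get_profile("42")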


Wow. This cuts through a lot of the noise on microservices for me.


Likely a seminal article


I’m saving this article to reread at work a few times. The M word has been mentioned :-)


Sometimes a monolith is the way to go and sometimes microservices are the way to go; it really all depends on a number of factors.


I think this summarizes the article:

> There is no substitute to the effortful application of cognitive power to a problem.


Related: Any good recommendations for resources regarding architecting service-based systems?



