Why Segment Went Back to a Monolith (infoq.com)
604 points by BerislavLopac on April 29, 2020 | 316 comments



I think that the problem here was that they were fighting against Conway's Law: https://en.wikipedia.org/wiki/Conway%27s_law

> Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure.

I think microservices work well in organizations that are big enough to have a team per microservice. However if you've just split your monolith up and have the same team managing lots of microservices you've made a lot more work for the team without the organisational decoupling which are the real win of microservices.

In my experience it is really difficult to fight Conway's Law; you have to work with it and arrange your business accordingly.


As with a lot of things, it comes down to communication. Between teams, and between the services they write. Which is just another expression of Conway's Law.

IIRC Fred Brooks pointed out that the # of bugs in a system correlates closely with the # of lines of communication within and between the teams. Joshua Bloch recommends in "Effective Java" that, if possible, 3 potential clients should participate in the design of an API, for the same reason. So a well-designed interface or OpenAPI spec is worth its weight in gold.

Of course, "microservices" here means separate running instances available on a network. But monoliths can be "service"-oriented as well. OSGi was good for this in Java, but any system able to load shared objects or plugins dynamically can follow the same pattern. And the benefit is that, if your app hits the jackpot and needs to scale outwards, the service interfaces, i.e. the lines of communication, are already well-defined.

So, service-oriented monolith first, then microservices if needed.
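
To make that concrete, here is a minimal sketch of a "service" inside a monolith; all names (Order, OrderService, LocalOrderService, RemoteOrderService) are hypothetical, not from any particular framework:

  // The "line of communication": an explicit, in-process service interface.
  interface Order { id: string; total: number }

  interface OrderService {
    getOrder(id: string): Promise<Order>;
  }

  // Inside the monolith: just a module behind the interface.
  class LocalOrderService implements OrderService {
    constructor(private readonly orders: Map<string, Order>) {}
    async getOrder(id: string): Promise<Order> {
      const order = this.orders.get(id);
      if (!order) throw new Error(`order ${id} not found`);
      return order;
    }
  }

  // If the app ever needs to scale outwards, only the wiring changes.
  class RemoteOrderService implements OrderService {
    constructor(private readonly baseUrl: string) {}
    async getOrder(id: string): Promise<Order> {
      const res = await fetch(`${this.baseUrl}/orders/${id}`);
      if (!res.ok) throw new Error(`order ${id}: HTTP ${res.status}`);
      return (await res.json()) as Order;
    }
  }

Callers only ever see the interface, so swapping the local class for the HTTP-backed one is, at least at the call sites, the whole migration.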


> As with a lot of things, it comes down to communication. Between teams, and between the services they write. Which is just another expression of Conway's Law.

This is so accurate. I've heard engineers state, chillingly, that not needing to communicate is a positive of microservices, like "we won't need to talk to each other if all of us are working on different services". My other favorite is using microservices as an excuse for why the product isn't working: "oh, my service is working fine, but his service is doing this when it shouldn't", when we're on a small engineering team.


> not needing to communicate

sighhhhhhhhh

API documentation is a medium of communication as much as any user interface.

If you don't keep this in mind, then using your service's application programming interface will be a bad experience.


I think there's an element of truth to the engineers' claims. Working on different code bases means there are a lot of things you would otherwise need to talk about that now you don't. It's very much the case that you still need your interfaces to be clear (in fact, clearer!) but those discussions can be somewhat isolated, so more work can proceed asynchronously. Just how isolated depends on how exact (and correct) the specifications are, which is a question of trading up-front work against interruption.


> So a well-designed interface or OpenAPI spec is worth its weight in gold.

When I worked on a SOA team, I tried to begin any new effort (whether a new API or a modification to an existing API) by discussing only the API contract. It was (ideally) high-level enough that business analysts and project managers would understand it, and it helped guide us away from getting mired in implementation discussions too early.

At that organization, we rarely had the opportunity to involve multiple customers at the same time during design discussions (we were typically engaged to help a specific consumer implement a specific feature), but the institutional memory in the SOA team helped us to keep in mind existing/potential other users of each particular webservice.


The last place I worked that split the devs into UI and backend teams was in a sort of slapstick comedy situation. Nothing ever shipped on time because the front end and backend could never quite talk to each other or needed elaborate conversations to do the simplest of things. This was our new flagship project, I got consolidated in from another team and ended up as a lead not long after.

We had been doing some UML modeling, sequence diagrams during planning, and still having this problem, so rather than repeating the same action and expecting a different outcome I started trying to flip the script. What ended up working was not code diagrams but data flow diagrams and sequences. To get X you need Y, and to derive Z you need A, B, and X. To publish you need all five.

After that, the APIs mostly wrote themselves, we reordered a few different forms, but most importantly variance dropped like a rock.


Can you share examples of your data flow diagrams? Do any open source projects share these documents?


Aside, take a look at tools like PlantUML as a way to create your diagrams. It's higher-level than, say, rolling everything with Graphviz, while easier to share and edit than a bunch of PowerPoint/Visio/etc. files.

The great thing about generated diagrams is that you can easily store and version the original text representation along with the code it describes or applies to.

https://plantuml.com/


Mostly these were whiteboarded, but essentially I/we would draw a collaboration diagram (although I could have sworn these used to be called something else). They showed what data was needed to make certain decisions (e.g., a conditional drop-down that is populated based on another piece of data, or complex validation steps) and where to get data that already existed.


activity diagram?


Yeah it looks like the activity diagram was substantially altered in UML 2.0. What we called an activity diagram then looks more like a collaboration diagram now.


Yeah, the problem with microservices is that the organisation structure is wrong. I've literally heard every excuse about microservices at this point. My architecture is better but it doesn't have a snappy name; it's called the smallest possible number of services that can be reasoned about. Network partitions are NOT necessary to create bounded contexts in a codebase; often just a directory is FINE.


I agree. I hate the term microservice for the same reason I hate superlative-infected clickbait titles. There's no need for half of the word to exist. Service. What's wrong with service?


There was a period of time when Service had a different meaning than microservice. A service traditionally may exist across bounded contexts and be almost a mini-monolith whereas a strict microservice should touch very few data models and exist strictly in a bounded context.

Of course real life is messy and plenty of people realized writing small single purpose services was valuable, and plenty of people build giant "microservices" that have nothing to do with the original term and are just badly constructed monoliths.


I agree with you, but I find some value in people using that term - it signals to me that I should consider that the architecture was prematurely split up and could suffer from the various pitfalls associated with microservices.


Microservice implies systems that are decoupled for deployment purposes. For example, Microservice A could restart to a new version while Microservice B keeps running. This is a more complicated interaction contract than services where their deployment is coordinated in concert.


But this was true in the middleware type of products too, and you can't get more monolithic than that.


I don’t think this is accurate. I’ve worked at companies that did “service-oriented architecture” long before the rise of the term “microservice” and it was clearly recognized that different “services” shouldn’t be so coupled together you can’t redeploy them separately.


This thread considered an issue: whether services and microservices are equivalent concepts. They are not. There is a quality that is held by Microservices, yet which is not universally held by Services.

You have observed that other services also have that quality. Indeed. Nowhere did I say, "all services with decoupled deployment are microservices".


Revisiting this: I can see where brown9 is coming from. I could have avoided leaving that interpretation open by writing, "here is an example of a quality that is held by all X, yet not by all Y".


Yes, you're right. A system that requires services to be deployed together is just a distributed monolith.


> There's no need for half of the word to exist.

Yes there is. A service is a very generic concept, to the point that it's only relevant as a high-level concept.

The concept of a microservice makes all the sense in the world if you look back to where we came from: web services. Compared with all the work, requirements, and complications of SOAP, WSDL, UDDI, and everything around them, just sending small JSON payloads around, and being able to peel off smaller services with that architectural approach, was a far lighter and less complicated way of doing business.

I mean, the name microservices becomes obvious once you look back and all that you see is macroservices.


Plenty. A subroutine is a service. A library is a service. A Windows daemon is a service. The vendor I just inked a contract with provides a service. A web service is a service.

I really hate that word when used without further definition.


> A subroutine is a service.

Which makes the term "microservice" even weirder, given that any microservice is going to be bigger than a single subroutine.


It is probably too late to change the name. But you have a good point: the "micro" prefix is highly misleading. Furthermore, there is little guidance in the literature on how big microservices should be.


Or an NPM package works nicely (or .NET assembly, Ruby Gem, Java whatever, etc.)


Library code for sensibly defined pieces 100%... but if you aren't sure of the abstraction, copying code can be more forgiving than making a mess.


That's not what the person you are replying to said though, "the organizational structure is wrong". More like: It is a mistake to use microservices UNLESS you have a certain organizational structure/capacity already.

I think they were saying something more aligned with your opinion than you read it as.


We are dealing with poorly defined terms. However, services mapping ~1:1 to teams was generally called service-oriented architecture, not microservices. Microservices involved breaking things into even smaller chunks, so backing off of that idea really just means SOA as originally defined is a bad idea.


That's not quite right: SOA as originally defined had no mapping to team structure or deployment runtime; it was mostly about defining discrete service interfaces and ensuring your clients used that contract rather than back channels to communicate. Most often you had a dozen services running in a single app server cluster. Conway's law was rarely discussed (with some exceptions).

Microservices tended towards a single runtime per service, ensuring the deployment lifecycle was tied to the build lifecycle and thus allowing for independent evolution.


I am not saying that’s how SOA was defined, just that it was used to refer to such team organization around architecture. EX: Amazon famously uses a Service-oriented architecture where a service often maps 1:1 with a team of 3 to 10 engineers. https://en.wikipedia.org/wiki/Microservices

At the beginning Microservice was generally viewed as more granular than SOA, though that’s been backed off of.


The general view of microservices was largely invented out of thin air ;) When you look at Martin Fowler's wiki or Adrian Cockcroft's presentations, which were the originating popularizers of the term, it was all a reasonable refinement of SOA.

But then you'd get some that would make bizarre claims, like a microservice must be under 100 lines of code. :shrug:


People were not pulling that from thin air.

Cockcroft's Rule of Thumb:

• Can complete a service in two weeks or less (completed = coded, tested, and in production)
• Fits in "one or two developers' heads"

At that rate you quickly hit hundreds of services.


Noo! Building teams around software components cements your architecture and prevents most cross-cutting improvements.

I'll claim that splitting a well-structured monolith into microservices will always make it less maintainable, but it might be worth it if you need to for some reason like elasticity or failure tolerance.

But for the love of god, keep the design open. Don't tie the existence of internal software components to people's livelihoods.


> Don't tie the existence of internal software components to people's livelihoods.

The claim is that such ties, at the macro-structure level, are inevitable and exist regardless.

The point is then to determine the best way either to restructure the organisation, or, the code base, to cope.


I think the ties arise because people are actively seeking areas of responsibility. Software components are an obvious grab if your eyes are on the software specifically. But there are other ways of dividing your teams, based on, for instance, customers, use cases, or aspects of the code (performance, security).

The problem is that the software usually keeps expanding until programmers find it hard to cope. If you split teams up so that some people are only concerned with a certain part of the codebase, chances are you are going to grow the size of the codebase by a quite large factor.

I think there should be an incentive in place to keep the codebase small and understandable by most.


It's pretty hard to keep the design open once the whole architecture is bigger than what a single programmer can keep track of. Say, the Linux kernel. The overall architecture is fixed, there's no way around it. At that point, splitting into components that are maintained separately does no harm. AFAIK the Linux kernel is maintained like that already in practice, even if it's a single repo.


I agree with you in such cases, but I'm willing to bet that most codebases don't need to be as big as they are, and that it's better to create an incentive to collaborate and keep the codebase maintainable and small.


The opposite of "has a team around it" is "abandoned". Or at least low down on somebody's priority list.


That's generally true, and it's a big problem with microservices, because they need so much upkeep.

But if your code is living as a few hundred or thousand readable lines in the common codebase, that isn't really a problem. The code is there, readable and working, and if anyone needs to change it they can. If it falls out of fashion, it can be deleted.


I've seen this pendulum swing both ways, often within an organization. Cross-functional teams owning code bases allow divergence to specialize and ownership of a release; teams with a single functional focus allow efficiency of work and cross-cutting gains.

Both have their boatloads of suck, neither is inherently better. Interestingly, trying to mix them to get the benefits of each doesn't seem to invalidate any of their downsides; often it exacerbates them.


What is your alternative? Tying "the existence of internal software components to people's livelihoods" across the expanse of the entire codebase is the only remotely effective approach I've seen to managing the SDLC at scale.


"What is your alternative?"

Aggressively small teams, with no hands-off middle-management layer.

You can build massive capability around a small number of well-managed message-backbones and a single codebase. By keeping the number of hands small and the structure flat, you force high standards. (Skilled staff won't tolerate distractions caused by bad engineering or inadequate automation.)

Heuristic for analysing firms: who has strategic power in decision-making? Conventional answer: a group of hands-off middle-managers who run on meeting tempo, and who are valued by how many people and systems report into them. Under AST (aggressively small teams): an engineering effort running on maker tempo in cooperation with a hands-on sales effort.

Microservices tend to have multilateral contracts with other systems in the organisation. This steers all planning towards meetings. This creates middle-management bloat.


Is there any example where this works (articles, presentations, etc)? In particular, anywhere with more than a couple dozen developers?


Amazon has a famous love for what they call “two-pizza teams” and you can find writeups about the philosophy by searching the term. The joke is that a team should be small enough that you only need to order two pizzas to feed them all. The philosophy is about the number of participants in the decision-making process. Keep teams small and give them total ownership of decision making so that decisions can be made by a small group of people who work with each other every day. That way no meetings (and certainly no cross-team meeting) need to happen for most decisions to be made.


Amazon is very well known for having A LOT of middle managers too, so I'm not sure it's a good example?


Seems sorta reasonable that if you need a manager for every 6-8 engineers, you would end up with a lot of managers.


OP's post was "Aggressively small teams, with no hands-off middle-management layer". Teams of 6-8 SWEs plus a hands-off people manager reporting to a middle manager, who reports to a director, is how Amazon organizes teams; therefore it isn't an example of what their suggestion was...


> The joke is that a team should be small enough that you only need to order two pizzas to feed them all.

That's a tricky way to measure, given that I can eat a large pizza myself in a single sitting ;)


Think of all the open source libs. Generally speaking, anyone can contribute to any part of the project.

That's not to say that some people are better than others at certain parts of the codebase, but you don't want people fighting to keep old cruft in because it's on their job title (figuratively speaking).

You can organize around customers, use-cases, platforms, concerns or other things. Some might naturally map 1-1 to software components, but the software component should not be the raison d'être for a team; rather the customer experience or something else which can transcend multiple iterations of the software.


I see, you meant things in a more literal sense. I generally agree with you in that case, that the customer experience should be the thing which the team owns, which incidentally involves owning software components. But on the other hand, it's also certainly the case that at a company of a given size or in a given sector, certain kinds of software components and infrastructure are not directly customer facing and yet must be owned in house, and logistically serve as one of the (if not the only) competitive advantages over competitors.

Is it wasteful to have whole teams at GOOG, FB et al owning and improving the state of the art of infrastructure? It depends. At a certain point, there are enough internal customers for teams to reach contribution margin positive on engineering initiatives that have no direct but only second order effects on customer experience.


Mmmmyes and no. Depending on the size of your project, that may not be the case. I've had to work with two titans of monoliths, maintained by relatively small teams (anywhere between 2 and 6-7 people for several million lines of code). At some point managing a codebase this big within a single project becomes a huge burden, for both developers and even more so for those who develop and do code reviews (first-hand experience right here). At times I've spent 3 weeks straight doing code reviews, with 2 notebooks filled with notes and diagrams of the different components inside the code. And at that point, the easiest and most sensible thing to do is chunk out large parts of the project and put them aside as a microservice with adequate amounts of tests. For small projects, microservices make little to no sense. But in the case of something the size of AdWords (which my two such experiences can be compared to), you are playing with a raging lion if you decide to go monolith.

My argument here is that it's not so much the size of your team but rather the size and scale of your project that needs to be taken into consideration.


Good monoliths are highly modularized. But it's a whole different thing to package up a module as a separately deployable unit for external "public" use (external to your app, that is, not your company).

I'm just curious to know, when you said "the easiest and most sensible thing to do is chunk out large parts of the project and put them aside as a microservice"... were these chunks separately deployable units for external "public" use?


I think this is actually one of the reasons that microservices became a thing to begin with: teams wouldn't actually apply engineering best practices.

Microservices actually make you encapsulate your code, at least within each microservice, because nothing outside can call into it directly. They don't necessarily force you to implement the single responsibility principle, but they do a good job of pushing you. Microservices implement a service-locator pattern through DNS or web routing, one form of the dependency inversion principle. Microservices make you pass data around as entities, instead of Active Record instances.

The price for this sort of thing is very steep, though; distributed systems are inherently icky, harder to trace, and more prone to failure, and besides this, you've added network overhead to each service call.

I wish more engineering teams would consider spending half the effort of microservices on simply disciplining their monoliths. They might get somewhere...
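
As a tiny illustration of that discipline applied inside a monolith (hypothetical names, just a sketch, not anyone's actual codebase): a module can hand out plain entities instead of its storage records, which is exactly the boundary a microservice would otherwise force.

  // Internal storage shape: never leaves this module.
  type UserRow = { id: number; email: string; passwordHash: string };

  // What callers get: a plain, read-only entity.
  export interface UserEntity {
    readonly id: number;
    readonly email: string;
  }

  const table = new Map<number, UserRow>(); // stand-in for the real data store

  export function getUser(id: number): UserEntity | undefined {
    const row = table.get(id);
    if (!row) return undefined;
    // Copy out only the fields callers are allowed to depend on.
    return { id: row.id, email: row.email };
  }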


> They don't necessarily force you to implement the single responsibility principle, but they do a good job of pushing you.

In my experience, if your services are developed by the same people, and not separated by teams, engineers will often tightly couple the services with fragile and opaque dependent changes regardless.

While in a monolith this is painful, at least you have a complete stack trace and the ability to run things through a step debugger to orient yourself. In a distributed system, tribal knowledge tends to be your only savior.

When we design systems, we need to spend more time thinking about what is most likely to happen as opposed to what we feel should happen.


>I wish more engineering teams would consider spending half the effort of microservices on simply disciplining their monoliths

100%. This is an uphill battle, though. I've encountered so many engineers who equate "real engineering" with "building giant machines." You just can't convince them otherwise.

I've watched people build giant, real-time stream processing pipelines comprising tons of moving pieces (Lambda, SQS, S3, SNS, Step Functions, etc.) to build... a reporting table, and all for... 1.3 GB of data. Literally.

Ultimately, despite the "sell," I don't think microservices as a forcing function for good practices works in practice. If the team lacks the skills to build a disciplined monolith, then they 100% lack the skills to build a distributed one.


Oh, all of those were heavily modularized to begin with. But that wasn't enough to keep them manageable. So in the end, what we did was figure out which were the core components between the different modules, isolate what they did, and put them aside in smaller microservices, which were easier to track, maintain and monitor. What was once the monolith is now arguably just an interface/API for all the heavy lifting, which is done by microservices.

Again, my point is that all this must be done depending on the scale and complexity of your application. If you are going to make an authentication microservice for an application that has 50,000 users, which simply fetches a username and compares a hash in a database, obviously you are doing it wrong. I am talking about applications which, in the simplest of times, operated on 24 different databases located in completely different geographical locations (the case of my first such monolith). Some of those databases used different engines. And due to the nature of the infrastructure and the requirements, we couldn't simply ditch everything and start over from scratch. So splitting everything into microservices was the only option. And this is something I was working on back in 2012 IIRC, so back when microservices were considered witchcraft by most people. And yes, I'm talking about several million lines of code and 2 developers - my inexperienced out-of-uni ass, and an utterly conservative dev twice my age. Took us around 6 months but the project was extremely successful.

There is this trend in technology - every few years everyone changes their minds about everything:

* 2012 - SQL is the best.

* 2016 - SQL sucks, NoSQL is the future.

* 2020 - NoSQL sucks, SQL is the best.

* 2024 - {fill in the blank}.

The same thing is happening with microservices. But in addition Docker, Kubernetes and recently unikernels have joined the party. The concept is the same though.

What I am trying to say is that either of those can be good or bad in different scenarios. It's a question of picking the most appropriate one for the situation.


The fun is that we have seen this so many times.

Sun RPC, CORBA, DCE, DCOM, XML-RPC, SOAP, REST, gRPC,....


You're right, and I think you've highlighted what makes a good monolith so hard to build and maintain.

You need to be disciplined to keep a monolith highly modularized. Microservices, in contrast, have an architecture that encourages modularization.


I don't know that you need to be much more disciplined to write a large application in a modular way vs writing any application in a modular way. A monolith could definitely get messy, though, if you write it how I see people write microservices.


If you've got 7 people maintaining millions of lines of code, you're going to have a heavy burden no matter what you do. Extracting a service does not a priori simplify anything. It can encapsulate and enforce a stricter boundary, and optimize compile time or test suite throughput and operations for the extracted logic, but it always comes with overhead, and if the interface between the services is not well-defined and stable it can easily be a net negative in terms of productivity, as you are now giving up your in-language tools for distributed systems tools. Now if you have large swathes of stable functionality, then it's easier, but at that point why not just isolate modules within the same codebase?


I cannot agree more: I worked at a company where we went from a monolith deployed on IaaS with a couple handful of engineers to Docker containers deployed on ECS with over 200 engineers. The main reason we did it was because Docker+ECS was cheaper than a bunch of EC2 instances and you can't effectively use 200+ engineers with a single monolith.

After 2 years we had over 450 microservices while keeping our AWS bill flat or slightly decreasing.


On the other hand, over 200 engineers on payroll is way more expensive than a couple handful!

Presumably you're getting significant value out of the additional engineering work in which case the architecture shift probably makes sense (to stay aligned with the expanded organizational structure), but there are also cases where a small and flexible team maintaining a simple monolith would be much more nimble and cost-effective.


In all honesty, I think the monolith/microservice distinction misses the point a little bit.

It's inevitable that the longer the codebase exists, the more difficult it is to maintain. It's a battle that you can't necessarily win and it's turtles all the way down as your dependencies, and their dependencies, tackle the same issues.

All it takes is one or two roughly defined APIs and you've already created the nucleation point for ever-more tech debt, and while you'll be able to tame some of it you won't manage all of it due to business requirements, or other teams depending on private APIs to save time, or whatever else you can imagine. Switch the architecture and you'll either have all your problems bunched in one codebase, or you'll have distributed your problems all over the place.

I'd go as far as saying that a perfect monolith and a perfect distributed architecture are theoretical ideals that require perfect communication to build them.


Maybe it's Conway's law, or maybe it's just that designing a distributed service is difficult, and when you break a monolith down, you're having to deal with distributing that monolith N times, and solving those CAP issues N times, which usually is not trivial. Not to mention tuning the network.


I don't agree with your premise that development structure == deployment structure. There are plenty of good ways of splitting up development of a monolith without the huge devops headache of deploying microservices.


A team per microservice? That sounds really wasteful. How many microservices need constant evolution?


But does Conway's law require microservices? It doesn't say anything about microservices.


> Melvin Conway, who introduced the idea in 1967.


Yes, I don't think that you need microservices to be able to tackle Conway's law. At the least, they don't have anything to do with each other.

You could still do microservices and still fail to deal with Conway’s law.


> You could still do microservices and still fail to deal with Conway’s law.

That's what the poster suggests happened. Nowhere do they suggest that microservices are overall required.


You don't tackle Conway's law. You can't. You use it in your favor by creating organizational structures that reflect the design that you want in your software.


There is nothing wrong with most developers working on and communicating about the entire code base. Having teams work in silos is not a benefit. You're touting as a benefit what is one of microservices' gravest issues - teams stop communicating beyond the surface level of their respective APIs.


Have you tried coordinating entire teams to work on a shared codebase?

Honestly, I have never been in an organization so large that this became a necessity (if you solve tens of different problems, that would require almost thousands of developers). But coordinating single developers without an API is hard enough already; I can only assume that for teams it's nearly impossible.


Define “codebase”? You can have multiple services, user facing apps or modules inside a single repository, but if there are no boundaries coordination will be difficult of course.


The definition implied by the GGP is: shared codebase = everybody will change the same lines; separated codebase = people will work on different sides of an API.

At least, that's what I understand from his comment.


> I think microservices work well in organizations that are big enough to have a team per microservice.

Presumably by definition we’re talking about a few hundred lines of code, or a couple of weeks development time here at most. What does this team do all day otherwise?


So you are saying something along the lines of: let's increase our development staff X-fold and then we can finally do the same thing that way fewer people are doing just fine right now?


They're clearly not saying that. If your team is too large to effectively work on a monolith, splitting it up can make sense, but you also need to split the team into smaller groups responsible for different parts. And if you don't end up with teams responsible for individual services, you likely split too small. And quite possibly, your staff isn't large enough to warrant it.


With microservices there's no way around it, as there's additional overhead when splitting a for loop between multiple services. Won't stop people from jumping on the bandwagon though.


Just because monoliths may have diminishing returns at certain team/project scale doesn't mean the scale itself is the problem...


The problem is with people trying to do "cool" things when completely unwarranted


I see a lot of places that seem to either think that:

1. Microservices will let them ship things faster or

2. It's microservices everywhere or nothing

Microservices might let you ship faster if you are really good at deciding where to draw the lines between services and really good at managing multiple deployment pipelines and all the infra - that's a pretty tough ask.

Also, if you have a monolith it's perfectly fine to pull out one or two parts that need to scale much more efficiently and leave most of your codebase in the monolith, but a lot of times I see companies think once you have created one microservice the monolith is now the worst thing possible and it needs to be broken up entirely.

My general rules for this are to always start in a monolith and break things out as they start to fail or break other parts of the codebase, and don't go all in just because you now have one microservice that works well by itself


This, this, this! It's been said elsewhere in these comments, but the term "micro"-services really does them a disservice, like it's expected that you need to break your application up into little pieces to eliminate complexity. But many applications are inherently complex, and splitting them up isn't going to get you anywhere.

I've been trying to advocate for a "solar system model of services", where you have a big core application in the middle (the sun), surrounded by helper services of various kinds. Your important business logic can be left alone, but the database, other data stores, functions, timers, queues, integrations with third-party systems, one-off jobs, and other things can all stay in orbit.

There are benefits that you get from multiple services that you don't get from a monolith: having to rely on service discovery instead of hard-coding addresses or passwords, being unable to assume that the server your code is running on will live forever, and requiring a concrete CI-CD pipeline to get your code up-and-running are all good things to have, no matter your model, so it's important to have a clearly-defined process for them. A service-oriented architecture can give you that — put down the pickaxe, you don't need to split the monolith in two.


Another aspect no one seems to talk about is whether your deployment is monolithic or fragmented. It seems like a lot of the pain of managing microservices comes from designing a coherent CI/CD pipeline, how to share libraries between various microservices, etc. If you have a monorepo, good build tooling, and a good infrastructure as code tool, I think much of that pain goes away, but none of those things are easy and the precise selection and combination of tools depends a lot on your organization (I wouldn't recommend Bazel or Nix--build tools--to a small or medium-sized organization, for example).


>> If you have a monorepo, good build tooling, and a good infrastructure as code tool,

Yep, and so many organisations ignore these. Especially after a less successful transition to micro-services.

"you mean you want to spend more time doing non-customer visible development? You just did that micro services thing a while ago!"

"Yes, but to take proper advantage of that we need to invest in the right infrastructure and tooling"

"how can I sell that?"


Honestly, that sounds like the devops team didn't communicate well with the business when they pitched them microservices. The business can't reasonably know that moving to microservices entails a change in infrastructure and tooling--you have to build that into your high level estimates.


Often, they didn’t. How many stories have we all heard about “wow.. transitioning to microservices was way harder than we expected” followed by a move back to monolith instead of fixing the issues.

That doesn't mean a microservice architecture is wrong... just that you have to either learn from your mistakes or hire an amazingly talented team... that has learned from mistakes somewhere else.


I've been pushing for a similar model for years as well, but used rings to model it. Might have to try your solar system model and see if I have better luck.


I've found it most helpful to think in terms of deployments: Each (micro)service effectively gets deployed independently.

One implication of this is you need to ensure your APIs are backwards compatible with any other services - even if it's only one service that your team also manages. This also includes databases, if shared by multiple services (which I won't get into, suffice to say congrats, your database schema is now also a crappy API).

As soon as you start having concurrent deployment dependencies -- that is, the updates for service a + b both have to be deployed at the same time or things are broken -- you've effectively built a monolith anyway, just with an annoying code layout (eg, spread across multiple repositories).

You can use orchestration to tie these deployments together, but this means you're effectively building a monolith with a microservice architecture. Is that really what you want?
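
For example (a sketch with made-up field names), "backwards compatible" often just means a handler that tolerates both the old and the new payload shape during the migration window, so either side can deploy first:

  interface LegacyPayload { customer_id?: string }   // what old callers send
  interface CurrentPayload { customerId?: string }   // what new callers send

  function parseCustomerId(body: LegacyPayload & CurrentPayload): string {
    // Prefer the new field; fall back to the old one until all callers upgrade.
    const id = body.customerId ?? body.customer_id;
    if (!id) throw new Error("missing customerId");
    return id;
  }

Once every caller is on the new shape, the fallback (and the old field) can be deleted.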


Sharing databases across services (micro or not) is generally a pretty bad idea exactly for reasons around versioning.

Versioning APIs is a pretty standard way to get around this.

If your deployment relies on synchronized service deployments, you really don't have independent services at all.


"Sharing databases across services (micro or not) is generally a pretty bad idea"

I don't think such a blanket statement is justified. There are plenty of situations where it may make sense to pull out some functionality into its own service--so it can be written in a different language, scaled independently, isolated from failures, or whatever--but where giving that service its own separate database would be serious overkill, complicating ops and introducing potential data integrity issues for no real benefit.

"If your deployment relies on synchronized service deployments you really dont have independent services at all."

So what? That's really the point: blindly following the Microservices (TM) doctrine is often a mistake. It's better to just solve whatever problem you're facing in the simplest possible way. While that may mean by-the-book microservices with independent databases, in many cases something in between is a better choice.


I didn't mean it as a completely blanket statement, hence why I said "generally". In my experience it's a lot harder to manage a contract between a db schema and multiple codebases than to manage versioned contracts between APIs.

"It's better to just solve whatever problem you're facing in the simplest possible way."

I completely agree with this, but I don't believe that synchronizing deployments across multiple services is ever simple - I have been in this situation at a past company where it would take an entire week every 3 months to do a deployment.


Fair enough, but I think even "generally" is too strong. There's a huge cost to splitting into multiple dbs, and I'd say it should be avoided by default unless the benefits clearly outweigh the complexity costs. Just to give one example, authentication gets way more difficult with separate dbs, and you get a whole new class of potential security bugs.

Your concerns about versioning and deployments are certainly valid, but I don't think they outweigh the costs of turning your data layer into a distributed system until a project gets very large or those issues are actively causing you headaches.


Writable views or stored procedures are a pretty standard way to version database access from clients.


> this means you're effectively building a monolith with a microservice architecture. Is that really what you want?

I actually kinda do want that, although maybe it's a niche thing.

It would be nice to be able to deploy a monolith that's already cut at seams where there's an obvious API boundary. At one extreme, you could imagine a single binary where processes communicate via RPC.

What that would give you is an easy way to split off microservices as they're needed.

I'm sure somebody has done work along these lines.


This is actually a similar approach to how Next.js deployed to Vercel (previously Zeit) works by default. Each page or API endpoint is served by an individual Lambda function so they can scale up or down independently.

https://nextjs.org/docs/deployment#optimized-for-nextjs


Awesome thank you


At my current place of work we have

1 monolith and 2 "Micro-Services"

Working in the monolith is fine, but running tests is slow because it is a giant rails app that is 7+ years old.

There is 1 "microservice" that does its thing and the few people who need to interact with it like it.

the second microservice was created, deployed and abandoned. Now people want to move it into the core monolith. It is a distinct unit of functionality that doesn't really have any overlap with the core app. I'm going through and adding all of the tooling to this project because it enables us to solve a certain class of problem (Report generation) that the monolith can't do very well for a couple of reasons. Articles like this have fueled the fire to re-combine it but the pain points have nothing to do with this particular service being separate.


If it's working well why change it?

If your co-workers' argument is just "microservices bad" then obviously they are making a mistake. But in general I've seen far more frequent inappropriate splitting of monoliths than inappropriate combining of microservices. (This is honestly the first time I've heard of it.)


I mean, you admit it yourself: far too often the splitting off is inappropriate. Normally an inappropriate microservice is a net negative overall that costs you money in the long run. Just because it's working doesn't mean it is efficient.

My last two "assimilations" were because one microservice was written in Java. The original guy left and no one (around the company) likes to touch Java (or they pretend they don't know it), which means it was always me who had to update it. It was a very small service, likely why he thought small = micro! But it was about 3 hours of work in the monolith. Now anyone can update/contribute to it and not bug me every time.

The other was a microservice that only served the monolith. New features required the monolith to be updated as well in order to be realized.


Makes total sense.


Honestly, it doesn't work well right now from a developer's perspective. It is about a week's worth of work away from being a great developer experience. They'll come around. Honestly, every time I have split out a separate service it is because the current state of affairs was bad and there was a distinct need. Those handful of things have been rock solid and needed very little attention, but it has been a last resort.


The thing that jumps out to me: there are WAY more page-loading indicators now than there have ever been before. Lots more jumping content, lots more laggy content population, many more elements sliding around... I know "worse is better" is a truism of sorts in technology, but this is ridiculous.

What good do any of these architecture decisions do when the experience for the user, the customer, is measurably worse? I mean, aside from not being able to interact with elements of a page before a chain of JavaScript finally gives the all-clear, sites clearly look worse with grayed placeholders and whatnot. There should be a Conway's Corollary for revenue-oriented choices.


As Matt Easton says: "Context!"

I think "5 Whys" might be a useful exercise here.

Why was building X as a microservice faster? [reason]? Well, why was that?

> My general rules for this are to always start in a monolith and break things out as they start to fail or break other parts of the codebase, and don't go all in just because you now have one microservice that works well by itself

I like this. A key tactic is to always do things, such that one can change one's mind!


That advice of starting with a Monolith has consistently been given by those involved in Microservices since the start. I remember a Fowler article in particular. Unfortunately the "if you only have a hammer everything is a nail" analogy holds true when people start looking at where microservices may fit into an overall system architecture. Their answer is invariably - everywhere!

Really small, reusable/shareable and stable domains are the sweet spot for microservices. As you say, that is most likely to come from decomposing a monolith. In my experience, microservices can really help with building rich domain components in an overall architecture and with removing complexity from other components through delegation to the microservice. They just don't need to be everywhere.

The same problem of over-eagerness is becoming apparent with some of the movement to event-based architectures. People become obsessed acolytes and there is no other way. When in fact they may well be ideal for a portion of your overall system architecture but are unlikely to serve it all well.


I'm yet to do microservices at all, at work or outside it. I count myself lucky :-).


Starting with a monolith could lead to really difficult refactorings unless you structure the code in a way that it can be easily decoupled.


Monolith -> microservices : difficult refactoring

Microservices -> monolith : difficult refactoring

Microservices with poorly chosen context boundaries -> microservices with well chosen context boundaries: very difficult refactoring.


"Monolith -> microservices : difficult refactoring" I guess my point is that it doesn't have to be complicated if you architect the monolith carefully. That usually doesn't happen though because frameworks don't necessarily promote the practice and projects are short sighted.


It's also really hard. Trying to determine how to split up any code base into logical divisions such that, when adding the next 5 years of functionality, you'll have the fewest number of cross-division processes is hard.

This is why Martin Fowler recommends starting with a monolith and refactoring into microservices unless you have extensive experience building out very similar applications in the same domain.


I once worked in a monolith that was structured in a way that it could have been easily decoupled. It never was because the codebase was so modular and well-tested that the only time we ever felt the need was when trying to assign ownership to runtime exceptions.

https://gocardless.com/blog/getting-started-with-coach/ was the framework.


Exception triage often requires examining the stack regardless; even if you have multiple processes, you're still going to have errors bubbling up from your pool of shared library code.


Law of Demeter!

That idea had a lot of influence from Smalltalk, where the natural way of developing was in a monolith. So tactics like that which are about decoupling by default were a good idea in that context.

https://wiki.c2.com/?LawOfDemeter
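
The classic textbook illustration, sketched in TypeScript (a hypothetical Wallet/Customer example, not taken from the c2 page):

  class Wallet {
    constructor(public balance: number) {}
  }

  class Customer {
    constructor(private readonly wallet: Wallet) {}

    // Demeter-friendly: callers ask the Customer to pay, rather than reaching
    // through it to mutate wallet internals (customer.wallet.balance -= price).
    pay(price: number): boolean {
      if (this.wallet.balance < price) return false;
      this.wallet.balance -= price;
      return true;
    }
  }

The caller only ever talks to its direct collaborator, which keeps the coupling local even inside a monolith.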


The same argument can be made for building separate services too. It could become very difficult to merge data between two services after you had redundant information being saved across the two because of a bad design up front.


If you take a look at some of Segment's open source code, it isn't hard to see why they wound up struggling with microservices. It looks like they subscribe to the "left-pad" style of software development. They have tons of repositories that have less than 10 lines of code. They have a two-line repository for calling preventDefault[0], a four-line repository for getting the url of a page[1], and an eight-line repository for clearing the browser state that calls into eight different packages[2].

Disclaimer: I run a Segment competitor. I'm pretty biased, but still...

[0] https://github.com/segmentio/prevent-default/blob/master/lib...

[1] https://github.com/segmentio/canonical/blob/master/lib/index...

[2] https://github.com/segmentio/clear-env/blob/master/lib/index...


Oh my god! Who in their right mind comes up with this? The boilerplate is 10x the size of the actual code :'(


Wow, I figured there was more to it than the article was saying. This is insane!


What does segment.io do and what does your company do?


Sure. I'm the founder of freshpaint.io.

The premise of segment.io is that there are lots of tools that take user behavior data from your site and it's a lot of work to integrate them all. For example, when a user signs up, you may tell multiple different tools that a user signed up:

  - You tell Mixpanel so you can create graphs of how many people signed up.
  - You tell Google Ads so Google knows a specific ad just resulted in a conversion.
  - You tell Optimizely so it knows a specific page from an A/B test just converted.
Before Segment, you would need to write code for each tool separately. This doesn't sound so bad, but it becomes a pain when you have dozens of different tools and dozens of different events you want to track. With Segment, you only need to tell Segment that someone logged in. Segment will then send that event to all your other tools. You can think of it as like a multiplexer for user behavior data. Instead of integrating 10 tools, you just integrate Segment.
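
Roughly, the multiplexer idea looks like this (an illustrative sketch only; the names are made up and this is not Segment's actual API):

  type TrackEvent = { name: string; properties: Record<string, unknown> };
  type Destination = (event: TrackEvent) => Promise<void>;

  // One place to configure all downstream tools.
  const destinations: Destination[] = [
    async (event) => { console.log("send to analytics tool:", event.name); },
    async (event) => { console.log("report ad conversion:", event.name); },
    async (event) => { console.log("notify A/B testing tool:", event.name); },
  ];

  // The product calls track() once; the fan-out happens behind it.
  async function track(name: string, properties: Record<string, unknown> = {}) {
    const event: TrackEvent = { name, properties };
    await Promise.allSettled(destinations.map((send) => send(event)));
  }

  // e.g. track("Signed Up", { plan: "free" });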

The challenge with Segment is you need to write custom code for every action you want to send into Segment. This is bad for two reasons. Usually the end user of Mixpanel/Google Ads/Optimizely is a non-technical person who doesn't know how to write code. What they have to do is file a Jira ticket for an engineer to add a new bit of tracking to the website. Depending on the size of the organization, that person can end up waiting two weeks or more in order to start tracking a new bit of data from the website.

The other challenge is people often don't know what to track ahead of time or forget to track something important. For example, if you launched a new feature two weeks ago and forgot to set up tracking on it, there's no way to get that data back.

Freshpaint solves these problems by automatically collecting every user action upfront. Anytime someone clicks a button on your site, that fires an event in Freshpaint that someone clicked that button. You can then use Freshpaint's point and click UI to say that whenever someone clicks that button that is a "login" event. Then you can send that event into different tools. This is great because the point and click UI allows a non-technical user to send data into different tools and because we track everything up front, even if you forgot to track something, Freshpaint will still have recorded every instance of that action. That way, even if you decided you want to start tracking some action today, you can use our "time travel" functionality and recover every instance of that action since you installed Freshpaint.


This is both interesting and horrifying when I remember how much we are being tracked.


This is a discussion on pretty much every team I've been on for the last 5 or so years. I agree mostly this stuff is done for the wrong reasons.

IMHO it doesn't matter if you replace microservices with components, CORBA objects, RPC objects, SOAP services, etc. It all boils down to chopping your software into smaller bits that then immediately start having a need for sending messages between them, finding each other, defending their boundaries, etc.

So, the first mistake would be assuming this is a new problem to think about. It's not. You can find similar debates about how to chop up software ever since people moved beyond just having their code ship in punch card form.

The right discussion to have would be first deciding whether you want to break things down by your logical architecture, so that your deployment architecture reflects that, or by your organization diagram (a.k.a. Conway's law). Then the next step is deciding whether your primary goal is network isolation of unrelated chunks of code or enabling asynchronous development of those chunks of code (if so, there are other solutions). Usually it boils down to, again, Conway's law: different teams just don't want their stuff to depend on shit happening in another team because of internal bureaucracy and hassle.

Now say you have a valid business reason or technical reason for actually wanting to have different stuff be isolated (e.g. for scaling reasons or security reasons). The next step is deciding whether this means you also want to break up your code base. Monorepos with microservices are a thing. Look at e.g. lerna for node.js, or multi-module gradle projects on the JVM. In Go this is well supported as well. If you're really sure that you don't want microservices because of Conway's law, there are lots of valid reasons for having a well-structured monorepo with a bit of reuse of shared functionality, a simplified review process and more visibility into what is happening.

IMHO people do this for completely the wrong reasons, like wanting to try out some new language, organizational issues, etc., which ultimately result in fragmented code bases, lots of devops overhead and complexity (it's never simple or cheap), lots of project management overhead, etc. You pay a price.


>Shared libraries were created to provide behavior that was similar for all workers. However, this created a new bottleneck, where changes to the shared code could require a week of developer effort, mostly due to testing constraints.

That is a big red flag. Microservices that suffer from shared code changes are not really microservices, but a distributed monolith instead.


This is really a time-of-binding argument; the difference between a "library" and a "service" is that one is in-process and accessed over function calls, and the other is out-of-process and accessed over RPC.

If you change code that other services are using, you can break those other services. No way round that.


There are circumstances where they are equivalent, but they're very different overall. Namely, if you use a service, you update it once and see the new behavior everywhere. If you use a shared library, you have to update and redeploy every service. Libraries are strictly inferior in that scenario. This sounded, to me, like it was Segment's problem. They were updating shared libraries all over the place all the time.

I generally avoid creating shared libraries, they're a trap. They have a very narrow band of usefulness squeezed in between the more palatable solutions of creating new services or just copy & pasting code and allowing it to diverge for each different use-case.


While that is true, a microservices architecture can (and in my opinion should) rely on messaging and account for message schema evolution. Dependencies between services should involve way less coupling than dependencies between an application and a library.


Schema evolution is just as big of a dependency hell as managing direct library dependencies. With a monolithic architecture, a lot of those concerns are contained within the context of a single repo, and can be tested much more easily than with many repos.


A library API can rely on versioning and account for schema evolution too. Even different versions can coexist if you decide that's important from the beginning (what is the same requirement as with services).

The only real difference is that services have a slow serialized network interface that fails 4 or 5 orders of magnitude more often than libraries, but can migrate over memory domains.


Sharing code or reinventing the wheel repeatedly is inevitable once you have more than one concern by which you can divide services.

For example: let's say you have lots of integrations, and you need to scale compute, and parse and generate common data sent to and from the integrations.

You can either have a monolithic integration service which you scale out on load; or you can have integration-specific services that scale out on load and share your data parsing & generation library. Due to multiple concerns, there's no "best slice".

FWIW, scaling out compute is a stronger argument to me for a service boundary than responsibility segregation. Scaling out requires distribution; scaling up complexity doesn't, though it can help for other reasons, like CI/CD. I prefer FaaS architectural patterns with the freedom to share libraries in different functions (images) to services, especially if long-running state is not needed.


Sorry for not being clear.

Having a shared library is not a bad thing on its own. Making the library a bottleneck is the anti-pattern.

If you wish to have a shared library across your microservices, you should be prepared to have multiple versions of it running at the same time without any pressure to update everything at once.

If your shared library is the bottleneck, it means that your microservices are tightly coupled (hence the distributed monolith).


That just sounds like the shared libraries needed to make breaking changes less often. If you're going to make changes to core code, it's going to take time to get everything up to date no matter how your code is organized. In other words, shared code needs to be treated just like a third-party library/service (from both the developers' and users' points of view).


One view is that the difference between a service and a microservice is that a microservice can be sketched between being a local library or wrapped in an RPC server.


> can be sketched

What does that mean?


Reinventing all parts for every microservice sounds wasteful to me. Especially if they handle the same data and/or use similar business logic.


A common practice is to introduce services that handle shared functionality.

One common example is, instead of having a shared library that reads & verifies JWTs, use a gateway service that handles this before requests reach the upstream service.

This means changes to your organization's JWT code will only require a redeployment of one service, the JWT Auth service.


But does that also hold for input handling, formatting, or simple business libraries? Sure, you could implement those as services, but that would probably result in up to a hundred service calls for one customer interaction. Maybe it looks clean from an architecture perspective, but I can't imagine how that would result in a good user experience.


Let me clarify because I wasn't clear in the parent comment.

Microservices using shared libraries -> ok

Microservices "suffering" from shared libraries -> not ok.


That sounds like an overly broad generalisation.

They might well all share the same basic framework code, of course. Why not share code for recurring concerns like auth?


The idea is that you have a service that authorizes transactions


Because if you share something (like auth for example), you should have a microservice for that. The question is not about duplicate code, but about duplicate libraries that handle the same thing. Decoupling the auth process into a separate microservice removes the bottleneck.


Eventually you'll have a service to format phone numbers into the standard format the company requires across all services.

If you don't want to do that, then you need a simple shared library for that.

The problem is that there is no easy way to draw the line between "this is obviously a trivial library function we should just link into our code" and "this is something we can't share because it would create friction or break our isolation".

Auth is obviously a "service" but phone number formatting as a service seems extreme.


One of the lines is going to be acceptable performance. Your phone number formatting microservice is going to be orders of magnitude slower than a client library.

The auth service will likely have to hit a DB anyways. Assuming the microservice call has roughly the same network latency as the DB call and the DB has 0 response time, it would double the total time to perform the auth. It only gets more favorable as DB response times go up.

More generally, I think microservices make sense in scenarios where the time to process the request is longer than the network latency incurred by making it a microservice. Things that have to hit a DB are generally okay. Pure functional things that just compute on CPU and RAM are generally not, unless they're very computationally expensive like running a simulation or something like that.


I'm reminded of the classic problem of static utility classes, where you have functions for, say, formatting phone numbers, or computing a commonly occurring simple mathematical function. It can be difficult to figure out how to better modularize the functionality provided by this class, the motivation typically being that a large static utility class often violates the principle of a class having a clear, single responsibility. Breaking up a large static class into other static classes that better encapsulate some functionality/concept can help but isn't always the best solution.

So let's say we have shared code for doing something like phone formatting. My question for more experienced microservice practitioners is -- does it make sense to create a microservice for preprocessing data in general? Phone-number-formatting-as-a-service is excessive, but creating a microservice for processing data where phone number formatting is just one aspect of this service makes sense to me. All other services can throw data at the data processing service and get data back in some sort of standard and expected way conforming to whatever business logic/processing rules are required.


> Auth is obviously a "service" but phone number formatting as a service seems extreme.

You clearly haven't finished drinking your Kool-Aid yet.


If you have zero coupling, it means you have multiple products.


A good architecture is orthogonal, meaning parts can scale independently...

Shared code shackles everything together, like global variables...


> Microservices that suffer from shared code changes are not really microservices, but a distributed monolith instead.

In other words, if you can share a lot of code between services, a monolith is actually an appropriate architecture.


I'm struggling to understand the problem with shared code and the desire to fragment the code repo!

Why can't you have both independently deployed microservices and a shared code base?

If the deployment lifecycle is different for each microservices and each deployment is self-contained, then they can be deployed with different versions of the code - even if they use the same source tree and share code.

Obviously the shared code needs to be properly maintained and evolved, but it seems to me a lot of the software engineering problems occur when people move away from source code dependencies - with great tooling - versioning, diffs, debuggers - to other types of dependencies ( shared libs etc ) where the tools are non-existent or very simple.

Now granted, if you needed to fix a critical bug in the shared code - that would require a redeploy of everything, but that happens much less frequently than the need to deploy a single service with impunity as long as you keep your microservice contract. It also means the discipline of making sure every service is deployable at any time is kept to.

And if you didn't share code - you probably wouldn't be fixing a single bug once, you'd have much more code, with many more bugs.


> Why can't you have both independently deployed microservices and a shared code base?

This is what everyone does, so I can't even comprehend what Segment was doing. Maybe they were deploying a fleet of microservices inside a monolithic deployment? If so, there's no wonder it failed.


We do separate code repos, my last place did separate repos, place before that did monolith(s) but still did separate repos for anything not in the same monolith. I'm pretty sure it's more common to do separate repos, rather than mono-repo, for separate services.

Seems to me, though, the problem is people trying so hard to reuse code. That's the main problem cited in the article. People get really gung-ho about reusing code and creating shared libraries, but reusing code is actually bad most of the time. You should strive to only depend on things that you can reasonably expect to not change, and that you don't need to update even if a new version comes out. What you're supposed to do is take that code in the shared library, and make it a microservice, and obey the usual backward/forward compatibility rules.

Using a monolith hides that problem because the code remains easy to update and build, but just as fragile and in need of heavy testing whenever you change code modules that have multiple consumers. That goes against the idea of mono-repos as well.


> People get really gung-ho about reusing code and creating shared libraries, but reusing code is actually bad most of the time.

Disagree here, in general. I'm not in the ruby hyper-DRY camp, but copypasta is not the solution to dependency management problems.

Creating shared libraries does require discipline; you should do your best to just avoid breaking changes ever, and on the rare occasion you must, you need heavy communication and testing to ensure consumers find out about the change. And you can only change the API of the library; you can never incompatibly change how the library interacts with other services. I get that this is hard, but it's worthwhile if you can do it right.

We have thousands (maybe even tens of thousands) of lines of shared library code at this point. Some of it is probably not necessary, but most of it we'd be completely lost without. Reimplementing core logic and utility classes and auth code over and over again is a great way to burn out your developers and create bugs. And these bugs are even worse than your garden-variety bugs, because you have to track them down and fix them over and over, and each fix is slightly different because each reimplementation is slightly different.

I agree that sometimes sharing code is a bad idea, but asserting it's bad "most of the time" is completely antithetical to my experience.


"People get really gung-ho about reusing code and creating shared libraries, but reusing code is actually bad most of the time. You should strive to only depend on things that you can reasonably expect to not change" -- things changing is one of the main reasons you want to share code.


Ok I see - part of the root of the problem is you are probably using git rather than a version control system that works naturally with large shared repos, like svn?

So it's easier to just have separate repos - and then that makes sharing code sensibly a nightmare without additional tooling, etc., because there isn't a single versioning system.

Everything as a separate git repo has a lot to answer for in my opinion.


Segment's business in particular has them integrating with dozens of unique endpoints. There's an inherent desire for code-reuse in a system like that, along with customization required per endpoint.


I see quite a few people defending microservices; the org is the problem, they must not have written the software correctly, etc. Most org structures are not great. Most software is not great. If you expect the exception to be the rule you're setting yourself up for a career full of disappointment.

Microservices are a modern re-branding of service-oriented architecture, but 'microservices' sounds cuter and less like it belongs in Java-land, and there's some theoretical idea that splitting your app into even smaller pieces will somehow make the whole thing better.

SOA/microservices solves a few problems and introduces a great many. The original SOA proponents were pretty explicit about this. Beware! There be dragons here! But one of the main pieces of prescriptive advice from domain driven design is helpful for splitting into distributed services: split along domain lines with minimum inter-service dependencies. Payments is an obvious one. Microservices seem to buck much of this advice in favor of a blissfully ignorant principle of "small" or "isolated". Good luck isolating something that is not meant to be isolated.

Scaling software is hard. Scaling teams is harder. Trying to scale teams by scaling/distributing software is an understandable goal but extremely hard to pull off because of additional complexities and costs you incur in doing so. Dev gets harder, deployment/ops becomes harder, testing becomes harder. Cross-team communication, documentation, API publishing and adherence goes from being very low impact within an org to suddenly being critically important.

To do SOA/microservices effectively you need complete organizational buy-in, and you have to commit completely to developing tooling and solving all the associated problems in moving to a services approach. Often, it's easier to just put it all back together, organize the code in such a way to minimize merge conflicts and wait for the ungodly slow test suite to run in CI. There are good reasons you rarely hear SOA/microservices success stories outside of enormous companies (Netflix, Facebook, Google, Amazon, etc). Doing this stuff takes an enormous investment and commitment from the entire organization, and there are just lower friction ways to skin this cat if you don't operate at mega web scale.

Growing a monolith is hard. Growing a microservices/SOA architecture is also very hard. Growing is hard.


> split along domain lines with minimum inter-service dependencies.

Exactly, and done right that quite often means big 'microservices'.

All too often I see the 'functional programming disease' where the aim is to deconstruct to the smallest possible reusable functions ( 'micro' services right? ), often prematurely, creating high levels of compositional complexity and with zero tools to help you understand how the actual 'app' - say, the payment system - works if it's distributed across 20 services.

Yep each single microservice is simple - but the payment system might not be and that's what you need to understand - better if your payment system is one thing - with maybe one or two things separated out if you need to scale that part.


It's sometimes said that in software, there hasn't been anything truly new under the sun since the 1970's, just incremental refinements or repackaging under a different name. And Lisp has been around since about 1960.

So if any fad/trend comes along promising the sun and stars of simplicity or productivity, search IT history and find the downsides and trade-offs.

I wish there were more KISS pundits than fad pundits.


Completely agree. The micro thing is rooted in a desire for simplicity, which is laudable from a theoretical perspective. Small is simple and simple is nice to work with. But in any production software it's usually a pipe dream.

Kind of the whole point of software is to take a bunch of complexity and make it simpler for the end user. The fact that the software visually looks like shit or is difficult to work with because it's difficult to reason about is a rather unfortunate side-effect, but usually has no bearing on actual business success.

There are lots of things we should try to do to reduce software complexity and make it easier and safer to work with and change. But trying to force simplicity by way of size usually has the opposite effect.


"Yep each single microservice is simple - ..." but the whole is not.

I always find it more interesting what's _not_ in the single microservices, the stuff you don't see. When you make a diagram with boxes and arrows, the interesting stuff would be the arrows, not the boxes themselves.


Indeed my loudest prescription to people doing service-oriented architectures of any kind is to simplify these arrows.

The common mistakes that I see are for two services to share read access to a common database, or to discover each other and send RPCs to each other. Both really dangerous for exactly this reason! The common database obscures how the two communicate with each other, and invariably everything connected to a database becomes one service -- call it a "mini-lith" if the overlapping sets created by databases do not cover the whole architecture. The problem is the preponderance of implicit arrows; when I reason about what it means to make this datetime nullable so that I can store such-and-so, I need to consider whether everybody who can read that datetime will be prepared for its nullability.

RPCs and APIs are the same way. I add a contract about what I am outputting and then everybody needs to know about my contracts and I must commit to them or else modify all of my consumers. So because the arrows are bi-directional everything just becomes one monolith again.

Instead, I recommend message brokers -- all that pubsub stuff. A given service tells all the other services simultaneously "this happened," and it is their responsibility in their codebase to listen for that event and then say "okay, then this must happen." Publishing a new version of the event is done by just emitting both the old and the new version of the event and perhaps having a shared standard for deprecation across the codebase so that you get deprecation warnings in your prod logs.

Every service has its own database and they generally only communicate with each other through these broadcasts, which makes the arrows into the "stuff you do see".


>Cross-team communication, documentation, API publishing and adherence goes from being very low impact within an org to suddenly being critically important.

Totally agree, but I think this is underappreciated by many. People tend to wave this away by just saying we'll just use Swagger/gRPC/whatever-doc-gen-tool, but that's not the main problem. The problem is that each service needs to have some coherent purpose, and must adhere to that purpose. Changes to that must be reflected through a proper API change and migration. But that requires thought, discipline, and (sometimes slow) work.

When you inevitably run into a situation where you could instead throw a quick hack into the wrong service that will make things work now, the temptation to do it is very strong (bonus points if this is due to regulatory changes). But now you have an undocumented behavior dependency between those services - they're coupled in a subtle way. And eventually the accretion of those results in a distributed monolith instead of a plain-old monolith.

>domain driven design is helpful for splitting into distributed services: split along domain lines with minimum inter-service dependencies

Definitely, yes. But then when your business evolves in some way that causes all your domains to be inappropriate, you're up a creek again. I don't think there's really a solution to that, though.


I have this thing about micro-services/complexity in that it follows Conway's Law - the architecture follows the organisational structure.

If you push authority and decision making and responsibility for a service to a (2 pizza) team then guess what, microservices work really well.

If you have vast monolithic centralised production operations teams, and no way in hell is their C-Exec going to assign two of them to look after the user-login service, you might not do so well.

Like most things, the organisation needs to change to get the best out of the opportunities software offers. Those that don't will face increasing friction and eventually die off.


Conway's Law isn't a law, it's just an interesting thought experiment. Organization and architecture bidirectionally affect each other, but not directly, and not completely. I hate how current discourse invokes these different "Laws" as if they are physical properties of the universe. I've worked at places with a strong, hierarchical organization that created a wonderful set of "micro" services, and I've worked at places with a chaotic environment that developed monoliths.

There are shitty hierarchies and shitty flat organizations, just like there are shitty monoliths and shitty microservices.

Sorry if you actually agree with this more nuanced view, it's just that I've seen Conway's "Law" invoked more than once in this discussion and it drives me bonkers. I get the same way when someone ("Medium Developers" I call them, more than green but less than seasoned, who swallow everything they read on Medium as gospel and run around quoting it zealously) quoted the Liskov substitution principle at me as if it was one of Newton's Laws.


Conway's Law is a physical law in the same sense as Murphy's Law.

It's also obviously true. The organization builds the architecture. The architecture either helps or hinders the organization. The organization builds a new architecture. There's no indirect connection here. If you've seen hierarchical organizations implement microservices, it's because that organization's complement was a microservices architecture. And likewise for a chaotic organization.

--well, sidetrack: Aren't strongly hierarchical organizations the best suited for microservices? With all the strongly divided responsibilities and whatnot?


> Conway's Law is a physical law in the same sense as Murphy's Law. It's also obviously true.

It's like a tautology: "In logic, a tautology is a formula or assertion that is true in every possible interpretation."


Thank you for everything you said. The reality is more nuanced and depends on the specifics. The "law" being zealously cited here isn't a rule. Nor does a big organization failing at an approach mean the approach is wrong.


2 pizza team?

Well finally I might get my own microservice after all.


This metric is also unsuitable for Europe, where generally pizzas are individual.


You can get party pizzas but in that case the two pizza team might be a bit large.


Well.. just consider every "microservice" a separate company, exposing its own product/service. Also think about all the overhead that comes with it - product managers, finance, recruitment etc.


See Coase and the "Theory of the Firm": https://en.wikipedia.org/wiki/Theory_of_the_firm

Occasionally companies actually do this by fragmenting divisions into separate companies, such as outsourcing IT. It has a very broad range of outcomes, from saving to destroying the business.


So like monoliths vs microservices it's probably a balance between the two (leaning heavily in one direction).

I've never understood why it needs to be either/or. Is it really that difficult to support a microservice deployment that only represents 50% or even 20-25% of the org/project?


If all the services that the microservice needs are also services behind an API, what's the overhead? Something like:

hire("developer", 10, "10x").addToPayroll().office("openplan", "wfh").enforceHRPolicies()

Is all you need

The best thing about this is that you can keep everything in change control and just rollback whenever you need to, or spin up new companies at will.


Absolutely, you probably can't succeed with microservices without self-organizing teams. You just get more hot potatoes to drop.


My takeaway from these kinds of stories is that microservices make sense if it's no longer possible to operate a monolith. By existence proof, that was clearly never the case at Segment. The common fallacy seems to be that microservices lead to better software via better architecture, regardless of human factors like team size. My sense is that it's the opposite: microservices are a necessary evil to scale teams past a certain size due to the bottlenecks that emerge with monoliths as more people begin trying to make changes simultaneously, and should be viewed as neutral at best in terms of a software architecture pattern to increase reliability, performance, etc. In practice, it seems wise to keep your engineering team as small as possible for many reasons, one large one of which is that past a certain point you will be forced to move to microservices. All other things being equal, that's a move you don't want to ever have to make.

If you have hundreds of engineers then certainly microservice architecture starts to make sense, since even the idea of transactional deploys of the monolith breaks down due to queuing at that scale. But jeeze, don't pull that trigger until you actually find yourself backing up on necessary complexity like deploy queues, PRs stuck due to inability to maintain the branch given the velocity of master, etc. Don't let Conway's law lead you prematurely to microservices. If I'm ever in a position where I am feeling real pain that leads to an urgency for microservices, I am probably going to first ask the question of whether I can just fire some people to make the problem go away. The risk of the transition to microservices is just that high.

It's the same rule of thumb with other things like hiring, feature roadmaps, etc: YAGNI. If you are hiring someone before the pain is so high the work cannot be done otherwise, building features before you have people explicitly showing the need for them, or making deep, cross cutting architectural changes that impact everyone before they are strictly necessary due to concrete problems with shipping software, you're probably choosing the wrong use of opportunity cost, capital, etc.


This sounds like the old arguments about OOP.

Turning everything into an object can make a small program into a big program, so it’s maybe not such a good idea for small-scale stuff.

http://www.solipsys.co.uk/new/TheParableOfTheToaster.html

However, in my experience, OOP made it possible to do really big stuff.

It’s all about not having a “one-size-fits-all” approach. I don’t think it’s just about scaling architectures; it’s about changing architectures to match scale.

It’s difficult as hell to make these changes, because people get invested in methodology, and insist on applying the same lens to everything we do.

It sounds like they had the right idea, but they probably had the wrong people.


The reason OOP made it possible to do big stuff seems to be that it improved the average productivity of the average programmer.

With procedural code, you would need an exceptional programmer to produce a big program. With OOP, an average programmer can deconstruct a problem into its component parts and solve it, mainly because the human brain can reason about concrete objects more easily than, say, abstract methods like functional programming.

Edit: OOP has encapsulation which, in my view, significantly reduces the cognitive load when thinking about state management in an app. I remember writing a small graphics library using Borland Graphics Interface in Turbo C++. It was a breeze to do because I knew about the 'things' I wanted on my screen and coded my classes to reflect those things.


This phenomenon can be described as "excessive factoring" and it can easily happen under any paradigm.

Perhaps it's more prevalent with OOP programmers, but perhaps it just appears that way because the boilerplate for classes is a bit larger than the boilerplate for functions and structs.


The “micro” in “microservices” implies “excessive factoring”. Otherwise they’d be services.


"Turning everything into an object can make a small program into a big program, so it’s maybe not such a good idea for small-scale stuff."

In my experience OOP actually makes programs smaller. Assuming of course they have good programmers/architects and the program itself is larger than "Hello world".


In my experience, OOP makes programs different. Might be bigger or smaller, but the real difference is complexity. Not in that it makes things more or less complex, but in that it moves the complexity around to different places. Those places being complex (and others simpler) might make it easier or harder to maintain your program, which is what makes these decisions highly dependent on your particular systems and teams.


OOP does not "move" complexity. People do.


A distinction without a difference.


There is a difference. OOP is just one of many tools to help accomplish a task. There are many other tools as well. It is up to the people how to use tools for a job, and which tools for which job. Equating OOP with dangerous things that should be kept away has no basis in programming.


The way people use OOP causes them to move the complexity in a particular way. That's why the distinction doesn't make a difference in this context. You're right, technically it wasn't OOP that was writing the code, it was the person. We would have never figured that out without your guidance; we all just thought OOP was banging away on the keyboard.


"The way people use OOP causes them to move the complexity"

Why don't you try to read carefully what you've just said in the sentence above


Why? I wrote it, I know exactly what it means. Maybe you should take a closer look? If everyone who uses OOP moves the complexity in the same way because they're adhering to OOP, then it is a distinction without a difference because the complexity is moved regardless. It's a nuanced concept so don't beat yourself up if you don't understand.


I wasn't taking a side here, just noting a tradeoff inherent to the choice of programming paradigm. When you make effective abstractions, you make your program bendy in all the right places. It's easy to add new objects where you will need them, and it's easy to extend behavior in places you need to do that.

Of course, if you guess wrong, you're totally fucked. Well, either that, or you are smart and see it coming in time to rewrite the code that put the complexity in the wrong place.

Hope that helps with the context. I'm not some anti-OOP zealot, and those do exist.


I am in total agreement with what you've just said. Basically it all comes down to the programmer being either smart or stupid. OOP on its own has nothing to do with the overall complexity and where it is "moved". A shitty programmer will fuck things up no matter the paradigm. And a good programmer can use various paradigms to their advantage depending on the particular situation. But no. From what I see we have crusaders here.


The concept of footgun exists.


Guns don't kill people, people kill people. With guns.


To take your line to its logical conclusion: do not program at all


Ooh, sweet false dichotomy!


Ooh, sweet equating of programming paradigm to guns.


Ooh, sweet non sequitur.


Don't get me wrong. I love OOP, and have been using it since before it was cool. It's been a standard wrench in my toolbox for decades.

In fact, I have been running into folks, these days, that don't understand it, as, apparently, OOP is becoming "uncool."

I've always been a "right tool for the right job" kind of guy. I started off with ML (Machine Language, not Machine Learning). I am quite comfortable, sitting down with a breadboard, and flashing an OS.

But I remember the old days of OOP, where "classic" structured programmers didn't "get" OOP, and designed these horrific chimeras.

I always make it a point to understand my methodology and drivers "to the bone." Just because someone at a conference said it, doesn't mean that I should use it for everything.


Please write a blog post called "Horrific OOP chimeras" and post a link on HN ...


Oh...the stories I could tell...

But I have made it a point of personal ethos not to post criticism or polemics, denigrating/excoriating the work of others.

I know that could buy me a lot of clicks (and probably some considerable HN Above The Fold time), but I think we have enough negativity and finger-pointing on the Internet.

If you read my stuff, you won't see much of that. I may, in a rather vague way, allude to something that gives me a frowny-face, but I don't want that to be part of my "personal brand," so to speak.

I do take tremendous personal pride in my work; both coding and writing, and hold myself to a high bar. I may even project that bar onto others (only in some circumstances), but I don't think it's helpful to do so in public.

I find it most gratifying to write a "This is how I do this..." post, as opposed to a "This isn't how you should do it..." post.


Lots of words. What's the conclusion? OOP is bad? Or maybe it is incompetent people who manage to f.. things up no matter what you give them, or people with an agenda going on holy crusades?


No, OOP is not "Bad." I'm not feeling particularly argumentative. I apologize if what I wrote upset you. I suspect that we may actually agree on most things.

In some cases, it is not the best tool for the job, but I find that I tend to use OOP for almost everything I do; large or small.

It isn't of much use in small utility scripts, though, and some languages are just not written to natively support it. In those cases, procedural (or FP) is the way to go.

There are new-ish methodologies, like functional programming, and protocol-oriented programming, that deprecate "classic" OOP. Some folks are using these as backing for declaring OOP "dead."

I suspect that might be a bit...premature. My current fave lang is Swift, which pretty much allows you to use any methodology you want, or mix them together (Will it blend? That is the question).

I have found that it isn't helpful for me to "write off" any methodology, and most of my work is actually a hybrid approach; with elements of multiple methodologies.

BTW: I'm a "lots of words" kind of guy.

Prolix, JAMES Prolix...


"There are new-ish methodologies, like functional programming, and protocol-oriented programming, that deprecate "classic" OOP. Some folks are using these as backing for declaring OOP "dead."

Those are anything but new-ish. In programming new often means that some old concept suddenly becomes fashionable. Myself I do not restrict to any single paradigm and use what I believe is the most suitable for current task.


Note the "-ish". I know that they aren't actually new, and that many "new paradigms" are actually rebranded old stuff (I have been writing software since the early 1980s, and have seen these waves sweeping through the industry; often, with some amusement).

What does happen, though, is that a canon develops, based on these technologies, and [actual] new techniques get created, based on them.

Some of these are nightmares, and need to be strangled before they can crawl off the slab, but sometimes, a gem comes up.

I write about some of my experiences around that here: https://littlegreenviper.com/miscellany/concrete-galoshes/#e...

I remember writing "object-oriented" software for classic C, in the late 1980s. I called it the "faux object" pattern, and it was based on state being kept in a structure that was passed around between functions. I refined and formalized the idea when I encountered Apple's QuickDraw GX in the early '90s (I suspect that may be the only good thing that I ever got from that sad debacle).

I used the faux object pattern in an SDK I designed in 1994, and it's still being used to this day. Back then, you couldn't pass OOP across module connections, so we had to figure out a way to do it with C.

Nowadays, I can easily pass Swift extensions and virtual implementations across SDKs; no sweat. It's cool.


In my uninformed opinion many of these "nothing new" cases happen because the newcomer actually solved some (maybe minor) pain point in usability of the old solution. Maybe combined with some good old nostalgia.

In general I agree that people should study the past more, but I do not see any point in dismissing new trends just because they do not market their full genealogy upfront.


Sounds pretty unlikely. You don't need OOP to have fairly DRY and well organized code that is nice and small.


You do not need a lot of things to write software. That does not mean you can't benefit from them when applied with good reason. OOP concepts can help when coding certain domains, and yet others can benefit from a different style.


I have always viewed using many micro services as something that adds complexity, something to be used when necessary.

I started working remotely as a consultant in the early 2000s when my wife and I moved to a remote area. I had several development jobs that used the same monolith pattern: I would embed everything in a web app using Apache Tomcat, taking advantage of work threads for background tasks. The only external services were a database and crontab settings to frequently snapshot databases. This pattern was so easy to code to, so easy to debug and deal with any runtime problems. One customer reported that a system ran without stopping for six years (ouch, no OS upgrades??) until they restarted it on a larger server.

Micro services can be great, but not always the best choice when horizontal scaling is not required.


Are there any case studies where microservices went well?

From an end user perspective, Netflix runs in “constantly degraded” mode.

From an engineering perspective, they track “number of successful stream starts”, instead of percentage of the time 100% of their services are working. That’s a huge red flag.

As a researcher, the monitoring and fault-propagation / modeling work they’ve done to get it to stay up at all is impressive, but it’s not clear all of that tooling would be necessary if they didn’t have to reason about N^2 fault tolerance scenarios, where N = 100’s of microservices. That’s on the order of one fault tolerance scenario for each atom in the universe.


Like others have mentioned here, simply pointing out examples where microservices have failed doesn't imply that microservices can't succeed. I've attempted to bake bread twice and they both failed. I didn't conclude that baking bread can't be done, but that my skills to do it were insufficient.

There are lots of examples of successful companies using microservices, but I believe the real problem is in defining what constitutes a microservice. Most people call things "microservices" that are nothing of the sort. I can unequivocally say if you built a "service" that depends on other things being 100% available (like another "service") then you haven't built a microservice (ie: those things you built shouldn't be called services).

By that token, autonomy is a pretty important factor. The Udi Dahan teachings (https://particular.net/adsd) (currently available for free) promote this style of architecture. A concrete example of a toolkit for building true microservices can be found in Message DB (https://github.com/message-db/message-db) and/or Eventide (http://docs.eventide-project.org/)

I wouldn't suggest, however, that anyone can just watch the course, pick up these tools and succeed. Like baking a good loaf of bread, it takes a lot of skill, work and experience. Whether or not you succeed at building microservices is ultimately up to you and your team.


By sheer number of attempts somebody probably got good results with microservices somewhere.

Netflix runs quite well in practice. I think they do redundant service calls, which is the only minimally sane way to develop a distributed system. The funny thing is that I have never seen any serious discussion of redundant calls, except for it being implicit in practical designs in the anecdotal "how it works for us" articles that pop up once in a while. Most times people won't even discuss redundant servers. I imagine everybody thinks it's obvious, and well, I would agree, except for the fact that most people I see do not think so.

But well, Netflix couldn't avoid having a distributed system, so they aren't really representative for nearly anybody.


Amazon does micro services (or SOA) extremely well. In fact they practically invented the concept. It’s intricately linked with the 2 pizza team and service ownership concept (you build it, you support it)


and the concept of well-defined APIs for each service.


Microservices working well is entirely about the team, and my current group of teams works very well with the monolith pattern. A big part of this is because (for business reasons, not technical ones) they frequently trade ownership of parts - so if a service isn't well constrained it will be very quickly. We also have mature DevOps practices and engineers handle significant parts of DevOps themselves instead of just kicking the can.

But the reason I say this is about the team is because I've seen a well built, well groomed service be passed off to an outside team and immediately turned into a disaster. Same service, same business case, just a team without the DevOps savvy and willingness to follow the patterns.

Netflix does some.. unusual things with microservices, mostly with how they treat version rollovers. It's not bad or good, but it looks different from how many other shops handle the same problem and it means asking the question "is everything working?" is extremely difficult but asking the question "how many people are able to start watching?" is pretty easy.


> As a researcher, the monitoring and fault-propagation / modeling work they’ve done to get it to stay up at all is impressive, but it’s not clear all of that tooling would be necessary if they didn’t have to reason about N^2 fault tolerance scenarios, where N = 100’s of microservices. That’s on the order of one fault tolerance scenario for each atom in the universe.

That doesn't seem true. I would imagine that at Netflix scale, you probably have request tracing libraries that can give you a graph of service dependencies. Whether it's worthwhile to consume that, or easier to just let Chaos Monkey run rampant is another question.

Also, I very rarely have issues with Netflix. Typically when I do I can just exit the stream and restart it. Anecdata, but I could count on one hand the number of times I've had a title just not play, or Netflix be down entirely.


> From an engineering perspective, they track “number of successful stream starts”, instead of percentage of the time 100% of their services are working. That’s a huge red flag.

They don’t measure how often 100% of their services are up because perfect uptime is not the goal and is too expensive (if it’s even possible). If an internally facing service being down doesn’t affect a core metric like number of stream starts by customers, then it’s foolish to treat it as needing 99.999% uptime.


> Are there any case studies where microservices went well?

You answered your own question with Netflix. While you're right that it's not clear Netflix would've needed to develop their chaos monkey tooling and the like, it's not clear at all that an equivalent system is possible as a monolith. Even if a monolithic Netflix system were technically possible, it's not clear a monolithic system would be organizationally feasible either (Conway's Law).


Why wouldn't it be possible? It doesn't mean you can't modularize the application. And scaling can also work well for monoliths. I don't see how Netflix's service (browsing the catalog, serving content, reencoding videos) can't be done in a monolith.


If the question "are there any case studies where microservices went well" is a valid question, then so must be, "are there any case studies where monolithic architecture went well".

My point being that if and only if we have a track record as an entire field of making decisions based on case studies, AND if case studies have a track record of being objective rather than proffered as a result of a marketing agenda, then the question is ultimately legitimate.

There are more shops by total count that fail with monolithic architecture. That's inevitable just based on the infinitesimal number of projects executed as service architectures rather than monoliths at large in the wild. But still, we carry on with monolithic architectural style as if its outcomes were assured.

It's far easier for the vast majority of developers to build a monolith because it allows development to proceed without having to have any knowledge of or practice with the tricks and traps of distributed systems - an entire body of knowledge that a developer might never get meaningful and practical exposure to for an entire career.

The trouble starts when microservices are attempted by developers who can't imagine that there are entire bodies of software development knowledge that developers aren't presently in possession of.

Microservices is just, as Adrian Cockcroft used to say, "Service-Oriented Architecture with bounded contexts".

Web development is absolutely not a preparatory course in service oriented architecture. But the vast majority of web developers who attempt to take on SOA while simultaneously presuming an omniscience in all things software development due to their experiences only with monolithic web development will often fail to build a SOA. They usually end up with something that isn't quite SOA and isn't quite a monolith. And that's where the failures largely come from.

I work exclusively in microservices and SOA, and have since 2015. Before that, I worked principally as a web app developer, and did some work off-and-on in SOA implementations. And before that I spent years becoming oriented to the architecture. I don't make the mistakes that web developers typically do when they presume that web development knowledge is a sufficient prerequisite for working in SOA.

So, it's not a question of whether an architectural style works or doesn't. The majority of failures in microservices can be attributed to ignorance and to the narcissism that is permissive of it.

So, I would ask this question instead: Are there any cases where developer over-confidence and over-simplification went well?

These qualities don't tend to serve any architectural style well.

I've never heard of a well-designed SOA not going well. Every single case of microservice project remediation that I've participated in had as the most significant contributing factor an utter disregard for the body of knowledge that the microservices architectural style is built upon.

There are a lot of things that developers can get away with when doing the kinds of tinkering and wandering that typifies typical web development work. But those things don't work once we cross the line into SOA. And unfortunately, the incessant chasing after trivial resumé candy hasn't prepared the average developer for the rigorous mindset needed for SOA work.

As "microservices" became the next fad for perennial fad chasers of the software development world, they finally encountered a kind of work that they could not get away with by faking it. And so, we see a lot of failures. But the vast majority of the failures are personal failures and character failures, rather than failures of an architectural style.

The fat part of the developer bell curve was simply overreaching when it presumed to try to get away with building service architectures with the same level of disinterest in architecture and process that we can get away with in typical web development. Like a kid with copious experience building kites presuming to strap themselves to a hang glider and just "going for it". The outcomes are mostly predictable.

In the end, if a little time is invested in learning the fundamentals that have so far been eschewed for the sake of expediency, anyone can succeed with SOA and microservices. It's not that the realities of the architectural style are unlearnable, but it can't be arrived at by the level of tinkering and wandering that we can just get away with in monolithic web development.


Well, let's assume N = 999; then N^2 = 998001, so nowhere near the number of atoms in the universe, which is estimated to be about 10^80.


I think the person you're replying to meant 2^N based on the context. They're saying you have to account for every possible combination of services being down, and 2^N blows up fast: 2^266 is already about 10^80, so a few hundred services does get you into atoms-in-the-universe territory.


Why do you assume they have more fault tolerance scenarios simply because they have more deployable units?

In every service architecture I've worked with in the last few years, you could theoretically run every single "service" within the same physical process -- provided they all shared the same runtime/lang the way a monolith does. I tend to start service architectures using Ruby exclusively with the Eventide toolkit, so this is actually a viable approach for most teams I've worked with. But it's never ultimately made sense. Weighing the pros/cons, a consolidated deployment topology wouldn't add any benefit, and it would actually make it far more difficult for the operations folks, practically speaking.

I've helped put services into production that 1. carry out crucial, "core" business logic reliably and efficiently, 2. can be scaled horizontally without changing the code, 3. never raise exceptions or cause outages, and 4. don't need to be touched for years because new features are more naturally composed around them anyways (i.e. open/closed principle). Practically speaking, it's quite a bit easier for human programmers to reach this degree of precision with a smaller program than with a much larger one. And if you add up a lot of these high quality programs, you get a high quality system. Building a high quality system out of a single program is much, much harder for humans in practice.

I'll use crude terms here for a second: every good service put into production brings a net benefit to the organization relative to the same code having been entangled with an existing pile of code. That may sound like a "No True Scotsman" fallacy, but the definition of "good" in this context is precisely the net benefit being added. If you build and deploy services that have a high degree of quality, you get a corresponding benefit. If you build and deploy programs that _don't_ stand on their own, _don't_ leverage durable messaging, and take other programs down with them when they crash, then you get a rather large mess on your hands. In fact, you never had a service architecture at all; we call this failure mode a "distributed monolith."

I'll acknowledge that most attempts at microservice architectures in the wild don't seem to succeed. Anecdotally, they are particularly prone to failure when their architects don't understand the underlying principles of distributed systems all that well; they neglect important considerations like messaging idempotence, the deleterious effect of synchronous request/response messaging, and the need for deliberate, thoughtful design. In other words, they build systems out of N number of microservices, and get N^2 fault tolerance scenarios, which you adeptly called out for being foolish. They're arguably worse off than if they went with a monolith, but neither would be an architecture I'd personally want to work with.


I'm currently working with realntl on a transition to Eventide from a Rails monolith for a client project. We reached a point where the monolith was getting harder and harder to change. This is inevitable in my experience, no matter how careful you think you are.

Though the transition is still in progress, I can say that the path forward is clear and hopeful. This company had previously explored other "SOA" paths (distributed monolith) and it was clear that those were very problematic. Luckily, we were able to steer them to an actual evented architecture.

As I said, it's still early days for this particular project, but if you're feeling your monolith is getting harder and harder to develop on, you should start looking into evented architectures. Eventide (ruby)/messagedb (postgresql) are awesome technologies and would be the first I'd consider. Eventide also has a nice slack community filled with people that are learning and improving their software design skill both in the large (architecture) and the small (individual classes/etc). It's small, but they're good people.


If Netflix were a monolith, then the whole system would collapse instead of degrading.


A monolith can run distributed and be scaled across data centers. It's not a mainframe application with one host. One process can crash without affecting system stability.


Where you can get into trouble is if the problem affects all your instances across the entire cluster. If all your organization's code is running in the same process, if any of that code has a severe memory leak or other serious issue, it could impact the stability of everything else.

Not to say that wouldn't happen for SPOF microservices (e.g. your auth servers), but the surface area is potentially larger for monoliths.


On the other hand, if a memory leak concerns one key microservice your whole operation is also likely to suffer, even if 99% of services run fine.


It seems like they introduced microservices for the wrong reason. Instead of having a service per team, they focused on services to solve a technical problem:

"Having a code repository for each service was manageable for a handful of destination workers"

Microservices should be introduced to make teams go faster, not to decouple external api endpoints....


I mean you are right of course, but at the same time I can't knock the superficial idea of having one codebase for one domain-specific application. Applications / codebases like that are usually not the problem; it's integrating them into the larger whole where things start getting fucky.


I always felt like the biggest benefit of microservices (for the average company that just jumped on the band wagon) was simply the fact it forced them to break things up. Yes, they could achieve the same result with none of the overhead on a monolith, but it would take... discipline. It's much easier to just enforce a hard external constraint.

Realizing this and circling back is still a useful life lesson.


If your code is bad in a monolith, it’ll be bad with Microservices. If you can’t build a good monolith, you can’t build a good Microservices architecture — because it introduces even more complexity and requires even more consideration.

Discipline is fundamental to good software engineering: you can’t force it with Microservices.


I think this, approaching DDD, is the most common reason engineers push for it these days.


It isn't worth the added friction though.

And if you happen to leak concerns in your services (in a monolith), it's really easy to adjust, as opposed to having to coordinate the deployment of 5+ services.

And even then, a distributed monolith is still a risk.

Micro-services add cement to your project. Be prepared to keep boundaries you write for a long time.


Another reason is that it gives you more agency to adjust your code. Want to refactor? Cool, I don't have to talk to 15 teams about how this might impact them. Same thing with changing a schema, changing scaling strategies, etc, etc. I can do things in 2 weeks that would take 2 weeks of just talking to people on a monolith. That might be more organizational than a technical limit, but I've never seen an agile monolith before.


Without knowing more about their architecture it is difficult to comment beyond the conclusion Alexandra Noonan came to, stated at the beginning of the article. It looks like to me that the architectural assumptions were changing too quickly due to the demands of a fast growing business. Having all their code in a single repository means that they can control dependencies, versioning and deployment centrally, it gives them central control of their software development lifecycle. I can't see how they could not have had the same benefits of the monolith if their microservices existed in a single repo to begin with and had the appropriate tooling to enforce testing, versioning, deployment across all services in the repo. I guess this is the whole monorepo debate and tooling.

This article for me is more about the complexity of managing a large team across different sites where the architecture needs to change rapidly when modularity is absent. They did get a measurable benefit around performance, though. I wonder if Alexandra will comment on the challenges of running a team in an environment of this complexity?


I totally agree with you.

I think this article is more evidence against the credibility of multi-repo than against "microservices".

Anecdotally, my current place of work has grown to about 200 engineers, maintains a monorepo, and hundreds of deployed cron jobs, ad-hoc jobs, and "microservices". We have none of the problems discussed here. We invest maybe 20 eng weeks a year in monorepo-specific tooling, and perhaps another 30 eng weeks per year in "microservices"-tooling.


If the microservices are in a single repo and tested and deployed together then they are arguably no longer microservices but a "distributed monolith"!


I'm referring to having the same testing, deployment, packaging, versioning policies etc. being consistently applied across projects within the same repository, not deploying, testing and releasing them together.

It's the drift and inconsistencies across these concerns across projects that makes deployment and operations less predictable.


My experience of moving to a microservice architecture: the most important consideration with microservices is "who will develop, maintain and operate them?". You can split them down functional lines, architectural lines, whatever you like, but if you don't have teams with definite ownership of each microservice (and that aren't swamped by maintaining lots of them like it seems happened with Segment), it will become impossible. The "operational complexity tax" is a real thing but is manageable if your engineer:service ratio is sensible and considered.


I see it like organization of large companies: you have to split it into divisions, but you can't make every single person their own division.

Don't make the divisions too large, don't make them too small.

The art is to make them the proper size for the particular company.

If you have a small company you don't need divisions. If you have a large one, you need to make divisions as you no longer can speak to every single employee.


They went to 50+ services in a few months and were applying the same policies across all of them. It sounds like they didn't plan well enough and jumped straight into it, without any good DevOps or infrastructure mindset. It was a disaster waiting to happen. This shouldn't be an article that people read and say "Oh I'm never using microservices". This should be an article people read and say WOW, that is exactly the right way to NOT break apart a monolith.


What's notably absent are descriptions of problems with versioning interfaces, failures from network unreliability, problems managing connecting infrastructure, or poor delineation of service boundaries leading to undesired change dependencies -- which is to say it seems like they executed well. There's no sign they fell into common pitfalls.

There are definitely some good insights here that I don't often read about. The idea that with a sufficient number of microservices (say 50+) you not only treat your instances as cattle-not-pets, you have to treat the service types en masse as cattle-not-pets. This requires more automation and organized management, as pointed out by the need for tuned autoscaling rules. This requires continued investment into automating things you would do manually if you had fewer than 50 services.

The other thing to consider is that going to microservices and back to a monolith is not necessarily a failure. Microservices are good for periods of high change velocity; once a platform is mostly built and requires much less new development, consolidation makes complete sense. At all points, we're solving for impedance mismatch, whether that's the org structure, the velocity of changes, or the number of developers vs the number of deployed units.


How many places have gone from monolith to microservices and back to monolith? I'm sure there's been quite a few.


I bet plenty of them.

A team that fails to understand how to write modular code, is just going to write spaghetti RPC calls, while having to deal with all the traditional failures and performance issues of distributed computing.

Naturally it is a recipe doomed to fail in the large majority of cases, but it doesn't matter because whoever drove the change is no longer at the company and a new consulting team/new hire gets the money to drive everything back to the monolith.

So goes the money around on plenty of consulting gigs.


> A team that fails to understand how to write modular code, is just going to write spaghetti RPC calls

This is interesting. I always assumed we were talking about good developers here.

I wonder what's a more likely cause for a failed attempt at microservices. Is it developer incompetence and lack of discipline, or is it environmental factors related to the product and the organization?


IME it is always environmental factors.

When you have 200 developers working on the same product without well defined boundaries it will be a mess with a monolith, and a mess with microservices.

Also arbitrary rules such as "one service per team" or "one service per employee" that force engineers to jerry-rig things that don't belong together. Either allow them to make a new service, or make a new team, or admit that microservices are not single-purpose and are just a blob of multipurpose code. I've seen this way too much.

Also pure organizational inertia working against good engineering practices: sometimes a team will be severely overworked while others are overstaffed. But god forbid there's a temporary reorganization to improve the work of engineers, so people send tasks to other teams, but there's minimal communication between engineers.


For almost all works produced by more than one developer the "good developer / bad developer" dichotomy is just useless social darwinism. Talking about the team, organisation, incentives, or business is far more useful.

(My favourite example is John Romero, part of the very small team that produced Doom - but who also produced Daikatana, which keeps showing up on lists of notoriously bad games.)


That was pure hubris, foreshadowing GamerGate (and inventing the self-Pwn)! As you say, it's all about the team, not the technique. And talking about that particular team:

https://en.wikipedia.org/wiki/Daikatana

>One advert for the game became notorious; a 1997 poster containing the phrase "John Romero's About To Make You His Bitch[. Suck It Down.]". According to Mike Wilson, the advert was created by the same artist who designed the game's box art under order of their chosen advertising agency. Originally, both he and Romero thought it was funny and approved it. Romero had second thoughts soon after but was persuaded by Wilson to let it pass. Speaking ten years later, Romero said while wary of the slogan at the time, he went along with it as he had a reputation for similar crass phrases. In the same interview, he noted that reactions to the poster tarnished the game's image long before release, and continued to impact his public image and career. In a 2008 blog post concerning the recent activities of Wilson, Romero attributed him for the marketing tactic. This prompted a hostile exchange of public messages between the two at the time.

At least he apologized, though:

https://www.youtube.com/watch?v=BF_sahvR4mw

John Romero Apologizes for Trying to Make You His Bitch:

https://v1.escapistmagazine.com/news/view/100748-John-Romero...

>I'm going to quote our very own Shamus Young here for a moment: For almost a decade, Ion Storm's Daikatana has been the example of "industry waste, arrogance, and incompetence, as well as a universal punchline for things that suck." The shooter was supposed to be an epic vision, the masterpiece of John Romero - the mastermind behind genre-defining Doom and Quake.

>Then it came out in May of 2000, and it sucked. The arrogance and hubris that crippled Daikatana have been well chronicled over the years, but none of it is quite as infamous as the ad you see here to the right: "John Romero's About to Make You His Bitch. Suck it Down." It was a pretty ballsy statement in itself, but after the game's failure simply became laughable.

John Romero Is So Sorry About Trying To Make You His Bitch:

https://kotaku.com/john-romero-is-so-sorry-about-trying-to-m...

>Game designer John Romero and John Romero's hair ruled the roost during the 1990s. With titles like Doom and Quake, he not only helped popularize the first-person shooter, he defined it. Then the unthinkable happened. He made Daikatana.

>[...] Romero, who now says he is resigned to the ad, dished on the ad back in 2008, which evoked a saucy response from the marketer that spearheaded the suck-it-down campaign.

Romero Dishes on the Ad:

https://web.archive.org/web/20081225071219/http://kotaku.com...

>[...] these are the kinds of jackass stunts he pulled [...]

Suck-It-Down Campaign Marketer's Saucy Response:

https://web.archive.org/web/20081225070532/http://kotaku.com...

>[...] and ill advised breast implants strewn across this fair nation [...]


I feel this at my current work - and the worst thing is that they have some really smart folks which can understand the whole thing and make it work for a few years more.


Truth be told, no one stops you from having a modular 'monolith'. It feels like the name was invented just to sell books and conference tickets (and yummy consulting fees).

There is no substitute for a good model and responsibility separation, (micro)services or otherwise.


I agree.

Writer Michael Feathers has an article where he suggests that microservices are a replacement for encapsulation; all we have to do is use encapsulation well.

https://michaelfeathers.silvrback.com/microservices-and-the-...
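As a small illustration of "using encapsulation well", here's a minimal Go sketch of a modular monolith: each domain sits behind a narrow interface and other code depends only on that interface, which is the property people often reach for microservices to get. Everything is in one file for brevity, and the domain names and types are hypothetical:

    package main

    import (
    	"errors"
    	"fmt"
    )

    // Invoicer is the public surface of the billing domain; other modules
    // depend only on this interface, never on the concrete type below.
    type Invoicer interface {
    	Invoice(customerID string, cents int) (string, error)
    }

    // billing is the encapsulated implementation of the billing domain.
    type billing struct {
    	issued int
    }

    func (b *billing) Invoice(customerID string, cents int) (string, error) {
    	if cents <= 0 {
    		return "", errors.New("amount must be positive")
    	}
    	b.issued++
    	return fmt.Sprintf("INV-%d for %s (%d cents)", b.issued, customerID, cents), nil
    }

    // orderService is another domain; it only knows the Invoicer interface.
    type orderService struct {
    	billing Invoicer
    }

    func (o *orderService) Checkout(customerID string, cents int) error {
    	id, err := o.billing.Invoice(customerID, cents)
    	if err != nil {
    		return fmt.Errorf("checkout failed: %w", err)
    	}
    	fmt.Println("issued", id)
    	return nil
    }

    func main() {
    	orders := &orderService{billing: &billing{}}
    	_ = orders.Checkout("cust-42", 1999)
    }

In a real codebase each domain would be its own package with unexported internals, which is also what makes it cheap to extract one as a separate service later if you ever genuinely need to.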


Just waiting for O'Reilly to drop a book called "Microservices to Monolith"


A few years ago when the topic of outsourcing/offshoring development was a hot topic for conversation, I bumped into a consultant at a bar. We got to talking about offshoring, and he said he has two folders full of notes, one about offshoring & one about bringing resources in-house. He said whenever one approach starts to peak, he starts pitching the other one.


Like Computer Lib / Dream Machines, where you flip it over backwards to get the other book! ;)

And of course the Microservices section should be really thin, while the Monolith section is extremely thick.

Like PopeDotNinja pointed out, you can just flip the book over when one approach starts to peak.

https://computerlibbook.com/

https://www.youtube.com/watch?v=dcfwLhDDMz4

>VCF East XI -- Ted Nelson: Ted Nelson designed the Xanadu hypertext software and wrote the two-in-one personal computing book, Computer Lib / Dream Machines, in 1974. His work deeply influenced the personal computing revolution. Ted earned two Ph.D.s and penned several other well-regarded academic papers and books about ethical, historical, and moral issues in computing.

https://en.wikipedia.org/wiki/Computer_Lib/Dream_Machines

>Computer Lib/Dream Machines is a 1974 book by Ted Nelson, printed as a two-front-cover paperback to indicate its "intertwingled" nature. Originally self-published by Nelson, it was republished with a foreword by Stewart Brand in 1987 by Microsoft Press.

>In Steven Levy's book Hackers, Computer Lib is described as "the epic of the computer revolution, the bible of the hacker dream. [Nelson] was stubborn enough to publish it when no one else seemed to think it was a good idea."


We did this, but on the trip back we made some major improvements.

We didn't stop at a monolithic service. We stopped at a monolithic repository+organization. There is now 1 VS2019 solution that covers our entire business. All of our services can be built out from this one solution via various configurations. We've even created additional solution "views" for more focused work (i.e. so you don't have to load a bunch of projects you don't care about for a specific task).

At this point, if we ran into scalability issues with a monorepo, I'd start looking for better source control/CI/CD technologies rather than splitting things up in hopes of arbitrarily keeping git viable. The benefits of having all of your source code in 1 repository with strongly-typed models throughout are impossible to overstate. When checkbuilds complete successfully in GitHub, we know the entire business is clean. Not just 1 little aspect of the product stack.


Or they just drowned in their micro service complexity and never made it back.


Exactly, I would wager that very few organizations go from monolith to microservices and back. Getting organization buy-in to do the "big rewrite" is hard. Getting buy-in to do the second "big rewrite" after the first one didn't go well is going to be even harder.


As it is so often the case, the choice between monolith and microservices is not a binary one; rather, it is a sliding scale between two extremes.

On one end we have a real monolith: a single executable binary, with no external dependencies apart from the OS bindings. This is very rare in practice, and most commonly found in games and probably mobile apps; when it comes to Web-based services, even the traditional idea of a single-codebase app usually has a SQL database as an external dependency.

On the other end of the scale there is a complex system consisting of hundreds or even thousands [0] of tiny services that require complex orchestration mechanisms such as a service hub or service mesh.

So each team (in a wide sense: could be a company, organisation, department etc) needs to consider where they fall in the continuum, considering a) which architecture will provide most benefit while b) still being maintainable by the team; both the architecture and the team need to evolve together.

[0] https://qconlondon.com/london2020/presentation/monzo


Previous discussion from her blog post on the same topic: https://news.ycombinator.com/item?id=17499137


I'm just curious if there is a middle ground somewhere?

On one end you have a giant monolith. All services rolled into one, which includes your API, middleware and database.

On the other end you have microservices, which bundle services into individual distinct units with each service being responsible for its own API, middleware and database.

Are there any preexisting patterns which seek to combine these two and come up with an architecture which is midway between them? A few months ago I read an article about Data Oriented Architecture on HN which comes close, though I'm wondering if there are others.


> which includes your API, middleware and database

Layered microservices are an antipattern. In most cases, functionality is best divided by domain.


This is why I struggle with microservice architectures. It seems like there is a basic contradiction. On the one hand, it's vitally important that the microservices are carved into the correct modules, otherwise you get a nightmare of operational complexity where simple functional changes require coordinated changes across multiple services. But defining the correct module boundaries requires a bird's-eye architectural view of the entire system, which seems contradictory to the idea of self-organizing, independent teams. I can see how it works when the right way to divide things up is obvious or when you are dealing with IaaS or PaaS services, but in a complex business domain who decides how to carve things up?


> In most cases, functionality is best divided by domain.

So, SOA?


Service-Oriented architecture? This node.js framework specifically implements it: https://feathersjs.com (as an example)


There's what DHH calls the 'Citadel' architecture, which is basically you carve off a few small chunks from your monolith where needed and call them 'outposts':

https://m.signalvnoise.com/the-majestic-monolith-can-become-...


Still waiting for Monzo's follow-up blog post on cutting down their outrageous number of 1,500 microservices [0] and moving some back into monoliths. I'm not sure I would be too excited about that number of microservices given the degree of complexity involved. That is just too many.

[0] https://monzo.com/blog/we-built-network-isolation-for-1-500-...


Why do you think/feel that there are "too many"? What's the threshold for an acceptable number of microservices? (Not asking this to be confrontational. Just curious, because it's a sentiment I've seen before, without the reasoning behind it being articulated.)


One per developer seems like a fairly loose upper bound.


Even then this is risky - if that developer is hit by a bus do you throw the service away and have another developer write it again?

We recently had an interview candidate say this when we questioned the wisdom of having over a thousand microservices: some in languages that only the one developer maintaining them used! For me this is insane, but I digress

Monzo says that they have 800 people, and 1500 services. If we're generous and say 500/800 are developers, then each developer is responsible for 3 services! A team of 6 would have 18 projects in their domain.


There is a classic tradeoff here between top-down organisational dictates giving consistency and engineering independence giving flexibility.

Two organisations that I know of who favour the latter are Spotify and Netflix. It has benefits - different languages are good for different jobs and engineers like to be able to choose their tools.

It would be bad if this was taken too far, and something was written in a language only one person knows, but that problem already exists with the technical knowledge if something only has one maintainer.


"voices of experience pointing out that most decisions are made based on the best information available at the time."

Funny; in my experience with digital and particularly larger corporate jabronis, the people who make decisions are fucking McKinsey consultants who know nothing about the actual project, are only contracted for 6 months, and then they are gone. Rinse and repeat; maybe one out of every 3 or 4 attempts somebody actually gets it right and the project doesn't completely fail.


> Also, a proper solution for true fault isolation would have been one microservice per queue per customer, but that would have required over 10,000 microservices.

I'm a bit confused: they seem to imply that they need a microservice per customer/destination, but generally you have one instance (aka process) per customer, not an entire separate codebase. The article seems to use the same term for two different concepts. Or am I missing something?
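To illustrate the distinction I mean, here's a rough Go sketch assuming the hypothetical setup of one queue per customer: a single codebase and binary can run one isolated consumer per customer queue, so "an instance per customer" does not require 10,000 separate services. (In practice these might be separate processes or containers rather than goroutines.)

    package main

    import (
    	"fmt"
    	"sync"
    	"time"
    )

    // event stands in for whatever payload a customer's queue carries (hypothetical).
    type event struct{ customer, body string }

    // runConsumer is one isolated consumer loop; a backlog in one customer's
    // loop doesn't touch the others, yet all share one codebase.
    func runConsumer(customer string, queue <-chan event, wg *sync.WaitGroup) {
    	defer wg.Done()
    	for ev := range queue {
    		// Simulate delivery work for this customer's destination.
    		time.Sleep(10 * time.Millisecond)
    		fmt.Printf("[%s] delivered %q\n", customer, ev.body)
    	}
    }

    func main() {
    	customers := []string{"acme", "globex", "initech"}
    	queues := make(map[string]chan event)
    	var wg sync.WaitGroup

    	// One consumer per customer queue: same binary, isolated instances.
    	for _, c := range customers {
    		queues[c] = make(chan event, 16)
    		wg.Add(1)
    		go runConsumer(c, queues[c], &wg)
    	}

    	for _, c := range customers {
    		queues[c] <- event{customer: c, body: "signup tracked"}
    		close(queues[c])
    	}
    	wg.Wait()
    }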


Sounds like a job for Erlang


There is a related post on Segment's blog: https://segment.com/blog/goodbye-microservices/ which gives a bit more detail and context.

Update: and a related video https://www.youtube.com/watch?v=lv5o3qnQu5w


That final paragraph is pretty brutal. Are engineers really so reliably obnoxious?


In general, yes.

Everyone seems to have their preferred style of coding, and it is an easy defence mechanism, when presented with anyone who tries and finds it wanting, to say that "Well they didn't do it properly".

You find that with Microservices vs Monolith, Strong types vs Weak types, Exception Handling vs Results, Agile vs Waterfall.

People fragment into camps which turn into echo chambers, and it's easy to dismiss anyone who doesn't commit to that cult as being impure and not worthy of being in the cult anyway.


The takeaway is about trade-offs. They made a rational decision to improve fault isolation by dividing the app into smaller building blocks managed separately. After working with it for a while, they realized the higher operational overhead made the architecture a bad choice for them. So they went back to a monolith architecture and tried to do fault isolation within the boundaries of that architecture, which might have made fault isolation not as good as in the micro-services architecture but it was acceptable.

It's incredibly tough to know the full effect of a trade-off on your organization until you start going down that path.

We're early in the process of adopting a micro-service architecture. With only a handful of services so far, I can already see how a team of two is going to spend a lot more time with operational issues and debugging.


It seems like a lot of the issues were around sharing code and libraries, resulting from isolated codebases per service and the versioning hell of shared libraries.

I work in a org that migrated to microservices over time, but intentionally adopted a monorepo approach as part of it. It works quite well and seems to avoid a lot of the pain points expressed here, while also gaining the benefits of microservices.

There are definitely tradeoffs to the monorepo approach. It makes development on shared libraries more delicate and stressful; however, this can be mitigated by more robust cross-service CI, and I definitely think it's a worthwhile tradeoff compared to the painful cycle of shared lib versions diverging across services and finding issues when some service finally gets around to upgrading its version weeks/months after the shared lib changes.


The problem with microservices is that they come with a huge amount of "that's the RIGHT way to do it", infinite articles talking about what they are, and developers fighting over whether your services are too big or too small.

That usually results in abandoning the effort to actually map the use case of your particular application and model your services to the sizes that make sense for your project... Any big enough system will need some services or workers beyond a single monolith; it doesn't matter whether they say they follow microservices, some other type of SOA, or whatever. These silver bullets are killing engineering. Every project needs to take time to be planned, thought out, refactored, analyzed. If you read a bunch of shit on HN and go applying it, you end up with a random monster.


Others have covered most of my thoughts but I haven't seen platform mentioned.

If you already have a platform like Kubernetes/OpenShift (preferably with a service mesh) micro services make a lot more sense to me and can be done well. It gets easy to deploy and scale independently, but still have very low latency communication with a strong security model built in.

If you are deploying everything to completely independent platforms/infrastructure, I get a lot more conservative with "what should be its own service" and what shouldn't. Building a distributed monolith (a bunch of dependent services that aren't reusable/composable) is the worst of all worlds.


> "If microservices are implemented incorrectly or used as a band-aid without addressing some of the root flaws in your system, you'll be unable to do new product development because you're drowning in the complexity."

Which is nearly verbatim what some of us have been telling you since before the term microservices was coined.

Coupling is the problem. Yes, microservices add friction to coupling, but they don't prevent it. Coupled microservices exist (boy howdy), and they're resource intensive, resistant to evolution, or both.


This older "breaking up the monolith" GraphQL talk from Prisma is interesting: https://invidio.us/watch?v=_MmyTahR9ok

Especially if you consider RedwoodJS, a new full stack JS framework that's built on Prisma technology (their stuff is an alternative to Rails' Active Record ORM). My takeaway is that they provide a similar monolith-like experience by acting as glue between different services.


> One of the key takeaways was that spending a few days or weeks to do more analysis could avoid a situation that takes years to correct.

Exactly my point when I was working on a new project architecture and we had to choose between two authentication methods. Because once all your applications rely on an authentication system you can't just switch to the other one like that... Sadly we did not take the time six months ago, and today we are working on a migration which could have been avoided.


Why don't you share your insights?


I wonder why they are so easily able to work with and integrate external services, but they couldn't work with and integrate with their own internal services. I think it has to do with the fact that the boundaries to the external services are well defined and enforced because there is a physical aspect to it. Perhaps they lack the will to create and enforce those boundaries between their internal services.


It doesn't seem like their end architecture is anywhere near what they had at the beginning... It's a lot smarter, and it sounds like it's a monolithic distribution system that manages hot-swappable services. So the whole thing seems to fall in the "micro services where we need them" kind of architecture.

They just went from naive monolith, to naive micro services, then to smart coupling of the two...


Poor design. Sharing code between microservices is always a design smell.

You are just building services on top of another monolith...

Sounds like you needed to abstract the work being sent to the worker, instead of abstracting the worker around the work.

Meaning don't have many workers for one payload type. Abstract the payload and have a single worker...

That's why most systems become complex and spaghetti. Poor abstraction, so you use shared code to fix it...
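A rough Go sketch of what I mean by abstracting the payload (the types are hypothetical): one generic worker loop handles anything that implements a common interface, instead of one worker codebase per payload type.

    package main

    import "fmt"

    // Payload is the abstraction the single worker operates on; each job or
    // destination type implements it instead of getting its own worker.
    type Payload interface {
    	Process() error
    }

    type emailPayload struct{ to string }

    func (e emailPayload) Process() error {
    	fmt.Println("sending email to", e.to)
    	return nil
    }

    type webhookPayload struct{ url string }

    func (w webhookPayload) Process() error {
    	fmt.Println("posting webhook to", w.url)
    	return nil
    }

    // worker is the one generic loop: it knows nothing about concrete payload types.
    func worker(jobs <-chan Payload) {
    	for job := range jobs {
    		if err := job.Process(); err != nil {
    			fmt.Println("job failed:", err)
    		}
    	}
    }

    func main() {
    	jobs := make(chan Payload, 4)
    	jobs <- emailPayload{to: "user@example.com"}
    	jobs <- webhookPayload{url: "https://example.com/hook"}
    	close(jobs)
    	worker(jobs)
    }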


Might be a stupid question, it wasn’t clear to me in the article.

Did they go back to a monolith service or a monolith repo? It really just sounded like a monorepo.


My org is starting to migrate from a PHP Monolith to Microservices as a way of freeing ourselves from PHP.

Microservices will require more time spent writing interservice APIs, and code execution will be slower since many procedure calls will require data serialization and network requests. But we believe it will be worth the overhead to not be locked into PHP for every new component of the project.


Microservices are not an organizational problem; microservices are a design problem. If you apply DDD (Domain-Driven Design) to your application first and then start to design the application around your DDD concept, then it might work.

But it's extremely hard even to do that. Microservices simply complicate things if any of your domains need to share code with each other. Many DDD paradigms exist to address this, but none are practical. For example, authentication-related code: if one domain sets a cookie and the other one has to rely on that to keep the user (a shared model between the two domains) authenticated, then you need to duplicate code across the two domains or at the very least put it into some sort of shared helper/library, which DDD is kind of against.
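The pragmatic (if DDD-unfriendly) shared-helper route looks something like this rough Go sketch. The cookie name and handlers are hypothetical, and a real system would verify a signed session token rather than trust a raw cookie value:

    package main

    import (
    	"fmt"
    	"log"
    	"net/http"
    )

    // userIDFromRequest is the shared helper both domains call; it owns the
    // cookie format so neither domain duplicates it.
    func userIDFromRequest(r *http.Request) (string, bool) {
    	c, err := r.Cookie("session_user")
    	if err != nil || c.Value == "" {
    		return "", false
    	}
    	return c.Value, true
    }

    // Two "domains" reusing the same helper instead of re-implementing auth.
    func ordersHandler(w http.ResponseWriter, r *http.Request) {
    	if uid, ok := userIDFromRequest(r); ok {
    		fmt.Fprintf(w, "orders for %s\n", uid)
    		return
    	}
    	http.Error(w, "unauthenticated", http.StatusUnauthorized)
    }

    func profileHandler(w http.ResponseWriter, r *http.Request) {
    	if uid, ok := userIDFromRequest(r); ok {
    		fmt.Fprintf(w, "profile of %s\n", uid)
    		return
    	}
    	http.Error(w, "unauthenticated", http.StatusUnauthorized)
    }

    func main() {
    	http.HandleFunc("/orders", ordersHandler)
    	http.HandleFunc("/profile", profileHandler)
    	log.Fatal(http.ListenAndServe(":8080", nil))
    }

In a monolith this is a trivial internal package; split the two domains into separate services and you either duplicate it or version it as a shared library across both.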

That's why it totally makes sense to go Monolith first and really identify the parts of your application that are slowing you down either development wise, testing wise or performance wise and put them into separate contexts.

Phoenix actually does microservices right, all the way from scaffold generation to teaching best practices on keeping your domains properly separated. But even then, I've burnt my fingers many a time trying to split simple CMS solutions into multiple microservices, then going back to monoliths again.


I think the microservice approach is good when you can share those services between many applications. In most cases microservices can be the new premature optimization, because you can start to lose focus on the product itself and think about the microservice as a product.


If you're up for a humorous take on microservices, you could have a look at this video: https://www.youtube.com/watch?v=y8OnoxKotPQ


Whether monolithic or microservice, it's all about modularity. Monolithic or microservice is just the implementation of how you achieve modularity.

So the problem here, to me, is how you design the modularity for your system, not how you implement it.


> There is now a single code repository, and all destination workers use the same version of the shared library.

What if I told you this is unrelated to microservices? You can keep all of their sources in the same repository, sharing dependencies.


Another advantage of monoliths is speed. Staying in the local L1-L3 caches will be orders of magnitude faster than serializing, making a network round trip, and deserializing JSON in a microservice.

Performance per watt/dollar of computing.
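As a rough, hand-wavy way to see the gap, compare a plain in-process call with the same call made as JSON over HTTP. This Go sketch uses a loopback test server, so it still understates a real network hop, and absolute numbers depend entirely on hardware:

    package main

    import (
    	"bytes"
    	"encoding/json"
    	"fmt"
    	"io"
    	"net/http"
    	"net/http/httptest"
    	"time"
    )

    type req struct{ A, B int }
    type resp struct{ Sum int }

    // add is the "monolith" path: a plain in-process function call.
    func add(a, b int) int { return a + b }

    func main() {
    	const n = 1000

    	// Direct calls: stays in CPU cache, no serialization.
    	start := time.Now()
    	total := 0
    	for i := 0; i < n; i++ {
    		total += add(i, i)
    	}
    	fmt.Printf("in-process: %d calls in %v (sum %d)\n", n, time.Since(start), total)

    	// The "microservice" path: JSON over loopback HTTP.
    	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    		var in req
    		_ = json.NewDecoder(r.Body).Decode(&in)
    		_ = json.NewEncoder(w).Encode(resp{Sum: add(in.A, in.B)})
    	}))
    	defer srv.Close()

    	start = time.Now()
    	for i := 0; i < n; i++ {
    		body, _ := json.Marshal(req{A: i, B: i})
    		res, err := http.Post(srv.URL, "application/json", bytes.NewReader(body))
    		if err != nil {
    			panic(err)
    		}
    		io.Copy(io.Discard, res.Body)
    		res.Body.Close()
    	}
    	fmt.Printf("json-over-http: %d calls in %v\n", n, time.Since(start))
    }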


I don't blame them; microservices require discipline, careful planning, in-depth monitoring and debugging capabilities, and a whole different mindset. Very few companies can implement them successfully.


"Monolith vs Micro-services?" is philosophy at this point. It's a good brain exercise, a great starter for debate, it reveals a lot of insights... all because there is no actual answer.


That may sound cynical, but if we agree that software development is 99% a show (bet, entertainment, whatever) for investors, then microservices are good for keeping people excited.


Is it about process space and programming languages? Or is it about source control and CI architecture?

How is a monolith different from a monorepo?


Before going full microservices, try writing your code in a hyper modularized way and see if this doesn’t solve your problem first.


Having read their articles a few times, the issues that they were attributing to microservice architecture were really CI problems.


I think we can have both architectures Monolith and Microservices peacefully co-existing, with their own pros and cons.


Monolith vs microservice feels like a higher level case of the expression problem.


Note that one big database can decimate productivity as well.


I saw a talk with Alexandra from Segment before. When it makes sense to go with monolith it makes sense.

When it makes sense to use microservices, it makes sense.

Doing anything for no reason whatsoever never makes sense.

That is all.


Microservice architecture is one of these trends where the value is unproven, the upfront costs are high and the unknowns are unknown.

There's also a clear conflict of interest with SAAS and Cloud providers benefiting from the perception that microservices are the way to go.

Under these circumstances, letting someone else figure out all the issues is the wise thing to do. Thanks to the authors for doing just that.


I have an idea. Bring the concept of microservices into software!

Within software, module interfaces that can only communicate with one another via socket-like serial interfaces with no type checking!

Or simply have all your software modules running as forked processes on the same hardware and have them all communicate with one another via sockets or HTTP. That means every software module must be its own server!

To further imitate microservices, make sure that code in one software module can never ever be moved to another software module. Make it hard to reorganize things. Also make sure teams can only ever work on one section of the code base.

Does the above make any sense to you? If it doesn't, that's probably because code organization using microservices doesn't make any sense, period, because the examples above are literally doing in software the same thing that is done with hardware.

If it does make sense to you, then why are you using microservices to add extra complexity to the code? If you can do the same in software then you'd be doing the exact same thing as the hardware equivalent minus the extra complexity of multiple containers or VMs.

Don't use hardware to organize code, use code to organize code and use hardware to maximize performance.


Microservices in terms of code organization was always a redundant concept.

You can organize code with functions and namespaces; why do you need hardware to segregate code? It only makes the segregation permanent but offers nothing else beneficial in terms of code organization.

The underlying reasoning was always that developers tend to move outside of boxed software modules if the boundaries aren't enforced by hardware, so the modules end up not being modules and everything gets blurred into a monolith.

I always figured that if you want really hard lines drawn between software modules you can still do the same stuff in software itself, like why do you need actual silicon or VMs/Containers to do it?

The only real need for different services is performance; otherwise, all the benefits and downsides of microservices can be replicated in software.
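A rough Go sketch of what I mean (names are hypothetical): the module boundary is an interface, and whether it's backed by an in-process implementation or an HTTP client is a wiring detail, not a code-organization decision.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"net/http"
    )

    // Pricer is the module boundary. Callers depend on this, nothing else.
    type Pricer interface {
    	Price(sku string) (int, error)
    }

    // localPricer is the in-process implementation: the boundary is enforced
    // by the type system instead of by a network hop.
    type localPricer struct{ table map[string]int }

    func (p localPricer) Price(sku string) (int, error) {
    	cents, ok := p.table[sku]
    	if !ok {
    		return 0, fmt.Errorf("unknown sku %q", sku)
    	}
    	return cents, nil
    }

    // httpPricer satisfies the same interface over HTTP; swapping it in later
    // requires no change to callers (the endpoint shape is made up).
    type httpPricer struct{ baseURL string }

    func (p httpPricer) Price(sku string) (int, error) {
    	res, err := http.Get(p.baseURL + "/price?sku=" + sku)
    	if err != nil {
    		return 0, err
    	}
    	defer res.Body.Close()
    	var out struct{ Cents int }
    	if err := json.NewDecoder(res.Body).Decode(&out); err != nil {
    		return 0, err
    	}
    	return out.Cents, nil
    }

    func checkout(p Pricer, sku string) {
    	cents, err := p.Price(sku)
    	if err != nil {
    		fmt.Println("checkout failed:", err)
    		return
    	}
    	fmt.Printf("%s costs %d cents\n", sku, cents)
    }

    func main() {
    	// Today: wired in-process. Tomorrow: httpPricer, if scale ever demands it.
    	checkout(localPricer{table: map[string]int{"tea": 250}}, "tea")
    }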


I bombed an interview before because I said microservices can be really bad.


Maybe it's because you weren't able to explain why you thought microservices can be really bad, or your explanation didn't hold water. So what was the explanation you gave them, or do you not want to say?

Or perhaps there are other reasons you're simply not recognizing? Like your sarcastic tone?


[flagged]


There is zero chance I will ever communicate with you about anything ever gain. You need to back off now.


This person has been stalking me across threads and topics and deliberately trying to ignite conflict. Please warn.


I see you were being sarcastic about there being zero chance you would ever communicate with me again, because you just did one hour later, so I'll reply:

Freedom of speech doesn't give you the right to not be replied to: that's not how it works, and you aren't the dictator of what I "need". But it's 100 percent in your hands to not reply to me, after you proclaim there's zero percent chance you ever will. Since you just broke your word about not communicating with me about anything ever "gain" (sic), by replying to my same post a second time within an hour, I'm certainly under no obligation to you. You have no right to control which threads I read and who I respond to. Freedom of speech doesn't mean freedom of consequences from what you said, or censorship of people who disagree with you.


Stalking people and trying to incite conflict is against the rules on HN. Freedom of speech allows you to be racist, sexist and start verbal battles in the real world. Not here. You do not have freedom of speech on HN.

You need to back down now because not only are you violating the rules, you are deliberately stalking me to try to start conflict. This is a flagrant violation.

You are not disagreeing with me, you are trying to start a fight. Go away. I am telling you right now, whether you disagree or agree with me or not, I will not have any further conversations with you. It is pointless for you to continue unless your goal is to start shit, In which case you need to stop now because that is a violation.

You want to disagree, than disagree. Your initial comment was not a disagreement, it was an insult based off of another insult you made on another thread. Back the fuck off.

@dang, please moderate. Or please add a feature where you can block certain abusive users from replying.


TL;DR: "If microservices are implemented incorrectly ... you're drowning in the complexity."


I want to see the microservice architecture where you don't drown in complexity.


Could it be valid to say that even if microservices are implemented correctly, you're swimming (if not drowning) in complexity? But for some problems and teams, that's better than the alternative.



