Is there any place for monoliths in 2021? (2020) (fjrevoredo.me)
38 points by clubdorothe 74 days ago | 65 comments



In the cons of monoliths, "High coupling between components" is listed. I think this is a misconception, and a popular one: some people apparently believe that if you take software and introduce some form of RPC at "component" boundaries, things are magically decoupled. I.e., just because execution happens in a different process, it must be decoupled. And this fallacy is what leads to the distributed monolith.

Or am I missing something?


"Coupling" means too many things to be useful. E.g. coupling at the interface level (component B interface was not thought in generic way, but only to accommodate component A), coupling at the deployment level (to re-deploy B you also have to re-deploy A), coupling at a code level (component A using private fields of B) etc.

In my experience with microservices, if you don't put much effort into following best practices, you end up with components coupled at the interface level at worst, but you can still deploy independently, and you don't have to worry about service B subscribing to some internal event in A or that kind of thing.

In a monolith, for the same amount of effort, after some time you usually naturally end up with all the kinds of coupling mentioned here, which is very hard to then get out of.


Monoliths have high coupling by default — coupling components together is the "easy" and "obvious" way to get things done in a monolithic codebase, and so it's what junior devs will inevitably do when pressed for time.

A monolithic system's architect would have to make an explicit choice at some point, to reject/restrict coupling (by e.g. building the monolith on an actor-model language/framework, where components then become "location neutral" between being in-process vs. out-of-process, making the "monolithicness" of the system just a deployment choice — choosing to deploy all the components as one runtime process — rather than a development paradigm.)
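
A minimal sketch of that "location neutral" idea in Python (hypothetical names; a real actor framework does far more): components address each other by name, and whether a message stays in-process or crosses the network is decided by routing, i.e. by deployment.

    import queue

    mailboxes = {}                        # name -> (inbox, handler); local actors

    def spawn(name, handler):             # register an in-process component
        mailboxes[name] = (queue.Queue(), handler)

    def send(name, msg):
        if name in mailboxes:             # local delivery: a plain queue put
            mailboxes[name][0].put(msg)
        else:                             # remote delivery: serialize and ship
            raise RuntimeError(f"no transport to {name}")  # (transport stubbed)

    def run_one(name):                    # deliver one pending message
        inbox, handler = mailboxes[name]
        handler(inbox.get())

    spawn("logger", lambda m: print("log:", m))
    send("logger", "hello")               # callers never know where "logger" runs
    run_one("logger")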

Service-oriented code, on the other hand, has low coupling by default, until coupling is introduced via explicit choices like making synchronous RPC calls between components.

Sadly, people refactoring a monolith into SOA code will often introduce such coupling, as it's seemingly the cleanest 1:1 "mapping" of the monolith's code into SOA land. But the point of refactoring into SOA isn't to do a 1:1 semantic recreation of your monolith, including its inter-component dependencies; by doing so, you just produce a "distributed monolith emulator." A proper SOA refactoring should carry over the semantics of the individual components, but not the semantics of their interdependencies.


Microservices worked on by those same developers have high coupling by default, too.

This is because coupling is a design instead of an implementation decision. "Make a synchronous RPC call between components" is still the "easy" and "obvious" way to get things done.

It's sometimes easier to spot high coupling when looking at code in a microservice world, but it still requires you to know what you're looking for and know how to avoid it, and not give in to the temptation.

Last time I was working with people trying to split up a monolith I pleaded with them to try to understand the coupling first, and figure out what a system would look like without that coupling before actually just moving all the code around and replacing direct method calls with ... other method calls that made synchronous RPC network calls.

No luck.

Error rate went up.


> "Make a synchronous RPC call between components" is still the "easy" and "obvious" way to get things done.

It all depends on relative friction. Systems designed from the ground up to "think service-oriented" will use languages/runtimes/frameworks that make low-coupling options easier and more idiomatic than synchronous RPC calls.

For example, if your SOA is built on CQRS/ES, then adding an Event to the event store, for another Command to react to, should be easier than making an RPC call.
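
A sketch of that friction difference in Python (hypothetical names; a real event store is durable and consumers are asynchronous): the ordering component appends an event, and billing reacts without either knowing about the other.

    event_log = []                        # stand-in for a durable event store
    subscribers = {}                      # event type -> handlers

    def subscribe(event_type, handler):
        subscribers.setdefault(event_type, []).append(handler)

    def publish(event_type, data):
        event_log.append((event_type, data))
        for handler in subscribers.get(event_type, []):
            handler(data)                 # in production: async consumers

    # Billing reacts to orders; Ordering never calls it.
    subscribe("OrderPlaced", lambda d: print("invoice for order", d["id"]))

    # Ordering emits an event instead of making an RPC call.
    publish("OrderPlaced", {"id": 42})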


I’m very intrigued by this comment. It probably captures the thing that has held me back from microservices. Can you explain how one can break up a monolith without keeping the interdependencies?


Say you have some unit of work A within a monolith, and it is used by services B, C, and D, also in the monolith.

You can carve out A as a microservice with a clean general interface, and write an adapter layer to translate calls from your old messy interface to your new one, and B, C, D call that. Then start refactoring B, C, D one by one depending on your priorities.

That way, a new service X that needs to use A can start using the clean general API directly, even if B, C, D are not refactored yet. A minimal sketch of the adapter idea follows.
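
In Python, with hypothetical names (in practice the clean interface would be an HTTP/RPC client for the new service):

    # The carved-out A: clean, general interface.
    def reserve_stock(sku, quantity):
        print(f"A reserves {quantity} x {sku}")
        return {"ok": True}

    # Adapter: accepts the old messy call shape that B, C, D still use,
    # and translates it into the clean interface.
    def legacy_reserve(item):
        return reserve_stock(item["code"], item.get("n", 1))

    legacy_reserve({"code": "SKU-1", "n": 3})   # unchanged callers (B, C, D)
    reserve_stock("SKU-2", 1)                   # new callers (X) skip the adapter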


Also "High coupling between components" is, a lot of times, GOOD.

The more everything in the pipeline know about each other, the easiest is to code it and the FASTEST will be.

You can say any performance gains is possible when you exploit this facts. And the more "abstract/invisible" everything is the more impossible is to make it so.


100%. As long as two things are communicating they're coupled. Doesn't matter if communication is calling a function or making a network request.


There is a difference between "loose coupling" and "tight coupling". Components/modules should be loosely coupled, regardless of whether they run within the same process or not. Component/module interfaces should be carefully designed so that coupling stays loose.


from your perspective, what are the differences?


I think this is too strong of a definition because then everything is a monolith.

We need a definition of what it means for two services to communicate while being “uncoupled”. I think “are versioned independently” or “don’t have to be upgraded in parallel” meets the bar.
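
One concrete test for that bar, sketched in Python (hypothetical payloads): if the consumer reads only the fields it needs and tolerates new ones, the producer can add fields and deploy on its own schedule.

    def handle_order(payload):                    # consumer, unchanged
        return payload["order_id"]                # ignores unknown fields

    v1 = {"order_id": 42}                         # producer before upgrade
    v2 = {"order_id": 42, "priority": "high"}     # producer after upgrade

    assert handle_order(v1) == handle_order(v2) == 42   # no lockstep deploy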


> I think this is too strong of a definition because then everything is a monolith.

Yes... exactly. More things are monoliths than we want to admit.

> We need a definition of what it means for two services to communicate while being “uncoupled”

Take a dead simple example: an application server that talks to an in-house video encoding micro-service.

Even if that video encoding service only has a single really well designed endpoint, there's STILL coupling between the application server and the video encoding service.

Just because we've replaced a method call with an HTTP request doesn't mean things aren't coupled anymore.

Sure -- you may be able to deploy certain changes to your video encoding service without changing your application service. However, you need to be keenly aware of what changes are compatible with existing application servers and which are not and that adds complexity and cognitive load. Maybe it's worth it in some cases. In many cases, it's not.


If we go with your definition then my app is a monolith with S3 and SQS which doesn't help me pinpoint where the potentially bad architecture thing is.


yeah -- your app is coupled to those services. I've definitely written code before that depended on a third party API only to have that API break on me.

Granted, this is not something to worry about with S3 or SQS.

> If we go with your definition then my app is a monolith with S3 and SQS

okay sure, I agree that calling your app a monolith with S3 and SQS is ridiculous.

The point is that we all may think that we're writing another S3 or SQS when we write our own micro-services. However, in practice maintaining a stable, backwards compatible, public API like that is quite costly and we usually end up re-implementing function calls as HTTP requests and then calling it "loosely coupled".


I don't think a monolith is defined by coupling. I think it's defined by being a single deployment of custom code covering many domains/including many kinds of functionalities, rather than many deployments.

Your definition of uncoupled makes some sense. It's a good starting place at the very least.


No no no. This is right. Inserting layers of indirection doesn't change the inherent coupling that exists between components.

Sure, you may be able to call it "looser" coupling, but you may also be able to call it a rat's nest of complexity!


Yep, monoliths aren’t a bad thing so long as the problem domain is actually inherently coupled.

If you break out a separate service from your monolith and it’s not independently useful to things other than your app then you probably should put it back.


Well said.

I had the misfortune to work on a project where this exact thinking was used to carve up a relatively simple system into about a dozen microservices. It turns out referential integrity and transactions are really, really useful. For example, most tasks required calls to multiple microservices, and if something broke in one of the microservices involved, it would often leave the other microservices in an inconsistent state, and retrying the failed task became almost impossible. The project was scrapped after 3 years of development.


Couldn't have said it better myself.

Sometimes you gain real benefits from this approach (e.g. maybe a component needs to be scaled independently), but I find that very commonly, people write tightly coupled micro-services and end up with a distributed monolith.


If you compare making a call to a web service vs calling a function to achieve a similar outcome, it is definitely true that you can easily have business logic coupled between these two services. You also have the additional cost of having to handle network errors in the calling function.

However, scaling a monolith (as someone who does this for their day-job) is significantly harder than scaling independent services. This is mainly because performance issues in one function can hog resources that other functions share. This is particularly the case when you have a single, central database where one poorly optimized query can cause performance for all users to degrade severely.


To play Devil's advocate, how about just running multiple instances of the monolith behind a load balancer, and using database replicas/shards/load balancing?

Being able to scale individual components/services will be the most efficient solution, but if there doesn't happen to be a particular bottleneck that bogs down everything else (besides the database), it seems like the traditional monolithic load balancing approach may not be so bad.

Especially compared to the effort of trying to split an existing monolith into microservices. And if there are other bottlenecks, you can focus your effort on improving or isolating those parts.

(I only have intermediate experience in this area and certainly way less than you do; definitely not trying to claim anything authoritative.)


Your thinking is along the right lines. The "you can focus your effort on improving or isolating those parts" is the trouble. With a sufficiently complex code-base this has a very high cost, and the number of programmers capable or willing to do this sharply declines.


Your scaling argument is one of the oft-cited ones. I've cited it myself. The reality I've experienced has largely been contradictory, though. The reality is that now you have bottlenecks in each of your services, and you likely have a much harder time figuring out where the internal and cross-service bottlenecks actually are. You have to add a lot of modern, cutting-edge tech to really get an accurately traced performance picture, and the truth is that pretty much nobody does that when they're building their microservices. Only once things start falling over do they consider adding the things that would just be available much more readily in a single process.


Most systems don't need to scale beyond "please no obvious performance bugs", so it makes sense to write most applications as monoliths for the reason that you are stating (i.e. development is faster).

It only makes sense to write services (note: "services", not "micro-services") when it is inevitable and obvious that your software is hitting scaling issues related to some subsystem, or when you have systems that need to be independently reliable (i.e. ATMs should work even when the bank's website is down).


Last time I was scaling a monolith I basically did it purely by splitting up the DB into multiple ones with different purposes and improving query perf just through that.

I would've had to do all the same work to extract it to microservices, but also would've had to do a lot more on top. Had it not been a legacy system, probably worth it - but as it was, leaving it as a monolith querying a now-split-up set of DBs was cheaper and faster.


> If we analyze most of the successful migrations into microservices, they were driven almost exclusively from necessity rather than preference.

This is basically how everything should work. There's no need to rush things if you don't need it now, especially since some who do convert are disappointed with the results (it's not the microservices themselves; it's the specific migration planning, design, and maintenance).

> So maybe there’s no point in keep discussing microservices vs monoliths.

Wikimedia has monoliths for text-based systems and microservices for (some) media-based systems, and it makes sense: transcoding is an unpredictable workload for them (I imagine this is even more the case for YouTube), while database operations (at Wikipedia's scale) are very predictable and don't warrant the additional complexity. Even in their planning for multi-datacenter operations (https://www.mediawiki.org/wiki/Wikimedia_Performance_Team/Ac...), they are more concerned with disaster recovery and slowness for logged-in users than with quick scalability.


> This is basically how everything should work. There's no need to rush things if you don't need it [right] now.

I'm really not sure why this is so difficult to grasp for so many people in IT.

I think it stems from a desire to not have to think too hard about how to solve a problem since "someone else has already solved this."

... or maybe an inability to do so.


Many IT people are tech-fetishists and only solve genuine problems incidentally.


Most software is monolithic in 2021. The exceptions are overhyped, because the day-to-day lives of administrators and corporate devs are boringly standard.


microservices: let's take the hardest problem in software construction, factoring the system properly, and introduce network connections and deployment complexity into it.

yeah, there's a place for monoliths, and developers who are willing to ignore the industry's ludicrous fads and FAANG-chasing have the advantage.


Don't forget things like routing, authentication, certificates, logging, deployments, error-tracking, monitoring, error-handling and service-meshing.

Sure, it probably gets easier when you're booting your third or fourth microservice, but there's a lot of overhead.


For some, the overhead is the point.


I feel like the hype made a lot of people (not you) forget there's a middle ground, as if you couldn't just have "services". I therefore propose a new hype cycle, "mesoservices": only split a service out of the monolith when needed, and have it do only as much as is necessary.


exactly, thank you


After much debate my company recently switched back to a monolithic architecture and it may be the greatest decision we've ever made.

It dramatically simplifies so many aspects of development, which means you can have developers working on more impactful features, and you don't have to worry about creating a meta-structure for keeping things consistent across all your microservices, which is one of the biggest benefits I didn't see mentioned in this article.

And in many systems with interdependent logic (ours is one), duplicated data and microservices calling other microservices in loops become incredibly difficult to avoid.

I highly recommend the monolith for anyone who is not trying to cope with epic scaling issues.

All hail the monolith!


Every project should be started as a monolith. Period.

Then as it grows and it makes sense to decouple at the edges then it makes sense.

And these micro services only make sense when there is a team responsible for each one.

Otherwise the whole thing is like a game of chess - you will forget to make a move at some point.


This seems a little over-simplistic. There are many good cases for microservices, and there are times when, going into the project, you know they are what is needed. I also think there are a number of ways to build microservices, and it doesn't always have to be difficult. For instance, the last 4 projects I have built have been done with microservices, where the API Gateway endpoints such as /user or /machines each talk to their own Lambda microservice.


In my experience, the architecture itself is not relevant at all to the success of a business.

Its alignment with the company structure is.

The same way Conway's law tells us that company structure and communication paths end up in sync, the architecture should match the teams.

As a rule of thumb: the good architecture is the one that minimizes the communication needed between the different teams.


I've worked for a very successful, very high-margin $200M+ revenue company with very good salaries, on what was (some years back) a CORBA Java system.


Some would argue that if you are using CORBA you probably have a pretty good idea of what you are doing, and you are not doing fashion/hype-driven development.


It might also mean that the project was created when CORBA was in fashion, and it survived to this day because it was an actually useful system, despite the architecture being whatever it was.


"CORBA was in fashion"

Exactly this.


I feel all these black and white comparisons of monolith vs microservices tend towards being overly reductive and miss the crux of the matter.

First, "microservice" is a terrible name (we should have just stuck with SOA), it leads to all kinds of agile consultant snake-oil salesmen claiming some ideal size for a service. That's bullshit. Services should be defined by their interfaces, full stop. If you can't come up with a stable interface between two services such that the interface changes orders of magnitude than the code within each service, or if you find yourselves always having to deploy the services in pairs due to leaky abstraction on the interface, then they probably shouldn't be two services.

Second, SOA is primarily a tool for scaling teams. Yes, there are some tangential benefits in terms of codebase size, CI build time, etc., but those are false economies if you have a small team that has to deal with the overhead of many services. Modern web-scale architectures are really about factoring things to leverage specialists to scale very large tech platforms.

Third, and perhaps most importantly. In any rapidly growing business you need to evolve quickly. You should not expect to design a perfect architecture day one, you should plan to evolve the architecture continuously, so that every two orders of magnitude growth you look up and realize you've replaced most of what you've previously written one way or another. Small startups that focus on "microservices" before they are anywhere near 100 engineers tend to die before traction.


> Developers always want to try the new flashy things [...] On the other hand, management mostly sees risks .. mostly wants new fancy features

I have the inverse experience: management wants to brag about tech and therefore forces ill-suited tech onto the developers.

I've had managers who wanted a CDN because that's what all professionals do, without taking a moment to think about the risk added by putting more components into the system (or whether it adds any benefit at all).


CDNs are a great example where, if you don't have the problem they solve yet, using them correctly might not be worth your time.

You may now have separate asset domains, interactions of cache expiry headers across different servers, custom header forwarding through your front-ends, new separate asset packaging and deployment steps during shipping, and a slew of other "new stuff" to think about during every deploy, that can all break, and that you ought to have multiple people on the team really understand to use properly, or to debug if it's not working.

If you have <100 users, growing to ~500 by the year's end, you maybe don't need to spend time on any of that stuff yet.


> I've had managers who wanted a CDN

I'm hoping that the purported CDN is not for an internal, company-only application.


Microservices Pros: Code reusability?

Please ELI5 how I get better code reusability in microservices than in my monolith with concerns separated into libraries and helper modules.


Yeah I think it's better rephrased as "service reusability". A proper microservice should be able to be redeployed anywhere for anything.

If you have an authenticate() function in an auth helper library, you can move that around to new web apps to help users authenticate with different parts of your site(s).

Your authenticate() function for a microservice is probably going to be part of a broader "microservice api library" that gets reused, but I can't imagine reusing the microservice code itself - you're supposed to just spin up a new one.
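
The contrast, sketched in Python (everything here is hypothetical: the endpoint URL, the check, the names):

    import json, urllib.request

    # Library reuse: the logic itself ships inside every app.
    def authenticate(user, password):
        return password == "correct horse"        # stand-in for real checks

    # Service reuse: each app ships only a thin client; the logic runs
    # once, inside the deployed auth service.
    def authenticate_via_service(user, password):
        req = urllib.request.Request(
            "https://auth.internal/login",        # hypothetical endpoint
            data=json.dumps({"user": user, "password": password}).encode(),
            headers={"Content-Type": "application/json"},
        )
        return urllib.request.urlopen(req).status == 200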


If you have an “authentication” microservice, I can just reuse/deploy the service for my new app. No coding (maybe some configuration).


I find a lot of people create micro-services when a simple library would suffice. An "authentication" micro-service might make sense in some contexts, but in almost every real-world situation I've been in, this would be better as a library.


I think the idea is that if an external tool needs only one service, you don't need the whole thing.


Hearing a lot about monoliths vs. microservices; can someone shed some light on how "pluggable/plug-in driven monoliths" fit in? To quote Martin Fowler on monoliths:

> Change cycles are tied together - a change made to a small part of the application, requires the entire monolith to be rebuilt and deployed.

But this is not true. I worked on a C# "monolith" around 2010, and we used the MEF framework to build pluggable .dll files that you could just "drop" into the deployment folder. As long as you stuck to the same interfaces (through a shared interfaces project), separate teams could build and ship individual parts of the application. Even exceptions could not harm the full system (when they were properly caught in the host), and segfaults shouldn't happen in a managed system (but yes, they did, mostly through P/Invokes).
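
MEF is .NET-specific, but the mechanism is easy to sketch in Python (hypothetical layout: each file in plugins/ exposes a register() hook against the shared interface):

    import importlib.util, pathlib

    def load_plugins(folder="plugins"):
        for path in sorted(pathlib.Path(folder).glob("*.py")):
            spec = importlib.util.spec_from_file_location(path.stem, path)
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)       # "dropping in" a file = deploy
            module.register()                     # the shared-interface contract

    # Shipping a new component is copying one file into plugins/;
    # nothing else gets rebuilt or redeployed.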

I always liked the good old "Winamp model", where you would just drop in some dlls and get new visualization plugins enabled, each one more different than the last.


> Is there any place for monoliths in 2020?

> Is there any place for monoliths in 2021?

maybe one year we'll get our answer!


It's the polar opposite of the perennial "Is it the year of Linux on the desktop?"


IMO the year of the Linux desktop was 2015, when Microsoft made their operating system available gratis.


Many of the points in this article are entirely a matter of implementation, not inherent to or ingrained in any particular architectural choice.

You can just as easily build a highly modular and decoupled monolith as you can a tightly coupled and fragile microservice. The same point holds true for many of the other pro/cons the author brought up.


It really depends on the framework being used. Some frameworks are essentially micro-in-monolith (Akka) and some encourage the microservice approach (federated Apollo GraphQL) due to their use case.

I think ultimately it depends on the tools being used and the team building with them, because there are tradeoffs to going each way.


Monoliths pros:

Way easier to debug


Decent write-up. I agree that most things should start as a monolith and only move to microservices if no other solution will solve the problem. You can scale a monolith quite effectively with a little planning.


Agreed, but I have seen numerous times (I contract, so I see a lot of shops from the inside) that employees suggest microservices as the architecture from the start for new projects. I have a strong feeling this is CV-driven development, a.k.a. "it's trendy and I need skills in this field to stay relevant on the job market".


I joined a shop where that was the case. We invested time in reintegrating services back into the monolith. A number of them could be replaced with ~100 lines of ruby.


I think you can reap the benefits of both, monolith + microservices (nanoservices?), with something like Elixir / Erlang.


Yes


Yes.



