For example, you can have a monolithic repository (SVN, Git, whatever) full of subdirectories housing mostly independent projects. Conversely, you can have many repositories, each containing a module that only works when present with all the rest in a monolithic runtime.
Currently I'm working with some corporate internal systems that deploy new versions every couple of weeks, and I'm trying to work toward the former model: while we want to avoid a big ball of mud, the overhead of wrangling a dozen repositories with different versions/histories/merge operations seems wildly unnecessary.
1. The term "Monolith" has a bad rep.
2. "Majestic Monolith" means eliminating needless abstraction and avoiding distributing your system (unless that's unavoidable).
3. The term also means writing "beautiful, understandable, and succinct code ...
that makes us smile when we write it ... and later have to extend or patch it".
4. Everyone has to understand all parts in the "majestic monolith".
5. The "Majestic Monolith" puts pressure on / incentivizes programmers to keep the code-base clean.
6. Most programmers will rise to the occasion.
I think most programmers would agree with the meanings/goals ascribed in 2-3 (possibly also 4, if 4 is understood as deeply understanding the structure of your software product) as generally good rules for building software.
1, 5, and 6 seem highly subjective to me. I tend to think that if you're building anything stringently according to 2-3, you'd probably be fine and could call that method whatever you wanted.
There were some interesting ideas in here, regardless of whether or not the argument for the majestic monolith succeeded.
> an integrated system that collapses as many unnecessary conceptual models as possible. Eliminates as much needless abstraction as you can swing a hammer at.
There are ways to eliminate needless abstraction while still keeping proper separation of concerns.
(But you can enforce separation of concerns through a language that lets you do that, rather than needing a network connection.)
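For instance, ordinary module scope can hold a boundary without any network hop. A minimal sketch in Node-style JavaScript (all names invented for illustration):

```javascript
// A boundary enforced by language scoping rather than a network call.
// Only the returned object is the module's public surface; the state and
// helper below are unreachable from outside the closure.
function makeBillingModule() {
  const charges = [];                          // private state
  function validateAmount(cents) {             // private helper
    if (!Number.isInteger(cents) || cents <= 0) throw new Error('bad amount');
  }
  return {
    charge(customerId, cents) {
      validateAmount(cents);
      charges.push({ customerId, cents });
      return charges.length;                   // receipt number
    },
  };
}

const billing = makeBillingModule();
console.log(billing.charge('c42', 1999));      // 1
console.log(typeof billing.validateAmount);    // undefined -- boundary holds
```

Callers can only reach `charge`; they can't poke at `charges` or skip validation, which is the same guarantee a service boundary gives, minus the latency.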
Not in my experience. People will find a way to beat even the best "technical enforcement" every time. Not necessarily hack it -- just work around it to move the problem elsewhere.
6 is only true if you've hired the right people :-)
I think everyone makes this mistake early in their career, and it's only avoidable if senior developers can teach newcomers in a way that makes sense and is understandable.
In the end, most people come to the conclusion that the programming language matters less for success than bad habits and complexity, which can kill everything. Even with frameworks most of us would rate as "easy to learn", I've seen people completely mess things up. There is no language and/or framework that both prevents people from making mistakes AND allows them to learn/work productively.
There is no holy grail. It's part of our job to diagnose technology problems and justify the use of (new or old) solutions to fix them. Ignoring the past and/or ignoring the future (i.e. trends) is a mistake. Learn to question your technology choices regularly without questioning yourself.
Sure, it's easier to "ride the wave" of a hype. Containerize everything, microservice everything. Our tech landscape is growing in complexity even faster than it was a couple of years ago. Somehow many people love complexity, because they don't enjoy writing "stupid business software" and want to be some mad architect who rewrites everything in his/her favorite niche language with 10 new layers of indirection just because…
These are the worst type of programmers: complexity for the sake of complexity. They will replace a ten-line Python script with a Python "app" made up of multiple classes, each class containing its own generator implementation, because using loops or recursion is for noobs...
> One of the benefits to the majestic monolith is that it basically presumes that the people who work on it also understand it.
I'm kind of doubting that this guy writes majestic code. I would like to see him put more emphasis on separation of concerns, which you should have even if you don't use a distributed architecture.
(Also, as you point out, "everyone understands every part of the project" only works for small projects).
Well, no software can be (or will be) maintained indefinitely.
I agree that with a small team it's much more feasible to keep the monolith well-polished. But that polish is more a product of professionalism and discipline than something that comes along naturally with applying the pattern--which is orthogonal to whether or not the code is monolithic. Good patterns are supposed to be non-orthogonal to good discipline--that is, they should bring design benefits along as an artifact of the pattern itself.
Some advantages that this article seems to ignore:
- Even with a small team, microservices allow us to make small iterative changes to services without touching other parts that are working as they should be.
- It's easier to find and patch bugs when they can be pinned down to a smaller codebase and fixed in isolation (most can be).
- Local development becomes easier - you can spin up just the services you need for the feature you're developing instead of the entire system.
- It's easier to get "up to speed" when you need to add a feature when you're working with small codebases (and often only need to touch one or two at a time).
- In our case we have three distinct sets of consumers: two fairly separate groups of end-users, and automated services (m2m). It's quite common for any issues we have to only affect one of those three groups, because they are largely isolated in different services.
That's not to say there haven't been some headaches, but I personally much prefer our service approach to a monolith. I really think the key is using tooling to remove the operational friction of working with services.
Basecamp, while a great tool, is a "simple application". I don't mean it's not well built, I don't mean it's easy to build, I don't mean it was fast to build, but if you look at what it does, it's all vanilla stuff.
Many other companies don't have the luxury of building apps that look like Basecamp, and hence our architectures are more complex.
When he writes these kinds of articles, they always feel somewhat strawman-ish.
At the end of the day, each software project has its own unique requirements that will govern success. We get paid the nice salaries we do because it's up to our discretion to understand the most efficient architecture for what we're trying to solve.
The thing that people fail to grasp is that (almost) everything DHH writes is in the context of "when your application looks something like Basecamp" :-)
I've been following 37signals/Basecamp since 2002, I think (long before Rails) and they've always made it clear that their opinions/advice are _not_ universal, because they only speak from experience - _their_ experience.
So, if you're not in a similar situation as theirs, then of course their arguments are going to look strawman-ish to you.
Always? How about in the OP?
What I do sometimes see as a problem is that people assume that their app necessarily needs to be more complex than Basecamp, and that arguing for simplicity and "plain old CRUD" approaches to problems is an impossible task.
The best way to manage complexity in a lot of cases is to reject the complexity in the first place. That's not always feasible, but there's a real cost to complexity even when well-managed, and if a "simple application" approach can deliver the same or even 80% of the value, it's often a trade-off worth making.
The easiest line of code to maintain is the one that doesn't exist. :)
What would "non-vanilla stuff" look like?
It's all just code, right?
Everything in this list sans real-time chat I would consider as low complexity, low volume and low velocity. It's basically request a view, hit a cache / db, display some data. Send an email now and again. Store some stuff on s3 now and again.
There are lots of sites with more complex feature sets than what Basecamp does: complex transcoding of media, real-time updates on their views, streaming of complex media types, near-real-time integration with 3rd party APIs, real-time bidding, real-time decision making, real-time analytics, etc. etc. etc.
I want to be very clear that I am not criticizing Basecamp as a product. Simplicity is difficult to achieve in software, and their product does a great job of doing most of what you need, and not much else. I based the software stack of the first company I founded on a pre-1.0 Rails monolith (Mongrel had just been released), so I'm not even anti-monolith.
I just think this particular post of his is entirely too simplistic and could lead younger software designers astray.
The thing is that if you don't require multi-processing, you are adding complexity to your solution that doesn't actually exist in the problem domain. Not only that, but you are forced to nail up your API between the services. My experience has been that one of the most common causes of complex code is premature subsystem decomposition. Before you have written much code, you design the subsystems and nail up an API. If it turns out that it doesn't fit, the opportunities for refactoring are practically zero.
For some problems (and it could easily be the case for the problem you are working on), micro-services are really useful. Generally this happens when you require multi-processing. You create a service for each thing that requires a separate process and you build an API for communicating/coordinating those processes. You have to do that anyway, so it's a benefit to design the services (or whatever).
Generally speaking, though, it is beneficial to delay subsystem decomposition as long as possible (but no longer). This allows you to change your internal APIs at low cost while you acquire more requirements, and it significantly reduces the risk that you end up with major squirrelly work-arounds. If you don't need another process, you can often split out reuse libraries as soon as their API solidifies. If you do need other processes, then micro-services (or other similar approaches) can be introduced.
On many teams, you often have developers who are incredibly confident that their initial designs will hold up over time. They are usually the people I am cursing when I show up 5-10 years later and have to wade through the jungle of bizarre work-arounds. For that reason, I tend to suggest that people avoid microservices until they actually need them (which could very well be never).
It lets you live with the seam between the two systems for a while, adjust responsibilities and communication patterns cheaply as they reveal themselves as pain points, and delay or avoid paying whatever costs are associated with deploying a new microservice (which, at my current company at least, are non-zero).
Outside of a few examples where we've identified a simple and truly orthogonal set of responsibilities up-front for a new microservice (e.g. image resizing, push notification delivery) I have always regretted prematurely building a new service instead of extracting it from an existing codebase.
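"Living with the seam" in-process can be sketched roughly like this (all names invented): the caller depends on a narrow interface, so moving the implementation behind a network later doesn't change the caller at all.

```javascript
// The caller codes against a narrow interface; whether it's backed by an
// in-process module today or a remote service tomorrow is a deployment
// detail the caller never sees.
function makeInProcessThumbnailer() {
  // stand-in for real image work, kept in the monolith for now
  return { resize: (img, width) => `${img}@${width}px` };
}

// A later extraction keeps the same shape, e.g.:
//   { resize: (img, width) => httpGet(`${svcUrl}/resize?img=${img}&w=${width}`) }

function handleUpload(thumbnailer, img) {
  return thumbnailer.resize(img, 128);
}

console.log(handleUpload(makeInProcessThumbnailer(), 'cat.png'));
// cat.png@128px
```

While everything is in one process, renaming `resize` or changing its arguments is a cheap refactor; once it's a deployed service API, the same change means versioning and coordination.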
Choosing an architecture based on your ease of working seems fine, until you need to engineer that architecture to be fault-tolerant, scale, etc. (not saying what you are doing isn't fault-tolerant, etc.) That is where micro-services and distributing your architecture can introduce more failure points in the underlying infrastructure than with a monolith.
It all depends on the problems being solved. My hope is people will architect a system based on the business needs as the primary factor and nice-to-haves would include making your work flow easier or trying new shiny things out.
> Even with a small team, microservices allow us to make small iterative changes to services without touching other parts that are working as they should be.
My experience is this should be the case anyway. Having the ability to "find usages" in the IDE, and lots of compile-time enforcement, gives me a lot of confidence that changes haven't affected anything unrelated.
> It's easier to find and patch bugs when they can be pinned down to a smaller codebase and fixed in isolation (most can be).
True but not something I find microservices help with. Any decent debugging/introspection tools are going to give you better information than a network sniffer.
> Local development becomes easier - you can spin up just the services you need for the feature you're developing instead of the entire system.
Don't forget the overhead of the network layer though. Particularly if you're using service discovery (and if not you have a config problem), I find running two or three microservices locally is heavier than running a monolith that contained the equivalent of ten of those services, and it's a fiddle to start the right ones during development. As long as the codebase remains a size that can be comfortably run on a single machine (and start up reasonably quickly, and all be open in the same IDE at the same time, and so on), it's much easier to just run the whole thing.
> It's easier to get "up to speed" when you need to add a feature when you're working with small codebases (and often only need to touch one or two at a time).
True up to a point - there's an intimidation factor, but I don't think it's actually any harder to understand a folder in a repo of ten folders than a repo that's separated from the other ten. And separating creates a barrier to organically expanding your understanding. If you have distinct build modules then project encapsulation is enforced just as much as for microservices.
I think the biggest stumbling block to SOA is that failure states have to be considered up-front, and the separation of responsibilities has to be designed into the system up-front. Degrees don't train you to do this, so engineers stick with what they know will never fail (function calls).
We've been experimenting with a plugin-based architecture on some work projects, which forces us to address separation of concerns (at least) while still keeping things in the same process.
It's configuration files and objects that worry me.
We had a working monolith; it was not the best, but it was stable and it did the job. Then a new CTO came in and spread the microservice hype. We built everything as microservices in such a hurry that we didn't even implement the most important part: the orchestration.
Now the system is doing pretty much the same thing, except it's way less stable. But hey, we're microservices! (We also went from PHP to Node, which IMO was also a huge mistake.)
At VideoBlocks we're slowly starting to migrate our monolith into a service-oriented architecture where it makes sense. E.g. our new search service is now a standalone microservice. But I think we're still very far from an architecture that consists solely of microservices.
I'm a huge fan of the hybrid approach in the interim; it allows us to migrate slowly, and only when it makes sense. This means we've had plenty of time to establish orchestration and deal with all of the small infrastructure edge cases that other commenters in this thread have been struggling with.
Wherein he describes Dread Pirate Bezos' edict:
His Big Mandate went something along these lines:
1) All teams will henceforth expose their data and functionality through service interfaces.
2) Teams must communicate with each other through these interfaces.
3) There will be no other form of interprocess communication allowed: no direct linking, no direct reads of another team's data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.
4) It doesn't matter what technology they use. HTTP, Corba, Pubsub, custom protocols -- doesn't matter. Bezos doesn't care.
5) All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.
6) Anyone who doesn't do this will be fired.
Amazon's Jeff Bezos has a solution for this problem. He calls it the "two pizza rule": Never have a meeting where two pizzas couldn't feed the entire group.
All too often I see small, co-located teams (<10 people) adopting a microservice architecture because #reasons which are ill-defined.
Docker is awesome, microservices are awesome, but they come with a lot of taxes to pay, and tl;dr - until you know you need to pay those taxes, you should stick with a monolith.
I look forward to breaking down these monoliths again in 5 years time, same as 10+ years ago. Not every business decision is best informed as an engineering one - and vice versa.
and also (more pithily) here:
There's nothing more frustrating than people telling you 'I wouldn't start from here' when you are the one that has written the productive software.
There are tremendous advantages to monolithic software architectures, but for some reason they are seen as impure. It reminds me of the Java/object orthodoxy of ca. 2000 when everything had to be a class.
Meanwhile, procedural code came back with a vengeance, because its advantages are the kind that can be used by people trying to write useful software on a budget.
I unfortunately don't have the need for a distributed system right now. Does anyone with more experience care to elaborate?
And it's not that Rails doesn't scale; it's just very expensive to scale it.
When the business logic is getting really complicated, and you're changing it rapidly, you may need to break it up into services. But are you really that big an operation?
Wikipedia, the fifth-busiest site on the web, is MySQL front-ended by nginx, with the business logic mostly in PHP.
But the post misses a major point of why microservice architectures exist: to decouple dev efforts. I can hire someone off Upwork or anywhere to write a small service in whatever framework suits them, as long as the service connects to the message bus. I can also scale services horizontally much more easily when each uses its own data model, etc.
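The decoupling claim boils down to a tiny contract. A minimal in-process stand-in for that message-bus contract (names and topics invented): each service only needs `subscribe`/`publish`, and the framework behind any handler is invisible to everyone else.

```javascript
// Minimal pub/sub bus: the whole integration contract between services.
function makeBus() {
  const handlers = {};
  return {
    subscribe(topic, fn) { (handlers[topic] = handlers[topic] || []).push(fn); },
    publish(topic, msg)  { (handlers[topic] || []).forEach(fn => fn(msg)); },
  };
}

const bus = makeBus();
const seen = [];
bus.subscribe('order.created', msg => seen.push(msg.id)); // "billing" service
bus.publish('order.created', { id: 7 });                  // "orders" service
console.log(seen); // [ 7 ]
```

A real deployment would put RabbitMQ, Kafka, or similar behind the same two verbs, but the decoupling argument is identical: a contractor only has to honor the topic names and message shapes.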
It's not just about the size of the company but also about scale, both of developers and servers.
SOA has given us the ability to hand a single part of our architecture to other people (contractors, etc.) and let them understand that one part well enough to get their job done without too much exposure to the rest of our system. It's allowed us to bring on new hires, get them working on code quickly, and let them deploy without total fear of the system imploding. It's also allowed us, as a small team, to become specialists in certain areas of the architecture. While we all write similarly, we can each approach issues in our own way without stepping on each other.
On the other hand, many things have been done which shouldn't have been. Some areas are way TOO decoupled via SOA and need to be brought back together for better stability and speed. We only learned that afterwards, once lots of complexity had been added that we didn't foresee, because no one (programming or business) thought we would need it to be that complicated.
For some areas it'd be a good idea if we merged some repos and services into a better single service with stronger integration, while maintaining the APIs we built for other things.
> organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations
If you are a single team producing it, then build a monolith. If it is a diverse set of teams collaborating, then build microservices.
I ran a company with 4 employees. 3 developers and an idiot sales guy. SOA made perfect sense from the start. We had daemons running background tasks on our servers in Go (the best tool for that job), a separate data API we used for our main "monolithic" web app, and then our mobile and other clients all used the data API.
The developers working on the web app only had to know the API end points to get data into the "monolith" and the rest of us working on the API and daemons understood how all the other clients would use them. No issue.
I'm all for the idea that you shouldn't implement an SOA just because successful companies do it, but I feel like this article recommends building a monolithic Rails app (or something) as a reaction to how popular and talked-about SOA has been lately, and doesn't really leave much room for the idea that small companies (even really, really small ones) can use it and that it would make sense for them.
Therefore, if you build it sensibly, you can achieve more with fewer resources. But this also requires a time-memory trade-off of sorts: it takes more effort for any individual programmer to learn all the abstractions before eventually becoming (surprisingly) productive.
I decided to go with a single repo with lots of discrete NPM packages using NPM private. This allows for things like:
    const logger = require('@org/utils.logger');
If each package were inside its own repo, that would increase complexity and costs (lots of private repos on GitHub), and I don't really see the benefit.
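Roughly, each folder in the monorepo gets its own manifest; a sketch of what one might look like (version and fields invented -- only the `@org/utils.logger` name comes from the example above):

```json
{
  "name": "@org/utils.logger",
  "version": "1.0.0",
  "main": "index.js"
}
```

A sibling package in the same repo then depends on it like any other dependency (e.g. `"@org/utils.logger": "^1.0.0"`), with npm's private scoped packages or an internal registry handling resolution.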
Reason being, it's only the rendering engine that is a monolithic Basecamp app, rendering both web and mobile.
But a variety of third-party services are used to make Basecamp work, such as:
- Queen Bee, an external Rails app for billing
- CloudFront for CDN
- S3 for customer doc storage
All of these services (microservices) are stitched together to make Basecamp work.
That said, one of the reasons I like using Erlang for building services is the structural primitive of the OTP Application. It makes it pretty easy to build something self-contained that exposes a function API but can be deployed local to your other processes on the same VM or be trivially moved out to a dedicated machine/cluster as a remote service with almost no change to the programming model for interacting with it.
Deciding where a piece of functionality should live is almost purely a question of how much latency you're willing to tolerate. Though to be fair, this is because the OTP Application construct gives an incentive to not build a true monolith from the outset, despite the fact that everything that makes up your whole system might be in one big release tarball.
The company I work for has about 200 people and our system is at large scale, many thousands of events per second, so a microservice architecture is required. (It started out as a monolith actually, and grew out of that pattern when it didn't fit anymore.) But most people haven't heard of us.
I say you should be careful when you read stuff like this. It makes so much sense! Which is why you need to be careful of course.
I'm not sure if you're making a performance argument, or an organizational one. Microservices don't make things faster though. Introducing IPC where there wasn't any previously is not going to make something faster (it's going to be significantly slower 99% of the time actually) unless you also increase resources dramatically.
Pretty sure that's exactly what DHH (the author) advocates for.
My impression from a past life was that even large companies didn't frequently staff hundreds of developers on the same project. Even if they might have thousands of developers overall.