* Basic monitoring, instrumentation, health checks (see the sketch after this list)
* Distributed logging, tracing
* Ready to isolate not just code, but the whole build+test+package+promote pipeline for every service
* Can define upstream/downstream/compile-time/runtime dependencies clearly for each service
* Know how to build, expose and maintain good APIs and contracts
* Ready to honor backward and forward compatibility, even if you're the same person consuming this service on the other side
* Good unit-testing skills and readiness to do more (as you add more microservices it gets harder to bring everything up, so development becomes more unit/contract/API-test driven and less e2e-driven)
* Aware of the distinctions between [micro]services, modules, and libraries, and of pitfalls like the distributed monolith, coordinated releases, database-driven integration, etc
* Know infrastructure automation (you'll need more of it)
* Have working CI/CD infrastructure
* Have or ready to invest in development tooling, shared libraries, internal artifact registries, etc
* Have engineering methodologies and process tools to break down features and develop/track/release them across multiple services (XP, Pivotal, Scrum, etc)
* A lot more that doesn't come to mind immediately
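As a minimal sketch of the first bullet, assuming a plain Go net/http service (the /healthz path and port are arbitrary choices, not anything from the talk):

```go
// Minimal health-check endpoint sketch. A load balancer or orchestrator
// polls /healthz and pulls the instance out of rotation when it stops
// answering 200.
package main

import (
	"encoding/json"
	"net/http"
)

func main() {
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		// Report liveness, plus whatever cheap dependency checks make sense.
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(map[string]string{"status": "ok"})
	})
	http.ListenAndServe(":8080", nil)
}
```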
Thing is - these are all generally good engineering practices.
But with monoliths, you can get away without having to do them. There is the "log in to the server, clone, run some commands, start a stupid nohup daemon and run ps/top/tail to monitor" way. But with microservices, your average engineering standards have to be really high. It's not enough to have good developers. You need great engineers.
Microservices necessitate the application of a more rigorous set of engineering practices to all service infrastructure components and therefore carry a greater overhead than traditional development methodologies - rigorous engineering does not come free. Whether that trade-off makes sense for any given project is a question of resources and requirements.
I feel two salient points were not mentioned: (1) Popular microservice orchestration/infrastructure management approaches are not universally applicable; their limitations should be recognized before assuming applicability. (2) The webhost is currently down; perhaps the author should have used a scalable or distributed cluster of microservices ;)
I urge everyone to use this list to decide whether they really need microservices or not.
When micro services work, it's because they made it easy to verify each of these [obvious] bullet points. For some jobs, file this under premature contemplation.
When other methods work... my top two reasons are "clarity of focus" and "[relatively] little unnecessary labour". "Lifecycle" takes a close third.
Btw - the list is not an invention worth copying. None of its items are novel or unique; they are general practices that every good engineer would have their own version of. I'd like to see the talk and try to add a few more items to this.
It just so happens that this portion of the community is the one most looked up to by the rest of the community, so a sort of cargo-cult mentality forms around them.
A differentiator in your productivity as a non-huge-company could well be in not using these tools. There are exceptions, of course, where the problem does call for huge-company solutions, but they're rarer than most people expect.
The whole thing could be trivially built as a monolith on Rails/Django/Express. But that's not exciting.
The best place to place a module boundary - and the best format for communication across that boundary - is rarely completely transparent from the outset. With a monolith it's relatively easy (if not actually easy) to fiddle with those details until you get them right. With a service it can be very difficult to iterate on this stuff, so unless you're very confident you'll get it right on the first try, it's best to get a bit of experience in the problem domain first.
The biggest downside is it makes shipping an on-prem version nearly impossible. The infrastructure and the software are so inextricably linked that it is not portable in the least bit.
You can bundle up those complex dependencies into deployment manifests, or use helm.
It's like a SaaS in a box
Yeah it's not new fun tooling, but boy does it feel good to ship features without it being a total pain in the arse.
Also C# is pretty nice.
Several sites I'm aware of have a Facebook-style chat service, which is basically an off-the-shelf Node app on its own. This makes far more sense than trying to build such a thing into their legacy app. It also perfectly describes a very useful microservice, in a very different environment to yours.
ha. I wish I had this many repos. We have 1000+ git repos. To be fair, a ton are open sourced and there are reasons why it's done this way, but still.
Reality: Everything runs on two EC2 instances, regardless of load.
De-duplication is a very old concept; it has differentiated good sysadmins from bad ones since the '90s, and good programmers from bad ones too.
Thinking organization-wide is what is hard for some people.
Currently working as a consultant at a big corp, you run into this problem:
* Resource (organization viewpoint): name FooBar, type int
* app1: name FooBar, type int
* app2: name foo_bar, type int32
* app3: name Foobar, type Meta::Foo::Bar
* app4: name foobar, type string
* app5: name fooBar, type int64
Microservices, by forcing transversal thinking, solve that. See the "resource names" (ARNs, etc) of AWS, Google Cloud, Azure, etc, for an example of a simple and great microservice.
Note also that microservices experts (and I'm not one of those) recommend a monolithic, transactional core architecture for microservices infrastructure.
This is a pithy encapsulation of something I've been thinking a lot about recently: bud off a microservice from your transactional core if it is higher leverage to do so. Any good readings you've found on this perspective?
An example that comes to mind is: I might write my core application in Ruby on Rails, but need to perform a specialized, CPU-intensive function (PDF generation). I can delegate that to a microservice, invert a CPU-bound problem into an I/O bound one (from the perspective of the Rails core), and get the job done with less hardware.
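A sketch of that hypothetical PDF service in Go; the /render endpoint is made up and the actual rendering is stubbed out:

```go
// Hypothetical CPU-bound "PDF service": the Rails core POSTs a document
// payload and awaits bytes over HTTP, turning its CPU-bound problem into
// an I/O-bound one. Real PDF rendering is stubbed.
package main

import (
	"io"
	"net/http"
)

func renderPDF(src []byte) []byte {
	// Placeholder for the expensive, CPU-intensive rendering step.
	return append([]byte("%PDF-1.4\n"), src...)
}

func main() {
	http.HandleFunc("/render", func(w http.ResponseWriter, r *http.Request) {
		body, err := io.ReadAll(r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		w.Header().Set("Content-Type", "application/pdf")
		w.Write(renderPDF(body))
	})
	http.ListenAndServe(":9090", nil)
}
```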
AKA SOA. Why call it a microservice?
The main thing is the size of the services, the clue being in the name. Also, there's a clearer emphasis on the services representing business concepts; SOA is often described in terms of more techy service splits rather than business concepts.
If microservices are the bee's knees, why are you writing them in Eclipse or Emacs? Wouldn't interconnected processes make up a "better" environment? And why are you deploying on something as monolithic as Linux? Shouldn't you out-compete all of these obviously inferior solutions, as there is probably more money in that than in whatever web app you are currently building?
As you get a better understanding of how you need to handle things, where the performance bottlenecks are, etc., you can start breaking out pieces that would benefit from being isolated.
It's extremely unlikely that in the short term (first year(s)) of the application being used that the engineering would benefit from a micro services architecture.
It certainly is the most vocal.
Our apps all depended on an Oracle DB. Oracle had recently introduced Advanced Queuing. So I figured I'd de-batch and decouple these things using AQ. Every program (C++) was broken into "atomic", stateless business tasks. Every task was fed by a "task queue". Tasks would take a work item off a queue, do their thing and, depending on the outcome, look up a destination queue (destinations could only be "business state" queues; task queues could only subscribe to state queues (topics)), dropping the task outcome onto the state queue. Being stateless and callback-driven by AQ, we could run these things together and ramp them up and down as demand required.
The overall structure and dependency of the various tasks was externalised through the data-driven queue network.
The resulting solution was far more maintainable, provided "free" user exits (by virtue of being able to plumb new tasks to existing "business state" queues), and was eminently horizontally scalable. In hindsight this was definitely not state of the art. But we were a pretty conservative business with a bunch of pretty unworldly C and PL/SQL programmers. None of us had used Java at that point. But with this approach we were able to cope with a massive increase in data volume and make use of all our expensive Sun cores most of the time.
No Java, no REST, no HTML, no SOAP. But we called these queue micro services :-)
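A sketch of that queue network in Go, with channels standing in for AQ queues; the names and the routing table are illustrative, not taken from the original system:

```go
// Queue-network pattern: each worker is a stateless task that takes a work
// item off its task queue, does its thing, then looks up the destination
// "business state" queue from the outcome.
package main

import "fmt"

type WorkItem struct {
	ID      int
	Payload string
}

func worker(name string, tasks <-chan WorkItem, routes map[string]chan<- WorkItem) {
	for item := range tasks {
		outcome := "validated" // the task's business logic decides this
		fmt.Printf("%s processed item %d -> %s\n", name, item.ID, outcome)
		routes[outcome] <- item // drop the outcome onto the state queue
	}
}

func main() {
	taskQ := make(chan WorkItem, 8)
	validatedQ := make(chan WorkItem, 8) // a "business state" queue
	go worker("validate-order", taskQ, map[string]chan<- WorkItem{"validated": validatedQ})

	taskQ <- WorkItem{ID: 1, Payload: "order"}
	fmt.Println("state queue received item", (<-validatedQ).ID)
}
```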
I've done little heavy lifting with Oracle, but the general pattern you describe has been my go-to methodology for north of 20 years now. I've come to call it 'message oriented programming'. But it's just one of many ways to embrace the benefits of loose coupling.
Licensing costs can go up drastically, as most modern licensing is node/core based. Deployment procedures can also get more complicated.
I would love to understand how this article believes that the modules in a monolithic system can be scaled horizontally if they are actually a single code base in a single system. Either the system isn't monolithic, or they have never really done it. Sticking a load balancer in front of a microservice and scaling based on measured load requires tools and technologies, but it is very scalable. It also allows you to do rolling deployments (drain, rotate out, update, rotate in) with near-zero planned downtime.
Distributed transactions are the devil, but you don't need to do them in a microservice design. It requires design work on the front end to clarify what the system of record is, but if each service has a domain it controls, and all other services treat it as the truth, it's rather simple. I say this having researched doing payment transactions across geographically diverse colo's and we treated that as a sharding/replication/routing issue very successfully.
Ninja edit: Starting with a microservice design is most likely overkill for a lot of systems, but either way, clear interface/boundaries in your system are good and healthy
A microservice architecture allows you to scale up very particular components, but there is nothing stopping a monolith from being horizontally scaled in just the same way. In AWS, I would deploy the monolith as an AMI in an auto-scaling group with a load balancer in front.
Databases are trickier though.
It's a shame that a 'monolith' application doesn't just mean a genuine singleton, though, as that would be the perfect name for it. A bank of load-balanced monoliths should be a polylith.
By your definition, most rails/django apps are probably polyliths.
Thanks for giving it a read!
The answer to your question is simply libraries and build targets. My monolith is mostly shared code, with unique functionality at the fringes, but it all builds into a single deployable jar, minus the licensed libraries which are special cased.
I'm a huge fan of SBT, despite its Dwarf Fortress-like learning curve.
Good way to scare me off ever attempting to learn something, haha
Some other folks have addressed the scalability questions you raised, but I'd add: I am in no way advocating for monoliths as a better approach. Rather, both have tradeoffs you need to think through before adopting.
Thanks for taking the time to read.
I get the cool things about microservices: properly isolated functionalities, the ability to assign a team to each one, simplicity of code, and treating each feature as important, not just "that thing in the codebase".
But it also has all the good parts of a monolith: easy deployment and local setup, aggregation made easy, and the ability to run integration tests.
For my rails projects, geminabox was of great use for me to achieve this, as it allowed me to host private gems. Lately, I've done a lot of golang, and was surprised to see how it's a natural pattern with go packages.
The only painful part for Ruby projects: keeping dependencies up to date in all those libs (since they all have their own test suite, I at least have to update their test dependencies). To solve this, I've built some tooling that updates all my projects automatically and creates merge requests for them, running from a cron task.
There's already a term for that: modularity.
Microservices then just forces on you the modularity your language should have already given you.
That said, I believe there is a point where monoliths begin to break down.
First, it is tough to keep code well structured in a monolith, and eventually things bleed between domains. That means, as mentioned, engineers must understand the entire codebase. This isn't practical for 100k+ LOC codebases. Strict boundaries, in the form of interfaces, limit the scope of code that every engineer must understand. You probably still need gurus who can fathom the entire ecosystem, but a new engineer can jump into one service and make changes.
Second, deployment is a mess with any more than a few hundred engineers on a given code base.
Third, it becomes increasingly difficult to incrementally upgrade any part of your tech stack in a monolith. Large monoliths have a tendency to run on 3-year-old releases of everything. This has performance and security implications. It also becomes difficult to change components within your monolith without versioned interfaces.
Fourth, failure isolation is much harder in a monolith. If any portion of code is re-used between components, that's a single point of failure. If your monolith shares DBs or hardware between components, those are also points of common failure. Circuit-breaking or rate-limiting is less intuitive inside a monolith than between services (sketched below).
TLDR; start with a monolith, migrate to micro-services when it becomes too painful.
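A minimal circuit-breaker sketch in Go for the failure-isolation point above; illustrative only (no half-open probing), and the threshold/cooldown values are made up:

```go
// Fail fast once a dependency has failed repeatedly, instead of piling
// load onto it and spreading the failure.
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

type Breaker struct {
	mu        sync.Mutex
	failures  int
	threshold int
	openUntil time.Time
	cooldown  time.Duration
}

var ErrOpen = errors.New("circuit open: failing fast")

func (b *Breaker) Call(fn func() error) error {
	b.mu.Lock()
	if time.Now().Before(b.openUntil) {
		b.mu.Unlock()
		return ErrOpen // isolate the failing dependency
	}
	b.mu.Unlock()

	err := fn()
	b.mu.Lock()
	defer b.mu.Unlock()
	if err != nil {
		b.failures++
		if b.failures >= b.threshold {
			b.openUntil = time.Now().Add(b.cooldown)
			b.failures = 0
		}
		return err
	}
	b.failures = 0
	return nil
}

func main() {
	b := &Breaker{threshold: 3, cooldown: 30 * time.Second}
	err := b.Call(func() error {
		return errors.New("downstream service failed") // hypothetical dependency call
	})
	fmt.Println(err)
}
```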
> Additionally, many of these stories about performance gains are actually touting the benefits of a new language or technology stack entirely, and not just the concept of building out code to live in a microservice. Rewriting an old Ruby on Rails, or Django, or NodeJS app into a language like Scala or Go (two popular choices for a microservice architecture) is going to have a lot of performance improvements inherent to the choice of technology itself.
Languages and tech stacks generally have tradeoffs. Comparing Rails and Go, the (massively over-simplified) tradeoff is that Rails is better for prototyping and iterating quickly, while Go is better for performance. In an ideal world, you'd write your webapp in Rails but put the performance-intensive stuff in Go. You'd need to communicate between the two by, say, HTTP. Suddenly you have services.
The performance gains of using a new stack aren't orthogonal to services; they're actually one of the key selling points of services: you can use whatever stack is most appropriate for the task at hand without needing to commit the entire project to it. You can use Postgres for the 99% of your app that's CRUDy and, I dunno, Cassandra for the 1% where it makes sense. It's difficult (although not impossible) to do that cleanly within a monolith.
For example, your point about Go vs Rails is an apt one - I would only add that I made that comparison because...
A: It was originally a golang meetup where I gave the talk
B: Go is increasingly becoming popular as a choice people move to off of Rails, for performance sensitive code (Scala being the other popular choice I see), and also for building "microservices" themselves.
I could have, and maybe should have, gone a little more in depth at that part, but the idea wasn't to be fully exhaustive (for better or worse).
But the main takeaway about the performance gains was that the idea of putting the word "micro" in front of something magically made it more performant without appreciating why. It's a response to folks simply parroting information without understanding it.
Thanks for the feedback.
If they moved from Rails to Go, these people didn't need Rails in the first place, given how bare-bones Go is. That's the same issue with microservices: choosing a tech or architecture because of hype instead of understanding requirements. Microservices should be an exception, yet they are pushed as a rule by many influential developers, who won't be there to clean up the mess when it becomes obvious it wasn't the right choice.
That sounds exactly like the last two apps I have worked on. Django/Flask-based with Redis for caching. Suddenly they sound like trendy hybrid microservice apps.
A lot of his examples are of people doing things poorly or incorrectly. I could make the same arguments about object-oriented programming by saying it's bad because someone makes every function public.
For example, microservices are absolutely more scalable if done correctly, with bulkheading and proper fallbacks and backoffs, and proper monitoring, alerting, and scaling.
But those things are hard to do and hard to get right.
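For instance, retry with exponential backoff and jitter is one of those easy-to-get-wrong pieces; a minimal Go sketch with made-up parameters:

```go
// Retry a flaky call with exponential backoff and full jitter, so retries
// from many clients don't synchronize into a thundering herd.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Cap the sleep at base * 2^i, then pick a random point below it.
		sleep := base << uint(i)
		time.Sleep(time.Duration(rand.Int63n(int64(sleep))))
	}
	return fmt.Errorf("all %d attempts failed, last error: %w", attempts, err)
}

func main() {
	err := retryWithBackoff(5, 100*time.Millisecond, func() error {
		return errors.New("transient failure") // stand-in for a flaky network call
	})
	fmt.Println(err)
}
```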
You're not wrong in that this article is meant to point out the pitfalls of the approach, and to advocate for understanding before diving into a particular architecture.
It's meant to give people things to consider before deciding breaking things into "microservices" is the right thing for their engineering org at that time.
I attempted to note several times that my intention was not to say "Microservices are bad", but rather "Please don't dive in before you consider the trade offs". It's not as simple as some folks might have you believe, so I felt it was valuable to have a "lessons learned" type retrospective coming from someone who has been involved in both approaches.
Microservices are just decoupling by another name... and you do not need a network boundary to enforce this.
Monolithic code can be nicely decoupled too.
If code is decoupled enough that it can be separated into independent processes communicating over a network, that creates additional freedom into how the components can be deployed to (real or virtual) hardware, which is itself a kind of decoupling.
If you have processes communicating by local-only IPC methods or, even moreso, components operating within the same process, there is a form of tighter coupling than exists when the components are separate networked components.
It also introduces additional failure modes.
Coupling is a logical connection and has nothing to do with calling semantics.
Whether it's a local function call or an RPC, it's still the same level of coupling; the difference is just in the (equivalent of) the link layer of a network stack.
Adding a network connection is a much more complicated calling semantic than a function call, with many more and different failure modes.
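To make that concrete, a Go sketch where the same logical interface is satisfied by an in-process call and by an HTTP call; the remote version adds failure modes (timeouts, connection errors) the local one doesn't have. The greeter.internal host is hypothetical:

```go
// Same interface, two calling semantics: local function call vs HTTP.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

type Greeter interface {
	Greet(name string) (string, error)
}

// Local implementation: an in-process call, effectively infallible.
type localGreeter struct{}

func (localGreeter) Greet(name string) (string, error) {
	return "hello " + name, nil
}

// Remote implementation: now timeouts, connection resets, and non-200
// responses are all possible outcomes.
type httpGreeter struct {
	base   string
	client *http.Client
}

func (g httpGreeter) Greet(name string) (string, error) {
	resp, err := g.client.Get(g.base + "/greet?name=" + name)
	if err != nil {
		return "", err // a failure mode a local call never has
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	var g Greeter = localGreeter{}
	fmt.Println(g.Greet("world"))
	// Swap implementations without changing callers.
	g = httpGreeter{base: "http://greeter.internal", client: &http.Client{Timeout: 2 * time.Second}}
	_ = g
}
```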
It turns out it's not so easy:
1. First, documentation (as always) is not the best, and you'll have to spend time figuring out how to wire together the different parts of the system and build various configurations of it for local development, CI builds and production.
2. Then there's the debugging issue. Once you've figured out how to work with Docker (good news: it's really easy today), you may want to do some debugging in an IDE, but it becomes really painful to launch everything correctly with an attached debugger if the services interact with each other.
3. Finally, there's the production deployment setup and associated costs. Besides the complexity of deployment, do you really want to pay for 14-20 EC2 instances at the time of the launch of your service and burn money on 0% CPU activity? It will take months, probably years, to get a user base sufficient to utilize this power.
The better approach is to develop a single-server app with future scalability in mind. You can still have separate components for each part of the domain; you just wire them together at packaging time (a sketch follows). This server app can still scale in the cloud, with a correctly set up load balancer and a database shared between nodes.
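A minimal Go sketch of that wiring, with hypothetical billing and users domains mounted as handlers in one binary; splitting one out later just means giving its handler its own listener:

```go
// Each domain lives behind an http.Handler; main() wires them into one
// deployable. The domain names and routes are illustrative.
package main

import (
	"fmt"
	"net/http"
)

func billingHandler() http.Handler {
	mux := http.NewServeMux()
	mux.HandleFunc("/invoices", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "billing domain")
	})
	return mux
}

func usersHandler() http.Handler {
	mux := http.NewServeMux()
	mux.HandleFunc("/users", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "users domain")
	})
	return mux
}

func main() {
	root := http.NewServeMux()
	root.Handle("/billing/", http.StripPrefix("/billing", billingHandler()))
	root.Handle("/users/", http.StripPrefix("/users", usersHandler()))
	http.ListenAndServe(":8080", root)
}
```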
Fortunately, we didn't spend much time on building microservices (about 1m/w to figure out the costs and benefits) and were able to refactor the code to a simpler design, but many developers should not care about them at all in the early days of their company.
and he changed his mind :-)
It allows teams to work in their own world without having to coordinate as much with other teams or people.
Microservices are good for large companies. If you're small you don't need them.
A layered architecture can give you the same.
Microservices, imo, address organizational/industry deficiencies in the design and evolution of domain models. You're basically trading analytical pain for operational pain. As the top comment in this thread (with the excellent list) concludes, you will need "engineers".
> Microservices are good for large companies.
And this has nothing to do with the number of developers. It has to do with the inherent complexity of a unified domain model for large organizations. As an analogy, microservices are to layered architectures what scripting languages are to compiled languages.
Large companies don't have problems throwing more engineers at a problem. But they will always have a problem in coordination costs.
Microservices also allow you to use different tech stacks for different purposes more easily.
Maybe use Java for one involving Hadoop or some GIS library. Use Erlang for some message-management service, use golang for some simple API service, use nodejs for some frontend web server, etc.
Overall the advantages of microservices come for social reasons, not for a particular technical reason.
> A layered monolith can still easily have random people cut across boundaries without you knowing because there are hundreds of engineers all working in the same system.
I appreciated your final word regarding "social reasons" and I think we're in strong agreement in that regard.
In the final analysis, it seems accurate to say that the microservices approach permits runtime operational [micro]payments towards organizational and analytical debt*.
The hypothetical system(/straw man?:) you posit above is indicative of organizational, not architectural, failure/deficiency.
*: in the 'technical debt' sense.
I agree with this. See my reply to mahyarm, cf. "analytical debt". The keyword here is "might not".
If domain-level solutions exist, incurring the (forever) micropayments (in the context of operational complexities) of a rush to embrace microservices is a systemic failure of the technical leadership.
Some of us have been through this all before, with SOA or, in my case, with COM.
Each individual component is simpler but the documentation between the components becomes absolutely vital.
We ended up keeping copies of the interfaces in a central location (with documentation of all changes per version) so that everyone would know how to talk to all the other systems.
And don't think that the interfaces won't change. They will, and often across many systems/components, like a ripple.
If done poorly it is like trading one problem with another problem.
Each of the dozens of microservices gets its very own dedicated AWS load balancer, RDS instance, and Auto Scaling Group in multiple regions. Just the infrastructure management alone is monumental.
But as always, this is an artform, writing and designing, not laying down pavement.
There's no "right" way, and any blanket statement about anything is false.
Don't use microservices where they don't make sense, make educated decisions, and choose the best option for your situation.
It made sense in our situation, because all our services have very very very specific rules and boundaries and there's no overlap anywhere.
> However, it’s incorrect to say that you can only do this with something like a microservice. Monolithic applications work with this approach as well. You can create logical clusters of your monolith which only handle a certain subset of your traffic. For example, inbound API requests, your dashboard front end, and your background jobs servers might all share the same codebase, but you don’t need to handle all 3 subsets of work on every box.
This makes little to no sense to me, and I feel like we're bending the definition of "monolith" to mean "microservice" so that we can tick the bullet point. How, exactly, do I achieve this when my code is all mashed together and running as one?
I have a monolithic app today: an internal website, which is so small that it could be served (ignoring that this would make it a SPoF) from a single machine. But it's so closely bound to the rest of the system, it is stuck alongside the main API. So, it gets deployed everywhere.
If it were discrete enough that I could run and scale that internal service separately, I wouldn't be calling it a monolith. At that point, they're separate executables, and scalable independently — that's practically the definition of microservice. And I can't do this if (where they need to) they don't talk over the network (one of the earlier bullet points).
My team has a 500k-LOC monolith written in Java 1.6. I don't really want to invest in fixing it; I'm migrating stuff to the new system. So a way to keep the old one going risk-free is to create three load balancer pools and have Apache send traffic to the three based on URL pattern (see the sketch after this comment):
* /users goes to pool one
* /dashboards goes to pool two
* everything else goes to pool three
That guarantees that /users and /dashboards can be kept at a certain level of performance by adding more machines, not by diving into the code and trying to fix stuff.
The benefit is that it's the same deployable in all cases, so it's very easy to push.
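For illustration, the same URL-pattern routing sketched as a tiny Go reverse proxy instead of Apache; the pool addresses are made up, and in practice each would point at a load-balanced pool of machines running the same deployable:

```go
// Route URL subsets of one monolith to separate capacity pools.
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
)

func pool(addr string) *httputil.ReverseProxy {
	u, _ := url.Parse(addr)
	return httputil.NewSingleHostReverseProxy(u)
}

func main() {
	mux := http.NewServeMux()
	mux.Handle("/users/", pool("http://pool-one.internal"))
	mux.Handle("/dashboards/", pool("http://pool-two.internal"))
	mux.Handle("/", pool("http://pool-three.internal")) // everything else
	http.ListenAndServe(":80", mux)
}
```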
I guess one needs to use the xref tool to find all references outside the OTP application.
I mention it mostly to assist those wanting to explore the concept of microservices itself, as opposed to assuming a network transport is always involved. Being JVM-specific, "kicking the tires" on it naturally requires that environment. Perhaps, though, some of the writings discussing it would benefit those using other tech stacks.
Of course, OSGi does not preclude distributed processing (and often is employed for such).
0 - https://www.osgi.org/
1 - http://www.theserverside.com/news/1363825/OSGi-for-Beginners
I will say the only clean systems I've worked in have been microservice-oriented. All monolithic systems I've worked on never scaled properly and always had bugs with 1000-function-deep stack traces.
I've talked to people who have worked in excellent monoliths (rails and django). I know they exist.
Moral is: do it right and have good development practices.
Lots of people are still in denial regarding microservices...
If you build your codebase internally with service level abstractions in mind, you can gain a lot of benefit without the cost of the network or the additional errors it can introduce.
Thanks for reading!
Furthermore, there are protections against vote rings. If, for example, someone votes directly on the URL for a story, or if the referrer is often the same, those votes are discarded.
However, you're right that the algorithm has evolved over time. Visit https://news.ycombinator.com/classic to see the previous algorithm in action.
It isn't that i'm saying "Don't build microservices", but rather "Don't adopt this approach until you understand the tradeoffs involved". I've worked on teams that have done it well, and not done it well. I've worked with large codebases that are well maintained, and poorly maintained. There are tradeoffs that need to be taken into consideration before adopting any major architectural approach.
I will say that upon reflection, the original title could have been reworded a bit to better express this.
Also, I have found it much easier to onboard a new engineer, give them the requirements for a service and let them go at it. I've been using AWS Lambdas to great effect in this way.