I'm a huge proponent of microservices, having worked on one of the earliest and largest microservice systems in the cloud. And I absolutely think they provide huge advantages to large companies -- smaller teams, easier releases, independent scaling, separation of concerns, a different security posture that I personally think is easier to secure, and so on.
It's not a surprise that most younger large enterprises use microservices, with Google being a notable exception. Google, however, has spent tens, possibly hundreds of millions of dollars on building tooling to make that possible (possibly even more than a billion dollars!).
All that being said, I tell every startup I advise: don't do microservices at the start. Build your monolith with clean hard edges between modules and functions so that it will be easier later, but build a monolith until you get big enough that microservices are actually a win.
I saw lots of churn working on microservices that were still pre-production. At that stage, things are more tightly coupled than the microservice concept would have you believe, and that creates extra work. Instead of writing a new function at a higher version, you had to go change existing ones - pretty much the same workflow as a monolith, but now spread across separate code bases. And since none of these microservices needed to reach production before the front-end product, we couldn't start incrementing the versioning for the API endpoints to avoid changing existing functions. A monolith almost doesn't need API versioning for itself (usually libraries handle that), but translated to microservices it's effectively a version 1.0 contract.
Yes, that is a fair distinction that I simplified over. You don't really get a lot of the gains of microservices if you're using a monorepo, so while they do have multiple binaries/services, you still have to check into a single repo and wait for all the tests/etc. To be fair, I haven't visited Google in a while and maybe it's changed now, but at least a decade ago it was very different from how everyone else did microservices.
> You don't really get a lot of the gains of microservices if you're using a monorepo
I think the two are completely orthogonal.
At Google, when you check in code, it tests against things it could have broken. Not all tests in the system. For most services, that means just testing the service. For infrastructure code, then you have to test many services.
It seems things have changed since I last looked at how Google does deployments. Back then, every test ran on every checkin to the mainline, and all code was checked into the mainline. The Google SRE book even talks about that.
I think you were misunderstanding something. Why would every code change cause a compile and test across the entire company? That is to say: not only does that not scale, it's totally unnecessary*. Only the downstream consumers of a change are rebuilt and tested, like you'd expect (see: Bazel, and the monstrous Makefile before it). In this sense, the fact that Google uses a monorepo is mostly an implementation detail. It has some impact on the company's workflows and tooling, but not on its software architecture.
* unless you’re changing a very common dependency, of course, and Google has tooling for this.
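Conceptually, "only the downstream consumers" is just a reverse-dependency walk over the build graph. A toy sketch of the idea (the graph and target names are invented, this is not Google's tooling):

```python
# Toy sketch: given a dependency graph, find everything that could be
# affected by a change and therefore needs rebuilding/retesting.
from collections import deque

# target -> targets it depends on (invented for illustration)
deps = {
    "//frontend:server": ["//common:logging", "//frontend:lib"],
    "//frontend:lib":    ["//common:logging"],
    "//billing:server":  ["//common:logging", "//billing:lib"],
    "//billing:lib":     [],
    "//common:logging":  [],
}

def affected_by(changed_target):
    """Return the changed target plus every transitive reverse dependency."""
    # invert the edges: dependency -> its consumers
    rdeps = {t: set() for t in deps}
    for target, its_deps in deps.items():
        for d in its_deps:
            rdeps[d].add(target)
    seen, queue = {changed_target}, deque([changed_target])
    while queue:
        for consumer in rdeps[queue.popleft()]:
            if consumer not in seen:
                seen.add(consumer)
                queue.append(consumer)
    return seen

# Changing billing code only touches billing targets...
print(affected_by("//billing:lib"))      # {'//billing:lib', '//billing:server'}
# ...but changing a very common dependency fans out to everything downstream.
print(affected_by("//common:logging"))
```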
Hermetic builds allow you to cache your builds and your test executions in such a way that running all builds and all tests for every commit is indistinguishable from executing only the builds and tests that you could have affected.
Even if that were true (and it is not true), the non-dependent tests would finish in zero time because the results are cached and hashed by dependency tree.
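In other words, a test's cache key is effectively a hash of the target plus its transitive dependencies, so anything untouched is a cache hit. A toy sketch of that idea (not any real build system's implementation):

```python
# Toy sketch of dependency-tree hashing: a test's cache key covers its own
# sources plus the keys of everything it depends on, so an unrelated change
# produces the same key and the cached result is reused "in zero time".
import hashlib

def cache_key(target, sources, deps_keys):
    h = hashlib.sha256()
    h.update(target.encode())
    for src in sources:                # file contents, in a fixed order
        h.update(src.encode())
    for key in sorted(deps_keys):      # transitive influence via dep keys
        h.update(key.encode())
    return h.hexdigest()

test_cache = {}  # cache_key -> past test result

def run_test_cached(target, sources, deps_keys, run_fn):
    key = cache_key(target, sources, deps_keys)
    if key in test_cache:
        return test_cache[key]         # nothing it depends on changed: skip
    result = run_fn()                  # actually execute the test
    test_cache[key] = result
    return result
```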
Each releasable unit can wait for whatever tests you want. Usually it's just the tests for that unit. Google is actually a good example of why monolith/microservices is a completely different concept to monorepo/multirepo.
I.e. you can put your monolith in multiple repos, and you can put 100,000+ services in 1 repo.
The past few companies I was at, we discussed whether we wanted a single repo or multiple repos. But that was a separate conversation from microservices, so I don't think it's unusual to have a monorepo with microservices.
> they provide huge advantages to large companies […] I tell every startup I advise: don't do microservices at the start.
I think you nailed it. Microservices are a solution for organizational problems that arise when the company grows in size; unfortunately, it's not rare to see small startups with a handful of engineers and 5 to 10 times more services…
I remember Amazon 15 years ago, when a newly hired Senior Principal (Geoff something? from Sun?) complained to us about having more services than engineers.
Strongly agree with this. It's about leaning into Conway's law. How micro the services get is a variable, for sure, but it's definitely worth considering as partly technical and principally an organizational problem.
With good defaults, you can have a dev tools / platform team create a blessed path that most teams will easily adopt so you get a mostly standardized internal architecture (useful for mobility). It's harder to allow for lessons learned from one service team to transition to the org as a whole, but if the dev tools / platform team has great Principal SWEs, it'll work. It does mean that you need great people on the platform team, though, since mediocre people will attempt to freeze development to fixed toolchains and will be unable to see the big picture.
I think Amazon does a good job with their Principals here.
> Build your monolith with clean hard edges between modules and functions so that it will be easier later,
This is unfortunately very easy to override. Oh the rants I could write. If I could go back in time we would've put in a ton of extra linting steps to prevent people casually turning private things public* and tying dependencies across the stack. The worst is when someone lets loose a junior dev who finds a bunch of similar looking code in unrelated modules and decides it needs to be DRY. And of course nobody will say no because it contradicts dogma. Oh and the shit that ended up in the cookies... still suffering it a decade later.
*This is a lot better with [micro]services but now the code cowboys talk you into letting them connect directly to your DB.
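The linting I have in mind is nothing fancy, just machine-enforced module boundaries. Something along these lines would have been enough (a hypothetical stdlib-only sketch; the module names and rules are made up):

```python
# Crude import-boundary lint: flag any module that reaches into another
# module's internals instead of going through its public interface.
import ast, pathlib

# who may import what (by prefix); everything here is made up
ALLOWED = {
    "billing": {"orders.api", "shared"},   # billing may use orders only via orders.api
    "orders":  {"shared"},
    "shared":  set(),
}

def check_file(path: pathlib.Path, owner: str) -> list[str]:
    problems = []
    for node in ast.walk(ast.parse(path.read_text())):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [node.module or ""]
        else:
            continue
        for name in names:
            top = name.split(".")[0]
            # only police our own first-party modules, not stdlib/third party
            if top != owner and top in ALLOWED:
                if not any(name.startswith(ok) for ok in ALLOWED[owner]):
                    problems.append(f"{path}: {owner} imports {name}")
    return problems

# e.g. in CI: run check_file over every file under billing/ with owner="billing"
# and fail the build on any hit.
```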
>...you get big enough that microservices are actually a win.
Can you speak more about the criteria here?
You may be implying that microservices enforce Conway's law. If so, then when the monolith divides, it "gives away" some of its API to another name, such that the new node has its own endpoints. This named set is adopted by a team, and evolves separately from that point on, according to cost/revenue. The team and its microservice form a semi-autonomous unit, in theory able to evolve faster in relative isolation from the original.
The problem from the capital perspective is that you get a bazillion bespoke developer experiences, all good and bad in their unique and special ways, which means that personal dev experience starts to matter: you need a guide in the wilderness who's lived there for years. The more tools are required to run a typical DX, the more tightly coupled the service will be to the developers who built it. This generally favors the developer, which may also explain why the architecture is popular.
The first part of your comment is accurate (and beautifully poetic). But I don't believe the second part follows from the first.
At most companies that do microservices well, they have a dedicated platform team that builds tools specifically for building microservices. This includes things like deployment, canaries, data storage, data pipelines, caching, libraries for service discovery and connections, etc.
This leaves the teams building the services focusing on business logic while having similar developer experiences. The code might use different conventions internally and even different languages, but they all interact with the larger ecosystem in the same way, so that devs at the company can move around to different services with ease, and onboarding is similar throughout.
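The "interact with the larger ecosystem in the same way" part usually boils down to a platform-provided client library that every service uses. A hypothetical sketch of the shape, with all names invented:

```python
# Hypothetical sketch of a platform-provided client every team uses, so
# discovery, retries, and backoff look identical across services.
import time, urllib.request

class PlatformClient:
    def __init__(self, service_name: str,
                 resolve=lambda name: f"http://{name}.internal"):
        self.base = resolve(service_name)     # service discovery hook

    def call(self, path: str, retries: int = 3) -> bytes:
        for attempt in range(retries):
            try:
                with urllib.request.urlopen(f"{self.base}{path}") as resp:
                    return resp.read()
            except OSError:
                time.sleep(2 ** attempt)      # standard backoff policy
        raise RuntimeError(f"{self.base}{path} unavailable")

# Every team writes only business logic on top, e.g.:
# orders = PlatformClient("orders").call("/v1/orders/123")
```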
>they all interact with the larger ecosystem in the same way, so that devs at the company can move around to different services with ease, and onboarding is similar throughout.
But big enterprises inevitably lose "stack coherence" over time, through drift but also acquisitions. Finding the lowest common denominator to operate and modify it all, while maintaining a high level of service (uptime, security, data integrity, privacy, value), turns out to be a tricky problem - just defining the product categories is a tricky problem!
Well I for one would love to see such a thing properly functioning. I've seen two attempts, but neither were successful.
> Build your monolith with clean hard edges between modules and functions so that it will be easier later, but build a monolith until you get big enough that microservices are actually a win.
I'd like to see software ecosystems that make it possible to develop an application that seems like a monolith to work with (single repository, manageable within a seamless code editing environment, with tests that run across application modules) and yet has the same deployment, monitoring and scale up/out benefits that microservices have.
Ensuring that the small-team benefits would continue to exist (comparative to 'traditional' microservices) in that kind of platform could be a challenge -- it's a question of sensibly laying out the application architecture to match the social/organizational structure, and for each of those to be cohesive and effective.
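One crude way to picture it: a single codebase and a single artifact, where a deploy-time setting decides which modules an instance actually serves. A sketch under those assumptions (all module names invented):

```python
# Sketch: one codebase, one build artifact, but each deployment serves only
# the modules named in SERVE_MODULES, so "monolith" modules scale independently.
import os

def billing_routes():  return {"/billing/charge": lambda req: "charged"}
def search_routes():   return {"/search":         lambda req: "results"}
def profile_routes():  return {"/profile":        lambda req: "profile"}

ALL_MODULES = {"billing": billing_routes,
               "search": search_routes,
               "profile": profile_routes}

def build_app():
    enabled = os.environ.get("SERVE_MODULES", ",".join(ALL_MODULES)).split(",")
    routes = {}
    for name in enabled:
        routes.update(ALL_MODULES[name.strip()]())
    return routes   # hand this to whatever HTTP server you like

# Dev laptop: SERVE_MODULES unset -> everything, feels like a monolith.
# Production: SERVE_MODULES=search on the search fleet -> scales on its own.
```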
Going from a company with an engineering team of 40-odd engineers to a company with thousands of engineers gave me both perspectives: the small company was trying to move towards microservices and it was just really slowing us down, while the large company was in a hybrid mode, still running a couple of monoliths alongside a lot of microservices. I can definitely appreciate that there is very much a scale at which it absolutely makes sense for the engineering organisation to use microservices, and very much a scale below which it's fairly counter-productive.
I grew up using unix where the philosophy is "do one thing and do it well" and I think that carries over well into microservices.
But honestly I'm not sure there is much of a line between the two. I've seen microservices that just return True/False and ones that return 100 lines of json, which are arguably more web-services than microservices.
I honestly think it's a distinction without meaning.
The "do one thing and do it well" Unix philosophy already broke down decades ago when people started adding things like "cat -v" and ability to sort ls and whatnot. People like Doug McIlroy still argue that's all "useless bloat". Pretty much the entire rest of the world disagrees. The point is that "do one thing and do it well" doesn't actually work all that well in reality.
A CLI is not a service: there is no operational complexity to "keep things running" with a CLI; you just chain some things together with pipes and that's that. The nice thing about that is that the text interface is generic, so you can do things the original authors never thought of. With microservices this usually isn't the case, and things are extremely specific. This is also why "do one thing and do it well" doesn't really carry over very well to GUIs.
A lot of microservices I've seen are just function calls, but with the extra steps of the network stack, gRPC, etc. Some would argue that this is "doing microservices wrong" – and I'd agree – but the reality of the matter is that this is how most people are actually using microservices, and that this is what microservices mean to many people today.
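Here's the "function calls with extra steps" point in miniature (everything below is invented for illustration, not a recommendation):

```python
# The same "service" as an in-process call vs. the network version of it.
import json, urllib.request

def is_fraudulent(order: dict) -> bool:          # the monolith version
    return order["amount"] > 10_000

def is_fraudulent_rpc(order: dict) -> bool:      # the "microservice" version
    # same one-line decision, now with serialization, a network hop,
    # timeouts, a deploy pipeline, and an on-call rotation
    body = json.dumps(order).encode()
    req = urllib.request.Request("http://fraud.internal/check", data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return json.load(resp)["fraudulent"]
```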
Instead of "microservices" we need to think about "event-driven logic", or something like that. Currently the industry is absolutely obsessed with how you run things, rather than how you design things.
I don't get this idea that modularity and composition can only be achieved with separate processes communicating over a network. There are features within modern programming languages for achieving the same.
I don't think anyone is saying they can only be achieved with separate processes. It's more about independent scaling of the different parts of the system that is the big advantage. That plus the organizational advantages. It's a lot easier to maintain the modularity if you have different groups of engineers working on different services.
It really is a distinction only made by people trying to sell you something, or to sell you on something. Service-oriented architecture is leveraging the power and the curse of being able to connect computers over a network to solve foundational scaling limits of hardware. How granular you want to make things is a design decision.
The attitude. The "micro" of microservices betrays a religious zeal that more and smaller services are an unmitigated good, and that we should always strive for more of them. "Services" is for people who think they're a necessary evil, to be deployed under the right circumstances.
I find the organizational arguments to be pretty convincing, but surely there must be a way to reap these rewards in a monolithic infra setup as well? Maybe someone should develop a "monolith microservice architecture" where all the services are essentially (and enforced to be) isolated, but the whole thing is built and deployed as a single unit.
You could do it with docker-compose I guess, but optimally your end result would be a single portable application.
I suppose that's true in a way. There would have to be some serious scaffolding for it to work though. For a web service, for example, libraries would have to be able to register route handlers and such that they handle independently.
Perhaps they could all be initialized in a common way using DI or something from a base gateway application. Versioning and interop testing would have to be figured out in some clever way.
Something like this would be the architecture I imagine.
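Roughly: each "service library" exposes a register function, and the host gateway wires them all up at startup with constructor-style DI. A toy sketch with every name invented:

```python
# Toy sketch of the "monolith of service libraries" idea: each library only
# knows the registration interface; the host gateway owns wiring and routing.
from typing import Callable, Dict

Handler = Callable[[dict], dict]

class Gateway:
    def __init__(self):
        self.routes: Dict[str, Handler] = {}

    def route(self, path: str, handler: Handler):
        if path in self.routes:
            raise ValueError(f"route collision on {path}")   # keep edges hard
        self.routes[path] = handler

# --- lives in the `billing` service library ---
def register_billing(gw: Gateway, db):            # db injected by the host
    gw.route("/billing/invoice", lambda req: {"invoice": db.lookup(req["id"])})

# --- lives in the `users` service library ---
def register_users(gw: Gateway, db):
    gw.route("/users/profile", lambda req: {"user": db.lookup(req["id"])})

# --- the host application is just composition ---
def build_gateway(db) -> Gateway:
    gw = Gateway()
    for register in (register_billing, register_users):
        register(gw, db)                          # simple constructor-style DI
    return gw

class FakeDB:
    def lookup(self, key): return f"record-{key}"

gw = build_gateway(FakeDB())
print(gw.routes["/users/profile"]({"id": 7}))     # {'user': 'record-7'}
```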
In my experience, for regular libraries, you usually want to pin version ranges and not always use the latest of every library. But when the main business logic and a whole team is dedicated to one of the "service libraries", whose responsibility is it to make sure that the version stays up to date across other service libraries? Do you leave it up to the main host app? Do you allow a multitude of service library versions? Do you skip versioning all together? In microservices you have an opaque facade in the form of an API that makes this a non-issue. Perhaps you'd want some simulacrum of this in the form of IPC between service libraries?
Do you use some sort of contract based testing between service libraries? Put all integration testing in the host application?
It's not obvious to me what the best approach would be.
Well, you have an API for libraries as well, and an even stronger one because you can have type guarantees, which you can't do with JSON (you can use gRPC etc but people usually don't).
With libraries it's easier than with microservices: as the library author, I can scan all the dependency files of all projects and immediately see which project relies on which library, which is much harder to do when what you publish is a microservice.
With those two things, and the fact that you can just keep using an older library version if you need to, whereas you can't easily keep using an older microservice version if it's been upgraded, I think libraries have lots of advantages in this.
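For example, as a library author, finding your consumers can be as dumb as scanning manifests. A sketch, assuming a particular repo layout and requirements.txt files:

```python
# Sketch: find which projects depend on a given internal library by scanning
# their requirements files. The workspace layout and file names are assumptions.
import pathlib

def consumers_of(library: str, workspace: pathlib.Path) -> list[str]:
    hits = []
    for req in workspace.glob("*/requirements.txt"):
        for line in req.read_text().splitlines():
            # crude match for "mylib", "mylib==1.4", "mylib>=1.0", etc.
            name = line.strip().split("==")[0].split(">=")[0].split("<")[0]
            if name == library:
                hits.append(req.parent.name)
                break
    return sorted(hits)

# print(consumers_of("billing-client", pathlib.Path("/srv/repos")))
```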
I agree, there are probably a lot of advantages. It would be interesting to see a boilerplate structure for this workflow. Maybe it already exists in some form.
I can't imagine it being different from other libraries, bundle your code up in the language's supported way, publish it in some internal registry, add it to your dependencies, that should be about it.
It is definitely possible to reap a lot of those things with monoliths. Separating internal code using libraries (as the sibling poster said), using different server clusters to serve different routes, deploying those clusters independently if possible, using multiple databases, separating the applications into different "areas" maintained by different teams. The one thing microservices can do that monoliths can't, however, is allow as many different languages as you like.
Younger. Netflix, Dropbox, Stripe, Slack, Pinterest, Reddit is working on it, SmugMug, Thumbtack, and a lot more I can't think of off the top of my head. Also I'm pretty sure Amazon has teams that maintain multiple services.